Targeted redox inhibition of protein phosphatase 1 by Nox4 regulates eIF2α-mediated stress signaling

Abstract. Phosphorylation of translation initiation factor 2α (eIF2α) attenuates global protein synthesis but enhances translation of activating transcription factor 4 (ATF4) and is a crucial, evolutionarily conserved adaptive pathway during cellular stresses. The serine–threonine protein phosphatase 1 (PP1) deactivates this pathway, whereas prolonging eIF2α phosphorylation enhances cell survival. Here, we show that the reactive oxygen species-generating NADPH oxidase-4 (Nox4) is induced downstream of ATF4, binds to the PP1-targeting subunit GADD34 at the endoplasmic reticulum, and inhibits PP1 activity to increase eIF2α phosphorylation and ATF4 levels. Other PP1 targets distant from the endoplasmic reticulum are unaffected, indicating a spatially confined inhibition of the phosphatase. PP1 inhibition involves metal center oxidation rather than the thiol oxidation that underlies redox inhibition of protein tyrosine phosphatases. We show that this Nox4-regulated pathway robustly enhances cell survival and has a physiologic role in heart ischemia–reperfusion and acute kidney injury. This work uncovers a novel redox signaling pathway, involving Nox4–GADD34 interaction and a targeted oxidative inactivation of the PP1 metal center, that sustains eIF2α phosphorylation to protect tissues under stress.

MATERIALS AND METHODS

Unless otherwise indicated, all chemicals were purchased from Sigma-Aldrich or Calbiochem and were of analytical or higher purity grade.

Cells. Primary cultures of neonatal rat cardiomyocytes were prepared using standard methods (Zhang et al, 2010). Rat H9c2 cardiomyoblasts, HEK293 and U2OS cells were from ATCC. MEFs were prepared from 13.5-day-old embryos of Nox4−/− and littermate WT mice, and immortalized with SV40 large T antigen.

Microscopy. The dichroic mirror in the spinning-disk unit was a Di01-T405/488/568/647 from Semrock Inc. A Sutter Instruments filter wheel with Chroma emission filters was used. Typically, a Z-stack of 11 steps over 3.0 µm was acquired and the maximum projection image was used for display and comparison of expression levels. The same acquisition and image contrast settings were used for control and treated cells. 3D structured illumination microscopy (SIM) was performed on a Nikon SIM system equipped with a 100× 1.49 NA Plan Apo oil immersion objective, an Andor EMCCD camera, and 488 and 567 nm diode lasers. Structured illumination image stacks were acquired with a z-distance of 100 nm and with 15 raw images per plane (5 phases, 3 angles). The structured illumination raw data were computationally reconstructed using the SIM module in the NIS-Elements software (Nikon). Images displayed are reconstructions of one z plane.

Real-time RT-PCR. Total RNA was prepared using an RNeasy kit (Qiagen), and mRNA expression levels were quantified using specific primers as listed in Table S2. Quantitative real-time PCR was performed with SYBR Green on an Eppendorf PCR thermal cycler. Unless specified, β-actin was used for normalization. The relative fold change was calculated using the comparative Ct (ΔΔCt) method.
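A minimal sketch of the comparative Ct calculation referenced above, assuming the standard 2^(−ΔΔCt) formulation with β-actin as the reference gene; the Ct values in the example are illustrative placeholders, not measured data.

```python
# Comparative Ct (2^-ddCt) method for relative qPCR quantification.
def fold_change(ct_gene, ct_actin, ct_gene_ctrl, ct_actin_ctrl):
    """Relative mRNA level of a gene versus the untreated control."""
    d_ct_treated = ct_gene - ct_actin            # normalize to beta-actin
    d_ct_control = ct_gene_ctrl - ct_actin_ctrl
    dd_ct = d_ct_treated - d_ct_control          # compare to control
    return 2.0 ** (-dd_ct)

print(fold_change(24.1, 17.3, 26.9, 17.2))  # ~7.5-fold (illustrative values)
```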
Chromatin immunoprecipitation (ChIP) assay. H9c2 cells were cultured with or without tunicamycin in serum-free media for 4 hours. Cells were treated with 1% formaldehyde for 10 min, followed by glycine to terminate the cross-linking reaction. Approximately 7.5 × 10^6 cells were lysed with ChIP lysis buffer according to the manufacturer's instructions (SimpleChIP Enzymatic Chromatin IP Kit with magnetic beads, Cell Signaling). The chromatin was fragmented by incubation with 2,000 gel units of micrococcal nuclease per 7.5 × 10^6 cells for 20 min at 37 °C. The nuclear membrane was broken by 15 s of gentle sonication, and 5 µg of chromatin in 500 µl ChIP buffer was incubated overnight at 4 °C with 5 µg normal rabbit IgG (#2729, Cell Signaling) or 5 µg anti-ATF4 antibody (#11815, Cell Signaling). ChIP-grade Protein G magnetic beads (#9006) were used to precipitate the bound chromatin. Eluted samples were subjected to reverse cross-linking by incubation with 250 µg/ml proteinase K at 65 °C for 2 h. DNA was isolated and concentrated using DNA purification spin columns and eluted in 50 µl of elution buffer. The immunoprecipitated DNA was analyzed by semi-quantitative PCR using primers designed to detect the putative ATF4-binding sites in the Nox4 promoter (forward TGGTCCTGACTTTTCCATCAG, reverse TGGATGTTCGAGAAATTGACTG). PCR was performed under standard conditions for 40 cycles with an annealing temperature of 55 °C.

Preparation of membrane fractions. Cells grown in 100-mm dishes were washed with cold PBS and homogenized in lysis buffer (50 mM Tris, pH 7.4, containing 0.1 mM EDTA, 0.1 mM EGTA, protease inhibitor cocktail [Sigma, #P8340], and 2 µg/ml MG132) by sonication (three 10 s cycles at 8 W). After centrifugation at 18,000 g for 15 min to sediment mitochondria and nuclei, the supernatant was further centrifuged at 100,000 g for 1 h. The resulting supernatant formed the cytosolic fraction. The pellet, containing the membrane-enriched fraction, was washed twice with the same lysis buffer to remove any remaining cytosolic contamination. Membrane fractions were used to assay Nox activity or PP1 activity, or for immunoblotting. The membrane fraction was enriched in the ER marker calnexin, while the cytosolic fraction was enriched in GAPDH (Fig. S4A).

Sucrose density gradient fractionation. The samples were centrifuged at 35,000 g (4 °C, 18 h). Fractions 1-16 (F1-F16) were collected from the base of the column. Each fraction was split into two 200 µl aliquots, one for immunoblotting and the other for immunoprecipitation experiments. As a control for density gradient separation, a mix of proteins (gel filtration molecular weight markers, Sigma-Aldrich) was added to the top of the sucrose gradient in a separate tube and centrifuged. The fractions obtained were subjected to SDS-PAGE, and proteins were stained with Coomassie Blue.

Immunoprecipitation. Cells grown in 6-well plates were scraped and transferred into tubes, then centrifuged at 2,800 g at 4 °C for 5 min. The cell pellet was resuspended in 200 µl lysis buffer. Samples were briefly sonicated (one 10 s cycle, 8 W). Protein concentration was normalized to 1 µg/µl, and immunoprecipitation (IP) was performed using 500 µg of homogenate protein. Protein A/G Sepharose beads (Santa Cruz Biotechnology) were pre-cleared with nonspecific IgG, and samples were then precipitated overnight at 4 °C with the specific antibody. Samples with nonspecific antibody were used as negative controls. The next day, immunoprecipitates were washed 7 times with buffer and resuspended in sample buffer for immunoblotting. Samples were heated at 95 °C for 5 min. After cooling, reducing agent was added, samples were run on SDS-polyacrylamide gels, and then blotted onto nitrocellulose.
Measurement of ROS. Nox activity (NADPH-stimulated ROS generation) was measured in membrane fractions prepared as described above, using HPLC-based detection of the dihydroethidium (DHE) oxidation products 2-hydroxyethidium (EOH) and ethidium (E). HyPer-ER and HyPerRed probes were obtained from Miklós Geiszt (Department of Physiology, Semmelweis University, Budapest, Hungary). The respective C199S mutant probes for HyPer-ER and HyPerRed, which are ROS-insensitive, were used as negative controls to exclude changes in pH. Cells were co-transfected with HyPer-ER and HyPerRed and kept in phenol red-free medium supplemented with 2 mM glutamine and antibiotics for 48 hours before treatment with tunicamycin (2 µg/ml for 4 hours) or vehicle control. Imaging was performed at 37 °C/5% CO2 on an inverted Nikon Ti-E microscope equipped with a Yokogawa CSU-X1 spinning-disk confocal unit, an Andor Neo sCMOS camera and a Sutter filter wheel. A 60× Plan Apo VC 1.40 NA Nikon objective was used. HyPer-ER fluorescence emission was monitored at 525/50 nm following excitation at 405 nm and 488 nm, and the ratio of fluorescence intensities was quantified. HyPerRed fluorescence emission was monitored at 647/75 nm following excitation at 560/40 nm. Extracellular H2O2 (200 nM) was added as a positive control, and the HyPer-ER and HyPerRed signals were acquired simultaneously. NIS-Elements v4.0 software (Nikon) was used for image analysis. Images were background-subtracted and thresholded. Changes in the HyPer-ER fluorescence ratio (R) or HyPerRed fluorescence intensity (F) between the indicated time points or treatments were quantified. The resulting images were displayed in pseudocolor.

Recombinant PP1 expression and purification. A pCW vector expressing an untagged isoform of the catalytic subunit of human PP1 (Alessi et al, 1993) was obtained from the MRC Protein Phosphorylation Unit (Dundee, UK). Protein expression and purification were carried out essentially as described (Barford and Keller, 1994; Egloff et al, 1995). Transformed E. coli DH5α cells were grown in Luria-Bertani (LB) medium supplemented with 2 mM MnCl2 and 100 µg/ml ampicillin at 30 °C until the OD600 reached approximately 0.25. Protein expression was induced with 0.5 mM IPTG. Cells were harvested by centrifugation at 5,000 g for 15 min at 4 °C and resuspended in buffer A (50 mM imidazole, 0.5 mM EDTA, 0.5 mM EGTA, 100 mM NaCl, 10% glycerol, 2 mM β-mercaptoethanol, 2 mM MnCl2, pH 7.5) supplemented with Complete EDTA-free protease inhibitor cocktail (Roche), lysozyme (0.01 mg/ml) and DNase (0.05 mg/ml). Cell lysis was accomplished by sonication or using a cell disruptor (Constant Systems Ltd). Insoluble material was sedimented by centrifugation at 19,500 g for 1 h at 4 °C and the supernatant was filtered through a 0.22 µm filter prior to loading on a 5 ml heparin column equilibrated with buffer A. PP1 was eluted using a 100 ml gradient to 50% buffer A supplemented with 1 M NaCl. Fractions were analysed on a 12% SDS-PAGE gel, and those containing PP1 were pooled and diluted 10-fold with buffer C (50 mM imidazole, 0.5 mM EDTA, 0.5 mM EGTA, 10% glycerol, 5 mM β-mercaptoethanol, 2 mM MnCl2, pH 7.2) for injection onto a HiTrap Q HP column (GE Healthcare). PP1 was eluted using a gradient to 40% buffer C supplemented with 1 M NaCl. PP1 was further purified by size-exclusion chromatography (SEC) using a Superdex 75 16/60 column (GE Healthcare) equilibrated with SEC buffer (50 mM imidazole, 0.5 mM EDTA, 0.5 mM EGTA, 300 mM NaCl, 10% glycerol, 5 mM β-mercaptoethanol, 2 mM MnCl2, pH 7.5) for downstream applications. PP1 mutations (PP1 N124D and PP1 D64N) were introduced using the Q5 Site-Directed Mutagenesis Kit (New England Biolabs). All constructs were verified by sequencing. Expression and purification of PP1 variants were carried out as for wild-type PP1.

Phosphatase activity assays. Phosphopeptide substrates are listed in Table S3. For assessment of cellular phosphatase activity, cell membrane extracts (Hubbard et al, 1990) were incubated with phosphopeptide substrate (0.1 mM) in the presence or absence of okadaic acid (10 nM), which does not inhibit PP1 at this concentration (Ishihara et al, 1989), and phosphatase activity was then estimated as described above. For each sample, an incubation without the phosphopeptide substrate was used as a blank. PP1 activity was taken as the okadaic acid-resistant fraction and was normalized to protein content. Calyculin A (60 nM), which inhibits both PP1 and PP2A (Ishihara et al, 1989), was used as a control to confirm total PP activity. In some experiments, ascorbate (0.5 mM) was added to cells for 30 min before cell lysis.
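A minimal sketch (not the authors' analysis code) of the activity calculations described above; function names and the arbitrary-unit readout are assumptions.

```python
def pp1_activity(act_oa, blank, protein_mg):
    """Okadaic acid-resistant activity per mg protein: act_oa is the
    extract activity with 10 nM okadaic acid (inhibits PP2A, spares
    PP1 at this dose); blank is the matched incubation without the
    phosphopeptide substrate."""
    return (act_oa - blank) / protein_mg

def total_pp_activity(act, act_calyculin, protein_mg):
    """Calyculin A-sensitive (PP1 + PP2A) activity, used as the
    total-phosphatase control."""
    return (act - act_calyculin) / protein_mg
```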
Electron paramagnetic resonance (EPR) spectroscopy. EPR was used to measure ascorbyl radical generation and to assess the redox status of the PP1 metal center. For ascorbyl detection, EPR spectra were recorded at room temperature in a Magnettech Miniscope MS2000 spectrometer. The instrument conditions were: microwave power 50 mW, modulation amplitude 1 Gauss (G), scan time 328 ms, gain 9 × 10^2. All spectra were the accumulation of 4 scans and were recorded 5 min after addition of H2O2. EPR instrument conditions were calibrated with 4-hydroxy-2,2,6,6-tetramethyl-1-piperidinyloxy (Tempol). The reaction was carried out in 0.1 mM Tris buffer at pH 7.0 and 37 °C under the different conditions described in the figure legends, and was transferred to a 50 µl flat cell immediately after the addition of ascorbate. The two-line spectrum, with a hyperfine splitting constant aH = 1.8 G, was consistent with an ascorbyl radical (Monteiro et al, 2007), as generated using the positive control of ascorbate plus H2O2. EPR at low temperature is a method to detect chemical species with unpaired electrons and is used to study transition metal ion complexes in proteins (Cammack and Cooper, 1993; Ubbink et al, 2002). We used a Bruker EMX 300 spectrometer with a 3 mm cavity and a helium cooling system. Purified PP1 (5 mg/ml) was studied at baseline and after treatment with H2O2 (1 mM) in Tris-HCl buffer, pH 7.2, at 37 °C. The reaction mixture was transferred to a flat cell and frozen in liquid nitrogen. Spectrometer conditions were: temperature, 4 K; microwave frequency, 9.66 GHz; modulation amplitude, 2 G at 100 kHz; microwave power, 20 mW.

Cell viability. Cells were plated in 24-well plates at 70% confluence and Nox4 levels were manipulated as described in the figure legends. Cells were then exposed to tunicamycin (2 µg/ml), guanabenz or clonidine (both dissolved in PBS), or salubrinal.

Heart ischemia–reperfusion. After 20 min equilibration, global ischemia was initiated for 25 min and the hearts were then reperfused for 100 min. Hearts were weighed, frozen and cut into 1-mm-thick slices. Viable tissue was stained red with 1% 2,3,5-triphenyltetrazolium chloride (TTC) in phosphate buffer; sections were then immersed in formalin and scanned. The infarcted area was calculated as a proportion of the total left ventricular area using ImageJ software. For immunoblotting studies, hearts were reperfused for 30 min following ischemia and snap-frozen for subsequent analyses. Some animals were injected with guanabenz (1.8 mg/kg body weight) 24 h prior to heart perfusion. The hearts of these animals were perfused with modified KH buffer containing 0.5 µM guanabenz.
Acute kidney injury (AKI). To induce ER stress-related AKI, animals were treated with tunicamycin (3 mg/kg/day i.p. for two days) (Zinszner et al, 1998). Some animals were pre-treated with guanabenz (1.8 mg/kg i.p.). After sacrifice, blood was collected and the serum urea concentration was measured using a commercial kit (BioAssay Systems). Kidneys were harvested for immunoblotting or were fixed and paraffin-embedded to assess apoptosis using TUNEL staining (Millipore S7110 kit).

Statistics. Data are presented as mean ± SEM. Comparisons among groups were made by Student's t test or one-way ANOVA, as appropriate. Kaplan–Meier analysis was used to compare survival. Statistical analyses were performed in GraphPad Prism (GraphPad Software, San Diego, CA). P < 0.05 was considered significant.

References

Alessi DR, Street AJ, Cohen P, Cohen PT (1993) Inhibitor-2 functions like a chaperone to fold three expressed isoforms of mammalian protein phosphatase-1 into a conformation with the specificity and regulatory properties of the native enzyme. Eur J Biochem

Figure legends.

D. Nox activity increased after tunicamycin treatment but was substantially reduced by knockdown of Nox4. Nox activity was measured in membrane fractions isolated after 4 h of Tn treatment, using HPLC-based detection of the dihydroethidium (DHE) oxidation products 2-hydroxyethidium (EOH) and ethidium (E). The inset shows Nox4 protein levels. E. Effects of shRNA-mediated knockdown of Nox4 in H9c2 cells on the tunicamycin (Tn, 2 µg/ml)-induced changes in nuclear levels of ATF4 (red bars) and ATF6 (black bars), mRNA levels of Xbp1-s (blue bars), and protein levels of Grp78 (green bars). Representative blots and gels for this experiment are shown in Fig 1C. All data are mean ± SEM of n = 3 per group except panel D, which is n = 4 per group. *, significant compared to baseline; #, significant comparing Ad.shNox4 versus the corresponding control (Ad.Ctl). Values above bar graphs denote the level of significance.

antibody or with non-specific rabbit IgG, as indicated. Purified DNA was analyzed using primers specific for the rat Nox4 promoter comprising the putative ATF4-binding sites (see schematic). All data are mean ± SEM of 3 experiments per group, apart from the ATF6 protein levels in panel A, which were n = 4 per group. *, significant compared to baseline; #, significant comparing siNox4, Ad.Nox4 or siATF4 versus the corresponding controls. Values above bar graphs denote the level of significance.

The maximum likelihood estimate of the coordinate uncertainty (σx) derived from crystallographic refinement is 0.08 Å for the ascorbate-treated (reduced) structure and 0.11 Å for the H2O2-treated (oxidized) structure. The standard uncertainty on a metal–ligand (M–L) distance is σd(M–L) = 2^(1/2) σx. As there are two PP1 molecules in the asymmetric unit, each distance is determined twice, so the standard error of the mean is s.e.m.(dM–L) = (2^(1/2) σx)/2^(1/2) = σx. Thus, s.e.m.(dM–L)reduced = 0.08 Å and s.e.m.(dM–L)oxidized = 0.11 Å. To evaluate whether a correlation exists between the changes in metal–ligand coordination distance, defined as Δ = d(M–L)oxidized − d(M–L)reduced, determined by theoretical and by X-ray experimental methods, we plotted ΔX-ray against Δtheory for each of the twelve M–L distances. A perfect correlation would result in a straight line of unitary slope. Errors for ΔX-ray are calculated as σΔX-ray = [(σx(oxidized))^2 + (σx(reduced))^2]^(1/2). The plot is shown in Fig. S5f.
There is a good correlation between theory and experiment, with 75% of the Δ values (black circles) lying on the diagonal within error, whilst three values (red circles) can be considered outliers. The availability of X-ray data at higher resolution and the use of a more complex description of protein restraints in the theoretical calculations might improve the agreement even further. The correlation is statistically significant, quantified by a Spearman ρ coefficient of 0.706 for all twelve Δ values, with a two-tailed p-value of 0.0124. The plot shows that most points lie in the lower-left quadrant. This implies a contraction of the average M–L distance upon metal oxidation. We tested whether the contraction of the average M–L distance observed experimentally is statistically significant. A Wilcoxon matched-pairs signed-rank test gives a one-tailed p-value of 0.023, indicating statistical significance (p < 0.05).
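A minimal sketch of the two statistical tests described above, using SciPy; the arrays hold twelve illustrative placeholder values only, not the measured metal–ligand distance changes (the actual data are in Fig. S5f).

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

# Placeholder deltas (Angstrom), NOT the paper's data:
delta_theory = np.array([-0.09, -0.06, -0.05, -0.04, -0.03, -0.02,
                         -0.02, -0.01, 0.00, 0.01, 0.02, 0.04])
delta_xray = np.array([-0.11, -0.08, -0.04, -0.05, -0.02, -0.03,
                       -0.01, -0.02, 0.01, 0.00, 0.03, 0.02])

# Spearman rank correlation between theoretical and experimental deltas
rho, p_two_tailed = spearmanr(delta_theory, delta_xray)

# One-tailed Wilcoxon matched-pairs signed-rank test for a net
# contraction of the mean M-L distance upon oxidation (delta < 0)
stat, p_one_tailed = wilcoxon(delta_xray, alternative="less")
print(f"Spearman rho = {rho:.3f} (p = {p_two_tailed:.4f}); "
      f"Wilcoxon one-tailed p = {p_one_tailed:.3f}")
```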
Examining the relationship between vegetation decline and precipitation in the national parks of the Greater Limpopo Transfrontier Conservation Area during the 21st century

Abstract. The Greater Limpopo Transfrontier Conservation Area (GLTFCA) of southeastern southern Africa is home to five large national parks and is an important protected area crossing different geopolitical borders but sharing the same conservation goals. However, even with similar management techniques, concerning declines in vegetation have been observed across the last few decades. This study proposes that a larger driver, climate, is linked to this decline over time, and raises the point that these conservation areas are more important now than ever. Precipitation (annual and seasonal), the Normalized Difference Vegetation Index (NDVI, an indicator of vegetation health), and Directional Persistence (D, a metric of trends in vegetation health over time compared to a baseline value) from 2000 to 2020 are used. Overall, there was a negative trend in precipitation during the 21st century in all seasons except the beginning of the wet season. Linked to this were negative trends in vegetation health, both in absolute Normalized Difference Vegetation Index values and in the resultant D values. Overall, this study found a decline in precipitation that was significantly linked to a decline in vegetation health across the majority of the year in the Greater Limpopo Transfrontier Conservation Area. This study supports the literature on browning in sub-Saharan Africa and gives managers even more reason to work together towards a unified conservation strategy for this important region.

Introduction

In January of 2022, 150 years after the creation of the world's first national park, just 16% of the terrestrial world had been set aside as protected areas (Protected Planet, 2019), despite a goal of 30% by 2030 (Dinerstein et al., 2019). In this ever-globalizing world, it is becoming more important to work across geopolitical borders in order to most effectively manage protected areas (Home - Peace Parks Foundation, 2022). Part of these protected areas are savannas, a type of dryland consisting of a heterogeneous landscape with a mix of grasses, trees, and shrubs (Vogel and Strohbach, 2009; Hanan and Lehmann, 2010; Chapin et al., 2011). Savannas take up about 20% of all of Earth's land cover, and approximately 55% of southern Africa (Chapin et al., 2011). The 79 national parks across southern Africa all contain some type of savanna (Herrero et al., 2020a). Peace Parks Foundation, founded by Nelson Mandela, Dr. Anton Rupert, and HRH Prince Bernhard of the Netherlands in 1997, is leading the effort to manage protected areas, largely consisting of savannas, across Africa. The management of savannas is vital because they are an important conservation landscape: they have a high level of biodiversity, the systems support large human populations, they contribute to the carbon cycle, and they account for 14% of global net primary productivity (Monseroud et al., 1993; Ojima et al., 1993; Scholes and Archer, 1997; Houghton et al., 1999). A hotspot for savanna landscapes is the Greater Limpopo Transfrontier Conservation Area (GLTFCA) in southeastern Africa (Figure 1). The GLTFCA consists of five national parks that are all vital to conservation in the area.
Each of the parks contains a majority of pixels classified as some type of savanna by the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5 global land cover product. Precipitation is a predominant factor driving landscape productivity and patterns throughout the GLTFCA, and it also determines vegetation type (Scholes and Archer, 1997; Sankaran et al., 2005; Southworth et al., 2016). Grassland systems are expected in areas with up to 750 mm of mean annual precipitation, and mixed systems are found between 750 and 950 mm of precipitation (Campo-Bescós et al., 2013a). Any landscape that receives more than 950 mm of precipitation will likely be dominated by trees (Scholes and Archer, 1997; Sankaran et al., 2005; Staver et al., 2011; Campo-Bescós et al., 2013b).

Figure 1. Study area map of the five national parks and the surrounding Greater Limpopo Transfrontier Conservation Area (GLTFCA) in southern Africa, across three countries: South Africa, Mozambique, and Zimbabwe.

With the ever-changing climate, precipitation has been severely affected. These savanna systems may be seeing a decrease in precipitation causing a resultant change in vegetation. With decreasing precipitation, the expansion of shrub/scrub cover is encouraged (Sankaran et al., 2005; Southworth et al., 2016). High temperatures can cause a decrease in soil moisture within savannas, which can diminish vegetation health and productivity. While it is necessary to acknowledge the range of factors that could alter vegetation in the GLTFCA, given the large area and consistent management strategy, this analysis focuses on climate as a driver of change. Savanna ecosystems in national parks are difficult landscapes to monitor through remote sensing due to their transitional and variable vegetation cover. Yet even with this heterogeneity challenge, it is important for remote sensing techniques to advance in these areas because of the ecological and socioeconomic importance of savannas. The use of time series analysis of satellite imagery has become a well-known method for evaluating savannas. Additionally, NDVI (Normalized Difference Vegetation Index) can be used as a proxy for the biomass, health, and abundance of vegetation (Tucker, 1979; Carlson and Ripley, 1997; Lambin and Ehrlich, 1997; Wang et al., 2005; Jiang et al., 2006; Begue et al., 2011; Southworth et al., 2016). By using a time series analysis with NDVI, we gain a better understanding of trends and patterns in vegetation health over time (Waylen et al., 2014). Directional persistence (D) is a measure derived from NDVI which has been useful in the study of drylands globally (Waylen et al., 2014; Southworth et al., 2016; Herrero et al., 2020a; Herrero et al., 2020b; Muir et al., 2021). D uses NDVI time series analysis to compare the greenness of vegetation in a selected season and year to a baseline time period. By using D, the researcher can determine the spatial patterns of interannual changes in NDVI and identify landscape patterns of change, while also assessing the statistical significance of these patterns at the pixel level (Waylen et al., 2014; Southworth et al., 2016). This study focuses on the five national parks within the GLTFCA and the health of their vegetation through the lens of climate. Conservation is carried out in these areas because within parks the focus is on maintaining biodiversity while minimizing human impacts.
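As a toy illustration of the precipitation thresholds cited above (not code from the study), the expected savanna structure can be classified from mean annual precipitation:

```python
def savanna_type(map_mm: float) -> str:
    """Expected savanna structure from mean annual precipitation (mm),
    per the ~750/950 mm thresholds cited above."""
    if map_mm <= 750:
        return "grassland-dominated"
    if map_mm <= 950:
        return "mixed grass/tree"
    return "tree-dominated"

print(savanna_type(439))  # Banhine -> grassland-dominated
```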
A theory of landscape ecology suggests that larger parks (or transfrontier conservation areas) tend to have more success in their conservation goals due to their greater spatial extent and resources (Newmark, 2008; Herrero et al., 2020a). Multi-country, cross-border parks like the GLTFCA have been developed only recently. These transfrontier conservation areas are based on the landscape ecology principle of connectivity, whereby a more physically connected area has a better chance of conservation success (Saura et al., 2018). In addition, the use of the GLTFCA effectively eliminates, or holds constant, the variability in management techniques across countries, so that other variables may be investigated, such as climate in this study (Home - Peace Parks Foundation, 2022). The GLTFCA was previously identified as an area of concern for negative vegetation trends (Herrero et al., 2020b), so further evaluation of this area is needed. This research seeks to evaluate whether the GLTFCA is working in terms of vegetation health patterns and trends, and how climate, and specifically precipitation, is influencing the vegetation health of the savannas across the landscape. NDVI and D will be used to determine the health of the vegetation so that trends can be established and patterns identified. The research objectives of this study are to: 1) examine the change in precipitation at both annual and seasonal periods in the GLTFCA; 2) evaluate significant spatio-temporal trends in absolute vegetation health in the GLTFCA; and 3) assess the influence of precipitation on absolute vegetation health within each of the national parks comprising the GLTFCA.

Study area

The GLTFCA is located in southeastern Africa and spans three countries: South Africa, Mozambique, and Zimbabwe (Table 1). The entirety of the GLTFCA, which connects and surrounds the national parks, acting as functional buffer zones, is almost 100,000 km² in area (Home - Peace Parks Foundation, 2022). The GLTFCA is home to more than 850 animal species as well as over 2,000 plant species, all of which are now carefully managed for conservation (Home - Peace Parks Foundation, 2022). This area is one of the earliest established and most successful examples worldwide of the conservation value of peace parks. Peace parks are defined as conservation areas that are managed cooperatively across geopolitical boundaries and, in this case specifically, are coordinated by a neutral third party, the Peace Parks Foundation (PPF) (Home - Peace Parks Foundation, 2022). The long-term goal of the PPF is to preserve large and functional ecosystems, such as the GLTFCA, so that the natural resources that humans depend on, such as food, medicine, clean water, and clean air, are preserved for current and future generations (Home - Peace Parks Foundation, 2022). "Our dream is to reconnect Africa's wild spaces to create a future for man in harmony with nature" (Home - Peace Parks Foundation, 2022). Each of the ecosystems within the GLTFCA can be categorized as some type of savanna system, ranging from grassland dominated to woodland dominated. One of the most important factors influencing the type of savanna is precipitation, and this whole area is categorized as dryland (Sankaran et al., 2005; Andela et al., 2013; Campo-Bescós et al., 2013b; Lehmann et al., 2014).
Precipitation varies in these parks from a mean annual precipitation (MAP) of 439 mm in Banhine to 602 mm in Zinave (Figure 2), but as all these MAP values are below the well-published 650 mm threshold, the vegetation types are grass-dominated savanna systems (Scholes and Archer, 1997; Sankaran et al., 2005; Staver et al., 2011; Campo-Bescós et al., 2013a). Southern Africa has two primary seasons, a rainy season from November to April and a dry season from May to October. Therefore, there is a high degree of seasonal fluctuation in precipitation across the year in these national parks (Figure 2).

Precipitation

Precipitation data were a key part of this analysis, given their important relationship with savanna vegetation; healthy vegetation is the basis of conservation for all other species in this study area (Sankaran et al., 2005; Andela et al., 2013; Campo-Bescós et al., 2013b; Lehmann et al., 2014). The precipitation data were acquired from the Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) website (Funk et al., 2015). This gridded dataset is available at 0.05° spatial resolution from 1981 to the present (Funk et al., 2015). To account for the discrepancy between the spatial resolution of the precipitation data and the size of the parks, a mean value for each park over time and by season was extracted using the park shapefile subset over the CHIRPS grids. In southern Africa, the dry season occurs from approximately May to October, and the wet season from approximately November to April. Vegetation green-up lags one month behind precipitation occurrence (Zhu and Southworth, 2013). Triads were created to represent the seasons (beginning wet, end wet, beginning dry, end dry). Therefore, precipitation/vegetation triads were defined respectively as November-December-January (NDJ)/December-January-February (DJF); February-March-April (FMA)/March-April-May (MAM); May-June-July (MJJ)/June-July-August (JJA); and August-September-October (ASO)/September-October-November (SON). The two time periods evaluated are 2000-2020, to match the beginning of the available satellite imagery, and 1981-2020, the longest available climate record for this region through CHIRPS, used to establish longer-term trends. Evaluating longer-term climate trends using a continuous time period is standard practice in savanna system research (Scholes and Archer, 1997; Sankaran et al., 2005; Staver et al., 2011; Herrero et al., 2016; Herrero et al., 2020b). CHIRPS data were downloaded at the monthly time scale and accumulated in the following ways: a) total annual precipitation (defined as February 2000 to January 2021 to match the start of satellite vegetation data in March 2000); b) total precipitation by each season from 1981 to 2020 for each park (where mean seasonal precipitation was also calculated); and c) similarly, total precipitation by each season over time from 1999 to 2020 by park. To determine any significant changes in precipitation over time, Z-scores were calculated to identify anomalies and linear trendlines were created (Herrero et al., 2020b). This was done in R and represents 40 statistical tests: the five national parks, each tested for significant increases or decreases in precipitation for each of the four triads at both time periods, a) 1981-2020 and b) 1999-2020 (a minimal sketch of one such test appears below).
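A minimal sketch of one of the 40 per-park, per-season trend tests, written in Python for illustration although the study used R; function and variable names are assumptions, not code from the paper.

```python
import numpy as np
from scipy import stats

def seasonal_trend(years, precip_mm):
    """Z-score anomalies plus a linear trendline for one park/season."""
    precip = np.asarray(precip_mm, dtype=float)
    z = (precip - precip.mean()) / precip.std(ddof=1)        # anomalies
    slope, intercept, r, p, stderr = stats.linregress(years, precip)
    return z, slope, p                                       # trend + significance

# e.g. ASO totals for one park over 1981-2020:
# years = np.arange(1981, 2021)
# z, slope, p = seasonal_trend(years, aso_totals)  # p < 0.05 -> significant
```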
Normalized difference vegetation index (NDVI)

The normalized difference vegetation index (NDVI) is a measure of vegetation abundance and greenness and is often used in the context of evaluating the health of conservation areas (Herrero et al., 2019; Herrero et al., 2020a; Herrero et al., 2020b; Blentlinger and Herrero, 2020). NDVI uses a ratio between the red band (R, absorbed by healthy vegetation) and the near-infrared band (NIR, reflected by healthy vegetation) from multispectral satellite imagery, computed as NDVI = (NIR − R)/(NIR + R), to determine a pixel value between −1 and +1, where higher values indicate greener/denser vegetation and values less than 0 indicate no vegetation (Measuring Vegetation, 2000). The satellite imagery used for this analysis was an NDVI composite product from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. This product, the MOD13Q1.006 Terra Vegetation Index, has a 250 m spatial resolution and is made from a 16-day composite of the maximum NDVI value per pixel. These data, obtained through Google Earth Engine, come atmospherically corrected and masked for clouds, cloud shadows, heavy aerosols, and water (Google Earth Engine Explorer, 2021). The MODIS data were available from 2000 to 2020. The NDVI data were then aggregated into triads by mean value, hereafter referred to as seasons, to make them compatible with the precipitation regimes. The wet season is represented by DJF and MAM and the dry season by JJA and SON; the spatial results are shown below in Figure 6. The mean NDVI value for each season was calculated by pixel and then averaged by park. This was plotted as a continuous time series by park from 2000 to 2020 to demonstrate annual phenology in these protected areas. The mean NDVI pixel value was also calculated by season and park for the periods 2000-2004, 2005-2008, 2009-2012, 2013-2016, and 2017-2020 to further identify differences over finer time periods (multiple years, to account for any potential anomalies). These values were also presented spatially through maps.

Directional persistence

Greening studies have utilized remotely sensed time series of vegetation indices, which include seasonality and serial autocorrelation. These studies have attempted to correct for trends using multiple statistical methods, such as harmonic regression, time series using calendar days, and linear models with nonparametric components for seasonality. However, De Jong et al. (2012) determined that greening or browning results varied significantly depending on the method used in the analysis, and not necessarily on the actual change on the landscape (De Jong et al., 2012). In response to these difficulties, the creation of a straightforward, repeatable, and statistically valid method was warranted (Southworth et al., 2023). NDVI time series can be used to study global vegetation change in several ways. D is a per-pixel continuous measure of the direction of change in NDVI over time and identifies significant positive or negative cumulative change as compared to a baseline condition (Waylen et al., 2014; Southworth et al., 2016; Herrero et al., 2020a; Herrero et al., 2020b). The NDVI product (MOD13Q1.006) used for calculating D is described above in Section 2.3. This technique allows both the space and time components to be addressed in and around these protected areas, at a pixel scale, over large landscapes and for extended periods of time.
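A minimal per-pixel sketch of the D computation (an illustration, not the authors' code); the parameters it assumes — the 2000-2004 baseline, N = 15 comparison years, and the ±7 significance threshold — are detailed in the next paragraph.

```python
import numpy as np

def directional_persistence(ndvi_stack, n_baseline=5):
    """ndvi_stack: (years, rows, cols) seasonal mean NDVI, 2000-2020.

    The first n_baseline years form the baseline mean; each later year
    scores +1 where it exceeds the baseline and -1 where it falls
    below. Summing gives D in [-N, +N], N = years - n_baseline."""
    baseline = ndvi_stack[:n_baseline].mean(axis=0)
    signs = np.where(ndvi_stack[n_baseline:] > baseline, 1, -1)
    return signs.sum(axis=0)

# With N = 15, pixels with |D| >= 7 are deemed significant at the 0.10
# level (the paper derives the threshold from a hypergeometric argument).
```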
This technique provides a spatially broad and temporally long metric to determine patterns in vegetation stability or change. The application of this technique to understanding the dynamics of critical conservation areas such as the GLTFCA can assist in their management and protection (Herrero et al., 2020a; Herrero et al., 2020b). To calculate the D statistic, a multi-year seasonal mean (2000-2004) is used as the baseline of comparison. Including multiple years in the baseline accounts for any anomalous years (Waylen et al., 2014). Following this, each ensuing season/year is compared to the baseline, creating a random-walk statistic for each pixel (Waylen et al., 2014); under the null hypothesis, there is an equal chance that a pixel will be above or below the baseline mean. If there is a "success" (NDVI is higher than the baseline), the pixel is assigned a value of +1. If there is a "failure" (NDVI is lower than the baseline), the pixel is assigned a value of −1. These pixel values are summed through time, producing a net D value; in this study, comparing 2005-2020 to the baseline gives N = 15. A pixel value of +15 would mean that every subsequent year was higher than the baseline (a very strong positive trend in the vegetation health of the protected areas over time), and a pixel value of −15 would mean that every subsequent year was lower than the baseline (a very strong negative trend). The threshold of statistical significance is based on a hypergeometric distribution (Waylen et al., 2014). Given the nature of the application of this study, investigating how the vegetation health of these conservation areas has changed over time and how these changes could make a meaningful difference in the parks, the lowest conventionally accepted statistical significance level of 0.10 was used to allow for examining more change. Therefore, any pixel with a D value of ±7 or beyond is deemed significant.

Results

Long-term precipitation

In Figure 3, total seasonal precipitation is plotted by park over the full time period of 1981-2020 (trends by park — Banhine, Gonarezhou, Kruger, Limpopo, Zinave — for both 1981-2020 and 1999-2020 are reported in Table 3). The long-term trends in precipitation were important to study in order to establish the historical conditions and climatological norms of each park, as conservation efforts are heavily influenced by climate. This was further broken down in Figure 3 into 1999-2020 to compare seasonally specific precipitation trends with satellite-based vegetation data in the form of NDVI. Figure 3 shows that there are strong negative trends during the dry seasons (MJJ and especially ASO) over time across all parks. During the wet season, and especially at the onset of the rainy season (NDJ), there was a slightly stronger positive trend over time. This is particularly true in Zinave, which has the greatest mean annual and seasonal precipitation (Table 2). In FMA over the longer time period there was a slightly positive trend; however, in FMA from 2000 to 2020 there was a strong negative trend, especially in Zinave, indicating that there may be more variability in this park.
These stronger trends at the beginning of the wet season and the end of the dry season again support the idea that precipitation is changing most prominently in its temporal distribution, for example through early or late rains (Herrero et al., 2020b). The results presented in Table 3 show statistically significant negative trends in both the longer time series of 1981-2020 and in 1999-2020. The parks with the greatest (Zinave) and least (Banhine) mean annual and seasonal precipitation had the most significantly negative trends. In every park there was a significant negative trend in precipitation during the later part of the dry season (ASO), occurring across the whole time period. Similarly, at the beginning of the dry season (MJJ) there was a significant downward trend in either the period 1981-2020 or 1999-2020 in every park except Kruger, which only exhibited a significant change in precipitation during ASO 1981-2020. In only one park, Zinave, at the beginning of the wet season (NDJ) over the whole time period of 1981-2020, was there any significant positive change in precipitation, which supports the results in Figure 3. This further supports the theory that changes in precipitation occur most prominently at the edges of the seasons. In total, of the 40 statistical scenarios tested, 14 were statistically significant: 13 negative and 1 positive change in precipitation over time.

Normalized difference vegetation index (NDVI)

When approaching the Normalized Difference Vegetation Index (NDVI) as a time series, Figure 4 shows that there is a great deal of intra-annual variability in NDVI, which follows the rains; i.e., there is a greater abundance of vegetation during the rainy season and a lower abundance of healthy vegetation during the dry season. When looking at these seasonal trends over time, each park, the buffer, and the total landscape all have a negative trend in the abundance of healthy vegetation (Figure 4). However, Zinave had the least negative trend, which aligns with the wet season precipitation results above. Figure 5 shows seasonal NDVI by season and park and depicts the expected variability between parks. During MAM and JJA, consistently across parks, the first and last time periods had the highest mean NDVI, with the lowest mean NDVI occurring during the second period. During SON, the first time period had the highest mean NDVI. In DJF, the third and fourth periods had the highest mean NDVI. Figure 6 presents the spatialized context of NDVI across southern Africa by season and time period; there is a clear trend of minimum NDVI values in the JJA and SON seasons. The latter part of the wet season (MAM) shows the highest values of the year, and the latter part of the dry season (SON) the lowest.

Directional persistence

There is a strong negative mean value of D (2000-2020; 2000-2004 baseline, with 2005-2020 compared to the benchmark) across all five of the national parks and the buffer zone in all seasons except DJF, ranging from −4 to −12 (Figure 7). From the maps in Figure 8, the strongest negative trend can be identified where South Africa, Mozambique, and Zimbabwe meet. Furthermore, trends of negative persistence are far more extensive than positive persistence.
Table 4 highlights that there are large portions of the national parks and buffer zone in the GLTFCA with significant negative persistence across all seasons, whereas there are only very small portions of the national parks in the GLTFCA with significant positive persistence across the seasons — again, except for DJF, where about 25% of the landscape has a significant positive persistence value. Spatialized D shows a very significant negative pattern in vegetation health across the study area in all seasons except the onset of the rainy season (DJF), where values are more neutral (Figure 8). Figure 9 demonstrates that there is a positive relationship between NDVI and precipitation within the five national parks.

Discussion and conclusion

Normalized difference vegetation index (NDVI) regressed against precipitation

The GLTFCA serves to conserve ecosystems by effectively eliminating management policy differences across countries. With this major variable eliminated, this study can focus on larger-scale drivers, such as precipitation. This study supports the literature showing that precipitation is a major driver in savanna systems and is therefore critical to understand for major conservation areas like the GLTFCA (Scholes and Archer, 1997). From this research we can conclude that there has been a change in precipitation across seasons. This has occurred concurrently with a decline in vegetation health between 2000 and 2020, as evidenced by NDVI and D within the national parks of the GLTFCA. The precipitation results presented in Table 3 showed that the seasons examined over the longer-term climatic period of 1981-2020 had a greater number of statistically significant negative trends than the period of 1999-2020. This indicates that during the latter half of our time period there may not have been as great a decline in precipitation, or that the time period was not long enough to establish as strong a climatic trend. We also found stronger trends at the beginning of the wet season (positive) and the end of the dry season (negative), which exemplifies the need to monitor savanna vegetation during seasonal transitions in precipitation. For example, dry seasons in southern Africa have been starting earlier, which may help explain why negative trends in NDVI have kept increasing (Andela et al., 2013). This may be related to a change in the timing of green-up and senescence in these grassland-dominated landscapes, which has been demonstrated in other studies conducted by this research group (Herrero et al., 2020b). Several studies have assessed changing precipitation and vegetation dynamics and their interactions, as well as the broader impacts within savanna ecosystems. Jung et al. (2010) found that a global decline in precipitation led to reduced soil moisture and evapotranspiration, which negatively affects vegetation health. Some remote sensing studies have found an overall global greening when assessing satellite-derived proxies of vegetation trends (Mitchard et al., 2009; Piao et al., 2011; De Jong et al., 2012; Guay et al., 2014; Brandt et al., 2015; Schimel et al., 2015; Garonna et al., 2016; Bastin et al., 2017). However, our study finds negative directional persistence and supports literature suggesting that browning hotspots are rising in subequatorial Africa (De Jong et al., 2012; Herrero et al., 2020b).
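An illustrative sketch (not the authors' code) of the per-park regression behind Figure 9: seasonal mean NDVI regressed on seasonal precipitation, with r² as the explanatory power quoted below (e.g., ~0.79 for Kruger). Inputs are hypothetical arrays.

```python
from scipy import stats

def ndvi_precip_fit(precip_mm, ndvi):
    """Ordinary least-squares fit of NDVI on precipitation for one park."""
    res = stats.linregress(precip_mm, ndvi)
    return res.slope, res.intercept, res.rvalue ** 2, res.pvalue
```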
With the overall negative trends of vegetation health based on NDVI and their correspondence with precipitation, especially in the southwestern portions of the GLTFCA, we can determine that there is a strong link between precipitation and vegetation biomass in savannas in this area, which may override management techniques at larger spatio-temporal scales. There was a positive relationship between precipitation and NDVI in each of the five parks throughout the time period, although the explanatory power of precipitation on NDVI did vary. Kruger serves as an example of the strong, real-world relationship between precipitation and vegetation: 79% of the variance in NDVI was explained by precipitation, again overriding potential management interventions. There was an overall decrease in precipitation and NDVI across the landscape. From this research we can determine that the GLTFCA and its parks have seen a decline in healthy vegetation despite cohesive management. The purpose of the GLTFCA and other protected areas is to conserve vegetation by maintaining a well-functioning ecosystem; although there is currently a strong negative trend in vegetation health, the parks are performing somewhat better than the total landscape. This is supported by other studies, which show that these protected areas can still be an effective method for vegetation conservation relative to the rest of the landscape (Newmark, 1996; Newmark and Hough, 2000; Newmark, 2008; Herrero et al., 2020a). The results of this study can inform local management practices within the GLTFCA to assist with possible conservation strategies within an already changing environment. Using the spatialized D product, teams within the conservation area can identify regions vulnerable to vegetation decline (specifically where South Africa, Mozambique, and Zimbabwe meet, as evidenced by Figure 8). When looking at the entire landscape, there is a strong negative D and decreasing NDVI values that are linked with a decline in precipitation and a changing arrival of the rains, which is beyond the control of managers but must be acknowledged and, ultimately, adapted to. The total landscape had similar trends in vegetation health to the national parks, although slightly more negative in NDVI and D. With these spatialized results, the Peace Parks Foundation that oversees the GLTFCA can better target conservation efforts on its landscape, in addition to the work it is doing to help preserve the plant and animal species within the parks and buffer zones. However, even with different conservation techniques, this research highlights how the PPF may have difficulty accounting for the larger-scale precipitation driver in achieving its goal of preserving natural resources within the area for future generations, while also retaining harmony across its geopolitical borders (Scholes and Archer, 1997). This study was limited by the use of regionally aggregated measures of precipitation, represented by a mean value for each park and time period, rather than spatialized precipitation data. This may be a future point for improvement, to link with our spatialized vegetation data. One other limitation is that the D values account for the relative direction, but not the magnitude, of change; i.e., we do not know how much more positive or negative a pixel value is than the baseline, only that the occurrence of years with higher or lower NDVI, as compared to the baseline, was more frequent than expected.
Linking such an analysis with actual NDVI trends, as was done in this work, does, however, help minimize this limitation. The methods used in this study, where a time series NDVI analysis is used to evaluate a measure of vegetation health (D) and its connection to precipitation, can be applied globally. Of course, there are many savanna systems within Africa where these methods would be extremely useful, but the focus of this analysis is on transfrontier conservation areas because this was identified as a significant variable in the prediction of vegetation health in parks by our previous paper (Herrero et al., 2020a), and the GLTFCA was identified as an area of significant concern for D when evaluating a map of all of southern Africa. As discussed above, conservation exists within parks because of the focus on maintaining biodiversity, but in addition to minimizing human impacts, we must be aware of the natural state of vegetation across these landscapes (through indices like NDVI or D) so we can effectively manage these areas and resources. This study concludes that there have been significant declines in vegetation around the GLTFCA over the 21st century. There was a decline in the NDVI metric and a strong, significant decline in the D metric, which can be linked to an overall decline in precipitation. The quantification of these spatial and temporal changes across this transfrontier study area can facilitate continued collaboration among managers and promote multi-stakeholder efforts to target specific areas of the GLTFCA for conserving natural resources, with data and information that are both timely and spatially targeted, as needed by land managers globally.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

Conceptualization, HH and JS; methodology, HH, JS, and CM; software, RK; validation, HH; formal analysis, HH, RK, and CM; investigation, HH and JS; resources, RK; data curation, HH and RK; writing - original draft preparation, HH and SI; writing - review and editing, JS, RK, CM, and SI; visualization, HH and RK; supervision, JS; project administration, JS; funding acquisition, JS. All authors have read and agreed to the published version of the manuscript.
Controlled Anchoring of (Phenylureido)sulfonamide-Based Receptor Moieties: An Impact of Binding Site Multiplication on Complexation Properties

Abstract. The repetition of urea-based binding units within a receptor structure does not merely multiply the properties of the monomer. As confirmed by spectroscopic studies — UV-Vis and 1H-NMR in classical or competitive titration mode — attachment to a carrier fixes the active moieties in mutual positions that predetermine the function of the whole receptor molecule. Bivalent receptors form self-aggregates. Dendritic receptors at low dihydrogen phosphate loadings offer a cooperative complexation mode associated with a positive dendritic effect. At higher dihydrogen phosphate concentrations, the dendritic branches act independently and the binding mode changes to 1:1 anion : complexation site. Despite the anchoring, the dendritic receptors retain the superior efficiency and selectivity of the monomer, paving the way to recyclable receptors, desirable for economic and ecological reasons.

Introduction

Shortly after the synthesis of the first compounds with a dendritic structure [1,2], features accompanying the defined multiplication of structural subunits were recognized. The fascination of the scientific community with the unexpected, generation-dependent properties of dendrimers led to a broad range of reports on the so-called "dendritic effect" [3-5]. After a deeper examination of its origin, Tomalia published a study relating structure parameters to the prediction of dendrimer properties [6,7]. The concept of forming a microenvironment with locally modified characteristics was then advantageously employed in drug delivery [8,9], catalysis [10-12], materials science [13,14], and supramolecular chemistry [15,16]. Although all these applications are based on an interaction of the dendritic molecule with a guest, studies concerning dendritic effects in anion recognition are rather rare. In this context, Astruc et al. observed an enhancement in the affinity of ferrocene-based dendritic molecules toward anions in comparison with monovalent receptors [17,18]. Dendritic ureas and thioureas prepared by Boas showed an increase in carboxylate binding affinity with an increasing number of binding units [19]. Vögtle's ureas of higher generations showed a positive dendritic effect in ATP binding [20]. In our recent work we studied dihydrogen phosphate recognition in a series of carbosilane dendrimers with isophthalamide-based binding sites, and we observed an enhanced, generation-dependent complexation ability at higher receptor concentrations [21]. The poly(propylene imine) dendrimers prepared by Losada et al., with amidoferrocenyl units attached, showed generation-dependent electrochemical responses to inorganic anions [22,23]. Here we report a systematic study of dendritic effects associated with anion recognition by a receptor series with an increasing number of active units. We focused on molecules carrying potent binding sites based on urea moieties conjugated to a sulfonamide group. Such a moiety has superior affinity to anions with preferential 1:1 binding stoichiometry [24], even when two binding sites are present in close proximity within one molecule [25]. To induce a multiplication effect, the number of complexation sites within the receptor molecule was gradually increased, finally using their attachment to the periphery of carbosilane dendrimers. This study attempts to exploit the phenomenon of multivalency in the design of potent and possibly recyclable receptors.

Results

The influence of binding site accumulation on anion complexation was studied in a series of receptors differing in the number of receptor moieties. A receptor moiety 4 was synthesized, comprising the following structural elements: (i) a urea-based complexation site capable of donating highly directional hydrogen bonds; (ii) a sulfonamide group promoting electron withdrawal from the complexation site; and (iii) a triple bond serving as an anchoring group. Although the synthesis of amine 3 was previously described [26], we used an alternative method (Scheme 1).
This study attempts to exploit the phenomenon of the multivalency in design of potent and possibly recyclable receptors. Results The influence of binding sites accumulation on anion complexation was studied in a series of receptors differing in the number of receptor moieties. A receptor moiety 4 was synthesized, comprising following structural elements: (i) a urea-based complexation site, capable of donation of highly directional hydrogen bonds, (ii) a sulfonamide group, promoting an electron withdrawal from the complexation sites and (iii) a triple bond, serving as an anchoring group. Although the synthesis of amine 3 was previously described [26], we used an alternative method (Scheme 1). poly(propylene imine) dendrimers showed generation-dependent electrochemical responses to inorganic anions [22,23]. Here we report a systematic study of dendritic effects associated with anion recognition by receptor series with increasing number of the active units. We focused on the molecules carrying potent binding sites based on urea moieties conjugated to sulfonamide group. Such moiety has a superior affinity to anions with preferential 1:1 binding stoichiometry [24], even when two binding sites are present in close proximity within one molecule [25]. To induce a multiplication effect, the number of complexation sites within the receptor molecule was gradually increased, finally using their attachment to the periphery of carbosilane dendrimers. This study attempts to exploit the phenomenon of the multivalency in design of potent and possibly recyclable receptors. Results The influence of binding sites accumulation on anion complexation was studied in a series of receptors differing in the number of receptor moieties. A receptor moiety 4 was synthesized, comprising following structural elements: (i) a urea-based complexation site, capable of donation of highly directional hydrogen bonds, (ii) a sulfonamide group, promoting an electron withdrawal from the complexation sites and (iii) a triple bond, serving as an anchoring group. Although the synthesis of amine 3 was previously described [26], we used an alternative method (Scheme 1). Scheme 1. Synthesis of receptor moiety 4 and its crystal structure; (a) CH3NH2, pyridine, 93 %; (b) propargyl bromide, K2CO3, 33 %; (c) SnCl2 2H2O, EtOH, quantitative, (d) PhNCO, 42 %. 4-Nitrophenylsulfonylchloride was reacted with methylamine in water in the presence of pyridine providing 1 in almost quantitative yield. The propargyl group was introduced to the molecule via alkylation with propargyl bromide in the presence of base resulting in the formation of 2. The nitro group was reduced using tin(II) chloride in ethanol, leaving the triple bond intact. The ureido group, serving as a complexation site, was formed in the reaction of the amino group of 3 with phenyl isocyanate providing compound 4 as orange crystals suitable for single crystal X-ray diffraction analysis. Apart from the structure approval, the crystallography confirmed the L-shape of 4, typical for Scheme 1. Synthesis of receptor moiety 4 and its crystal structure; (a) CH 3 NH 2 , pyridine, 93 %; (b) propargyl bromide, K 2 CO 3 , 33 %; (c) SnCl 2 2H 2 O, EtOH, quantitative, (d) PhNCO, 42 %. 4-Nitrophenylsulfonylchloride was reacted with methylamine in water in the presence of pyridine providing 1 in almost quantitative yield. The propargyl group was introduced to the molecule via alkylation with propargyl bromide in the presence of base resulting in the formation of 2. 
The nitro group was reduced using tin(II) chloride in ethanol, leaving the triple bond intact. The ureido group, serving as a complexation site, was formed in the reaction of the amino group of 3 with phenyl isocyanate, providing compound 4 as orange crystals suitable for single-crystal X-ray diffraction analysis. Apart from confirming the structure, the crystallography revealed the L-shape of 4, typical for sulfonamides. The hydrogen-bonding capability of the ureido moiety is suggested by a non-covalently bound molecule of dimethylsulfoxide (DMSO).

The carriers for attachment of the receptor moieties were based on carbosilane molecules differing in the number of azido groups within the structure (Scheme 2). The respective starting azides were prepared according to published procedures [27-29], except for 7 and 9a. The synthesis of compound 7 started with the reaction of butyllithium with chloro(3-chloropropyl)dimethylsilane, giving 5 in 91% yield. The silane 5 was either converted to the iodo derivative 6 by a Finkelstein reaction or directly reacted with sodium azide to obtain 7. The compound 9a was obtained in two steps from (3-chloromethyl)dimethylsilane. Its Ir-catalyzed addition to allyl chloride gave 8a together with 8b in an approx. 1:1 ratio. Subsequent treatment of the obtained mixture with sodium azide under nucleophilic substitution conditions yielded a mixture of the corresponding azides 9a and 9b.

Scheme 2. Synthesis of azide carriers 7, 9a and 9b.

The prepared azide-derivatized carriers were reacted with intermediate 4, providing receptors 11-14 (Scheme 3). In all cases, the copper(I)-catalyzed click reaction was employed [30,31]. In the case of the bivalent carriers, the mixture of 9a and 9b was subjected to the click reaction with 4. The resulting 1:1 mixture of receptors 11 and 12 was easily separable by preparative thin layer chromatography (TLC). Insight into the complexation behavior of the prepared receptors when exposed to anions was provided by UV-Vis and/or NMR titration experiments [32,33].
To determine the selectivity and efficiency of the concerned complexation site, the monovalent receptor 10, offering the simplest complexation mode with 1:1 stoichiometry, was titrated with a series of anions in the form of their tetrabutylammonium (TBA+) salts using the 1H-NMR technique; the association constants [34] with higher uncertainty were also determined by UV-Vis titration (Table 1). The influence of binding site multiplication on anion complexation was studied on a series of receptor stock solutions with the same concentration of binding sites. Based on the above-mentioned preliminary study, the 1H-NMR titration experiments were performed with TBA+H2PO4−, as a strongly bound anion, and TBA+Cl−, as a weakly bound one. All receptors showed significant complexation-caused shifts of signals in the 1H-NMR spectra, which were used to construct binding isotherms. In the case of the bivalent receptors 11 and 12, the binding isotherms did not have smooth shapes (see Supporting Material, Figures S52 and S53), revealing the presence of multiple ongoing processes. The signals of the NH groups were located unexpectedly high (10.60 and 10.14 ppm for 12, compared to 9.43 and 9.06 ppm for 10) and shifted with dilution; upon 10-fold dilution the NH signals of 12 moved to 9.36 and 9.19 ppm, respectively. The receptor 11 showed similar behavior. On the other hand, in the case of receptors 10, 13, and 14 the dilution-induced changes in the respective 1H-NMR spectra were negligible. As the complexation constants of systems with higher stoichiometry are not conveniently available, the individual systems were compared considering the progress of complexation (expressed as a percentage of the maximal complexation-induced shift (CIS)) at the same level of added anion with respect to the concentration of available binding sites. The value of CIS for the strong dihydrogen phosphate complexes was easily determined as the shift value reached at equilibrium and was similar for all the tested compounds. For the weaker complexes with chloride, the value of CIS was approximated based on the calculated association constant for 10/Cl− and adjusted to 600 Hz for the NH signal (Figure 1). To further investigate the given systems, we used a competitive approach recently introduced by Haav et al. [35]. Stock solutions containing 10/13 and 10/14 were prepared, with equal amounts of monovalent and dendrimer-bound complexation sites, respectively. These mixtures were titrated with an anion according to the standard procedure. The complexation properties of both involved receptors were thus compared within the same titration experiment, excluding many of the possible sources of uncertainty. The shifts of selected signals belonging to receptor 10 in the mixture were followed during the titration experiment and the results were compared to the receptor responses observed during standard titration. In the case of weakly bound chloride, the differences between the curves measured for 10 alone and for 10 in the mixture with 13 or 14 are negligible. When 10 is titrated by dihydrogen phosphate in the presence of 13 or 14 (Figure 2), notable differences can be found when compared to sole 10.
At lower concentrations of H2PO4−, 10 seems to have less than one half of the whole amount of anions available for complexation. At higher levels of the guest, the receptor 10 returns to the binding isotherm obtained by the standard methodology; the addition of H2PO4− beyond the concentration of available binding sites leads to the same equilibria in all the monitored cases.

Figure 2. Titration of 10 with dihydrogen phosphate in the presence of 13 or 14 (see also Supporting Material, Figure S64). For clarity, lines are used to connect the experimental data points.

Discussion

The receptor 10 showed a remarkable selectivity toward the dihydrogen phosphate anion and carboxylates over halides, as expressed by the two orders of magnitude difference in binding constants. The addition of a second complexation site to the receptor structure and the formation of the bivalent hosts 11 and 12 significantly altered the complexation behavior. From the high positions of the NH signals in the 1H-NMR spectra, which changed with dilution, intermolecular hydrogen bonding can be deduced. Such behavior cannot be attributed to the presence of the disiloxane oxygen in 12, as the receptor 11 formed similar aggregates. The inclusion of two urea moieties into the structure of 11 or 12 probably enables the development of enthalpically preferred folded structures, where both binding sites are involved in the intramolecular interaction.
As the anion complexation has to compete with this self-association, the bivalent receptors 11 and 12 were excluded from further studies. The dilution-induced changes in the 1H-NMR spectra of 10, 13, and 14 were negligible for structural reasons; in monovalent 10, such interactions should be much weaker than in 11 or 12, due to the absence of a second interacting moiety. In the case of 13 and 14, steric hindrance prevents the molecules from mutual interaction. Therefore, all the changes in the 1H-NMR signal positions during the titration experiments can be attributed to complexation. The comparable changes of the signal positions for receptors 10, 13 and 14 at low concentrations of the dihydrogen phosphate anion indicate that the role of lower homogeneity of binding sites, or of salt accumulation in localized areas, is negligible and that the equilibria are established quickly (Figure 1). In the case of 14, a significantly different behavior can be observed compared to 10; after the addition of 0.1 equiv. of H2PO4−, the % CIS corresponds to 20% occupancy of the binding sites. This implies that, until the concentration of the anion reaches one half of the available complexation sites, two sites cooperate on binding one anion. This effect is not as pronounced for the dendrimer 13, but the stoichiometry of the H2PO4− complexation also exceeds the 1:1 binding site:anion ratio. This behavior is evident, although cooperation between two binding sites comprising a sulfonamide group conjugated to urea is reported to be quite unusual [25]. The assumption of cooperative binding of dihydrogen phosphate by 13 and 14 is supported by the shape of the binding isotherms (see Supporting Material, Figure S43), exhibiting a plateau after the addition of two or four receptor concentration equivalents of H2PO4−, respectively. Moreover, in the 1H-NMR spectra of 13 and 14, the signals of the triazole proton and the nearby protons move significantly to higher field upon complexation, indicating increased shielding caused by the dendritic branches coming closer upon complexation. Naturally, no such shielding is visible in the case of 10 (Appendix A). Considering the uncertainty in the CIS determination, the titration of receptors 10, 13 and 14 with TBA+Cl− led to very similar results in all cases, regardless of the number of binding sites per molecule (Figure 1). The weakly bound chloride probably does not have a convenient shape and size for cooperative binding to the dendritic receptors, and no effects similar to those observed for dihydrogen phosphate can be followed. The same results were obtained by the competitive experiments; the binding isotherms for sole 10 did not differ much from those measured for 10 in the mixture with a dendritic receptor, confirming the low influence of binding site accumulation on chloride complexation. On the contrary, in the case of dihydrogen phosphate at low loadings, the dendritic molecules offer the possibility of advantageous cooperative binding of one H2PO4− anion by two branches. This is consistent with the results of the competitive study, where complexation of H2PO4− by 10 was initially hindered by the presence of the dendritic molecule (Figure 2). This feature, although also present in 13, is particularly pronounced in the case of 14, which is in accordance with the titration data obtained for the sole receptors. At higher levels of the guest, the complexation sites of the dendrimer offering the possibility of the 1:2 binding mode become saturated.
Upon adding more anion, the dendritic complexes undergo a dissociation step, in which the hydrogen bonds of one branch toward the anion are disrupted and another anion is bound, forming a 1:1 complex. Receptor 10 does not undergo such a process and profits from the change of the binding mode of the dendritic receptor, using more than one half of the total H2PO4− present. When the concentration of H2PO4− exceeds the concentration of available binding sites, both the monovalent and the dendrimer-bound complexes are equally stable. To regenerate the dendritic receptors, nanofiltration was used. After the addition of methanol, which destabilized the complexes and lowered the viscosity of the feed solutions, nitrogen pressure was used to filter the mother liquors through the corresponding membranes (1 kDa cut-off for 13 and 3 kDa for 14), with 60-70% receptor recovery.

In conclusion, the (phenylureido)sulfonamide moieties were successfully anchored to carbosilane carriers, providing multivalent receptors for anions. Apart from the bivalent receptors, which formed self-aggregates in DMSO solution, the multivalent host molecules proved to retain the selectivity and efficiency of the monovalent ones. Moreover, the attachment to the dendritic scaffold places the binding sites into positions suitable for cooperative binding of the H2PO4− anion, particularly pronounced in the second generation of the dendrimer. At low molar fractions of dihydrogen phosphate, a positive dendritic effect was observed due to the advantageous formation of 2:1 complexes between the anchored receptor moieties and dihydrogen phosphate. Despite the weak negative dendritic effect observed at H2PO4− molar fractions between 0.5 and 1, caused by the change of the binding mode, the overall affinity and selectivity of the binding sites is not affected by the anchoring. No dendritic effects were observed for the weakly bound chloride, and all binding sites work similarly, regardless of the carrier. Such knowledge opens access to potent anion receptors suitable for multiple use, considering the feasible recycling by nanofiltration.

Materials and Methods

The 1H (400.1 MHz), 13C (100.6 MHz) and 29Si (79.5 MHz) NMR spectra were recorded using a Bruker Avance 400 spectrometer (Bruker Biospin, Rheinstetten, Germany) at 25 °C. The solvents used (DMSO-d6, chloroform-d) were stored over molecular sieves. The 1H and 13C-NMR spectra were referenced to the residual line of the solvent. Single-crystal X-ray diffraction data for 4 were collected using a diffractometer from Bruker AXS (Karlsruhe, Germany). The crystal structure of 4 was solved by charge-flipping methods in Superflip [38] and refined by full-matrix least-squares on F2 values in Crystals [39]. All non-hydrogen atoms were refined anisotropically. All hydrogen atoms could be localized from electron density maps, but according to common practice, hydrogen atoms bonded to carbon were repositioned geometrically, initially refined with soft restraints and afterwards refined using riding constraints. Hydrogen atoms bonded to nitrogen were localized from electron density maps and refined with restrained geometry. Mercury [40] was used for structure visualization. The crystallographic data have been deposited in the Cambridge Crystallographic Data Centre as a supplementary publication. These data are provided free of charge by the joint Cambridge Crystallographic Data Centre and Fachinformationszentrum Karlsruhe Access Structures service www.ccdc.cam.ac.uk/structures (accessed on 12 August 2021).
Nanofiltration: for purification and recycling of the dendritic receptors, a solvent-resistant stirred cell (Merck Millipore, Burlington, MA, USA) equipped with a 1 kDa or 3 kDa MWCO regenerated cellulose ultrafiltration membrane disk (Merck Millipore) and sealed with fluorinated ethylene propylene (FEP) coated O-rings (Eriks, Rychnov nad Kněžnou, ČR) was used. The filtration was driven by nitrogen under a pressure of 4.5 bar. Solutions of the complexes were diluted with methanol to a total volume of 20 mL and filtered to a target volume of 1 mL of retentate; this run was repeated five times. Evaporation of the retentate gave the dendritic receptor.

Determination of Complexation Constants

Complexation constants were measured in DMSO-d6 by standard 1H-NMR titration methodology. A solution of the tetrabutylammonium salt of a selected anion (purchased from commercial sources, stored in a dry box) was gradually added in aliquots into a solution of a given receptor to reach at least a 1:5 ratio of binding group to anion. Concentrations of the respective receptors were about 4 mmol/L and were kept constant during the titrations to avoid the effects of dilution. The corresponding complexation constants were calculated based on the analysis of binding isotherms obtained from the complexation-induced shifts of the NH or aromatic protons. For non-linear curve fitting of the experimental data, the freely available software Bindfit [34] was used. The competition experiments were carried out in mixtures of receptors with a binding-site concentration of 4 mmol/L for the monovalent 10 and 4 mmol/L for the respective dendritic molecule 13 or 14, i.e., the mixture contained a 4-fold molar excess of 10 compared to 13 or an 8-fold molar excess of 10 compared to 14, respectively. The concentration of the TBA+ salt of the anion was twofold higher than in the case of a sole receptor. The UV-Vis titrations were performed in DMSO using a double-beam Shimadzu UV-1800 spectrophotometer. All UV spectra were taken in the wavelength region from 260 to 800 nm, in steps of 1 nm, using cuvettes with a pathlength of 1 mm. The stability constants of the resulting complexes were evaluated employing the freeware program Bindfit, using the parts of the absorption curves where the changes in absorbance were most significant.
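For a 1:1 complex H + G ⇌ HG, programs such as Bindfit recover the association constant by fitting the complexation-induced shift to the closed-form binding isotherm. The following is only a minimal sketch of that fitting procedure, not the Bindfit implementation itself; the guest concentrations and shift values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

H0 = 4e-3  # total receptor concentration (mol/L), kept constant during titration

def isotherm_1to1(G0, Ka, cis_max):
    # Observed shift for H + G <=> HG at total host H0 and total guest G0:
    # delta = cis_max/(2*H0) * (H0 + G0 + 1/Ka - sqrt((H0 + G0 + 1/Ka)^2 - 4*H0*G0))
    s = H0 + G0 + 1.0 / Ka
    return cis_max / (2.0 * H0) * (s - np.sqrt(s * s - 4.0 * H0 * G0))

# hypothetical titration data: total guest concentrations (mol/L) and NH shifts (Hz)
G0 = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 20.0]) * 1e-3
cis = np.array([55.0, 102.0, 178.0, 268.0, 340.0, 390.0])

(Ka, cis_max), _ = curve_fit(isotherm_1to1, G0, cis, p0=(1e3, 400.0))
print(f"Ka = {Ka:.0f} L/mol, limiting CIS = {cis_max:.0f} Hz")
```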
Synthesis of Precursors

N-Methyl-4-nitrobenzene sulfonamide 1

p-Nitrophenylsulfonyl chloride (1 g, 4.5 mmol) was added into a stirred solution of methylamine (0.4 mL, 40% aqueous solution) in pyridine (20 mL). The reaction was stirred for 2 days at ambient temperature. After this time period, the reaction mixture was poured into aqueous HCl (5 M, 100 mL) and the product was extracted into ethyl acetate (2 × 50 mL). The combined organic layers were dried over magnesium sulfate, filtered and the filtrate was evaporated to give the title compound as a yellow solid (0.9 g, 93% yield).

N-Methyl-N-propargyl-4-nitrobenzene sulfonamide 2

N-Methyl-4-nitrobenzene sulfonamide 1 (0.8 g, 3.6 mmol) was dissolved in 30 mL of acetonitrile and 0.6 g (4.3 mmol) of anhydrous potassium carbonate was added. The mixture was stirred for 0.5 h before propargyl bromide (0.6 mL, 4.3 mmol, 60% solution in toluene) was added. The resulting mixture was refluxed for 2 days. After cooling, the solvent was removed in vacuo. The residue was taken up with 50 mL of ethyl acetate, aqueous HCl (1 M, 200 mL) was added and the product was extracted into ethyl acetate (3 × 50 mL). The combined organic layers were dried over magnesium sulfate, filtered and the filtrate was evaporated to give the title compound as a yellow solid (0.3 g, 33% yield).

N-Methyl-N-propargyl-4-aminobenzene sulfonamide 3

N-Methyl-N-propargyl-4-nitrobenzene sulfonamide 2 (0.3 g, 1.2 mmol) was suspended in ethanol (50 mL). Tin(II) chloride dihydrate (2.65 g, 12 mmol) was added and the reaction mixture was refluxed overnight. After cooling, the solvent was removed in vacuo, the residue was taken up with ethyl acetate (100 mL) and aqueous potassium hydroxide (5 M, 100 mL) was added. The product was extracted into ethyl acetate (2 × 50 mL). The combined organic layers were washed with brine (100 mL) and, after separation, dried over magnesium sulfate. After filtration, the solvent was evaporated to give the title compound (0.27 g, quantitative yield) as a yellow syrup.

Compound 4

To a stirred solution of N-methyl-N-propargyl-4-aminobenzene sulfonamide 3 (0.27 g, 1.2 mmol) in dichloromethane, phenyl isocyanate (0.2 mL, 1.8 mmol) was added dropwise. The reaction mixture was stirred overnight and then poured into water. The product was extracted into chloroform and the organic phase was dried over magnesium sulfate, filtered and the filtrate was evaporated. The crude product was filtered through silica (20 g), where the impurities were eluted with ethyl acetate/hexane 1/4 (3 column volumes). The product was then eluted using ethyl acetate. After evaporation, the title compound (0.17 g, 42%) was obtained as a yellow, slowly crystallizing syrup.

Butyl(3-chloropropyl)dimethylsilane 5

Chloro(3-chloropropyl)dimethylsilane (2 g, 11.7 mmol) was dissolved in dry pentane (10 mL) under an argon atmosphere in a Schlenk tube and cooled to −78 °C. To this solution was added a 2.5 M solution of n-butyllithium in hexane (5.14 mL, 12.8 mmol). The reaction mixture was stirred with cooling for 30 min. After this time period, the reaction mixture was warmed to room temperature, stirred overnight and then poured into ice-cold saturated aqueous NH4Cl. The aqueous layer was extracted twice with diethyl ether, the combined organic layers were washed twice with water, once with saturated aqueous NaCl and dried over anhydrous MgSO4. The volatiles were removed on the rotary evaporator and finally under high vacuum, giving the title compound (2.05 g, 91%) as a yellow liquid.

Butyl(3-iodopropyl)dimethylsilane 6

A mixture of butyl(3-chloropropyl)dimethylsilane 5 (1.00 g, 5.19 mmol) and sodium iodide (3.11 g, 20.75 mmol) in 30 mL of butan-2-one was heated to reflux for 2 days. The reaction mixture was then cooled to room temperature. The product was extracted into diethyl ether (2 × 30 mL), the extract was filtered through silica gel and dried over anhydrous MgSO4.

Preparation of Receptors - A General Procedure

In a 10 mL microwave vial, an azide carrier molecule was combined with intermediate 4 (1.2 equivalents per azide group) in DMF (6 mL) in the presence of copper(I) iodide (0.5 equiv. per azide group) and DIPEA (10 equiv. per azide group). The vial was sealed and irradiated with microwaves for 1.5 h at 120 °C. After cooling, the contents of the vial were poured into 1 M HCl and the mixture was extracted with ethyl acetate (3 × 30 mL). The combined organic layers were dried over magnesium sulfate, filtered and the filtrate was evaporated, giving the crude products.

Receptor 10

The receptor 10 was prepared following the general procedure.
The crude product was purified using preparative TLC (ethyl acetate/hexane 1/1), giving the receptor 10 as a yellow semi-solid in 70% yield.

Receptor 11

The receptor 11 was prepared from the reaction mixture of intermediates 9a and 9b following the general procedure. The crude product was purified using preparative TLC (ethyl acetate/hexane 1/3), giving the receptor 11 as the more polar fraction in the form of a yellow syrup in 35% yield.

Receptor 12

The receptor 12 was prepared from the reaction mixture of intermediates 9a and 9b following the general procedure. The crude product mixture was purified using preparative TLC (ethyl acetate/hexane 1/3), giving the receptor 12 as the less polar fraction in the form of an orange syrup in 42% yield.

Receptor 13

The receptor 13 was prepared following the general procedure, except for the extraction. Due to the low solubility of 13 in the majority of organic solvents, the solids were filtered off after pouring the reaction mixture into aq. HCl. The filtrate was discarded and the solids obtained by filtration were sonicated in DMSO for 15 min. The insoluble matter was filtered off and Chelex® 100 sodium (Sigma Aldrich) was added to the filtrate to remove the residual Cu. After stirring overnight, the Chelex® was removed by filtration and the solution was concentrated by lyophilization. The crude product was purified by nanofiltration (MWCO 1 kDa, MeOH), giving the receptor 13 as a brownish syrup in 84% yield. The receptor can be recovered from its complexes with anions by nanofiltration (MWCO 1 kDa, MeOH) in 60-70% yields.

Receptor 14

The receptor 14 was prepared following the general procedure, except for the extraction. Due to the low solubility of 14 in the majority of organic solvents, the solids were filtered off after pouring the reaction mixture into aq. HCl. The filtrate was discarded and the solids obtained by filtration were sonicated in DMSO for 15 min. The insoluble matter was filtered off and Chelex® 100 sodium (Sigma Aldrich) was added to the filtrate to remove the residual Cu. After stirring overnight, the Chelex® was removed by filtration and the solution was concentrated by lyophilization. The crude product was purified by nanofiltration (MWCO 3 kDa, MeOH), giving the receptor 14 as a brownish syrup in 75% yield. The receptor can be recovered from its complexes with anions by nanofiltration (MWCO 3 kDa, MeOH) in 60-70% yields.
2021-09-28T05:20:57.858Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "3edd5b3ea4eccf984b1df21539734094b68cca60", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/26/18/5670/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3edd5b3ea4eccf984b1df21539734094b68cca60", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
198190530
pes2o/s2orc
v3-fos-license
Limited clinical value of two consecutive post-transplant renal scintigraphy procedures

Objectives Duration of delayed graft function (DGF) and length of hospital stay (LOS) are outcomes of interest in an era that warrants increased efficacy of transplant care, whereas renal allografts increasingly originate from marginal donors. While earlier studies investigate the predictive capability of a single renal scintigraphy, this study focuses on the value of consecutively performed scintigraphies for both DGF duration and LOS. Methods From 2011 to 2014, renal transplant recipients referred for a Tc-99m MAG3 renal scintigraphy were included in a single-center retrospective study. Primary endpoints were DGF duration and LOS. Both the first (≤ 3 days) and second scintigraphies (3-7 days after transplantation) were analyzed using a 4-grade qualitative scale and quantitative indices (TFS, cTER, MUC10, average upslope). Results We evaluated 200 first and 108 (54%) consecutively performed scintigraphies. The Kaplan-Meier curves for DGF duration and qualitative grading of the first and second scintigraphy showed significant differences between the grades (p < 0.01). The Kaplan-Meier curve for the delta grades between these procedures (lower, equal, or higher grade) did not show significant differences (p = 0.18). Multivariate analysis showed a significant association between the qualitative grades, from the first and second scintigraphy, and DGF duration, HR 1.8 (1.4-2.2, p < 0.01) and 2.8 (1.8-4.3, p < 0.01), respectively. Conclusions Qualitative grades of single renal scintigraphies, performed within 7 days after transplantation, can be used to make a reliable image-guided decision on the need for dialysis and to predict LOS. A consecutive renal scintigraphy, however, did not show an additional value in the assessment of DGF. Key Points • Post-transplant renal scintigraphy procedures provide information to predict delayed graft function duration and length of hospital stay. • Performing two consecutive renal scintigraphy procedures within 1 week after transplantation does not strengthen the prediction of delayed graft function duration and length of hospital stay. • Single renal scintigraphy procedures can be used to provide clinicians and patients with a reliable indication of the need for dialysis after transplantation and the expected duration of hospitalization. Electronic supplementary material The online version of this article (10.1007/s00330-019-06334-1) contains supplementary material, which is available to authorized users.

Introduction

The duration of delayed graft function (DGF) and the length of hospital stay (LOS) are outcomes of interest in an era that warrants increased efficacy of transplant care, whereas renal allografts increasingly originate from marginal donors, namely allografts from extended-criteria and donation after circulatory death (DCD) donors. DGF describes the failure of the renal transplant to function immediately after transplantation [1]. DGF is associated with renal allograft failure in the first year after donation after brain death (DBD) transplantation; however, allografts with DGF still provide a survival benefit compared to maintenance dialysis [2]. Moreover, DGF is associated with a higher incidence of biopsy-proven acute rejection and increased LOS [3]. The current trend of using marginal donors is associated with more DGF, longer hospital stay, and subsequently higher transplant-related costs [4-6].
Predicting the duration of DGF and LOS provides clinicians with the opportunity to optimize the timing of renal biopsies and post-transplant dialysis. For this purpose, research has focused on urinary and blood biomarkers for DGF, such as urinary tissue inhibitor of metalloproteinases-2 (TIMP-2), and on quantitative/qualitative renal scintigraphy indices [7,8]. Renal scintigraphy is an imaging biomarker of renal function, reflecting perfusion, reabsorption, and excretion. It may help predict DGF and LOS [9,10]. Results of renal scintigraphy can be interpreted qualitatively, differentiating six (Heaf and Iversen grading scale) or four curve types, and quantitatively, using several time-activity indices [11-16]. Several studies showed promising results for the use of renal scintigraphy to predict the course of DGF; however, these studies did not adjust for clinical variables associated with DGF [13,17-20]. Moreover, previous studies focused primarily on the qualitative and quantitative interpretation of renal scintigraphy parameters from single procedures, whereas clinicians may focus more on consecutively performed imaging. In this center, Technetium-99m mercaptoacetyltriglycine (Tc-99m MAG3) renal scintigraphies were performed consecutively in the first week after transplantation in all patients with ongoing DGF, according to a standard post-transplant protocol. The present study was initiated to determine whether two consecutive renal scintigraphies improved the prediction of DGF and LOS.

Study design and participants

We studied all patients receiving a renal transplant at the Leiden University Medical Center between 2011 and 2014 who underwent a Tc-99m MAG3 renal scintigraphy within 3 days after transplantation. These patients are all part of a larger dual-center retrospective cohort, which resulted in an earlier publication focusing on the predictive value of a single renal scintigraphy for a DGF duration > 7 days after transplantation [21]. Patients were not included in case of receiving a dual renal transplant or both renal and pancreas transplants, or when under 18 years of age at the moment of transplantation. All clinical data for this study were retrieved from our national transplant research database, the Dutch Organ Transplant Registry (NOTR). Missing data and information on possible peri- and post-operative complications were retrieved by screening patients' charts retrospectively. Patient data were processed and electronically stored according to the Declaration of Helsinki Ethical principles for medical research involving human subjects, and approval for this study was given by the Leiden University Medical Center ethics committee. The clinical and research activities being reported are consistent with the Principles of the Declaration of Istanbul as outlined in the "Declaration of Istanbul on Organ Trafficking and Transplant Tourism."

Outcome assessment

We defined DGF as the need for dialysis after transplantation (dialysis-based DGF) and as the failure of serum creatinine to decrease by ≥ 10%/day during 3 consecutive days (functional DGF), which is in accordance with the majority of studies on DGF [22].
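One literal reading of the functional definition can be made concrete as a simple check on consecutive daily serum creatinine values. The sketch below is purely illustrative; the function name and example values are invented, and edge cases such as missing measurements are ignored.

```python
def functional_dgf(creatinine, window=3, threshold=0.10):
    # creatinine: consecutive daily serum creatinine values after transplantation.
    # Functional DGF: serum creatinine fails to decrease by >= 10%/day
    # during 3 consecutive days.
    adequate_drop = [(a - b) / a >= threshold
                     for a, b in zip(creatinine, creatinine[1:])]
    # DGF if some run of `window` consecutive day-to-day changes has no adequate drop
    return any(not any(adequate_drop[i:i + window])
               for i in range(len(adequate_drop) - window + 1))

print(functional_dgf([800, 780, 770, 765]))  # True: no day reaches a 10% decrease
print(functional_dgf([800, 700, 610, 530]))  # False: creatinine falls >= 10% each day
```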
Based on these definitions, we described early transplant function using four groups, namely immediate graft function (IGF), a serum creatinine decrease of ≥ 10%/day during 3 consecutive days or no need for dialysis; slow graft function (SGF), DGF between days 3 and 6 after transplantation; delayed graft function (DGF), DGF for more than 7 days after transplantation; and primary non-function (PNF), immediate graft failure with the need for dialysis. We defined LOS as the number of days between transplantation and initial discharge.

Renal scintigraphy

All included patients underwent renal scintigraphy for the analysis of DGF, discerning possible acute tubular necrosis from vascular or urological complications. In our center, a second renal scintigraphy was performed in case of ongoing DGF or suspicion of vascular/urological complications. Renal scintigraphies were performed using a bolus intravenous injection of 100 MBq Tc-99m MAG3. Two-phase digital dynamic images were obtained and processed using Syngo.via (Siemens Healthineers): (i) 1-s frames for 2 min; (ii) 20-s frames for 28 min. To calculate the renal scintigraphy time-activity curves, renal transplant regions-of-interest (ROIs) were drawn manually surrounding the renal transplant, and the background ROIs were drawn crescent-shaped, opposite the renal vessels. The analysis of the renal scintigraphy data was performed by a single researcher, blinded to all clinical variables. Qualitative analysis of the time-activity curves was performed using a four-curve-type differentiation (Fig. 1 and Supplement Fig. 1A and B): a normal renal function with fast uptake and excretion (grade 1), a normal uptake with a flat excretion curve (grade 2), a rising curve without an excretion phase (grade 3), and a reduced absolute uptake without an excretion phase (grade 4). Furthermore, renal scintigraphy results were stratified into four groups, namely peri-transplant fluid collections, vascular complications, urological complications, and no complications. Quantitative analysis was performed using four indices reflecting renal perfusion, reabsorption, and excretion. The tubular function slope (TFS) is a linear fit of the Tc-99m MAG3 curve between 50 and 110 s, reflecting the tracer uptake by renal tubular cells (counts/s) [13,24]. MUC10 reflects the uptake within the first 10 min, as a fraction of the injected dose (counts/s/MBq) [19]. The corrected tubular extraction rate (cTER) is the tracer uptake between the start of the procedure and 2 min, corrected for the body surface (mL/min/1.73 m2) [17]. The average upslope reflects the slope of the curve during the upslope period, calculated as (counts at 3 min − counts at 20 s)/160 s, in counts/s [21].
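These quantitative indices are simple functionals of the background-corrected time-activity curve. The sketch below illustrates how TFS, the average upslope, and MUC10 can be computed from a sampled curve; the curve and the injected activity are synthetic, the MUC10 expression is one plausible reading consistent with its stated units, and the body-surface normalization required for cTER is omitted.

```python
import numpy as np

# hypothetical background-corrected time-activity curve: 1-s samples over 10 min
t = np.arange(0.0, 601.0)                      # time (s)
counts = 900.0 * (1.0 - np.exp(-t / 150.0))    # synthetic uptake curve (counts/s)
injected_mbq = 100.0                           # injected activity (MBq)

def tfs(t, c):
    # tubular function slope: slope of a linear fit of the curve between 50 and 110 s
    m = (t >= 50) & (t <= 110)
    return np.polyfit(t[m], c[m], 1)[0]

def average_upslope(t, c):
    # (counts at 3 min - counts at 20 s) / 160 s
    return (np.interp(180.0, t, c) - np.interp(20.0, t, c)) / 160.0

def muc10(t, c, mbq):
    # uptake at 10 min normalized to the injected dose (counts/s/MBq)
    return np.interp(600.0, t, c) / mbq

print(f"TFS = {tfs(t, counts):.2f}, upslope = {average_upslope(t, counts):.2f}, "
      f"MUC10 = {muc10(t, counts, injected_mbq):.2f}")
```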
Statistical analysis

Baseline descriptive statistics and clinical characteristics are presented as mean ± SD or median (range) for continuous variables and as counts with percentages for categorical variables. The Mann-Whitney test and one-way ANOVA were used to describe the variance of continuous variables between groups. Two-sided p values of less than 0.05 were considered to indicate statistical significance. Correlations were assessed by means of Pearson's or Spearman's analysis. Univariate and multivariate Cox proportional hazards analysis and Kaplan-Meier curves with log-rank tests were used to examine the associations. The hazard ratios (HRs) and their corresponding 95% confidence intervals (CIs) are reported. The added value of the renal scintigraphy indices was assessed by examining the change in −2 log likelihood. We used the Statistical Package for the Social Sciences (IBM SPSS Statistics, version 22) for all statistical analyses and GraphPad Prism, version 5.04 (GraphPad Software), for graph presentation.

Results

For 161 (81%) patients, the indication for the first renal scintigraphy was suspected acute tubular necrosis as the cause of DGF. For 39 (19%) patients, the indication was a suspicion of fluid collections or of vascular or urological complications. Only 3 out of these 39 patients experienced a vascular or urological complication needing surgical intervention within 2 weeks after transplantation. The study population was stratified into four groups based on early transplant function, as shown in Table 2. Of the 131 patients experiencing either DGF or PNF, 108 patients underwent a second renal scintigraphy within 7 days after transplantation (Fig. 2).

Qualitative grades and DGF duration

The qualitative grades of the first renal scintigraphy differed significantly (p < 0.01) between the groups of early graft dysfunction. DGF was observed in 75 (81%) out of 93 patients with grade 3 and in 35 (85%) out of 41 patients with grade 4, while IGF was noticed in 16 (88%) out of 18 patients with grade 1 and in 19 (40%) out of 48 patients with grade 2 (Supplement Table 1). The Kaplan-Meier curves (Fig. 3) for the duration of DGF and the qualitative grading of the first and second renal scintigraphy showed a significant difference in DGF duration between grade 2, grade 3, and grade 4 (p < 0.01) and between grade 3 and grade 4 (p < 0.01), respectively. The Kaplan-Meier curve for the delta qualitative grades between the first and second renal scintigraphies did not show significant differences between grades (p = 0.18). Using the univariate Cox proportional hazards analysis, the qualitative grades of both the first and second renal scintigraphies were significantly associated with the DGF duration. The delta qualitative grades between the first and second renal scintigraphies were not significantly associated with the duration of DGF (Table 4). Based on the qualitative grades, the anticipated moment of DGF ending was calculated in a subset of patients without IGF (Table 3): grades 1 and 2 of the first renal scintigraphy correspond with a median (IQR) of 5.0 (2.0-7.0) days of DGF; grade 3 with 7.0 (6.3-10.0) days of DGF; and grade 4 with 11.0 (7.5-19.5) days of DGF. Outcomes corresponding with the qualitative grading of the second renal scintigraphy are presented in Table 3.

Quantitative indices and DGF duration

The quantitative indices TFS, cTER, and average upslope of the first renal scintigraphy were significantly different between IGF and SGF, whereas MUC10 did not show a significant difference. All indices were significantly different between SGF and DGF, whereas no significant difference was observed between DGF and PNF (Supplement Table 1 and Supplement Fig. 2). For the first renal scintigraphy, there was a significant association between the quantitative indices and DGF duration: TFS, r = −0.44, p < 0.01; MUC10, −0.46, p < 0.01; cTER, −0.44, p < 0.01; average upslope, −0.45, p < 0.01. The analysis of the second renal scintigraphy showed a weaker, but still significant, association between the quantitative indices and DGF duration. Using the univariate Cox proportional hazards analysis, the quantitative indices of both the first and second renal scintigraphies were significantly associated with the duration of DGF.
The deltas of the quantitative indices TFS and cTER between the first and second renal scintigraphies were significantly associated with the duration of DGF, HR 0.4 (0.4-0.8, p < 0.01) and HR 1.0 (1.0-1.0, p < 0.01), respectively (Table 4).

Fig. 1 Qualitative renal scintigraphy grading: grade 1, a normal renal function with fast uptake and excretion; grade 2, a normal uptake with a flat excretion curve; grade 3, a rising curve without an excretion phase; grade 4, a reduced absolute uptake without an excretion phase [21]. Abbreviations: Tx, kidney transplantation; DGF, delayed graft function; DBD, donation after brain death; DCD, donation after circulatory death.

Qualitative grades and length of hospital stay

Using the univariate Cox proportional hazards analysis, the qualitative grades of the first renal scintigraphy were significantly associated with LOS (Supplement Table 3). Based on the qualitative grades, the anticipated LOS was calculated (Table 3). Outcomes corresponding with the qualitative grading of the second renal scintigraphy and the delta between the first and second renal scintigraphies are presented in Table 3. In a multivariate analysis, including all other quantitative indices, the qualitative grading scale, and the clinical covariates, the association between the qualitative grading of the first renal scintigraphy and the duration of DGF was significant for grade 3, HR 2.3 (1.3-4.2, p < 0.01), and grade 4, HR 3.4 (1.7-7.1, p < 0.01). The association between the qualitative grading of the second renal scintigraphy and the duration of DGF was significant for grade 4, HR 4.1 (1.9-8.8, p < 0.01) (Table 4).

Fig. 2 Flowchart of the included patients. In 11 cases, a second RS was not performed for unknown reasons.

In a multivariate analysis, including all other quantitative indices, the qualitative grading scale, and the clinical covariates, the association between the qualitative grading of the first renal scintigraphy and LOS was HR 1.3 (1.0-1.6, p = 0.04). Multivariate analysis of the quantitative indices and the qualitative grading scale of the second renal scintigraphy with LOS did not result in significant associations (Supplement Table 2).

Predictive performance of qualitative grades for the duration of DGF

When assessing the predictive performance of the clinical variables, the −2 log likelihood improved significantly when including the qualitative grades from the first renal scintigraphy (1623.6 to 1583.7, p < 0.01). The predictive performance of the model with clinical variables did not show a significant improvement after including the qualitative grades of the second renal scintigraphy (766.0 to 737.9, p < 0.01).

Discussion

Our analysis of Tc-99m MAG3 renal scintigraphy indicates that the qualitative grades of two separately analyzed procedures, at ≤ 3 and ≤ 7 days after transplantation respectively, are significantly associated with the DGF duration and the LOS. However, the delta of the qualitative grades and the changes of the quantitative indices between these sequentially performed renal scintigraphies are not associated with the duration of DGF and the LOS. These findings underline the strength of the qualitative analysis of a single renal scintigraphy for the prediction of DGF duration and LOS. Conversely, there is no additional value in performing repetitive renal scintigraphy procedures to assess DGF and LOS.
Our study confirms the findings of previous studies, which indicated the applicability of the quantitative indices TFS, MUC10, cTER and average upslope, and of the qualitative grading with four or six grades, for the evaluation of DGF [12,13,17-20]. A previous study, focusing on TFS at 48 h after transplantation, showed the capability of this index to separate patients with DGF from patients with IGF [13]. In our study, TFS differed significantly between types of early transplant function and was associated with a longer duration of DGF, HR 0.5 (0.4-0.6, p < 0.01) and HR 0.6 (0.4-0.8, p < 0.01) for the first and second renal scintigraphy, respectively. For MUC10 from a renal scintigraphy performed within 48 h after transplantation, a previous study showed significant differences between DGF and non-DGF patients, which is in line with the results of our analysis, showing a significant difference in MUC10 values between the SGF and DGF groups [19]. For cTER from a renal scintigraphy performed ≤ 4 days after transplantation, a previous study showed a significant correlation with the period of dialysis dependence (r = −0.68, p < 0.01), which is slightly stronger than the correlation found in this study (r = −0.44, p < 0.01) [17]. In a previous study, a four-grade index was introduced for renal scintigraphy at ≤ 3 days after transplantation; using this four-grade index, an independent association between a longer duration of DGF and the qualitative grades was shown, HR 1.8 (1.4-2.2, p < 0.01), which is consistent with the results of studies using both four- and six-grade indices [12,21]. Although previous studies have described the applicability of a first renal scintigraphy at ≤ 48 or 62 h after transplantation, this study is the first comprehensive analysis of a second renal scintigraphy at ≤ 7 days after transplantation. Quantitative indices of the second renal scintigraphy were associated with the duration of DGF; however, not when adjusted for clinical covariates. Multivariate analysis of the first and second renal scintigraphy showed an independent significant association between the qualitative grades and the duration of DGF, HR 1.8 (1.4-2.2, p < 0.01) and HR 2.8 (1.8-4.3, p < 0.01), respectively. The delta qualitative grades between the procedures were not significantly associated with the duration of DGF in the multivariate analysis. The presented results should be evaluated in light of non-imaging biomarkers for DGF, such as the urinary biomarker TIMP-2 and neutrophil gelatinase-associated lipocalin (NGAL). The predictive value of TIMP-2 was assessed in a population of DCD transplant recipients (n = 74), showing an area under the curve (AUC) of 0.89 (95% CI 0.78-0.99) for > 7 days of functional DGF [7]. For urinary NGAL, the AUC for > 7 days of functional and dialysis-based DGF was 0.75 (95% CI 0.65-0.84) in a population of both DBD and DCD transplant recipients (n = 176) [8]. Renal scintigraphy, performed within 3 days post-transplantation to predict ≥ 7 days of functional and dialysis-based DGF, was shown to have 87% sensitivity and 65% specificity when analyzed qualitatively, and an AUC of 0.82 (95% CI 0.78-0.86) when analyzed quantitatively [21]. Further prospective studies are needed to establish the clinical value of qualitative and quantitative renal scintigraphy analysis in light of emerging non-imaging biomarkers for DGF.
Previous studies focusing on the use of renal scintigraphy after transplantation did not use LOS as one of the endpoints. However, LOS is important since, with the increased use of renal allografts from extended-criteria and DCD donors, a prolonged hospital stay and subsequent higher transplant-related costs are reported [4]. In addition, an increased focus on patient-related outcome measures (PROMs) shows the significance of informing patients on the clinical path during and after hospitalization, urging for a reliable indication of the moment of DGF ending and the LOS. The results of our study are in line with the literature, with 47% of the transplants coming from DCD donors, a median length of stay of 15 (11-21) days, and a DGF duration of > 7 days in 66% of patients, and they provide insight into the expected moment of hospital discharge. Due to the retrospective design, a clinical selection bias resulted in a cohort of patients with a high incidence of DGF and a minimal number of patients with IGF; this selection further increased when analyzing patients with a second renal scintigraphy. On the other hand, the relatively large number of patients with a sequential renal scintigraphy ≤ 7 days after transplantation contributes to a reliable analysis. Moreover, we performed an extensive multivariate analysis to adjust for possible confounders, including all qualitative and quantitative scintigraphy indices in a single model. As this is a single-center study with a short inclusion time frame, uniformity in transplant care and similarity in the renal scintigraphy procedures can be expected. Furthermore, analyzing the results both quantitatively and qualitatively decreases the impact of inter-observer variability, while our blinded renal scintigraphy analysis decreases the risk of bias.

Conclusion

In conclusion, a reliable indication of the duration of DGF and the LOS can be provided by the qualitative analysis of a single renal scintigraphy, whereas the qualitative and quantitative change between sequentially performed renal scintigraphies does not strengthen the prediction of DGF duration and LOS. Qualitative grades of a single renal scintigraphy can be used to provide clinicians and patients with a reliable indication of the need for dialysis after transplantation and the expected duration of hospitalization, while no additional value of performing a consecutive renal scintigraphy for the assessment of DGF was found.

Funding The authors state that this work has not received any funding.

Compliance with ethical standards

Guarantor The scientific guarantor of this publication is Stan Benjamens.

Conflict of interest The authors of this manuscript declare no relationships with any companies whose products or services may be related to the subject matter of the article.

Statistics and biometry One of the authors has significant statistical expertise.

Informed consent Written informed consent was waived by the Institutional Review Board.

Ethical approval Institutional Review Board approval was obtained.
2019-07-25T13:03:57.075Z
2019-07-23T00:00:00.000
{ "year": 2019, "sha1": "4d4be522297c94533c70fe93c2e7f7b37348d20a", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00330-019-06334-1.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "965434a3bf630df316ede61f106db81b87d084b9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234561319
pes2o/s2orc
v3-fos-license
Estimation of Crystallite Size, Lattice Strain and Micro Residual Stresses by the FWHM Method and the Impact of Feed Rates on Residual Stresses

The prediction and control of residual stresses in machined components is necessary, as tribological properties such as wear and tear are influenced by them. An analytical assessment of the residual stress profile of mechanically micromachined specimens with the XRD (X-ray diffraction) technique is proposed. The lattice strain and crystallite sizes for different specimens were also assessed. Residual stresses and feed rates were found to be significantly correlated for low carbon steel specimens during micro milling.

Introduction

Material removal through WEDM (wire electrical discharge machining) involves erosion by electrical discharges, i.e., sparks occurring between the metallic sample and the electrode wire. The wire is separated from the sample by a dielectric fluid and is continuously fed to the machining zone. WEDM is a very important machining technique, capable of machining almost any electrically conductive material regardless of hardness, from relatively common materials such as tool steel, aluminium, copper and graphite. During machining with WEDM there is no significant rise in physical pressure on the work piece compared with machining by grinding wheels or milling cutters. It also leaves no residual burrs on the work piece, along with complete or partial elimination of the need for subsequent finishing operations. In wire electrical discharge machining, the material removal rate, surface integrity, etc. may be affected by various process parameters, such as the wire electrode (type, diameter and feed of the wire), the workpiece material (structure, conductivity, thickness), the dielectric liquid (type, impurities, flow rate, temperature), discharge current, gap voltage, pulse duration and frequency, polarity, and the feed control mechanism. Submerged WEDM enhances thermal stability along with effective flushing; it generates a plasma path between the two terminals that may produce temperatures as high as 20,000 °C, due to which melting of the material takes place. When the power supply is switched off, the plasma path is discontinued and the resulting abrupt decline in temperature allows the circulating dielectric fluid to breach the plasma path and flush the molten particles from the pole surfaces in the form of microscopic debris. It also enables complex shapes and intricate geometries to be machined with extraordinarily high accuracy. Kunieda and Furudate tested the possibility of conducting dry WEDM to improve the precision of finishing operations. The experiment was conducted in the absence of a dielectric medium, in a gas atmosphere instead. Deionised water is also used in WEDM as the dielectric fluid within the sparking zone, as an alternative to hydrocarbon oil; it is not appropriate for conventional EDM because it causes quick electrode wear, even though its low viscosity and rapid cooling rate make it ideal for WEDM. A significant amount of literature addresses the selection of optimum machining parameters for WEDM, as inaccurate selection may lead to short-circuiting, wire breakage, or surface damage of the work piece. With a given axial depth of cut and spindle speed, the experiment has been designed to study residual stress generation during mechanical micromachining at different feed rates.
Micromechanical machining is a tool-based fabrication technique for creating miniature devices and components with features that range from tens of micrometers to a few millimeters in size. The surface finish and integrity of manufactured components is also a very important aspect, which in turn is affected by residual stresses. Hence it is important to estimate, predict and control residual stresses.

Micro-Milling Setup

Half-immersion end-milling operations were performed using the micromachining tool MIKROTOOLS DT-110 on the long edges of low carbon steel work pieces of size 50 mm x 30 mm x 10 mm, with a 500 μm diameter end-mill at feed rates of 10 mm/min and 15 mm/min, respectively. The machining parameters used to conduct the experiment were: axial depth of cut, 20 and 30; spindle speed (rpm), 1000 and 1500.

Preparation of Secondary Samples

Following the preparation of the micro-milled low carbon steel specimens, each specimen was further cut into small pieces for XRD testing, giving the secondary samples.

Preparation of Annealed Samples

One sample from each of the above categories of secondary specimens was annealed at 550 °C to 650 °C for one hour with air cooling. The remaining sample from each category was treated as cold worked.

XRD (X-Ray Diffraction) Testing

After preparing the specimens, XRD testing was performed separately on the annealed as well as the cold-worked samples from each category, in order to understand the effect of feed rates on residual stresses effectively. Copper radiation with a wavelength of 1.541836 Angstrom was used during XRD testing.

Results and Discussion

Measurement profiles with all the relevant information about peak locations were obtained for each case after performing XRD testing. FWHM (full width at half maximum) and peak intensity were obtained in tabulated form. This tabulated data was further used in the analysis for the determination of lattice strain, crystallite size and residual stresses. XRD was first performed on sample 1; the same procedure was then followed and FWHM data recorded for sample 2. After performing XRD analysis, recording the FWHM data for the second specimen and applying the Scherrer formula, the following results were obtained.

Figure 6. Plot between Btcosθ × 10−3 and sinθ.

Based on the above experimental work, the obtained results can be tabulated. The micro residual stresses increase with increasing feed rate, together with an increase in lattice strain.

Conclusion

The FWHM method was successfully used to estimate the micro residual stresses, lattice strain and crystallite size by broadening analysis of the peaks. The crystallite size was determined using the Scherrer formula. The lattice strain was calculated from the slope of the graph obtained with the help of the FWHM analysis. It was observed that the slope of the graph always remains positive; hence only residual stresses of a tensile nature are produced while performing the micro-milling operation, which are not desirable from the perspective of strength, component life and wear resistance. Increasing feed rates also had a substantial impact on the induced residual stresses.
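The broadening analysis described above can be reproduced with a few lines of code. The sketch below applies the Scherrer formula and a Williamson-Hall-type linear fit of Bcosθ versus sinθ, from whose slope the lattice strain follows; the peak positions and FWHM values are invented for illustration, the shape factor K = 0.9 is an assumption, and instrumental broadening is taken as already subtracted.

```python
import numpy as np

wavelength = 1.541836e-10   # Cu radiation (m), as used in the XRD testing
K = 0.9                     # Scherrer shape factor (assumed)

# hypothetical peaks: 2-theta positions (deg) and corrected FWHM values (deg)
two_theta = np.array([44.7, 65.0, 82.3])
fwhm_deg = np.array([0.35, 0.48, 0.62])

theta = np.deg2rad(two_theta / 2.0)
B = np.deg2rad(fwhm_deg)    # broadening in radians

# Scherrer crystallite size per peak: D = K * lambda / (B * cos(theta))
D = K * wavelength / (B * np.cos(theta))
print("Scherrer crystallite size (nm):", D * 1e9)

# Williamson-Hall: B*cos(theta) = K*lambda/D + 4*eps*sin(theta)
slope, intercept = np.polyfit(np.sin(theta), B * np.cos(theta), 1)
print("lattice strain:", slope / 4.0)
print("Williamson-Hall crystallite size (nm):", K * wavelength / intercept * 1e9)
```

A positive slope of the Bcosθ versus sinθ plot corresponds to a positive (tensile-type) lattice strain, consistent with the observation reported in the conclusion.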
2020-12-24T09:12:34.314Z
2020-12-16T00:00:00.000
{ "year": 2020, "sha1": "282cb40a4425980b667563a92146bd45622d2ad6", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/988/1/012064", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "7db6d9af9ca28506dcfa20de85f4afde5310c43d", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
265156164
pes2o/s2orc
v3-fos-license
The combination of manogepix and itraconazole is synergistic and inhibits the growth of Madurella mycetomatis in vitro but not in vivo Abstract Mycetoma is a neglected tropical disease commonly caused by the fungus Madurella mycetomatis. Standard treatment consists of extensive treatment with itraconazole in combination with surgical excision of the infected tissue, but has a low success rate. To improve treatment outcomes, novel treatment strategies are needed. Here, we determined the potential of manogepix, a novel antifungal agent that targets the GPI-anchor biosynthesis pathway by inhibition of the GWT1 enzyme. Manogepix was evaluated by determining the minimal inhibitory concentrations (MICs) according to the CLSI-based in vitro susceptibility assay for 22 M. mycetomatis strains and by in silico protein comparison of the target protein. The synergy between manogepix and itraconazole was determined using a checkerboard assay. The efficacy of clinically relevant dosages was assessed in an in vivo grain model in Galleria mellonella larvae. MICs for manogepix ranged from <0.008 to >8 mg/l, and 16/22 M. mycetomatis strains had an MIC ≥4 mg/l. Differences in MICs were not related to differences observed in the GWT1 protein sequence. For 70% of the tested isolates, synergism was found between manogepix and itraconazole in vitro. In vivo, enhanced survival was not observed upon administration of 8.6 mg/kg manogepix, nor in combination treatment with 5.7 mg/kg itraconazole. MICs of manogepix were high, but the in vitro antifungal activity of itraconazole was enhanced in combination therapy. However, no efficacy of manogepix was found in an in vivo grain model using clinically relevant dosages. Therefore, the therapeutic potential of manogepix in mycetoma caused by M. mycetomatis seems limited. Introduction Recognized as a neglected tropical disease by the World Health Organization in 2016, mycetoma remains a major health concern in regions of Africa, Latin America, and Asia.1,2 Mycetoma can be of either bacterial (actinomycetoma) or fungal (eumycetoma) origin. Globally, the fungus Madurella mycetomatis is the most common causative agent and is reported in over 70% of all eumycetoma cases.3 Eumycetoma is considered an implantation mycosis, in which the causative agent is implanted into the subcutaneous tissue via a minor trauma. The disease starts as a localized infection in the subcutaneous tissue, eventually forming debilitating masses, draining sinuses, and grains harboring the infectious agent. Mycetoma generally affects the feet, legs, and hands.4 Treatment of fungal eumycetoma remains challenging due to the resilient nature of fungal infections and the limited therapeutic options associated with the close resemblance of fungal and human cells.5,6,9–11 At the moment, itraconazole is considered the drug of choice for mycetoma. Generally, antifungal treatment is started 6 months prior to surgical intervention. Then the lesion is surgically removed, and at least another 6 months of post-operative antifungal treatment are given. However, the post-operative recurrence rate varies from 25% to 50%.12 In addition to high recurrence rates, loss of follow-up frequently occurs due to dissatisfaction with the therapeutic outcome, side effects, and the severe financial burden of the therapy on an average household.6 Therefore, to improve the M. mycetomatis therapy outcome, other promising antifungal agents should be explored.
Manogepix, the active moiety of the prodrug fosmanogepix, is a new first-in-class antifungal agent currently in phase II clinical trials for the treatment of invasive fungal infections.13,14 Manogepix targets the glycosylphosphatidylinositol (GPI) anchor biosynthesis pathway by inhibition of the GWT1 protein, a conserved enzyme that catalyzes inositol acylation.15 In turn, the maturation of GPI-anchored proteins is prevented, compromising cell wall integrity, germ tube formation, and biofilm formation.15 Studies have shown that manogepix is active against different clinically relevant moulds, including azole- and echinocandin-resistant Aspergillus spp. and Candida spp.14,16,17 Given the potential of manogepix, the aim of our study was threefold. First, we wanted to establish whether the growth of M. mycetomatis could be inhibited. Second, we tried to determine if manogepix could enhance the activity of itraconazole. Third, we wanted to determine if manogepix showed in vivo efficacy in a Galleria mellonella mycetoma model. Drugs and fungal isolates Manogepix (APX001A) was provided by Pfizer (formerly Amplyx Pharmaceuticals, Inc.), and itraconazole was obtained from Janssen Pharmaceutical Products, Belgium. A total of 22 clinical isolates of M. mycetomatis were included in this study. The isolates originated from Sudan, Mali, Peru, Somalia, Ivory Coast, Algeria, and France. The origin of three isolates is unknown (Table 1). All isolates were previously identified based on morphology, PCR, and sequencing of the internal transcribed spacer (ITS).18,19 Furthermore, genetic variability was determined by MmySTR analysis, as described by Nyuykonge and associates.20 GWT1 protein comparison The GWT1 encoding sequence was extracted from GenBank from the annotated M. mycetomatis MM55 genome (BioProject: PRJNA267680, Accession: LCTW00000000.2).21 The reference sequence (Accession: LCTW02000523.1) was used to retrieve the GWT1 sequences of four M. mycetomatis isolates (SO1, Peru72012, P1, and I11) from whole genome sequence data (data not published). The coding sequences were used to generate protein sequences using the built-in translation function in Molecular Evolutionary Genetics Analysis (MEGA, version X), and a multiple sequence alignment (MSA) was constructed of the five M. mycetomatis GWT1 sequences using Clustal Omega V1.2.4.22,23
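MEGA and Clustal Omega are the tools actually used above; purely as a hypothetical illustration of the translate-and-compare step, the following Python sketch uses Biopython with placeholder sequences (the real GWT1 coding sequences derived from accession LCTW02000523.1 are not reproduced here).

```python
from Bio.Seq import Seq

# Hypothetical GWT1 coding fragments keyed by isolate (placeholders only).
coding = {
    "MM55":      "ATGGCTGTTCTTGAA",
    "SO1":       "ATGGCTGTTCTTGAA",
    "Peru72012": "ATGGCTGTCCTTGAA",  # synonymous change in the third codon
}

# Translate each coding sequence to protein.
proteins = {name: str(Seq(seq).translate()) for name, seq in coding.items()}

# Flag amino-acid differences relative to the MM55 reference.
ref = proteins["MM55"]
for name, prot in proteins.items():
    diffs = [(i + 1, a, b) for i, (a, b) in enumerate(zip(ref, prot)) if a != b]
    print(name, "matches reference" if not diffs else f"differs at {diffs}")
```

In the study itself, no amino-acid differences were found between the isolates, mirroring the "matches reference" case above.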
In vitro susceptibility testing The antifungal susceptibility of M. mycetomatis against manogepix was determined according to a modified CLSI method using resazurin as a viability dye, as previously described.24,25 Within this modification, we generated hyphal fragments as inoculum instead of using conidia as stated in the guideline. In short, all isolates were cultured on Sabouraud Dextrose Agar (SDA) at 37 °C for 2-3 weeks prior to susceptibility testing. Mycelium was harvested and sonicated at 10 microns for 10 s (Soniprep 150 Plus, MSE), transferred into RPMI 1640 medium containing 0.35 g/l L-glutamine and 1.98 mM 4-morpholinepropanesulfonic acid (MOPS), and further incubated for 7 days at 37 °C. Mycelium was then harvested by centrifugation, washed, and sonicated using the same settings described above. A standardized hyphal suspension of 70% ± 2% was prepared in RPMI 1640 medium containing 0.35 g/l L-glutamine and 1.98 mM MOPS. A twofold dilution series for both manogepix and itraconazole was prepared in sterile dimethyl sulfoxide (DMSO) (Merck, Germany). The drugs were transferred to round-bottom plates (Corning Fisher, the Netherlands) and further diluted in standardized hyphal suspension and resazurin. Final concentrations ranged from 0.008 mg/l to 16 mg/l for manogepix and 0.008 mg/l to 4 mg/l for itraconazole, with a final concentration of 37.5 mg/l for resazurin. The cultures were incubated for 7 days at 37 °C under 5% CO2 conditions. After incubation, the absorbance of the supernatant was determined spectrophotometrically using the EPOCH 2 microplate reader (BioTek, USA), and the MIC was calculated according to the formula below, in which the reduction of resazurin to resorufin is determined by measuring the decrease in absorbance of resazurin. The MIC was determined as the lowest concentration at which the reduction in metabolic activity was ≥80%. Percentage metabolic activity = ([Absorbance of negative control] − [Absorbance of test]) / ([Absorbance of negative control] − [Absorbance of growth control]) × 100. The minimal effective concentration (MEC) could not be determined because a hyphal inoculum was used as the starting inoculum. Instead, the recently reported surrogate marker of ≥50% was used.26 Analysis of synergism between manogepix and itraconazole Synergism between manogepix and itraconazole was evaluated by checkerboard microdilution as previously described, using 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium (MTS) as a viability dye.27 Manogepix and itraconazole concentrations ranged from 0.0025 to 8 mg/l and 0.002 to 0.5 mg/l, respectively. The MIC was determined by calculating the percentage of metabolic activity using the formula below: Percentage metabolic activity = ([Absorbance of test] − [Absorbance of negative control]) / ([Absorbance of growth control] − [Absorbance of negative control]) × 100. The MIC was considered the first value in each row and column in which ≥80% reduction of fungal metabolic activity was found. To determine synergism between manogepix and itraconazole, the fractional inhibitory concentration (FIC) was determined. The FIC was calculated according to the standard formula below, determined to three decimal places: FIC = (MIC of manogepix in combination / MIC of manogepix alone) + (MIC of itraconazole in combination / MIC of itraconazole alone). The FIC was determined for all wells of the microtitration plates that correspond to an MIC in combination, reporting both the minimum and maximum FIC values.28 A minimum FIC value <0.5 indicates synergism, a maximum FIC >4 indicates antagonism, and an FIC between 0.5 and 4 is interpreted as indifferent. Three biological replicates were performed for each respective isolate, and the median FICmin and FICmax were used to determine synergism or antagonism.
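The MIC and FIC calculations above reduce to simple arithmetic; the following Python sketch shows them with hypothetical absorbance and MIC values (all numbers are placeholders, not study data).

```python
def pct_activity(test, neg, growth):
    """Percentage metabolic activity from MTS absorbances (formula above)."""
    return (test - neg) / (growth - neg) * 100.0

def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Standard fractional inhibitory concentration (FIC) index."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fic_min, fic_max):
    """Thresholds as stated in the text: <0.5 synergy, >4 antagonism."""
    if fic_min < 0.5:
        return "synergism"
    if fic_max > 4:
        return "antagonism"
    return "indifferent"

# Hypothetical well: absorbances 0.35 (test), 0.05 (negative), 1.00 (growth)
# give ~31.6% residual activity, i.e. the >=80% inhibition cutoff is not met.
print(f"{pct_activity(0.35, 0.05, 1.00):.1f}% activity")

# Hypothetical MIC shift in combination: manogepix 4 -> 1 mg/l,
# itraconazole 0.25 -> 0.06 mg/l.
fic = fic_index(1, 4, 0.06, 0.25)
print(f"FIC = {fic:.3f} -> {interpret(fic, fic)}")
```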
Galleria mellonella grain model The in vivo efficacy of manogepix monotherapy and in combination with itraconazole was determined as described by Eadie and associates.29 In brief, M. mycetomatis strain MM55 was cultured on SDA at 37 °C for 3 weeks. Mycelium was harvested and sonicated at 10 microns for 30 s, transferred into RPMI 1640 medium containing 350 mg/l L-glutamine, 1.98 mM MOPS, and 100 mg/l chloramphenicol, and further incubated for 2 weeks at 37 °C. The fungal mycelium was then harvested, sonicated at 10 microns for 2 min, and diluted to an inoculum size of 4 mg per larva. Next, the larvae were injected with the fungal inoculum in the left lower proleg using an insulin 29-G U-100 needle (BD Diagnostics, Sparks, USA). At 4 h, 28 h, and 52 h after infection, larvae were treated with 20 μl of 0.21 mg/ml manogepix, 0.14 mg/ml itraconazole, or a combination of both. To reach these concentrations, manogepix and itraconazole were first dissolved in DMSO and further diluted in PBS, so that the final concentration of DMSO did not exceed 5%, a concentration well tolerated by the larvae. This resulted in final concentrations of 8.57 mg/kg manogepix and 5.71 mg/kg itraconazole in the larvae. Dosages were based on clinically relevant dosages of 600 mg and 400 mg of manogepix and itraconazole, respectively, for an average 70-kg person.30 The survival of the larvae was monitored for 10 days after infection. The toxicity of the compounds was assessed separately by administering treatment to healthy, uninfected larvae in groups equal to the infected ones. The toxicity of the drugs was monitored for up to 5 days after the initial treatment. Pupae formed during the monitoring period were excluded from the analysis. Three biological replicates were performed. Manogepix inhibits M. mycetomatis growth in vitro We determined the in vitro susceptibility of itraconazole and manogepix against 22 M. mycetomatis isolates. As shown in Figure 1, the MIC of itraconazole ranged between 0.016 and 0.25 mg/l and that of manogepix between <0.008 and 16 mg/l. No minimal effective concentration could be observed due to the use of a hyphal inoculum as the starting method; therefore, a 50% reduction in metabolic activity was used as a surrogate marker (Table 1). Using this surrogate marker, the same range of inhibitory concentrations was noted for manogepix, and the same MIC50 was obtained. A complete overview of the included isolates and the corresponding MIC values for both manogepix and itraconazole is provided in Table 1. As can be seen in this table, 16 of the 22 isolates had MICs for manogepix of 4 mg/l or higher. No genetic link between high and low MICs for manogepix To determine if there was a genetic link for the high diversity in MICs obtained for manogepix, the 22 included M. mycetomatis isolates were typed with the MmySTR assay, and the translated DNA sequences of GWT1 of five isolates with diverse MICs were compared. All 22 isolates had a unique MmySTR profile (Fig. 2A). No apparent clusters were observed between the genotype and the susceptibility to manogepix. For the five M. mycetomatis isolates for which whole genome sequence data were available, the GWT1 protein sequence was compared. As shown in Figure 2B, the GWT1 protein was conserved among these different isolates, and no differences in amino acids were observed. Of these five isolates, I11 and P1 had a very low MIC of <0.008 mg/l, Peru72012 had an MIC of 1 mg/l, and MM55 and SO1 had a relatively high MIC of 4 mg/l. The Val-168 residue corresponding to the valine residue that has been linked to resistance in different isolates of C. albicans, C. glabrata, and S. cerevisiae is conserved among all isolates31 (Fig. 2B). Manogepix and itraconazole are synergistic against M. mycetomatis To determine if manogepix and itraconazole act synergistically and could potentially be combined in therapy, a checkerboard assay was performed on a smaller subset of 10 different M. mycetomatis isolates.
As shown in Table 2, combined exposure of manogepix and itraconazole for MM14, MM25, and P1 was found to be indifferent, with median FICmin values of 0.501, 0.547, and 0.563, respectively. This indicates that against these three isolates, manogepix and itraconazole do not act synergistically. For the other seven isolates, median FICmin values of ≤0.5 were found, indicating that manogepix and itraconazole act synergistically. Manogepix in mono- and combination therapy does not enhance larval survival Although relatively high MICs were obtained for manogepix, synergy was obtained with itraconazole. We therefore determined the in vivo efficacy of a clinically relevant dosage of manogepix as monotherapy and in combination with itraconazole, the current drug of choice for mycetoma therapy, in our M. mycetomatis G. mellonella grain model. For these selected dosages, no toxicity was observed in G. mellonella larvae (data not shown). As shown in Figure 3, neither 8.57 mg/kg manogepix as monotherapy nor its combination with 5.71 mg/kg itraconazole significantly prolonged the survival of M. mycetomatis-infected larvae. Discussion With the discovery of manogepix, numerous studies have been conducted on its effectiveness in treating invasive fungal diseases. In this study, we demonstrate growth inhibition of M. mycetomatis upon exposure to manogepix. According to the established method for susceptibility testing for M. mycetomatis, we report a colorimetric MIC50 of 4 mg/l and an MIC90 of 8 mg/l for manogepix, using both a 50% and an 80% inhibition cutoff, vs. an itraconazole MIC50 and MIC90 of 0.063 mg/l and 0.25 mg/l. The broad range of MICs, from ≤0.008 to >4 mg/l, could not be explained by differences in the drug target enzyme, as this was conserved within the M. mycetomatis isolates analyzed. Manogepix has been reported to be effective in inhibiting the yeasts Candida spp.31,32 For Candida spp., reported MIC90 values ranged from 0.008 mg/l (for C. albicans) to 0.5 mg/l (for C. kefyr), with the exception of C. krusei (MIC90 >0.5 mg/l reported).31,32 The higher MICs observed for M. mycetomatis were in the same range as those reported for other filamentous fungi. For Aspergillus spp., MIC values for manogepix were above 8 mg/l when read with the standard CLSI and European Committee on Antimicrobial Susceptibility Testing (EUCAST) methods for in vitro susceptibility testing of filamentous fungi.33 Using 2,3-bis(2-methoxy-4-nitro-5-sulfophenyl)-5-[(phenylamino)carbonyl]-2H-tetrazolium hydroxide (XTT) as a viability dye, MICs >0.5 mg/l were obtained for A. fumigatus. However, similar to the echinocandins, alteration in growth for filamentous fungi was observed at a much lower concentration, and for A. fumigatus, only 0.06 mg/l manogepix was needed to alter fungal growth.26 Therefore, similar to the echinocandins, for filamentous fungi it is recommended to determine the concentration at which growth is visibly altered, defined as the MEC. The MEC90 values for manogepix against A. flavus, A. fumigatus, A. niger, and A. terreus ranged between 0.03 mg/l and 0.125 mg/l.16 For other rare moulds, among which the genetically more closely related Fusarium spp., similar MEC values have been reported.35–38 MEC values for filamentous fungi are determined visually as the first concentration where growth is altered. This is clearly seen when conidia or spores are used as a starting inoculum, although lab-to-lab differences in interpretation are noted.
Since M. mycetomatis only sporulates on rare occasions, hyphal fragments are used as the starting inoculum in standard in vitro susceptibility assays.25 Unfortunately, when hyphal fragments are used, it is not possible to reliably determine an MEC, since no clear morphological differences are noted for manogepix and the echinocandins.39 Recently, a colorimetric alternative for visual MEC determination was described for Aspergillus spp. In this alternative method, a lower starting inoculum and the viability dye XTT are used, and a 50% reduction in metabolic activity corresponded to the visual MEC.26 Upon implementing this 50% reduction threshold in our methodology, a 50% reduction in metabolic activity with our standard hyphal starting inoculum did not result in a significantly lower MIC value. A possible explanation may be the number of fungal cells within the starting inoculum, as this effect for Aspergillus was also only observed when a 1:1000 diluted starting inoculum was used, or because hyphal fragments were used instead of conidia.16 Altogether, the determination of the MIC, and especially the MEC, of manogepix against M. mycetomatis provides limited insights due to the starting conditions used in this methodology, regardless of the inhibition threshold, and should be interpreted with care. Although for most strains a concentration of 4 mg/l manogepix was needed to inhibit M. mycetomatis growth, we did observe synergy when manogepix was combined with itraconazole. This synergy was also observed for Candida, Cryptococcus, and Aspergillus.40,41 We therefore evaluated the in vivo efficacy of manogepix alone and in combination in an M. mycetomatis G. mellonella grain model. Unfortunately, neither for manogepix alone nor for the combination of manogepix and itraconazole was prolonged larval survival noted. This was in contrast to the efficacy shown for other fungal infections. In murine models of invasive candidiasis, 78 and 104 mg/kg fosmanogepix significantly reduced the fungal burden in the kidney, lung, and brain, resulting in enhanced murine survival.42,43,44–46 Fosmanogepix alone was ineffective in a C. neoformans meningitis mouse model, but in combination with fluconazole the fungal burden in the brain was significantly reduced.40 This difference in efficacy for manogepix against M. mycetomatis might be due to a difference in formulation or dosage used. In most studies, the prodrug fosmanogepix is used instead of manogepix. Fosmanogepix is cleaved into manogepix by the activity of host phosphatases. Since G. mellonella is not a mammalian model and there were no data on the activity of G. mellonella phosphatases in comparison to those in mouse models, we used the active moiety manogepix in the G. mellonella model instead of the prodrug. Furthermore, the concentration we used in our study, 8.57 mg/kg, was much lower than the concentrations used in other studies in mice, where concentrations ranged from 26 mg/kg to 264 mg/kg. Dosages used in humans in ongoing clinical trials are 600 mg/dosage, which, based on an average 70-kg adult, translates to 8.57 mg/kg.30 Another factor may be the difference in virulence and biofilm formation, as manogepix inhibits glycosylphosphatidylinositol biosynthesis. This pathway is directly linked to A. fumigatus morphogenesis and virulence and has been described to suppress biofilm formation in C. albicans.15,47
For M. mycetomatis, grains (biofilm-like structures) in particular are challenging, as these remain viable for extended periods of time, even upon prolonged treatment with itraconazole.11 In our grain model, grains are observed after 4 h, at which time point the first dosage is administered, meaning that inhibition of grain formation could not be assessed.48 Altogether, the treatment time, the selected dosage, and additional unknown factors affecting the clearance and bioavailability of manogepix could contribute to the contradictory efficacy results compared to other studies, and can thus be seen as shortcomings of our study. For a thorough in vivo evaluation of fosmanogepix in the treatment of eumycetoma, additional studies are needed, including dose-finding studies as well as studies in mammalian models. To conclude, we determined the potential of manogepix as a treatment for eumycetoma caused by M. mycetomatis. We found that relatively high concentrations of manogepix are needed to completely inhibit the growth of M. mycetomatis, similar to other filamentous fungi. In vitro, combining manogepix and itraconazole was synergistic, a finding not documented in vivo. Based on these findings, manogepix seems inferior to itraconazole in mycetoma treatment. Figure 1. Antifungal susceptibility, expressed as MICs, of 22 Madurella mycetomatis isolates to manogepix (MGX) and itraconazole (ITZ), as determined by the resazurin assay. The MIC is depicted as calculated using an 80% inhibition cutoff. Figure 2. (A) Minimum spanning tree (MST) showing the genetic diversity of the panel of Madurella mycetomatis isolates included for in vitro susceptibility testing. Each circle represents a unique genotype, and its size is proportional to the number of isolates of the respective genotype. The numbers on the connecting lines represent the number of different STR markers between the genotypes. Each color represents the MIC of manogepix against the respective isolate in mg/l. (B) Multiple sequence alignment of the GWT1 protein from M. mycetomatis isolates MM55, SO1, Peru72012, P1, and I11. The Val-168 residue previously identified in resistant isolates of C. albicans, C. glabrata, and S. cerevisiae is highlighted by the red box.30 Figure 3. In vivo efficacy of 8.57 mg/kg manogepix, 5.71 mg/kg itraconazole, a combination of manogepix and itraconazole (8.57 and 5.71 mg/kg, respectively), or a 5% DMSO in PBS control in Galleria mellonella larvae. The full lines indicate infected larvae treated with the corresponding dosage at 4, 28, and 52 h post-infection. Dotted lines indicate uninfected larvae treated with 5% DMSO in PBS at the same time points as the infected groups. All larvae were followed up until 10 days after infection, and no significant difference was observed between any of the test groups compared to the infected PBS (5% DMSO) control. Table 1. Overview of included Madurella mycetomatis isolates and the respective minimum inhibitory concentrations of manogepix and itraconazole. Table 2. Overview of checkerboard outcomes. Synergism is indicated by Synergy (S) or Indifferent (I).
2023-11-15T06:17:31.798Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "8a3ff13289ccd035694aa16e46cdc893fdd7678e", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/mmy/advance-article-pdf/doi/10.1093/mmy/myad118/53381394/myad118.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "8cba578d3205e1459f180e3b96cf8f7334dc0221", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10492363
pes2o/s2orc
v3-fos-license
Knowledge Management as a Competitive Advantage to the Brazilian MVAS Ecosystem The mobile value added service (MVAS) is a method of differentiation in the mobile telephone market and represents approximately 30% of the mobile network operator (MNO)'s revenue. The Brazilian MVAS sector consists of the content provider, the MNO, and the integrator. This paper aims to examine this sector by analyzing two main stakeholders: the MNOs and the integrator. We conducted a case study in the main Brazilian integrator and administered a questionnaire to managers/consultants of four MNOs that represent 74.1% of the national market share. The results indicate that the integrator has developed knowledge management, intellectual capital and competitive intelligence, operating as a business enabler and creating competitive advantage for this sector. The analysis of the collected data has been more relevant than the capacity of the integration platform itself. These collaborative relationships have consolidated this market as an ecosystem that operates according to the concept of coopetition. Introduction The mobile telephone market has been growing all over the world, connecting people and changing according to technological dynamism. There are 3.4 billion unique subscribers, which represents 47% penetration. Globally, 3G connections have approximately quadrupled since 2008, to two billion in 2013 (GSMA Intelligence, 2014). Latin America accounts for 10% of the global mobile telephone market in terms of revenue. This market is now moving to a new phase of development characterized by increasing market maturity and by slowing revenue and subscriber growth. However, significant growth remains, driven by new services and applications and by the increase of mobile broadband access (GSMA and BCG, 2013). Smartphones are becoming more accessible to middle and lower income groups in this region because of the increasing availability of lower cost models (Bibolini and Lancaster, 2014). Brazil, which is our study focus, is the first mobile network market in this region, and the fifth in the world, with a density of 1.38 cell phones per inhabitant (Teleco, 2015). Despite mobile subscriber growth and increased mobile broadband access, the average revenue per user (ARPU), calculated by dividing the net service revenue by the average number of cell phones, has declined globally. This result indicates that the average tariff prices for mobile network operators (MNOs) have been decreasing due to competition, the possibility of access at lower price points, the growth of multi-devices, and other factors. MNOs have invested in mobile value added services (MVAS) to gain differentiation in this market. MVAS adds value to the standard telecommunications offering, encouraging subscribers to increase the use of their phone and allowing the MNO to enhance its ARPU (Frattini, Dell'Era and Rangone, 2013). Data revenue related to services has been a considerable share of the net revenue of the main 3G MNOs. In 2014, the percentages were 69% at SoftBank (Japan), 57.6% at Verizon (United States), and 38.6% at Vivo (Brazil) (Teleco, 2015). MVASs add new utilities related to access, storage, presentation, movement or retrieval of information to a telecommunication service (LGT 9472/97). Teleco (2015) highlights three main components: short message service (SMS), multimedia messaging service (MMS), and Internet access (data packages), in addition to services such as messaging, entertainment, social networks, payments, and location based services.
According to GSMA et al. (2012), the key stakeholders of the MVAS sector are MNOs, over-the-top players (OTTPs), handset manufacturers, international foundations, content developers, and local partners. Figure 1 shows that the relationships in this sector often underscore interdependencies between different actors in developing and presenting different MVASs to the market. In the Brazilian MVAS sector, the key stakeholders are the content provider, the MNO, and the integrator. Service integration is a potentially multifaceted role that includes different business models, such as setting up a mobile portal, aggregating content from various sources, and customizing service packages for different market segments (Mylonopoulos and Sideris, 2006). In Brazil, the integrator provides support and technology for the use of the operating platform, including telephones as a media channel (Pure, 2012). Its main role is to provide technology services to content providers so that they can spread and monetize their content. In this market (Figure 2), the provider supplies content to be integrated by the integrator. The integrator then provides the technology that is disseminated by the MNO, which then reaches the end user, generating trade for the whole chain. The content provider can also present a project directly to the MNO, which sends the project to the integrator. The National Telecommunication Agency (Anatel) regulates the relationship between MNO and end user, and published rules about MVAS in 2010. In addition to Anatel, there are self-regulated associations, such as the Mobile Entertainment Forum (MEF), which developed a conduct code with procedures for mobile services and publicity. In addition to the recent regulation, the expansion of smartphone and tablet sales has transformed this market. With access to mobile Internet came the introduction of the app stores, expanding the possibilities of producing and distributing content to companies such as Google and Apple, as well as to autonomous developers and MNOs. The MNOs have been the propelling engine of this market, but the business models are changing due to the entrance of new actors, which is directly related to technological convergence (Silveira, 2008). In this converging environment, users can acquire information and entertainment services in addition to high quality data and voice services (Tan and Zeng, 2009). Therefore, traditional revenues (voice and messaging) for the MNOs have been affected by new online messaging services, such as WhatsApp and Skype, increasing the pressure to diversify their revenue streams (GSMA Intelligence, 2014). In addition to the diffusion of devices with Internet connection, Fries (2011) also observed a risk of direct connection between content provider and MNO that threatens the integrator's role. He noted that integrators must invest in more efficient reports and analyses, in professional qualification, and in service delivery with differentials to remain competitive. Therefore, the same dynamism that has restructured the mobile telephone market, and particularly the Brazilian market, has also presented potential risks to the current MVAS sector, as it can modify the role of some segments as well as the production and distribution flow of content and access. Within this context, this study aims to examine the dynamism of the Brazilian MVAS sector through an analysis of its main stakeholders: the integrator and the MNOs. We intend to analyze how an integrator has invested in solutions to add value to its services and create a potential competitive advantage in this dynamic sector, with a focus on knowledge management (KM), intellectual capital (IC) and competitive intelligence (CI). To complement this approach, we analyze the viewpoint of representatives of four MNOs. We chose an integrator responsible for approximately 90% of the national integrations, and MNOs that represent 74.1% of the national market share. We preferred to study MNOs instead of content providers because MNOs are responsible for the transmission of mobile content through cell phones. This paper is organized into eight sections. Section 1 is the introduction. Section 2 reviews the literature. Section 3 describes the research method. Section 4 provides results. Section 5 is a discussion based on the theoretical framework. Section 6 concludes the paper with an overview of our contributions and the limitations of the study. Section 7 contains the acknowledgements, and Section 8 lists the references.
Literature review In this dynamic market, CI, KM systems and IC have been relevant to understanding end users' demands and diversifying services. CI involves environmental scanning and detailed analyses regarding market behavior. From the strategy input selection, strategic information can be collected with environmental scanning, which is related to the acquisition and use of information about events, trends, and relationships in the external environment, resulting in knowledge that companies can use to plan future actions (Choo, 1999). Due to constant technological improvement and the emergence of new resources and functionalities that modify the tools for data mining as well as customer needs, continuous organizational learning is also essential for CI. Kahaner (1996) stated that CI is a systematic program for gathering and analyzing information about competitors' activities and general business trends to further one's own company's goals. Intelligence helps a company sustain and develop distinct competitive advantages by using the entire organization and its networks to develop actionable insights about the environment, including customers, competitors, regulators, technology, and others (Calof, 2008). The knowledge generated in the process of CI needs to be transferred to the organization, promoting a culture focused on KM (Machado, Abreu and Neto, 2013). Erickson and Rothberg (2009) stated that KM tends to be oriented to human resources, including Information Technology (IT) systems to collect, store, and distribute codified knowledge. Organizational knowledge creation must amplify the knowledge created by individuals and crystallize it at the group level through dialogue, discussion, experience sharing, and communities of practice (Takeuchi and Nonaka, 2008). IC is related to intangible assets and involves human capital (skills, training, and experience), structural capital (IT systems and corporate culture), and relational capital (internal knowledge about external sectors, such as customers and suppliers). According to Stewart (2001), IC is the knowledge that transforms raw materials and makes them more valuable. He explained how companies that use knowledge assets can deftly eliminate the expense and burden of carrying physical assets, or maximize their return on those assets. Both CI and KM programs can benefit from business intelligence (BI) tools, improving competitiveness in the market. BI analytical capability helps knowledge workers in the development of strategic opportunities, in the investigation of problematic areas in the business, and in decision-making (Heinrichs and Lim, 2003, 2005). The investment in technology and in the promotion of KM, IC, and CI can be a way for the MVAS sector to survive and to innovate in this dynamic market, allowing for competition and collaboration at the same time. According to Luo (2004), coopetition is a new strategy that uses the conventional rules of competition and cooperation to combine the advantages of competitors. To Gueguen and Isckia (2001), coopetition builds on the idea that competitors should not just be considered rivals for market dominance but also valuable sources of innovation. Methodology From the theoretical framework, we developed a single case study in the main Brazilian integrator and administered a questionnaire in 2012 and 2013 to managers/consultants of four MNOs. The development of a single case study, and not of a multiple case study, is due to the representativeness of the studied integrator (Yin, 2005). This integrator is responsible for approximately 90% of integrations and is hired by all the MNOs in Brazil, with branch offices in Italy, Mexico, and Central America. Therefore, the lessons learned in this case can provide information about the integrator's role in this market. We used a protocol to conduct the case study that included the study's general procedures, the field procedures, the study issues and the guidelines for preparing the final report (Yin, 2005). The field procedures used were semi-structured interviews conducted from an agenda (guideline) with questions that aimed to understand the meanings that interviewees assign to issues and situations related to the studied topics (Godoy, 2006). The interviews were administered to the Chief Executive Officer (CEO) and the commercial and operational managers, who signed a consent form authorizing the use of their information in this paper. We also administered questionnaires to the managers and consultants of four of the main MNOs in Brazil, which together serve more than 88% of the covered population. The respondents were a planning control manager in network operations, a VAS marketing consultant, a VAS marketing manager, and a network-planning manager. They consented to the use of their information provided that their names and companies were not identified. The analysis of the interview answers from the case study and the open questions of the questionnaire used a categorization of the information. The categories are derived from the theoretical framework and are consolidated in the agenda of semi-structured questions (Duarte, 2005). The analysis categories were KM, IC, and CI. After this categorization, an interpretive analysis was performed. The interpretive analysis is different from content analysis due to its focus on the information itself, instead of on the language or the quantification of words.
Results The results showed that the studied integrator created a market standard and participated in the regulation of that standard as a partner of the MNOs, on the basis of a good relationship with partners, in addition to developing better technologies. Three of the four MNO respondents corroborated the collaborative relations in the MVAS sector. One of the MNOs does not notice strategic information exchange, except in the contractual case of confidentiality, because this market is competitive and ideas and implementation time are determinant factors. The others agree that there is a strategic information exchange related mainly to business management, such as interesting rates for specific services, the best days to charge the end user, experiences with services that were already created, and others. According to the MNO respondents, the main MVAS consumed in Brazil, in order of priority, are SMS, music, ringtones, mobile broadband, games, mobile publicity, MMS, videos, and mobile payment. Regarding the main differences between the national and international mobile telephone markets, some respondents noted that the integrator's role is more common in Latin America. An MNO manager stated that in Spain and in Portugal, for instance, companies with a focus on integration are rare, while companies that operate with content development and aggregation jointly with integration are more common. The MNO respondents highlighted that the integrator's role has been threatened by the possibility of direct connections between content providers and MNOs. The respondents suggested that integrators should provide business opportunities and improve the value chain. They listed the most important types of information given by the integrator to their business. Figure 3 shows that the main types of information are linked with billing and improvement in services. The MNO respondents explained that their companies have invested in BI and CI, developing research and hiring consultants to analyze market trends and consumers' needs. They enumerated five strategic inputs defined by Fahey (2007) according to their companies' priority. These inputs included marketplace opportunities, competitive risks, key vulnerabilities, core assumptions, and competitor threats. Most of the MNO respondents noted that the impact of devices with Internet connections and app stores on their businesses and on the mobile telephone market chain represents a possibility for new partnerships and services, as depicted in Figure 4. A respondent highlighted that new devices represent a risk or an opportunity for MNO business and for the companies of the productive flow, depending on the service offered. For some MNOs that offer Internet access, for instance, new devices could be an opportunity. Regarding app stores, some respondents stated that they imply a decrease of MVAS end users but expand mobile Internet access, and consequently expand the sales of Internet plans by MNOs. A respondent noted that Google Play represents an opportunity, while the Apple Store has been a risk. He explained that Android apps are free and MNOs have control of the billing, while Apple has the highest revenue. The competition from MNOs with mobile virtual network operators (MVNOs) and OTTPs was also mentioned as a challenge because it reduces the price of voice, SMS, and Internet plans, creating demand for increased network capacity. For instance, the MVNOs use the MNOs' networks and their focus lies on specific market segments, such as corporate and university. The OTTPs use the MNOs' networks to offer competitive services such as SMS, but they are neither charged the same taxes nor regulated under the same laws as MNOs. The integrator has invested in a Data Warehouse for BI activities, which helps in data storage, environmental scanning, and KM. Internal and external scanning are conducted daily through informal talks, information generated by BI platforms, and market research. The scanning aims to meet the integrator's own challenges, with improvements in management through the analysis of partners' complaints. Research is conducted on foreign and national companies that developed solutions that can add value to the integrator's services, generating CI. BI platforms have mostly been developed by the integrator's IT team to create reports with information needed by the MVAS business, such as the global revenue of this sector per type of service, per MNO, and per content provider. The operational manager stated that analyses processed by the platform are still basic, and simple data are generated. These data include a comparison between the number of prepaid and postpaid consumers that hired a service. The platform generates between 20 and 30 million records a day, and analysis is necessary to turn the records into strategic knowledge for partners. The integrator has a revenue assurance sector that continuously scans the services, customer by customer, generating results that compare each day with the results of previous days. The partners receive not only primary data but also complete analyses based on BI tools and the expertise of the strategic team. For example, the integrator produces monthly analyses for content providers, showing billing data, user base behavior, the need for investment in media campaigns, and information about the traffic profile (prepaid and postpaid users, the best day to charge, the best place to offer a service, among others). Based on this information, the integrator creates market behavior indices, and when the results do not fit the standard, the system emits an alert. When the integrator identifies changes in the traffic of a content provider, the changes are immediately reported, due to the urgency of this information. The integrator also does consulting work to guide partners or future partners concerning the feasibility of their projects in the MVAS sector. Therefore, the integrator has invested in IC in addition to developing platforms for integration. According to the interviewees, the integrator actually operates with more emphasis on human and relational capital than on structural capital. The analyses of data collected by BI tools and the good relationship with MNOs and content providers have been more relevant than the capacity of the integration platform itself. In addition, the integrator has worked to turn its IT team into a more proactive one, promoting improvements in the communication between its teams as well as with MNOs and content providers. It has also developed organizational learning concerning the integration platform and the MVAS sector. These investments show the importance the company attaches to KM, IC, and CI.
Discussion The integrator has developed KM and CI based on its relationships with other companies. Regarding environmental scanning, the integrator has focused on opportunities, researching new solutions and improvements for its services and management. The integrator has invested in IC, with a greater focus on human and relational capital, highlighting the role of the staff in performing strategic analyses. The MNOs have also invested in KM, and have focused on market trends and end users' needs. This procedure corroborates one of the ideas of the new service-oriented value chain mode of telecom MVAS proposed by Tan and Zeng (2009). Their proposal highlighted that developing and providing telecom business will be based on users' practical demands in an information society rather than on existing network features. Therefore, the differential in this market lies in services that add value to the end user. Regarding challenges in the national MVAS sector, the MNO respondents agree that the possibility of direct connection between content providers and MNOs threatens the integrator's role. However, this research shows that the integrator has observed market changes that justify the expansion of its commercial sector to improve communication with partners, the actions to turn IT into a more proactive and communicative sector, the search for new solutions, and the continuous scanning of the partners' actions. The app stores imply a decrease in the number of MVAS end users but amplify mobile Internet use, thus increasing the sale of plans by the MNOs. The MNO respondents highlighted that the primary challenges for their businesses included the development of more useful services and the competition. However, according to Teleco (2015), although the MVNOs represent a potential risk of loss of direct customers by the MNO, the association with an MVNO can result in increased revenue and lower cost to customers. Regarding OTTPs, many analysts believe MNOs need to reinvent themselves as significant players in the over-the-top segment or risk losing a growth opportunity and becoming mere bandwidth providers. This market has worked as an ecosystem with exchange of strategic information between the stakeholders. A business ecosystem is a complex networked system in which a variety of firms coexist in an interdependent and symbiotic relationship (Basole, 2008). Therefore, it is necessary to share information so that the CI system works (Kahaner, 1996). Collaboration is an important factor for innovation propensity in emerging economies (Temel, Mention and Torkkeli, 2013). Jing and Xiong-Jian (2011) also observed collaborative relationships in their study about the Chinese mobile telephone market. Although the Chinese market is different from that in Brazil, they concluded that cooperation with partners has been a good opportunity to create a win-win situation in this business. These results are consistent with Luo's (2004) study that suggests the concept of coopetition.
Final considerations The results show that devices with an Internet connection, app stores, MVNOs, and other innovations and changes in the mobile telephone market represent not only a risk but also an opportunity to MNOs, such as the investment in MVAS production to increase their ARPU. In this context, the integrator has operated as a business enabler, creating market behavior indices and CI to help other stakeholders in decision-making. The integrator has invested in KM and IC, developing IT systems, environmental scanning, and analyses regarding market trends to help its partners and to secure its role in this sector. Therefore, the results show that the analysis of the collected data has been more relevant than the capacity of the integration platform. This study was limited in several ways, including the small sample size and the exclusive focus on the market in Brazil. Regardless of these limitations, the current findings have added to our understanding of the Brazilian MVAS ecosystem, suggesting its dynamism, its focus on opportunities and innovation, and the coopetition between its stakeholders. Figure 2. The MVAS sector in Brazil. The percentages indicate the approximate billing of each stakeholder. Figure 3. Main types of information given by the integrator to MNOs. Figure 4. What devices with Internet connection and app stores represent to MNO business and to the Brazilian MVAS sector.
2018-07-18T20:51:37.779Z
2015-07-27T00:00:00.000
{ "year": 2015, "sha1": "e3ac80f1d92690c9377c4adb6ad4536bba3b9da6", "oa_license": "CCBY", "oa_url": "https://scielo.conicyt.cl/pdf/jotmi/v10n2/art01.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e3ac80f1d92690c9377c4adb6ad4536bba3b9da6", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
266996827
pes2o/s2orc
v3-fos-license
Using saliva epigenetic data to develop and validate a multivariable predictor of esophageal cancer status Background: Salivary epigenetic biomarkers may detect esophageal cancer. Methods: A total of 256 saliva samples from esophageal adenocarcinoma patients and matched volunteers were analyzed with Illumina EPIC methylation arrays. Three datasets were created, using 64% for discovery, 16% for testing and 20% for validation. Modules of gene-based methylation probes were created using weighted gene coexpression network analysis. Module significance to disease and gene importance to module were determined, and a random forest classifier was generated using the best-scoring gene-related epigenetic probes. A cost-sensitive wrapper algorithm maximized cancer diagnosis. Results: Using age, sex and seven probes, esophageal adenocarcinoma was detected with an area under the curve of 0.72 in the discovery, 0.73 in the testing and 0.75 in the validation datasets. Cancer sensitivity was 88% with a specificity of 31%. Conclusion: We have demonstrated a potentially clinically viable classifier of esophageal cancer based on saliva methylation. Esophageal cancer has one of the lowest 5-year survival rates (20%) of all cancers diagnosed in the USA from 2010 to 2016 [1]. This contrasts starkly with the rates for prostate cancer (98%), melanoma of the skin (92%) or breast cancer (90%). Similar rates for this disease have been observed worldwide. The disease is the sixth leading cause of cancer-related mortality and the eighth most common cancer globally [2]. There are two histological subtypes: squamous cell carcinoma and adenocarcinoma (AdCa). Whereas squamous cell carcinoma is the commonest subtype in the global south, AdCa is the predominant type in the global north, and the UK is the world epicenter for this disease [3]. Esophageal AdCa develops almost entirely in patients with the premalignant condition Barrett's esophagus, which is known to progress through low- then high-grade dysplasia (HGD) to intramucosal AdCa before it finally becomes invasive [4]. While it is easily and reliably treated endoscopically at all stages including intramucosal AdCa, as soon as it becomes invasive disease the prognosis rapidly worsens [5-7]. Early detection is therefore key. Clinical symptoms of esophageal cancer are less likely to come to the attention of individuals who have the disease until it has progressed to an advanced stage [8,9]. The disease may only present itself with clinically relevant symptoms at the point where the tumor has reached a point of extensive infiltration [10]. Diagnostic tests for the early identification of the disease could therefore have a considerable impact on favorable prognostic outcomes.
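The pipeline summarized in the abstract (probe modules feeding a random forest, with a cost-sensitive wrapper favoring sensitivity) was built by the authors in R; the sketch below is only a Python illustration of the cost-sensitive thresholding idea, with a toy random dataset and hypothetical cost weights standing in for the study's actual features and tuning.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for the real design matrix: age, sex and seven probe beta-values.
X = rng.random((200, 9))
y = rng.integers(0, 2, 200)  # 1 = cancer, 0 = control (random toy labels)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X[:160], y[:160])
probs = clf.predict_proba(X[160:])[:, 1]

def pick_threshold(probs, y_true, fn_cost=5.0, fp_cost=1.0):
    """Cost-sensitive wrapper idea: scan decision thresholds and keep the one
    minimizing total cost, with missed cancers weighted more heavily."""
    best_t, best_cost = 0.5, np.inf
    for t in np.linspace(0.05, 0.95, 19):
        pred = (probs >= t).astype(int)
        cost = (fn_cost * np.sum((pred == 0) & (y_true == 1)) +
                fp_cost * np.sum((pred == 1) & (y_true == 0)))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

print("chosen threshold:", pick_threshold(probs, y[160:]))
```

Weighting false negatives more than false positives pushes the threshold down, which is consistent with the high-sensitivity, low-specificity operating point reported in the abstract.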
Salivary diagnostics is well established for the detection of certain conditions (oral diseases [11], autoimmune conditions [12], HIV [13]). Epigenetic-based salivary biomarkers hold great promise but have not yet been implemented in a clinical setting. They would be very easy to use. A saliva sample can be provided by the patient in their own home. Such a convenient, self-administered procedure would eliminate the need for the patient to visit a clinic, freeing the valuable time of healthcare staff and reducing social exposure risks for everyone. Saliva has many advantages over blood. It does not clot and is easier to store [14]. Saliva is stable in preservative and maintains viable DNA at room temperature for at least 8 months [1,15] while inactivating viral particles. Furthermore (and in spite of the recent pandemic), saliva remains a safer material than blood to store, ship and work with. All cancers can induce an extensive range of epigenetic changes, with both hypomethylation [16] and hypermethylation [2,17] widely prevalent. These changes can often precede more severe clinical phenotypes [18]. Detection of methylation changes in the body therefore has the potential to act as a valuable marker for early diagnosis. Furthermore, these epigenetic changes can be detected beyond the tissue of tumor origin. Cancer biomarkers based on DNA methylation outside the tissue of origin have been proposed using blood [19,20], but multiple studies also suggest that saliva could have similar potential. For example, a meta-analysis of 18 salivary DNA methylation panel studies has shown its value in diagnosing new head and neck cancers [21]. Changes in salivary DNA methylation can also detect cancer recurrence [22]. Beyond this, DNA hypermethylation in sputum is a promising tool for early detection of lung cancer [23]. It is not yet known whether saliva can be used as a detection tool for esophageal cancer. The aim of this study was to develop and validate a multivariable predictor of esophageal AdCa (but not squamous cell carcinoma) status using salivary epigenetics. Study design & participant recruitment The study design is an observational case-control study. Participants were obtained via voluntary recruitment. Ethical approval for the study was obtained (see 'Ethical conduct of research'). All patients undergoing secondary care assessment of suspected esophageal cancer or surveillance endoscopy for the preneoplastic lesion Barrett's esophagus, and who had never had any cancer-related treatment, were eligible for the study. Exclusions included patients unfit for endoscopy due to comorbidity, or for endoscopic biopsy due to bleeding disorders. They were recruited at 19 hospital sites across the UK between October 2018 and July 2021. In addition, we recruited healthy volunteers (HVs), who included family members or friends attending the hospital with patients; some HVs were also recruited outside of the hospital environment. No HV had any previous history of any cancer. All patients and volunteers signed a written consent form after receiving a detailed information sheet that had been approved by the Ethics Committee.
Figure 1 shows a Consort diagram summarizing the acquisition of participant data. A total of 2275 people were recruited to the Saliva to Predict Disease Risk (SPIT) study. Of these, only 1696 people contributed saliva samples. We selected 384 Caucasian people for inclusion into the final array analysis using case-control matching, as we did not have adequate numbers of samples from non-Caucasians with cancer. A total of 109 people failed array quality control (QC), leaving 275 individuals. Of these, nine samples were from patients with HGD, which were not used in the main analysis (see next section), and ten samples were technical replicates used to assess reproducibility. This left us with 256 samples for the principal analysis. Sample collection Saliva samples were collected using the Oragene® DNA OG/600 saliva collection kit (DNA Genotek, Ottawa, Canada). All participants were required to fast for a minimum of 1 hour prior to sample collection. Sample collection for most non-HV participants was performed at the site hospital, prior to the subject receiving an endoscopy. In the case of the HV group (who did not have any endoscopy), sample collection was done in their own time, following detailed written instructions in line with the manufacturer's instructions [24], which explicitly stated the need to fast before sample collection for at least 1 hour. We recommended sample submission upon waking. We instructed volunteers to seal the samples immediately after donation and provided a prepaid envelope for easy postal return at room temperature. Samples were stored at -80 °C upon receipt at our center. Samples were frozen within 7 days of collection, although DNA collected with this system is known to remain stable at room temperature for many months [15]. Sample processing DNA extraction was undertaken using the Zymo Quick-DNA™ Miniprep Plus Kit (Cambridge Bioscience, Cambridge, UK). Briefly, saliva was thawed and cells were lysed using proteinase K. DNA was then isolated by digesting the sample with genomic binding buffer before it was purified using a Zymo-Spin™ IIC-XL column in a collection tube. The sample was centrifuged and the flow-through was discarded. The DNA was then eluted using a DNA elution buffer before being centrifuged again. DNA quantification was undertaken using the Bioanalyzer 2100 (Agilent, CA, USA), a microfluidic chip-based automated capillary electrophoresis machine. It yields highly precise analytical evaluation of DNA, RNA and protein integrity and quantity. We used the Bioanalyzer High Sensitivity Assay Kit (Agilent), following the standard manufacturer's protocol. A sample was acceptable for the next step only if the DNA/protein optical density ratio at 260/280 nm was above 1.8 and the contamination with protein (absorption at 230 nm) was low (260/230 ratio above 1.5). The pellet was then frozen at -80 °C until use. Bisulfite conversion The Zymo EZ-96 DNA Methylation Kit (Zymo Research, CA, USA) was used. Optimal DNA input to the bisulfite conversion process is 200-500 ng/μl. The input volume was calculated by dividing the total DNA input (ng) by the DNA sample concentration (ng/μl). The bisulfite conversion process followed standard protocols [25]. Briefly, after addition of Zymo M-dilution buffer, the final volume was adjusted to 50 μl with UltraPure DNase/RNase-Free Distilled Water. The sample was incubated at 95 °C for 30 s followed by 50 °C for 60 min, for 16 cycles, and then held at 4 °C.
On the following day, the samples were incubated with binding buffer and loaded into the wells of the binding plate, then repeatedly centrifuged. Flow-through was discarded. The bisulfite-converted bound sample was then eluted and stored at -80°C until used.

Array plate experiments & duplication
Two methylation array sets were collected, which were scanned in July 2020 and January 2022. An important question that has raised concerns about methylation array technology is how well probes replicate, especially given the delay between the two experiments, and we needed to determine whether the two experiments could be combined. We included ten biological replicates to gain insight into this and to inform our approach to QC. We extracted the distance metric that is part of a standard Kolmogorov-Smirnov (KS) significance test of the β-values and noted the distance of the duplicates from each other. We also created an 'average array' from the several hundred other samples in our experiment. We plotted the duplicate pairs on a graph (Figure 2). The x-axis is the duplicate pair distance (which is the same value for each pair because the distance from A to B is the same as the distance from B to A). The y-axis contains the distance of each array to the summarized methylation values (the average array). Only four of the ten duplicate pairs replicated well. We did, however, observe that the pairs that replicated well had an average β-KS distance of between 0.3 and 0.6 and that the worst-performing pairs contained single samples that exceeded 0.6. As a consequence, we applied this informal rule to choose the duplicate samples that were likely to be the more reliable ones; a sketch of the calculation appears below. We agree with the findings reported in the literature that there are replication issues with EPIC arrays [26]. A highly stringent QC rule would discard any array whose KS distance to the mean of all array values exceeds 0.6, but this is likely to remove good samples along with the bad.
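As an illustration, the following R sketch computes the KS distances just described. It is a minimal sketch under stated assumptions, not the study code: `beta` (a probes x samples matrix of β-values) and `dup_pairs` (a two-column matrix of duplicate sample IDs) are placeholder names.

```r
## Minimal sketch of the duplicate QC. Assumed objects: `beta`, a probes x
## samples matrix of beta-values; `dup_pairs`, a 2-column matrix of sample IDs.
ks_dist <- function(x, y) unname(ks.test(x, y)$statistic)  # KS distance D

avg_array <- rowMeans(beta, na.rm = TRUE)                  # the 'average array'

## Distance of each duplicate to its partner (x-axis of Figure 2)...
pair_d <- apply(dup_pairs, 1, function(p) ks_dist(beta[, p[1]], beta[, p[2]]))

## ...and of each member of a pair to the average array (y-axis of Figure 2).
mean_d <- apply(dup_pairs, 1, function(p)
  c(ks_dist(beta[, p[1]], avg_array), ks_dist(beta[, p[2]], avg_array)))

## Informal rule: keep the member of each pair closer to the average array.
keep <- dup_pairs[cbind(seq_len(nrow(dup_pairs)), apply(mean_d, 2, which.min))]
```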
Study size & case-control matching
Our priorities were to include as many disease phenotypes as possible and to match the controls to the disease cohort so that any potentially confounding covariates would be minimized. For each of the ten duplicated samples we had to restrict ourselves to a single member of the pair. We decided to include intramucosal esophageal adenocarcinoma (IMC) and invasive esophageal AdCa in model generation, but relegated HGD subjects to independent testing only, owing to diagnostic and categorical ambiguities that could be present in that group; subsequent analyses appeared to vindicate this decision.

We developed procedures for selection and assignment of the samples to the array plate. Batch effects in this technology can be large [27]. If these effects are confounded with nontechnical covariates, statistical batch correction risks removing additional information of potential biological interest. We therefore took measures at the design stage to ensure that the main batch variables (the two array plates and the 12 rows of each individual plate) had as much biological and clinical homogeneity as possible. This was done via a stepwise process. In the first step, case-control matching was performed. We constructed a set of multidimensional vectors representing each person. Each dimension of the vector represented a clinical or lifestyle variable that we considered important. These variables were age, sex, BMI, smoking (pack-years), alcohol consumption, consumption of proton pump inhibitor (PPI) medication and accumulated incidences of heartburn. The dimensions were normalized and then weighted by importance. A participant in the esophageal AdCa group was selected at random, their closest neighbor from each clinical diagnosis group was found, and this cluster was then set aside. The process was repeated until the AdCa samples were all matched (Table 1).

The second step of array plate design involved assigning all samples to a particular row of the array and adjusting the composition of subjects on the two plates so that their age, sex, smoking, drinking, heartburn and PPI characteristics were as close in value as possible. Matched participant groups were randomly assigned a row on the plate. A pair of randomly selected samples was then swapped and tested; if the swap increased global covariate heterogeneity, the swap was kept. Heterogeneity here was assessed through standard statistical tests (t-test and chi-squared). A sketch of the matching and swapping procedure is given below.
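The following R sketch illustrates the weighted nearest-neighbor matching and the swap test. It is schematic only: the data frame `covars`, the weight vector `w`, the `group` factor and the heterogeneity `score` function are assumed stand-ins, not the study's actual code.

```r
## Sketch of the weighted nearest-neighbour matching. Assumed objects: `covars`,
## a data frame with numeric columns for age, sex, BMI, pack-years, alcohol, PPI
## and heartburn; `w`, a vector of importance weights; `group`, a diagnosis factor.
X <- sweep(scale(data.matrix(covars)), 2, w, `*`)   # normalize, then weight
d <- as.matrix(dist(X))                             # person-to-person distances

matched <- list()
cases <- which(group == "AdCa")
pool  <- which(group != "AdCa")
while (length(cases) > 0) {
  i <- cases[sample(length(cases), 1)]              # random AdCa case
  cluster <- i
  for (g in setdiff(levels(group), "AdCa")) {       # nearest neighbour per group
    cand <- intersect(pool, which(group == g))
    j <- cand[which.min(d[i, cand])]
    cluster <- c(cluster, j)
    pool <- setdiff(pool, j)
  }
  matched[[length(matched) + 1]] <- cluster         # set the cluster aside
  cases <- setdiff(cases, i)
}

## Swap test for plate-row assignment: keep a random swap only if it increases a
## covariate heterogeneity score (e.g., summed t-test/chi-squared statistics);
## `score` is an assumed user-supplied function.
try_swap <- function(rows, score) {
  s <- sample(length(rows), 2)
  cand <- rows
  cand[s] <- cand[rev(s)]
  if (score(cand) > score(rows)) cand else rows
}
```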
Processing of arrays
The data were analyzed in the R statistical programming environment [28]. Array .idat files were loaded into the environment via the minfi package [29]. Annotation of arrays was provided by the IlluminaHumanMethylationEPICanno.ilm10b4.hg19 package [5,30]. QC was performed using the ShinyQC package [31], which produces QC measures for the scanning and hybridization stages, as well as a general summative QC measure and various measures associated with control probes. It also provides density plots of the data. An initial round of filtering removed samples that were strong outliers by visual inspection of their M- and β-values when viewed on a density plot. Samples that failed to show a maximum peak around an M-value of -5 while producing sizeable and anomalous-looking peaks at other values were considered to have systematic errors, and these were removed. Further inspection of individual QC measures caused more arrays to be removed if they were obvious outliers in more than one measure; however, particular importance was attached to the bisulfite conversion measures, and all outliers on this measure were removed. Specifically, any array with bisulfite conversion control intensities lower than 2000 was automatically removed even if other QC measures indicated no issues. In total, 109 samples were removed for QC reasons, leaving 275 samples for final analysis. The data were normalized using the functional normalization method implemented in the R minfi package. This method was tested and recommended by Fortin et al. [6,32] as appropriate for datasets with global methylation changes, such as those encountered in cancer-related data, and was thereby deemed the most appropriate method.

Sample cell-type heterogeneity can cause patterns of methylation unrelated to clinical diagnosis phenotype and thereby increase type I errors if it is not addressed. The EpiDish R package [7,8,33,34] estimates cell-type fractions from the data using key indicator probes. Epithelial cell-type fraction was the main variable used from this result. The quantities of fibroblast cells were always very small (mean content value = 0.005), meaning that the immune cell (IC) type composition percentage was effectively an inverse mirror of epithelial cell content; if epithelial cell content was adjusted for in the analysis, IC content was therefore also adjusted. Additionally, we calculated the epigenetic 'Horvath age' values from the data and plotted them against the reported age values of the participants [35].

Batch effect adjustment
We observed strong batch effects in the normalized data. Singular value decomposition (SVD) of the data showed that the experimental batch variables, namely the array rows and the plates, were both strongly associated with higher-order SVD components (the second and third components). We attempted to remove these effects with the ComBat method (from the R sva package) [36]. However, calculation of SVD components after ComBat had been applied still showed strong batch effects, and we considered that a more stringent method of batch removal was necessary. A linear model based on the array rows was fitted to every probe on the array and the residuals were taken; a sketch is given below. There was no detectable presence of any batch variable after this had been performed (none of the 20 highest SVD components had any batch-associated statistical significance). Encouragingly, the highest-order SVD component of the batch-fitted residuals was now significant with the clinical diagnosis group (p < 1 × 10⁻⁴). All subsequent calculations worked with the residuals of a batch-fitted linear model. Figure 3A & B show heat map plots of the batch variables before and after complete removal of both row batch and experiment batch; the higher-order components retain strong significance with the disease and other covariates after this adjustment.
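A minimal R sketch of this residualization, assuming `M` is a probes x samples matrix of normalized M-values and `array_row` is a per-sample factor; the SVD check mirrors the description above.

```r
## Sketch of the row-batch removal: fit one linear model per probe with the
## array row as predictor and keep the residuals. Assumed objects: `M`, a
## probes x samples matrix of M-values; `array_row`, a factor of length ncol(M).
fit     <- lm(t(M) ~ array_row)     # multivariate lm: one fit per probe
resid_M <- t(residuals(fit))        # batch-free residuals, probes x samples

## Check: the leading SVD components should no longer associate with batch.
sv <- svd(scale(t(resid_M), center = TRUE, scale = FALSE))
anova(lm(sv$u[, 1] ~ array_row))    # expect no significant batch association
```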
Weighted gene coexpression network analysis
We performed the main analysis using weighted gene coexpression network analysis (WGCNA). This technique was originally developed for the analysis of gene expression data but has seen more recent application to methylation data [37]. The aim of the technique is to find modules of genes that behave in a similar manner across experimental replicates. Although methylation arrays survey both coding and noncoding regions, we reduced the analysis dataset to the gene level as a first point of analysis. There are practical and theoretical reasons why this was done. First, the analysis of methylation networks is in a state of relative infancy compared with that of gene networks, and their relevance and importance are not as well established; we would therefore have more confidence in the validity of any conclusions and their connection to the disease if we worked at the gene level. Second, and perhaps most importantly, there are genuine concerns about the reproducibility of methylation arrays in general [9,38], so by collapsing loci to the level of the gene and selecting the locus with the most stable behavior, the chances of reproducibility were greatly increased.

Assessing robustness of the data
As well as the technical replication issues mentioned above, another issue that is potentially problematic for any analysis of samples drawn from a subpopulation is how robust the data are to small changes in cohort. This is, arguably, an issue that does not receive enough attention, yet many published analyses fail to show reproducibility in the wider population [39]. It is obviously not possible to say in advance whether an analysis is robust in the outside world, but some assessment of the internal robustness of an analysis can be performed by considering what happens if small changes are made to the composition of the input data. If certain genes or gene modules, for example, disappear when the analysis is repeated after the removal of a very small number of samples, this is likely to be an indication that the analysis was too finely fitted to the particular cohort being analyzed. We therefore modified the standard WGCNA pipeline to address the issue of robustness by repeating the analysis several hundred times with minor modifications to the discovery dataset, and used clustering methods to summarize the results.

Mapping methylation data to single gene values
Annotation for the methylation loci was provided by the same R package used for QC: IlluminaHumanMethylationEPICanno.ilm10b4.hg19. We selected methylation loci that had a GenBank mapping, and all other probes were discarded. This reduced the 169,185 methylation loci to 13,910 genes. Usually, several probes map to a single gene, and we reduced the data further by using the collapseRows function in the WGCNA R package, which chooses a representative probe for each gene [40]. This is a necessary procedure for an analysis in WGCNA, as genes cannot have multiple module memberships. The methodology used for this selection was the MaxMean method (the function default), which goes through all the probes of a single gene and selects the one with the highest mean across all samples. Selection of the WGCNA parameters, specifically the scale independence value, was performed using the entire dataset. The WGCNA methodology raises correlation coefficients to a power in order to demarcate module boundaries more clearly and to bring the network closer to scale-free topology, and it is necessary to determine a suitable power in advance. This was performed and a value of 6 was selected; this value was maintained through all repeated WGCNA calculations. We made the assumption that the removal of five individuals from a cohort of over 100 would not radically alter the network topology, so every constructed network used a soft-thresholding power of 6. A succession of networks was constructed by removing just five randomly selected samples at a time; the number of possible permutations is such that the probability of removing the same five samples twice over a few hundred repetitions is almost zero. We constructed the WGCNA modules, determined the module membership of all the genes and noted this (it was the basis of the module clustering performed later on). The module eigengene was determined, and each module was statistically tested for significance via logistic regression against the eigengene. The logistic regression model was fitted to clinical diagnosis but also to age, sex and epithelial cell content, meaning that a module had to be significant for the disease once these covariates had been accounted for. A sketch of one resampling round is given below.
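The sketch below outlines the collapse step and one resampling round in R under stated assumptions: `beta_genes` (probes x samples), `gene_of_probe` and `probe_id` for the collapse step, and a `pheno` data frame with diagnosis, age, sex and epithelial fraction. It is illustrative, not the study code.

```r
library(WGCNA)

## Collapse multiple probes per gene to one representative probe (MaxMean is
## the collapseRows default). Assumed: `beta_genes` is probes x samples;
## `gene_of_probe` and `probe_id` are per-probe annotation vectors.
cr <- collapseRows(datET = beta_genes, rowGroup = gene_of_probe,
                   rowID = probe_id, method = "MaxMean")
datExpr <- t(cr$datETcollapsed)          # WGCNA expects samples x genes

## One resampling round: drop five random samples (~3% of the discovery
## cohort), rebuild the network at power 6 and test each module eigengene by
## logistic regression with age, sex and epithelial fraction as covariates.
run_once <- function() {
  keep <- sample(nrow(datExpr), nrow(datExpr) - 5)
  net  <- blockwiseModules(datExpr[keep, ], power = 6, numericLabels = TRUE)
  ME   <- moduleEigengenes(datExpr[keep, ], net$colors)$eigengenes
  p    <- sapply(ME, function(e)
    coef(summary(glm(pheno$cancer[keep] ~ e + pheno$age[keep] +
                       pheno$sex[keep] + pheno$epithelial[keep],
                     family = binomial)))["e", 4])   # Pr(>|z|) for the eigengene
  list(modules = net$colors, p = p)
}
runs <- replicate(2500, run_once(), simplify = FALSE)
```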
Determination of important genes in modules
We took the results of the module adjacency function, normalized them to an upper bound of one to make them independent of module size, and took the reciprocal so that the hub gene had a value of 1 and less significant genes had larger values. We then multiplied this normalized adjacency value by the p-value of the entire module, which was generated by the logistic regression testing of the module eigengene. The hub gene would retain the p-value of the module, but the modified adjacency function would inflate the values of other genes. This is no longer a p-value but is instead a very useful numerical indicator of whether a gene belongs to a significant network and whether its role in that network is important. This metric value was noted for every gene, across every instance of network construction. Genes that failed to gain an assignment to a module were given a default value of 200, which was similar to the metric value of the least important gene in the least significant network. Averages were then taken. If a gene routinely scored low values of this modified p-value metric across hundreds of instances of classification, we considered it to be an important gene that is robustly associated with disease.

Clustering of modules
Selecting the best genes from each module after creating hundreds of networks is less straightforward because the number of modules in each network is not constant. Figure 4 shows a histogram of the distributions of module membership for the 5000 most significant genes after all rounds of WGCNA module creation. What was unexpected was the extent of the variation; a very small change in cohort composition of just five samples (3% of the entire discovery cohort) can make WGCNA change from creating two modules to creating 12 using the exact same parameters. On each occasion of module generation, the modules are created and numbered in order of size; some genes often appear together in the same module, but that module is numbered differently at different instances of WGCNA module creation. These changes and similarities can be tracked by performing k-means clustering based on the genes' module assignments over all rounds of network construction. If a group of genes consistently forms a meaningful module, but that module is assigned a different identity each time, clustering will retain the broader composition of that module. It is also possible to see instances where larger modules were broken up by adjusting the parameters of the clustering.

Generation of candidate genes from the modules
We decided to rank the modules based on the best (i.e., lowest) scores of the genes in the module. Module clusters with the lowest-scoring genes are assumed to be the best modules, which means that module clusters can be ranked. The best genes from every module cluster could have been selected, but from module testing we knew that some modules repeatedly showed no significance with disease. We implemented a simple selection method where we selected N genes (where N is a predetermined number) from the best module, (N − 1) genes from the second-best module, and so on until N = 1. We therefore have several ways of creating gene lists, based on the k-value of the clustering and the initial N value describing the number of genes selected from the most significant module. We investigated all possible classifiers from k = 2 to 20 and N = 2 to k. A sketch of the scoring and clustering steps is given below.
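A short R sketch of the gene score and the module-tracking clustering, continuing from the `runs` object in the previous sketch. Two points are our assumptions, not the study's: that `run_once()` was extended to also return a per-gene `score` vector, and that the integer module labels are used directly as k-means coordinates.

```r
## Per-module gene score: adjacency to the module eigengene, normalized so the
## hub gene is 1, inverted, then multiplied by the module's logistic-regression
## p-value. The hub keeps the module p-value; other genes are inflated.
score_genes <- function(adj, module_p) module_p / (adj / max(adj))

## Collect every gene's metric over all runs; genes unassigned in a run get the
## default penalty of 200, then average across runs.
## (Assumes run_once() was extended to return r$score via score_genes.)
score_mat <- sapply(runs, function(r) {
  s <- rep(200, length(all_genes))
  names(s) <- all_genes
  s[names(r$score)] <- r$score
  s
})
mean_score <- rowMeans(score_mat)

## Track module identity across runs (numericLabels = TRUE gives integers) and
## cluster the assignment patterns with k-means; k varied from 2 to 20.
membership <- sapply(runs, function(r) r$modules[all_genes])
km <- kmeans(membership, centers = 10)
```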
Selecting the best gene list
Testing of the data was performed using the R caret package [10,41], implementing a random forest classifier. The candidate probes were used to fit a classification model on the discovery dataset, together with the covariate information (age, sex and epithelial cell content); a sketch of the model fit appears at the end of this section. Receiver operating characteristic (ROC) curves and area under the curve (AUC) values were determined for all three datasets. For the training data, a tenfold cross-validation model was produced, while the test and independent validation datasets were assessed using the model established from the discovery data.

By proposing a relatively large number of candidate gene lists, there is a risk that the best-performing classifier could be specific to the dataset in question. We therefore calculated the average AUC across the three datasets for each classifier, as well as its variance, ranked the classifiers on both, added the ranks together and selected the classifier with the lowest sum. This means that we chose a classifier that works well but that also works consistently across the three datasets, making the discovery of the classifier less likely to be a chance finding.

Given the relatively high values for the ROC, it is possible to bias the model toward finding cases of higher interest, such as cancer. We therefore added a wrapper cost model to bias the classification [42]. We aimed for a minimum sensitivity of 0.9 for detecting cancer, and found that weighting cancer cases as 20-times more important than non-cancer cases (i.e., a weight of 20) ensured this threshold. Adding a wrapper to a model can change its overall accuracy, and we took care to minimize such negative effects by manually setting the weights.
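A minimal caret sketch of this fit; `train_df`/`test_df`, the `diagnosis` factor and its 'Cancer' level are assumed placeholder names.

```r
library(caret)
library(pROC)

## Sketch of the random-forest fit with tenfold cross-validation. Assumed:
## `train_df`/`test_df` contain the candidate gene probes plus age, sex and
## epithelial cell content, and a `diagnosis` factor with levels Cancer/Control.
ctrl <- trainControl(method = "cv", number = 10,
                     classProbs = TRUE, summaryFunction = twoClassSummary)
fit <- train(diagnosis ~ ., data = train_df, method = "rf",
             metric = "ROC", trControl = ctrl)

## Apply the discovery-trained model unchanged to the held-out sets.
probs <- predict(fit, newdata = test_df, type = "prob")
auc(roc(test_df$diagnosis, probs$Cancer))   # held-out AUC
```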
Participants
The cohort-wide male-to-female ratio was 2.52, which reflects the male preponderance of the disease; every discovery, testing and validation group in our cohort had more male subjects, in both control and cancer groups. Participants completed a questionnaire prior to sample collection, which included the recording of age and sex information. There were no missing data on age or sex.

Horvath age calculation
We calculated the epigenetic Horvath age values [11,43] from the data and plotted them against the reported age values of the participants. We observed no significant deviations between them (Supplementary Figure 1).

Assessing reproducibility in the data using technical replicates
We tested ten technical replicate samples on different array plates, which were collected at different times. We calculated the mean absolute difference in β-values across all probes that had passed QC and compared each sample with its corresponding replicate. Only four of the ten duplicate pairs replicated well (mean absolute difference <0.01). We also calculated a composite array for data comparison purposes, derived from the mean absolute difference calculated across all other samples in the experiment. There was a tendency for arrays with a mean absolute difference greater than that of the composite array to also replicate badly. We observed that the pairs that replicated well had an average β-KS distance of between 0.3 and 0.6 and that the worst-performing pairs contained single samples that exceeded 0.6. We could not use both members of a duplicate pair in the study and needed to select an individual one; we therefore used the array of each pair that had the lower KS distance. We did not apply discarding of arrays based on the KS distance in general.

Creation of classification outcome groups
There were three classes of non-disease control (74 HVs, 42 subjects with no positive diagnosis of abnormalities after endoscopy and 52 samples from people with nondysplastic Barrett's esophagus [NDBE]). There were 88 individuals in the cancer group, comprising 28 people with IMC and 60 with invasive esophageal AdCa, which was defined as disease extending to the submucosa or beyond [9]. All the cancer patients were newly diagnosed and none had yet received any cancer therapy. The nine HGD samples represented a potential categorical ambiguity; we therefore decided to omit them during the discovery stage. This accords with the clinical finding that approximately 40% of patients with HGD in Barrett's esophagus progress to AdCa within 5 years [4].

Creation of analysis cohorts
The entire cohort was divided into three subgroups: one large group for classifier discovery, a testing group and a validation group. The discovery cohort comprised 158 people (64% of total available participant data), and the testing and validation cohorts contained 52 and 46 people, respectively (20% and 16% of participant data). Patient samples were stratified by disease status and then selected randomly, to ensure similar ratios in all the training and testing groups [41].

Table 1 shows the mean ages of cancer and non-cancer subjects, as well as the mean values for the other factors used in matching (BMI, PPI intake, cigarette smoking, alcohol consumption and heartburn incidence). These latter variables represent accumulated measures. The pack-year measure for accumulated smoking is a common measure of cigarette consumption [44]. We produced analogous measures for alcohol consumption (drink-years), PPI intake (pill-years) and heartburn (incidence-years) based on frequency and duration information reported by individuals.

We aimed to match participants in cancer and non-cancer groups by a combination of all these factors. This was considered important because, although there was broad agreement of reported age with epigenetic age (see Supplementary Figure 1), there is still some variation. It is known that lifestyle factors such as cigarette and alcohol consumption can accelerate epigenetic aging (and have their own unique effects that need to be matched [14,15,45,46]). We therefore took all these factors into account in producing matched pairs, so that an older cancer subject might at times be matched with a younger person if, for example, their combination of lifestyle factors, reported symptoms and medication (heartburn, PPI intake) would potentially make them more epigenetically similar. When the broader situation of lifestyle, symptoms and medication is considered, the variation in our cohort is generally modest. To assess this, we selected 100,000 random pairs of individuals in the cohort to see how badly they would mismatch, using the same distance-based matching that was used to match subjects. We found an average mismatch distance of 2.95, compared with an average distance of 0.352 for our matched pairs; the sketch below illustrates this check.
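This reuses the weighted distance matrix `d` and the `matched` list from the matching sketch earlier; again it is a schematic, not the study code.

```r
## Sketch of the random-pair mismatch check, reusing the weighted distance
## matrix `d` and the `matched` clusters from the matching sketch above.
set.seed(1)
rand_pairs <- t(replicate(100000, sample(nrow(d), 2)))   # 100,000 random pairs
mean(d[rand_pairs])                 # reported average mismatch: ~2.95

## Compare with the case-to-control distances inside the matched clusters.
matched_idx <- do.call(rbind, lapply(matched, function(cl) cbind(cl[1], cl[-1])))
mean(d[matched_idx])                # matched pairs averaged ~0.352
```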
Cancer subjects were carefully matched by all variables, including age, against the three subclasses of control (HV, no positive diagnosis and nondysplastic Barrett's esophagus). During matching, AdCa and IMC individuals were matched together against the three subclasses of control, with HGD matched last. Most of the individuals in the AdCa and IMC groups of five had an age range of less than 10 years from each other. There are, however, additional controls who tended to be younger on average, as can be observed in the mean ages in Table 1. These additional younger people in the controls had originally been matched against the HGD group, which was younger on average than the AdCa and IMC groups, and there were fewer older controls available to match against them because the AdCa and IMC subjects had been matched first. We did not remove these additional younger controls because doing so would represent a substantial loss of statistical power. These additional younger people tended to possess above-average values of BMI and of cigarette, alcohol and PPI consumption, as well as greater incidences of reported heartburn for their age, reflecting the values in the HGD group they were matched against. It is likely, therefore, that epigenetic ageing effects have been accelerated in these younger people, which will offer some mitigation against the age differences between cancer and non-cancer groups; however, we recognize that it is imperative that any subsequently fitted statistical models include and adjust for age as a covariate, so that probe values are unable to use it as a proxy for disease.

During initial data processing, we investigated methods for removing the batch effects of the arrays, which principal component analysis showed to be strong, but found that many conventional methods of correction still left observable batch effects in the data (Figure 3). Taking residuals from a batch-fitted linear model, based on each row of the array, removed any detectable traces of batch effect (confirmed with principal component analysis), both within each array plate and between all the plates. All subsequent analyses were performed on batch-corrected data produced in this manner.

Methylation probes were mapped to genes, and a representative probe was chosen for each gene based on the probe with the largest mean across all samples. WGCNA was performed repeatedly on the discovery cohort with slight variations in the cohort each time, achieved by removing five randomly selected samples (3% of the cohort); this was repeated 2500 times. This was done to discover those probes that were repeatedly shown to be significant despite this small variation, thereby reducing the possibility of type I errors. WGCNA generates modules of genes; typically, between two and three modules were generated, with a maximum of 12 modules created in one instance. The histogram of the distribution of all maximum module values generated is shown in Figure 4.
WGCNA module membership was noted for every gene over every instance of WGCNA. In order to determine which genes were repeatedly assigned to the same module, the module membership information was clustered using k-means clustering. This is necessary because the WGCNA module numbering for individual genes might change in response to small variations in the cohort. There are several reasons for this: larger modules might sometimes be broken apart, for example, or some genes might not qualify for any module on occasion. The clustering is able to identify groups of genes that exhibited similar module-assignment behavior together. It is also possible to see instances where larger modules were broken up by adjusting the parameters of the clustering.

The classifier models included all the covariates used for the original matching of patients with controls. We had expected that these would disappear from the final algorithm, but this was not the case: age and sex remained important independent features. The performance of all classifiers was ranked according to the mean AUC across the three datasets and by the variance between the training, testing and validation datasets. We therefore defined the most successful classifier as the one with the best combination of high-ranking mean and low-ranking variance. The AUC was used as the classifier performance metric, and the results of the single best-performing classifier are displayed in Table 2, although multiple models with parameters different from the defaults yield similar results. For example, we also created random forest prediction models with values for the mtry and ntree parameters within caret other than the defaults, but noted no significant differences. The ROC curve is shown in Figure 5. For the training set, the diagnostic accuracy was 0.73 (95% CI: 0.69-0.75), with an AUC also of 0.73 (95% CI: 0.69-0.75). The findings were very similar in the testing set, with an accuracy of 0.69 (95% CI: 0.66-0.72) and AUC of 0.72 (95% CI: 0.68-0.74), and in the validation set, with an accuracy of 0.65 (95% CI: 0.61-0.70) and AUC of 0.73 (95% CI: 0.68-0.75). Using the base model, the training set had a specificity of 0.94, and this increased to 0.97 in the testing and independent validation sets. To test the effectiveness of our models in finding all cases of cancer, we applied the cost-sensitive wrapper algorithm with a cost for cancer instances 20-times that of non-cancer ones. This yielded a sensitivity of 1.0 within the discovery dataset and values of 0.94 and 0.91 within the test and validation datasets, respectively, while the specificity dropped to between 0.10 and 0.31. Note that the ROC of these results is slightly lower than for the nonweighted versions, as the wrapper algorithm used to generate them is biased toward finding the cancer cases. There was no difference in the discriminatory ability of the epigenetic panel between cancer versus HVs and cancer versus benign disease.

To assess the individual effects of the methylation data and the covariate data, we performed separate classification using only probes and only the covariates (Table 3); a comparison sketch is given below. In each instance, classification was above the level of random assignment. We also investigated using probe and covariate information without epithelial cell content. ROC curves for these different analyses are shown in Supplementary Figure 2. In all cases, the epigenetic probes performed better than the covariates alone, as expected, but the combination performed better still (Table 3).
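A sketch of the probes-only versus covariates-only comparison, reusing `ctrl` from the earlier caret sketch; `probe_cols` and the covariate column names are assumed placeholders.

```r
## Sketch of the ablation comparison, reusing `ctrl` from the earlier caret
## sketch; `probe_cols` and the covariate column names are assumed placeholders.
auc_for <- function(cols) {
  f <- train(x = train_df[, cols, drop = FALSE], y = train_df$diagnosis,
             method = "rf", metric = "ROC", trControl = ctrl)
  p <- predict(f, newdata = test_df[, cols, drop = FALSE], type = "prob")
  as.numeric(auc(roc(test_df$diagnosis, p$Cancer)))
}
auc_for(probe_cols)                                   # probes only
auc_for(c("age", "sex"))                              # covariates only
auc_for(c(probe_cols, "age", "sex", "epithelial"))    # combined model
```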
Intramucosal & AdCa samples
We looked at the classification results for a range of the best-performing classifiers using the base and cost-sensitive models derived from the discovery dataset (Table 2). In the base model, sensitivity was very low, although better for IMC (0.29-0.55) than for the invasive cancer group (0.00-0.18). In the cost-sensitive model, sensitivity was very high for both IMC (0.78-1.00) and invasive cancer (0.86-1.00).

HGD samples
Classification of the nine HGD samples was performed using several competing candidate classifiers from the discovery dataset with a random forest classifier. Using the base model, none of the HGD samples was classified as cancer. Once again, as we raised the cost of missing cancer cases, the ability to find HGD cases changed: at a cost of 20, the cost-based model found 100% of the HGD cases as well. The progression between these two cost extremes is shown in Supplementary Table 1.

Discussion
This is the first paper to demonstrate the possibility of a clinically viable tool to detect patients with esophageal AdCa based purely on the epigenetic profile contained in a saliva sample together with age and sex information. In three independent datasets matched for patient characteristics, an algorithm was generated, tested and validated, and it had an overall accuracy for identifying esophageal AdCa of 69-72% based on the ROC. Using a cost-sensitive wrapper algorithm, it is possible to detect 86-100% of all cancer cases if the test is aimed at a symptomatic patient group where sensitivity is the key variable. It may also be possible to use this same test for screening, as the base model offers a specificity of 94-97%, with a sensitivity for detecting disease of 11-28%. These figures are very similar to those of the test used in the UK bowel cancer screening program [47]. Some of the seven probes that we found have been individually linked to other cancers, but ontology analysis of the gene panel showed that these seven loci do not offer any statistical evidence of association with any other cancers. We have not specified the individual probes here because, while the ones chosen are representative of particular genes, others could be substituted that would work almost as well. Furthermore, the small number does not permit any meaningful insight into the etiopathogenesis of the cancer itself.

Our saliva-based epigenetic algorithm works particularly well in patients with early disease and in those with the premalignant changes of HGD, where early intervention can dramatically reduce the risk of dying [48]. Using the cost-sensitive model, the classification of both IMC and HGD samples was 78-100%. This indicates that early disease is detected almost as successfully as invasive cancer, a point we plan to study further in the future. It is also very important because the crucial cases to detect are the early cancers that are potentially curable; no reliable noninvasive test currently exists for these important groups.

The case-control matching combined a variety of covariates, which led to a certain disparity of age between case and control groups. This meant that, for example, two people of different ages could be matched if the older case sample was matched with someone with less healthy characteristics (smoking, alcohol consumption) in the control group. In further studies it seems reasonable to match subjects for similar lifestyle properties, but perhaps age matching should be given a higher priority.
A practical diagnostic tool would involve a fully trained classifier, such as the random forest classifier used here, based on the discovery loci. In theory, batch correction would not be necessary, as only a single measurement is involved. However, because we used the fitted residuals of the batch variable, the average values of the whole batch-adjusted array were effectively zero, and a baseline adjustment would therefore need to be applied.

Diagnostic and prognostic success has been achieved using tissue-sample methylation. Li et al. were able to produce a classifier that could distinguish Barrett's esophagus, AdCa and squamous cell carcinoma samples from normal tissue with a high level of success (AUC = 0.992) [16,49]. Biomarkers for esophageal cancer have been discovered both in the tissue of origin and in blood. Salta et al. discovered individual biomarkers in tissue, with the best-performing of them (ZNF569) able to differentiate cancer samples from non-cancer samples with an accuracy of 79% (AUC = 0.85) [50]. Qin et al. found a panel of five markers present in blood plasma with an AUC of 0.93 [17,51]. Both of these studies combined AdCa and squamous-cell types in their cases, which have considerable histological and biomolecular differences from each other [52].

Methylation-based biomarkers in saliva specifically have achieved comparable levels of accuracy in discriminating samples of other cancers from controls. For example, Lim et al. evaluated a panel of five tumor-suppressor genes isolated from saliva to discriminate human head and neck cancers from controls in a cohort of human papillomavirus (HPV)-negative and -positive individuals (AUC for HPV-negative = 0.86 and HPV-positive = 0.80) [3,53].

In our study, the WGCNA methodology demonstrated its usefulness in determining a set of candidate probes very efficiently. It appears to have certain inherent advantages over generating candidate probes from a two-group analysis. First, collapsing the data to a single representative probe meant that an individual probe had to demonstrate a certain reliability against what are effectively competing pseudo-replicates. Second, the distinctness of the modules means that probes from different modules exhibit more distinct behavior. Often, the genes at the top of a two-group analysis are probes from the same entity (gene or CpG island). This is desirable for bolstering evidence, but for diagnostic purposes, if a probe is reliable, there is no need to replicate it; what is desirable is to survey the full range of inherent biological variation in order to ascertain disease status. WGCNA, with its distinct modules, is arguably in a better position to do this. It would be possible to ascertain more biologically variant probes from a two-group gene list, but such post-processing procedures would be equivalent to what WGCNA is doing in the first place.

The covariate data had limited predictive value, due perhaps to the age and sex matching. In a clinical setting, age and sex information will be freely available.

This initial analysis has been restricted to the gene level, but further scope for this study could be achieved by working at the level of individual methylation loci instead. This would, however, require special consideration not only of the much increased demands on computational facilities but also of the vast increase in variables, which could make the scope for erroneous (i.e., type I) discoveries considerably greater.
Limitations of the study
These promising results suggest that information relating to esophageal cancer status can be ascertained from a saliva sample. This study is an initial one that has deliberately focused on a set of probes associated with specific genes, collapsed to a single representative probe. For now, we have disregarded the rest of the data owing to concerns about reproducibility, but it could be possible to improve the study further by incorporating the considerable amount of data present in these other regions. This would be a considerably larger computational task to implement, and the manifold increase in candidate probes could also generate an increased number of type I errors, although these would be mitigated by including a larger training dataset and more independent validation datasets.

The weighting of all the different covariates for case-control matching meant that certain factors, such as age, were not as closely matched as they could have been, being matched instead on certain lifestyle similarities. Taking so many covariates into account is, perhaps, overambitious given the limited number of available subjects in the study. It might be better in follow-up studies to match factors such as age more closely.

Array-based studies (and all studies that survey the entire genome or methylome) are at great risk of statistical errors because the number of variables is far in excess of the number of replicates. We have worked to reduce this risk by operating at the level of the gene. In any investigative study of this nature, with limited space for sample interrogation, it is necessary to greatly increase the number of disease cases in the discovery set to a ratio that is significantly higher than in the population. Based on the performance of our test, it is likely to be used as a diagnostic triage tool for symptomatic patients, and in such instances the number of disease cases and people with esophageal abnormalities will also be inflated.

We do not yet know whether the test is specific to esophageal AdCa although, given the number of possible classifiers we have generated, this is likely to be the case for at least some of them. Furthermore, an array-based test is not currently viable in a clinical setting, as it is too expensive. However, it should be possible to reduce costs dramatically if the test can be reliably reproduced using the small number of epigenetic markers with a cheaper technology, such as amplicon-based sequencing or the other newer high-throughput testing platforms that are becoming available.
Diagnostic implications
The development of diagnostic triaging tests based on saliva could produce substantial benefits for healthcare providers and patients. Most of the methods currently required for the accurate diagnosis of esophageal cancer involve procedures that are invasive, uncomfortable, anxiety-inducing, time-consuming, costly and limited by the availability of trained personnel. This in turn has the potential to make such procedures less likely to be recommended by the clinician, or agreed to by the patient, unless symptoms are particularly self-evident and persistent [18,54]. This creates a situation where individuals with advanced symptoms are prioritized for diagnosis. This is understandable, but it happens at the expense of individuals who might accrue substantial, perhaps even greater, benefits from a diagnostic appointment when the disease is at an earlier stage. A simple diagnostic triaging test would therefore be valuable. The Cytosponge is currently being examined in this role, but it too is invasive, as patients have to swallow a capsule on a string and then have it removed [55,56]. Our simple saliva test could be used to triage patients at risk for cancer who are being considered for endoscopy. Although this test has an accuracy of only around 72%, the cutoff can be set to achieve a sensitivity of 98%, which would currently give a specificity of 10%. If sensitivity were lowered to 90%, specificity would reach more than 30%, and the test has not yet been optimized. These results could lead to a 10-30% reduction in endoscopies performed in patients referred with suspected esophageal cancer. In the UK alone, 1 million endoscopies are undertaken annually, and around 50% of these are to exclude cancer [57]. Worldwide, the impact of a simple saliva-based screening test would therefore be enormous, particularly as the current situation, in which diagnostics are often implemented only when it is too late to help, will only worsen in the short term as backlogs in many healthcare systems reach unprecedented levels [57-59].

Another issue that saliva-based diagnostic tests address, and which is arguably not given enough consideration at present, is that saliva collection does not require needles. Surveys have indicated that around 10% of the population have a significant fear of needles [19,60] and that this fear actually causes people to avoid preventative care [61].

Any rapid, noninvasive, easy-to-administer test that can provide intelligent prioritization of limited human and material healthcare resources therefore has the potential to save lives. Saliva-based epigenetics has shown exciting potential, and we consider that bringing this technology to a clinical diagnostic setting would not just be useful but is actually an urgent necessity to alleviate some very pressing problems.

Conclusion
This study has demonstrated the potential of saliva as a diagnostic material whose possibilities are just starting to be explored. Using gene-based methylation information, we have shown that there is information about disease status in the methylome. Further incorporation of other methylation data that were not used in this study has the potential to improve prediction accuracy, but due attention will need to be paid to reproducibility.
Figure 3. Heat map plot of p-values of the first ten principal component analysis components (A) before and (B) after plate-fitted residual correction for experiment batch, array row, age, sex, disease diagnosis and epithelial cell content. The components were statistically tested against experiment, array plate row, age, sex, disease status and epithelial cell content. The plate-fitted residuals show p = 1.00. Blue = low p-value; red = high p-value.

Figure 4. Module membership after repeated classification with modification of five samples (3%) of the discovery cohort.

Figure 5. Receiver operating characteristic curve of the most successful classifiers for the discovery set, testing set and validation dataset.

Table 1. Risk stratification of patients and controls in the discovery, testing and independent validation datasets. Column groups: discovery (cancer n = 51; non-cancer n = 107), testing (cancer n = 18; non-cancer n = 34) and validation (cancer n = 19; non-cancer n = 27). Having shown that the groups had similar epigenetic profiles, IMC and invasive cancer were clustered together, as were HVs, patients who had undergone normal endoscopy and had NPD, and patients with NDBE. All covariates are shown together with the mean values and standard deviations in brackets for each group. HV: Healthy volunteer; IMC: Intramucosal esophageal adenocarcinoma; NDBE: Nondysplastic Barrett's esophagus; NPD: No positive diagnosis; PPI: Proton pump inhibitor; SD: Standard deviation.

Table 2. Area under the curve for the training set and the two holdout sets. All results are mean (95% CI). Shows tenfold cross-validation for the base model and after application of a cost-sensitive wrapper algorithm, which imposes a high cost for missing cancer cases. AdCa: Adenocarcinoma; AUC: Area under the curve; IMC: Intramucosal esophageal adenocarcinoma.

Table 3. Area under the curve using only probes and only the covariates of age and sex. All results are mean (95% CI). Table covers the training set (tenfold cross-validation) and the two holdout sets. AUC: Area under the curve.
On the Dilemma and the Paths of Management Accounting's Innovation and Development from the Perspective of Informatization 2.0

With the rapid development of China's economy and society, the scale of enterprises across industries is constantly expanding. As a booster of enterprise quality and efficiency, the upgrading of management accounting (MA) has become an inevitable trend of enterprise innovation and development in the informatization 2.0 era. Based on historical analysis and literature research on MA, this paper investigates the current situation of MA and explores the problems of MA application in China. It concludes that several difficulties hamper MA application, such as a weak informatization foundation, neglect by enterprise leaders and the low quality of accounting work. The research shows that the innovation and development of MA is an important support for enterprises seeking to remain competitive in fierce economic competition. The paper then discusses strategic approaches for strengthening the practical application of MA from external and internal perspectives, such as speeding up the transformation of financial accounting, adopting computerization, eliminating information islands and strengthening information security management, so as to provide a guarantee for sustainable enterprise development.

Concept of MA
Management accounting (MA) is a branch of modern accounting that developed from the combination of management and accounting in the 1950s. As an emerging discipline, MA is recognized as an epoch-making milestone in the history of modern accounting, marking modern accounting's entry into a new stage of development. In the late 1970s, China began to systematically introduce the theories and methods of Western MA. Compared with financial accounting, which is based on national accounting standards, MA highlights the management function. MA includes analyses of nonfinancial resources, including manufacturing and sales performance data, and a range of techniques for managing costs [1]. In the social and economic environment, enterprise managers need to set short-term and long-term development goals for the enterprise according to the development trends of the industry. The work of MA includes using management accounting knowledge to analyze enterprise development and market conditions accurately, providing data support for managers' development decisions, and making future business decisions more scientific.

Transformation and Upgrading of MA
Nowadays, the world economy faces a new environment characterized by internationalization, financialization and intellectualization at the macro level. At the micro level, society is transforming from an industrial society into an information society, and the business environment of enterprises has undergone tremendous changes. With the rapid development of cloud computing, big data, the Internet of Things and mobile commerce, enterprise management has broken free of time and space constraints [2]. Since the invention of the digital computer in 1946, human society has experienced two informatization movements: the first was dominated by American information enterprises represented by IOE (IBM, Oracle, EMC), and the second was initiated by Chinese enterprises. The change in industrial rules has prompted Chinese enterprises to "de-IOE", and enterprise management has entered the informatization 2.0 era.
The NDRC and CAC have issued an implementation plan to deeply implement the digital economy strategy and accelerate digital industrialization and industry digitization [3], which encourages enterprises to develop emerging industries, transform toward informatization and improve the level of decision-making. Modern accounting is responsible for forecasting, decision-making, planning, control, assessment, evaluation and other functions. There is more and more room for further improvement of the basic theory, basic methods and practical operation technology of MA. The transformation from financial accounting to management accounting is an important road toward reducing operating costs, avoiding operating risks and improving the competitiveness of enterprises [4]. The application scope of information technology in MA has been greatly broadened: MA can accurately analyze information data, accurately predict market dynamics, provide a decision-making basis for enterprises, and promote the smooth operation of enterprises and the improvement of core competitiveness.

Analysis of the Current Situation of MA
Since the late 1970s, MA has been carried over into China's economic system, constantly improved and playing an irreplaceable role. Some enterprises and organizations have introduced Western MA theories and methods and set up special MA organization systems, cost target profit assessment mechanisms, cost difference analysis and so on, achieving preliminary results in application. As a branch of accounting, MA supports enterprise decision-making around standard cost control, budget control, responsibility accounting, performance appraisal, cost-volume-profit analysis and prediction-based decision-making. It has become an important part of the modern enterprise accounting system and a subsystem of the information system providing management information to enterprise leaders. Some large enterprises have absorbed MA into their top economic organizations, setting up MA institutions involved in decision-making.

Limitations of Discipline Development
The limitations of the MA discipline hinder accountants' enthusiasm for using MA. The first reason is the emphasis on theoretical education at the expense of practical application. Both the academic and non-academic education of MA in our country pay attention to the teaching of theory and methods while ignoring case analysis within enterprises, which leads many accountants to think that MA theory is too difficult to learn and has no application value in reality. Second, quantitative analysis is emphasized while qualitative analysis is ignored. Overemphasizing the former leads to theory becoming divorced from practice; even some simple principles are often demonstrated and expressed through abstruse and complicated higher-mathematics formulas, ignoring the interpretation of the economic significance behind the numbers. Third, the focus is on result calculation while data analysis is ignored. Model application and result calculation are placed first, ignoring the premise analysis of the model application and the data retrieval process of result calculation. Accountants either do not know how to obtain the relevant data or do not know the preconditions of the model. Errors between results and facts often occur, and the results lose their reference value for decision-making.

Difficulty of MA Promotion
MA promotion has met with some difficulties and problems in practice. First, the theoretical system is not perfect.
Both the theories and methods of Western MA have not been completely refined, and a systematic theoretical system has not yet been built in our country, which affects its promotion and application in enterprises. There are few monographs on the theoretical research, and related research lags behind owing to a lack of high-level theoretical researchers. The existing application experience of MA has not been summarized from the perspective of combining theory with practice. Second, the accounting methods of enterprises are also backward. Some enterprises still use traditional manual bookkeeping, and although some have adopted computerization, the degree of application and utilization efficiency of computers is not high; moreover, most computer software may be suitable only for the financial accounting system. Third, the quality of enterprise managers and accounting employees needs improving. The decision-makers of enterprises are not aware of the importance of MA; some entrepreneurs are used to treating financial accounting departments as mere economic accounting units. The perceptions of accounting staff fall behind the times, and their MA knowledge is weak. Survey data show that in some large and medium-sized enterprises, accounting employees with a college-level education or above account for only about 20%, and many employees lack data-analysis ability.

Lag of Information Platform Construction
Many enterprises are hesitant in the face of the informatization 2.0 trend. First, the infrastructure of information platforms is backward. Most manufacturing enterprises need to make continuous efforts in ICT infrastructure, operation and maintenance. Research shows that, under the policy background of vigorously promoting Industry 4.0 in China, 84.9% of manufacturing enterprises are undergoing digital transformation, but the overall success rate of digital transformation is less than 40%. In 2020, when COVID-19 suddenly struck, entrepreneurs began exploring digitalization, trying to shift their offline businesses online so as to resist possible risks. Second, data islands exist. The emergence of data islands lies in the lack of global data thinking in enterprise information construction: the independent servers (desktop, standard server, etc.) of each business system are placed in different departments of the enterprise and operate as isolated parts.

Network Information Security Faultiness
MA information storage and safety have a direct impact on the transformation of modern enterprise accounting work. The vast majority of enterprises do not pay special attention to the storage and security of accounting data. The storage and distribution of data are relatively scattered, and their integrity and security cannot be effectively guaranteed, which has a negative effect on the data integration and analysis of MA. Owing to ineffective data security measures, more and more data stored and transmitted through electronic media suffer the risk of loss or theft, and the illegal actions of network hackers and competitors cause immeasurable risks. At present, specialized software platforms and warning systems for MA information storage and security are still not mature enough.

Enhancing the internal driving force
Modern enterprise systems should be established to enhance the internal driving force of MA.
The prerequisite of MA application is a mature market system and a coordinated mechanism with clear rights and responsibilities and separation of government and enterprise. From the analysis of the external environment of enterprises, the present lack of clarity in enterprise property rights means that government and enterprises are not separated and responsibilities are unclear, so MA goes unused in enterprise management. To clear this bottleneck, it is necessary to establish a modern enterprise system, clarify property-rights relationships and place enterprises in a real market environment.

Increasing the practicability of MA theory
The theoretical systems of the discipline should be improved to increase their practicability. MA theory teaching should be integrated into enterprise practice to enhance its universality. Investigation shows that MA theory can be better implemented, improved and applied with the cooperation of accountants. Because of differences between the external and internal environments of enterprises, management methods vary from enterprise to enterprise, and different approaches should be adopted to improve the operability of MA.

Establishing relevant management institutions and systems
Relevant management institutions and systems should be established to strengthen mandatory application. Owing to the lack of targeted laws and institutions, MA often receives neglect or contempt from enterprise leaders. We should promulgate relevant management policies, laws and regulations, establish appropriate management organizations and implement MA systems. Although China's accounting circles have successively established accounting academic and practical organizations, such as the Accounting Society of China, the Chinese Institute of Certified Public Accountants, the China Cost Research Society and the Research Association of China Chief Financial Officers, there is still no specialized MA academic research organization, and the industry standards of MA have not been effectively established. We should therefore learn from Western experience and establish a special academic group or organization for MA to organize and guide MA research and application.

Constructing enterprise leaders' new awareness of MA
Managers of enterprises should renew their ideas and improve their strategic awareness of future development. Under fierce global competition, if enterprises want to survive and develop, they must make scientific predictions, plans and decisions for their business activities. The lack of MA may not have a great impact on the short-term operation and development of enterprises, but in the long run it will certainly affect and restrict their innovation and development [5]. Enterprise leaders must speed up the renewal and transformation of their ideas and actively integrate MA into internal enterprise management.

Strengthening the construction and application of information platforms
The construction and application of information platforms should be strengthened to eliminate the phenomenon of information islands. Enterprise informatization is a systematic project, and the integration of ICT infrastructure is the core foundation of enterprise information construction.
We should make unified plans based on the strategic development direction of the enterprise and build information platforms and neural networks on modular architecture, edge computing equipment and other ICT infrastructure, taking advantage of cloud computing, big data and other intelligent technologies.

Upgrading MA employees' quality and abilities

MA employees' quality should be improved to enhance their ability to participate in business decision-making. Nowadays the function of accounting should cover the whole process of enterprise management: prediction in advance, control during the event, and assessment afterwards. Accounting employees should not only have profound accounting abilities but also certain management and organizational abilities [6]. Enterprises should actively promote the transformation from financial accounting to management accounting and pay attention to the selection and training of employees with MA knowledge and skills, organizational ability, and a strong sense of innovation.

Developing computerized accounting information systems

Accounting computerization should be used to optimize the ecological environment of MA application. The popularization of MA can not only strengthen basic accounting work such as original records, quota management, measurement and detection, but also realize a leap in accounting information means and ensure the authenticity and timeliness of accounting information [7]. MA software should be developed toward computerization. Computers' fast calculation and analysis abilities not only simplify complicated accounting calculations in daily work, but also improve the calculation accuracy of MA and greatly reduce the daily accounting workload.

Reinforcing accounting information security and data protection

Accounting information storage and data security should be strengthened. Management accounting information includes not only basic financial data but also the data and market information generated in the operation of the enterprise. Enterprises need to comprehensively improve their information storage and security work, formulate scientific and sound systems for information storage and security management, and ensure that there are no online or offline loopholes.

CONCLUSION

The innovation and development of MA is an important support for enterprises to remain invincible in fierce economic competition. The improvement of economic benefits and the stable development of enterprises are inseparable from the reasonable planning of capital construction. The localization of MA theory and practice is a long-term project, and we should unremittingly push on with its research and application. Enterprises must recognize the value advantages of MA, constantly strengthen its functions, and solve the external and internal problems in an all-round way. To sum up, against the background of Informatization 2.0, the innovation and development of MA can contribute to the progress of the governance abilities of modern enterprises and favor their healthy and stable development.
To change the external and internal environment of MA implementation and achieve accurate decision-making, modern enterprises should adopt the strategic paths of updating the concepts of management employees, introducing new and efficient theories and methods, building information platforms, adopting advanced accounting computerization, strengthening information storage and safety, and carrying out high-quality professional training.

AUTHORS' CONTRIBUTIONS

The author conceived and designed the study, collected related data, conducted the research, and wrote the initial and final drafts of this manuscript.
Torsion and accelerating expansion of the universe in quadratic gravitation

Several exact cosmological solutions of a metric-affine theory of gravity with two torsion functions are presented. These solutions give an essentially different explanation from the one given in most previous works of the cause of the accelerating cosmological expansion and the origin of the torsion of the spacetime. They can be divided into two classes. The solutions in the first class define the critical points of a dynamical system and represent an asymptotically stable de Sitter spacetime. The solutions in the second class have exact analytic expressions which have never been found in the literature; the acceleration equation of the universe in general relativity is only a special case of them. These solutions indicate that even in vacuum the spacetime can be endowed with torsion, which means that the torsion of the spacetime has an intrinsic nature and a geometric origin. In these solutions the acceleration of the cosmological expansion is due to either the scalar or the pseudoscalar torsion function; neither a cosmological constant nor dark energy is needed. It is the torsion of the spacetime that causes the accelerating expansion of the universe in vacuum. All the effects of inflation, acceleration and the phase transformation from deceleration to acceleration can be explained by these solutions. Furthermore, the energy and pressure of matter without spin can produce torsion of the spacetime and make the expansion of the universe decelerate as well as accelerate.

I. Introduction

In the last few years the realization that the universe is currently undergoing an accelerated expansion phase, together with the quest for the nature of dark energy, has renewed interest in so-called modified gravity theories (for a review see [1]). In these theories one modifies the laws of gravity so that a late-time accelerated expansion is produced without recourse to a dark energy component, a fact which renders these models very attractive. The simplest family of modified gravity theories is obtained by replacing the Ricci scalar $R$ in the usual Hilbert-Einstein Lagrangian with some function $f(R)$ (for reviews, see [2-5]). There are actually three versions of $f(R)$ gravity: metric $f(R)$ gravity, Palatini $f(R)$ gravity, and metric-affine $f(R)$ gravity. In fact, these are physically different theories rather than manifestations of the same theory in different guises, as the different variational principles yield inequivalent equations of motion (except when the action is the Einstein-Hilbert one and matter is minimally coupled to geometry). In metric $f(R)$ gravity, the action is varied with respect to the metric as usual (for an introduction see [6]). Palatini $f(R)$ gravity comes about from the same action if we decide to treat the connection as an independent quantity; the connection, however, does not enter the matter action. Such an approach was introduced and initially studied by Buchdahl [7] and has attracted a lot of interest as a possible infrared modification of general relativity (for a shorter review of metric and Palatini $f(R)$ gravity see [8]). It has recently been generalized to $f(R)$ theories with non-symmetric connections, i.e. theories that allow for torsion [9], and to $f(R, R_{\mu\nu}R^{\mu\nu})$ theories [10]. In metric-affine $f(R)$ gravity the matter action is allowed to depend also on the connection.
In addition, the connection can include both torsion and non-metricity [11]. It has been shown that even in the most general case of Palatini $f(R)$ gravity, where both torsion and non-metricity are allowed, the connection can still be algebraically eliminated in favor of the metric and the matter fields [12]. Clearly, $f(R)$ actions do not carry enough dynamics to support an independent connection with dynamical degrees of freedom. However, this is not a generic property of generalized Palatini gravity. The addition of the $R_{\mu\nu}R^{\mu\nu}$ term to the Lagrangian radically changes the situation and excites new degrees of freedom in the connection. The connection (or parts of it) becomes dynamical and so cannot be eliminated algebraically. If the connection is torsion-free, the dynamical degrees of freedom reside in the symmetric part of the connection [13]. In generic metric-affine theories the addition of the $R_{\mu\nu}R^{\mu\nu}$ term to the Lagrangian makes the propagating degrees of freedom reside in both the antisymmetric and symmetric parts of the connection; in other words, the dynamical degrees of freedom can be both torsion and non-metricity. In these theories the torsion field plays a fundamental role: it contributes, together with the curvature degrees of freedom, to the dynamics. Propagating torsion is the key feature of these theories [14,15]. Torsion proves to be essential for total angular momentum conservation when intrinsic spin angular momentum is relevant (for reviews on torsion, see [16,17,18]). It has been argued that torsion must be present in a fundamental theory of gravity [19,20]. In teleparallel gravity, for example, torsion plays a central role (for a shorter review see [21]). Recently, models based on modified teleparallel gravity, namely $f(T)$, were presented; in these models the torsion proves to be responsible for the observed acceleration of the universe [22]. The rediscovery of the metric-affine (Palatini) formulation was mainly driven by the interest in finding cosmological scenarios able to explain the current observations. Using the dynamically equivalent scalar-tensor representation of Palatini $f(R)$ gravity, some cosmological models with asymptotically de Sitter behavior have been presented [23]. It was shown that adopting the metric-affine formulation together with an action that includes a term inversely proportional to the scalar curvature, such as the one in [24], can address the problem of the current accelerated expansion as well as the purely metric formalism does [25]. Additionally, it was found that $f(R)$ theories of gravity in the metric-affine formulation do not suffer from the problems of the metric formulation. On the other hand, although cosmology in theories with a Lagrangian including $R^{2}$ and $R_{\mu\nu}R^{\mu\nu}$ terms has been studied in the purely metric formulation (see, for example, [26]) and in the Palatini formulation [27], similar cosmological models in the metric-affine formulation have not been discussed thoroughly in the literature. In particular, the cosmological effect of torsion in metric-affine theories of gravity has not been explored extensively. It has not been known whether dynamical torsion could lead to a de Sitter solution and then be used to explain the observed acceleration of the universe. An answer will be given in this paper.
The metric-affine approach has been used many times over the years to interpret gravity as a gauge theory (see, for example, [28] for a study of $f(R)$ actions and [29] for a thorough review). In recent years it has been used in cosmology to interpret the accelerating expansion of the universe [30,31]. In this approach the structure of the gravitational equations and the physical consequences for cosmology, in particular concerning the accelerating expansion, depend essentially on the form of the Lagrangian. Metric-affine gravity can be divided into different sectors depending on the number of nonvanishing components of the torsion tensor and the order of the differential equations. One sector is the so-called dynamical scalar torsion sector considered in [30]: starting from a Lagrangian consisting of $R^{2}$ and quadratic torsion terms, a cosmological model has been constructed which can contribute an oscillating aspect to the expansion rate of the universe. A different model of acceleration with torsion but without dark matter or dark energy has been presented in [31]. Its Lagrangian has the most general form, including the term linear in the scalar curvature as well as 9 quadratic terms (6 invariants of the curvature tensor and 3 invariants of the torsion tensor, with indefinite parameters). This Lagrangian involves too many terms and indefinite parameters, which makes the field equations complicated and difficult to solve and the role of each term obscure. In order to simplify the field equations, some restrictions on the indefinite parameters have to be imposed; under these restrictions, in particular, all the higher derivatives of the scale factor are excluded from the cosmological equations. The question is whether such a complicated Lagrangian is necessary. Can we use a simpler Lagrangian to construct a model of cosmic acceleration? In fact all the indefinite parameters in the Lagrangian of [31] have been combined into four new ones, which implies that some terms are not necessary and the Lagrangian can be simplified. In this paper we will show that a rather simpler Lagrangian, $R + \alpha R^{2} + \beta R_{\mu\nu}R^{\mu\nu} + \gamma T^{\mu}{}_{\nu\rho}T_{\mu}{}^{\nu\rho}$, is sufficient and necessary to construct a model of cosmic acceleration. The terms $\beta R_{\mu\nu}R^{\mu\nu}$ and $\gamma T^{\mu}{}_{\nu\rho}T_{\mu}{}^{\nu\rho}$ play different roles in the theory: the former determines the structure of the field equations while the latter determines the behavior and the stability of the solutions. The $\beta R_{\mu\nu}R^{\mu\nu}$ term leads to a different structure of the cosmological equations from the one in [30]. In addition to its simplicity, the main advantage of this Lagrangian is that it permits exact or analytic solutions which have not been found in previous works. For any physical theory, finding exact or analytic solutions is an important topic. Next comes the physical interpretation of the solutions thus obtained. Mathematically, de Sitter spacetime, as the maximally symmetric space, is undoubtedly important for any gravity theory. From the observational side, recent studies indicate that both the early universe (inflation) and the late-time universe (cosmic acceleration) can be regarded as fluctuations on a de Sitter background. So de Sitter solutions occupy a pivotal status in gravitational theories, especially in modern cosmology. We will follow the approach of [26,30,31] rather than the one in [27] to avoid getting involved in the debate on the transformation from one frame to another [32].
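For concreteness, the quadratic model singled out above can be written as an action. The display below is a reconstruction assembled from the terms named in the text; the overall normalization $1/2l^{2}$, the measure $d^{4}x\,e$ with $e = \det e^{I}{}_{\mu}$, and the arguments of the matter action $S_{m}$ are assumptions, since the original display did not survive extraction:

```latex
S \;=\; \frac{1}{2l^{2}} \int d^{4}x \; e
  \left( R + \alpha R^{2} + \beta\, R_{\mu\nu} R^{\mu\nu}
         + \gamma\, T^{\mu}{}_{\nu\rho}\, T_{\mu}{}^{\nu\rho} \right)
  \;+\; S_{m}\!\left[ e^{I}{}_{\mu},\, \Gamma^{IJ}{}_{\mu},\, \psi \right] .
```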
We choose the tetrad $e^{I}{}_{\mu}$ and the spin connection $\Gamma^{IJ}{}_{\mu}$ instead of the metric $g_{\mu\nu}$ and the affine connection $\Gamma^{\lambda}{}_{\mu\nu}$ as the dynamical variables, following the gauge theory approach [30]. The descriptions in terms of the variables $(e^{I}{}_{\mu}, \Gamma^{JK}{}_{\nu})$ and $(g_{\mu\nu}, T^{\lambda}{}_{\rho\sigma})$ are equivalent in our approach (for the argument in detail see [16]). We will concentrate on the role of torsion subject to metricity; in this case only the torsion part of the connection is independent of the metric (or tetrad). Because the field equations can be of order higher than second and very difficult to handle, the theory of dynamical systems provides a powerful scheme for investigating the physical behavior of such theories [33] for a wide class of cosmological models. The dynamical system approach has acquired great importance in the investigation of various theories of gravity. Some work has been done in the case of scalar fields in cosmology and for scalar-tensor theories of gravity [34]. This approach has the advantage of offering a relatively simple method to obtain exact solutions (even if these only represent the asymptotic behavior) and a (qualitative) description of the global dynamics of the models; such results are very difficult to obtain by other methods. The application of this method has allowed new insights into higher-order cosmological models and has shown a deep connection between these theories and the cosmic acceleration phenomenon. It makes it possible not only to develop experimental tests of alternative gravity but also to reach a better understanding of the reasons underlying the success of the theory. The dynamical systems approach has been used to investigate universes in theories of gravity [26,30,31]. In contrast with [31] we allow the field equations to contain higher derivatives. We will see that for the Lagrangian of the form $R + \alpha R^{2} + \beta R_{\mu\nu}R^{\mu\nu} + \gamma T^{\mu}{}_{\nu\rho}T_{\mu}{}^{\nu\rho}$ the field equations can be simplified and solved exactly for some choices of $\alpha$, $\beta$ and $\gamma$. Some meaningful consequences can be inferred from the solutions obtained. The accelerating expansion of the universe can be explained without a cosmological constant or dark energy. A vacuum spacetime can possess torsion, which causes the acceleration of the cosmological expansion; the conception of the vacuum as a physical notion is thereby changed essentially. Instead of being a passive receptacle of physical objects and processes, the vacuum assumes dynamical properties as a gravitating object. The torsion of the spacetime can be produced by the energy and pressure of matter, besides its spin. The paper is organized as follows. In section II the gravitational field equations are derived following the approach of [29,30,31]. Applying them to the spatially flat Friedmann-Robertson-Walker metric, a system of cosmological equations is obtained in section III. Since the spin orientation of particles in ordinary matter is random, the macroscopic spacetime average of the spin vanishes. In this case the solutions of the cosmological equations are divided into two classes, each related to only one torsion function, the scalar or the pseudoscalar one. They are obtained in sections IV and V, respectively, using different methods. For the scalar torsion function the equations take the form of a dynamical system, whose asymptotically stable critical points represent the exact de Sitter solutions. For the pseudoscalar torsion function an exact analytic solution of the cosmological equations is presented in section V.
In terms of this solution the acceleration and the phase transformation from decelerating to accelerating expansion of the universe can be explained. All of these solutions indicate that in vacuum the spacetime possesses an intrinsic torsion which does not originate from the spin of matter; it is the torsion that causes the acceleration of the cosmological expansion in vacuum. The torsion of the spacetime can also be produced by the energy and pressure of matter, besides its spin. In section VI we obtain some exact analytic solutions of the cosmological equations in the case $\gamma = 0$. These solutions can only describe the inflation (in the early epoch) or the decelerating expansion (in the later epoch) of the universe; this means that the term $\gamma T^{\mu}{}_{\nu\rho}T_{\mu}{}^{\nu\rho}$ is necessary to construct a model of cosmic acceleration. Section VII is devoted to conclusions.

II. Gravitational field equations

We start from the action given above, where $l = \sqrt{G\hbar/c^{3}}$ is the Planck length, $\alpha$ and $\beta$ are two parameters with the dimension of $l^{2}$, $\gamma$ is a dimensionless parameter, and $\psi$ denotes the matter fields. In contrast with [27], here the connection is not symmetric, i.e. $\Gamma^{\alpha}{}_{\beta\gamma} \neq \Gamma^{\alpha}{}_{\gamma\beta}$, and it appears in the action of matter $S_{m}$; in other words, we are dealing with a metric-affine theory rather than a Palatini one. In the same way as in [29,30,31], the variational principle yields the field equations (2) and (3) for the tetrad $e^{I}{}_{\mu}$ and the spin connection $\Gamma^{IJ}{}_{\mu}$, where $E_{I}{}^{\mu}$ and $s_{IJ}{}^{\mu}$ are the energy-momentum and spin tensors of the matter source, respectively. We use the Greek alphabet ($\mu, \nu, \rho, \ldots = 0, 1, 2, 3$) to denote (holonomic) indices related to spacetime, and the Latin alphabet ($I, J, K, \ldots = 0, 1, 2, 3$) to denote algebraic (anholonomic) indices, which are raised and lowered with the Minkowski metric $\eta_{IJ} = \mathrm{diag}(-1, +1, +1, +1)$. If $\alpha = \beta = \gamma = 0$, these equations become the field equations of Einstein-Cartan-Sciama-Kibble theory; in particular, (2) becomes the Einstein equation. To understand these equations, we translate (2) and (3) into a certain effective Riemannian form, transcribing quantities expressed in terms of the tetrad $e^{I}{}_{\mu}$ and spin connection $\Gamma^{IJ}{}_{\mu}$ into ones expressed in terms of the metric $g_{\mu\nu}$ and torsion $T^{\lambda}{}_{\mu\nu}$ (or contortion $K^{\lambda}{}_{\mu\nu}$), as was done in [30]. It should be noted [16] that the set $(e^{I}{}_{\mu}, \Gamma^{JK}{}_{\nu})$ corresponds to the first-order formalism, while the set $(g_{\mu\nu}, T^{\lambda}{}_{\rho\sigma})$ corresponds to the second-order formalism. The origin of this is that in the latter case the non-torsional part of the affine connection is a function of the metric, while within the gauge approach the variables $(e^{I}{}_{\mu}, \Gamma^{JK}{}_{\nu})$ are completely mutually independent. The descriptions in terms of the two sets of variables are equivalent in our approach (for the argument in detail see [16]). Subject to metricity, the affine connection $\Gamma^{\lambda}{}_{\mu\nu}$ is related to the tetrad $e^{I}{}_{\mu}$ and the spin connection $\Gamma^{IJ}{}_{\mu}$ through its decomposition into the Levi-Civita connection and the contortion $K^{\lambda}{}_{\mu\nu}$. Accordingly, the curvature $R^{\rho}{}_{\sigma\mu\nu}$ can be represented as the Riemann curvature of the Levi-Civita connection plus contortion terms. In view of this, we can identify the actual degrees of freedom of the theory with the (independent) components of the metric $g_{\mu\nu}$ and the tensor $K^{\lambda}{}_{\mu\nu}$.
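The decomposition just referred to is standard; written out explicitly (the paper's own displays were lost in extraction, and sign and index conventions for the torsion vary between references, so the following is one common convention rather than necessarily the paper's):

```latex
\Gamma^{\lambda}{}_{\mu\nu}
  \;=\; \left\{ {}^{\lambda}_{\mu\nu} \right\} \;+\; K^{\lambda}{}_{\mu\nu},
\qquad
T^{\lambda}{}_{\mu\nu}
  \;=\; \Gamma^{\lambda}{}_{\mu\nu} \;-\; \Gamma^{\lambda}{}_{\nu\mu},
```

where $\{{}^{\lambda}_{\mu\nu}\}$ is the Levi-Civita (Christoffel) connection of $g_{\mu\nu}$ and the contortion $K^{\lambda}{}_{\mu\nu}$ is built algebraically from the torsion tensor, vanishing together with it.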
III. Cosmological equations

For the spatially flat Friedmann-Robertson-Walker metric the non-vanishing components of the Levi-Civita connection take the standard form. The non-vanishing torsion components with holonomic indices are given by two functions, the scalar torsion $h$ and the pseudoscalar torsion $f$ [35], and the contortion components follow from them. The non-vanishing components of the curvature $R^{\rho}{}_{\sigma\mu\nu}$ and the Ricci curvature $R_{\mu\nu}$ are then expressed through $h$, $f$ and the Hubble parameter $H = \dot{a}(t)/a(t)$. Using these results and supposing that the matter source is a fluid characterized by the energy density $\rho$, the pressure $p$ and the spin $s_{IJ}{}^{\mu}$, we obtain four independent equations, (14)-(17), from (2) and (3). The system of equations (14)-(17) has a similar structure to the system of gravitational equations for homogeneous isotropic cosmological models in [31], except for the coefficients. However, it is the differences in the coefficients that make the system (14)-(17) easy to handle and make it possible to obtain exact or analytic solutions in several cases, as will be shown in the next sections. The equations (14) and (15) can be rewritten in equivalent forms. Since the spin orientation of particles in ordinary matter is random, the macroscopic spacetime average of the spin vanishes; we therefore suppose $s^{IJ}{}_{\lambda} = 0$ henceforth. Then the equations (16), (17) and (21) have the solutions (22) and (23). We will solve the equations (18)-(20) in the cases (22) and (23), respectively, in the next two sections.

IV. Exact de Sitter solutions with scalar torsion function

In the case $f = 0$, (18) and (19) can be rewritten in a solvable form; differentiating (24) and combining with (20), the equations (28) and (29) follow, together with their solutions. Introducing new variables, we arrive at a dynamical system whose critical point equations are (34); to discuss the stability of the critical points we need the matrix elements of the Jacobian (35). In order to stress the role of the torsion as the source of the accelerating expansion of the universe, we concentrate on the vacuum solutions for some special choices of the parameters $\alpha$, $\beta$ and $\gamma$. The critical point equations (34) can be simplified and solved exactly when $\beta = -4\alpha$ or $\beta = 3\alpha$.

A. When $\beta = -4\alpha$

In this case the gravitational Lagrangian is a special case of the quadratic curvature gravities [36] when the torsion vanishes. We then have two cases. In the first case the solution corresponds to a static Minkowski spacetime. In the second case $h$ and $H$ satisfy the equation $8hH\alpha + 8\alpha h^{2} - 1 = 0$. In order to obtain concrete results we fix a specific value of $\gamma$. The Jacobian matrix of the dynamical system (36), given by (35), then has the eigenvalues $-0.85371/\sqrt{\alpha}$, $-4.951/\sqrt{\alpha}$, $-1.3522 \times 10^{-3}/\sqrt{\alpha}$. If $\alpha > 0$, all the real parts of the eigenvalues are negative. This means that the critical point is asymptotically stable, and $\dot{H} = X = 0$ then gives an asymptotically stable de Sitter solution.

B. When $\beta = 3\alpha$

Letting $\dot{H} = X$, we have a dynamical system with the matrix elements of its Jacobian given by (66). Using (63), the fixed point equation (68) is obtained. In vacuum (68) leads to two cases. In the first case we have $\dot{H} = X = 0$, $H = 0$, $h = 0$, which corresponds to a static Minkowski solution. In the second case, according to (66), the Jacobian matrix has eigenvalues given by (74) and (76). If $\alpha > 0$, all the real parts of the two eigenvalues are negative.
This means that the critical point is asymptotically stable, and $\dot{H} = X = 0$ then gives an asymptotically stable de Sitter solution, whose explicit form follows from (58). We have seen that by appropriate choices of $\gamma$ we can obtain asymptotically stable de Sitter solutions in both cases, $\beta = -4\alpha$ and $\beta = 3\alpha$, though the cosmological equations have different structures. This means that the structure of the cosmological equations depends on $\beta$, while the stability of the solutions depends on $\gamma$. In all of the solutions obtained, the torsion function $h$ does not vanish in vacuum, which means that the torsion is an intrinsic geometric property of the spacetime. It is the torsion that causes the accelerating expansion of the universe in vacuum.

V. Analytic solutions with pseudoscalar torsion function

Differentiating (23) gives (79). Substituting (23) and (79) into (20), the equations (18), (19) and (23) become a closed system with the solutions (84) and (85), which have a structure similar to the corresponding equations of General Relativity. The equation (86) indicates that even in vacuum the spacetime possesses the torsion $f = \frac{1-8\gamma}{32\gamma\beta}$, which has been found in [32]. Hence the conception of the vacuum as a physical notion is changed essentially: instead of being a passive receptacle of physical objects and processes, the vacuum assumes dynamical properties as a gravitating object. The combination of (84) and (85) yields the acceleration equation (87); introducing the parameter $n$, it takes the form (88). Some important consequences can be obtained from (88): i) The term $\frac{1-8\gamma}{4n\alpha}$ plays the role of the cosmological constant, which agrees with the result in [31]. If $\frac{1-8\gamma}{4n\alpha} > 0$ and $\rho = p = 0$, then $\ddot{a} > 0$: the acceleration of the cosmological expansion acquires a vacuum origin. ii) If $n > 0$ or $n < -3$, $\rho + 3p$ accelerates the expansion of the universe; if $-3 < n < 0$, $\rho + 3p$ decelerates it. In particular, when $n = -2$ and $\gamma = 1/8$, (88) becomes the acceleration equation of general relativity; in other words, the latter is only a special case of the former. iii) For suitable values of the parameters the universe can undergo a phase transformation from an accelerating to a decelerating expansion. iv) For other values it can undergo a phase transformation from a decelerating to an accelerating expansion. We find this picture very appealing and physical, since it seems to indicate that in metric-affine gravity, as matter tells spacetime how to curve, matter will also tell spacetime how to twirl.

VI. Analytic solutions in the case $\gamma = 0$

The equation (108) gives the acceleration equation in this case. Let us consider two special cases, $f = 0$ and $f \neq 0$. The results obtained above indicate that if $\gamma = 0$, then in both cases there exists no solution describing an accelerating universe. In other words, the term $\gamma T^{\mu}{}_{\nu\rho}T_{\mu}{}^{\nu\rho}$ is necessary for the existence of solutions describing an accelerating universe.

VII. Conclusions

Quadratic theories of gravity described by the Lagrangian $R + \alpha R^{2} + \beta R_{\mu\nu}R^{\mu\nu}$ have been studied in many works in supergravity, quantum gravity, string theory and M-theory. However, cosmology in these theories has not been explored extensively, especially when the torsion of the spacetime is considered. In this paper we have shown that by merely allowing the connection to be asymmetric and adding a term $\gamma T^{\mu}{}_{\nu\rho}T_{\mu}{}^{\nu\rho}$ to the Lagrangian $R + \alpha R^{2} + \beta R_{\mu\nu}R^{\mu\nu}$, some meaningful cosmological solutions can be obtained. These solutions provide several possible explanations of the acceleration of the cosmological expansion without a cosmological constant or dark energy.
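The recipe used throughout section IV (find the critical points of the autonomous system, evaluate the Jacobian there, and check that every eigenvalue has a negative real part) can be illustrated numerically. The right-hand sides below are toy placeholders, not the paper's actual equations (which are its numbered displays and did not survive extraction); only the workflow is the point.

```python
import numpy as np
from scipy.optimize import fsolve

# Placeholder right-hand side standing in for the (H, h) system of
# section IV; the functional forms here are illustrative only.
def F(y):
    H, h = y
    return np.array([
        h - H,           # placeholder for the dH/dt equation
        -h - h * h * H,  # placeholder for the dh/dt equation
    ])

def jacobian(f, y, eps=1e-7):
    """Numerical Jacobian of f at y via central differences."""
    n = len(y)
    J = np.zeros((n, n))
    for j in range(n):
        dy = np.zeros(n)
        dy[j] = eps
        J[:, j] = (f(y + dy) - f(y - dy)) / (2 * eps)
    return J

# Locate a critical point F(y*) = 0, then classify its stability by the
# signs of the real parts of the Jacobian eigenvalues there.
y_star = fsolve(F, x0=np.array([0.3, 0.2]))
eigvals = np.linalg.eigvals(jacobian(F, y_star))
print("critical point:", y_star)
print("eigenvalues:", eigvals)
print("asymptotically stable:", bool(np.all(eigvals.real < 0)))
```

For this toy system the only critical point is the origin, where the Jacobian eigenvalues are both -1, so the script reports an asymptotically stable point, exactly the criterion the paper applies to identify its stable de Sitter solutions.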
One can find that although the field equation (2) reduces to Einstein's equation when $\alpha = \beta = \gamma = 0$, the cosmological equations (18)-(21) are essentially different from the Friedmann equation and the Raychaudhuri equation and thus give a different description of the evolution of the universe. The acceleration equation of the universe in general relativity is only a special case of equation (87). These equations, involving higher derivatives, can be solved by appropriate choices of $\beta$ and $\gamma$. Not only a number of asymptotically stable de Sitter solutions, expressed as critical points of a dynamical system, but also exact analytic solutions are obtained. These solutions indicate that the terms $\beta R_{\mu\nu}R^{\mu\nu}$ and $\gamma T^{\mu}{}_{\nu\rho}T_{\mu}{}^{\nu\rho}$ play different roles: the former determines the structure of the equations while the latter determines the behavior and the stability of the solutions. To construct a model of cosmic acceleration, the Lagrangian $R + \alpha R^{2} + \beta R_{\mu\nu}R^{\mu\nu} + \gamma T^{\mu}{}_{\nu\rho}T_{\mu}{}^{\nu\rho}$ is sufficient and necessary. Owing to the solutions obtained, some conceptions have to be changed essentially. According to these solutions, even in vacuum the spacetime can possess torsion and curvature. Therefore, instead of being a passive receptacle of physical objects and processes, the vacuum assumes dynamical properties as a gravitating object. It is the torsion of the spacetime that causes the acceleration of the cosmological expansion in vacuum. Both the torsion and the accelerating expansion possess a geometrical nature and do not invoke any matter origin. Furthermore, the energy and pressure of ordinary matter can produce torsion of the spacetime and cause either the deceleration or the acceleration of the cosmological expansion, depending on the choices of $\beta$ and $\gamma$.
Serotonin 2A Receptor Gene Polymorphism in Korean Children with Attention-Deficit/Hyperactivity Disorder

Objective: The purpose of this study was to investigate the association between the T102C polymorphism in the serotonin 2A receptor gene and attention-deficit/hyperactivity disorder (ADHD) in Korean patients. Methods: A total of 189 Korean children with ADHD, both parents of the ADHD children, and 150 normal children participated in this study. DNA was extracted from blood samples from all of the subjects, and genotyping was conducted. Based on the allele and genotype information obtained, case-control analyses were performed to compare the ADHD and normal children, and transmission disequilibrium tests (TDTs) were used for family-based association testing (number of trios=113). Finally, based on the significant finding from the case-control analyses, behavioral characteristics and neuropsychological test results were compared between ADHD children with and without the C allele. Results: In the case-control analyses, statistically significant differences were detected in the frequencies of genotypes containing the C allele (χ²=4.73, p=0.030). In the family-based association study, TDTs failed to detect linkage disequilibrium of the T102C polymorphism with ADHD. In the ADHD children, both the mean reaction time and the standard deviation of the reaction time in the auditory continuous performance test were longer in the group with the C allele than in the group without it. Conclusion: The results of this study suggest that there is a significant genetic association between the T102C polymorphism in the serotonin 2A receptor gene and ADHD in Korean children.

INTRODUCTION

Studies on the causes and pathogenesis of attention-deficit/hyperactivity disorder (ADHD) are actively underway. As many studies have reported that ADHD is heritable in 60-100% of cases,1 the importance of conducting molecular genetic studies on ADHD is higher than ever. Association analysis is the most widely used molecular genetic research method in studies of human disease because it is more likely to detect genes of interest than other methods, as long as the candidate genes are selected properly. Family-based association analysis has also been used frequently in this type of investigation in recent years; the transmission disequilibrium test (TDT) is one of the representative methods for family-based association analysis.2,3 Similar to analyses of other mental disorders, molecular genetic studies of ADHD have focused on several neurotransmitters which play major roles in the pathophysiology of these diseases. Consequently, research has been conducted on candidate genes for all components of the dopaminergic system, such as dopamine transporter 1, dopamine receptor types 1-5 (D1-D5), monoamine oxidase (MAO), and catechol-O-methyltransferase (COMT). Moreover, in recent years, studies related to the norepinephrinergic system in ADHD have also been reported.4,5 In contrast, molecular genetic interest in the serotonergic system has been relatively small, which is attributed to the lack of consistent results from early studies on the relationship between ADHD and serotonergic substances. Nevertheless, in the late 1990s, it was reported that the serotonergic system is associated with aggressive behavior in children with ADHD.6,7
In particular, one study using dopamine transporter knockout mice, an animal model of the hyperactivity of ADHD, demonstrated that the serotonergic system is involved not only in regulating the aggressive behaviors associated with ADHD but also in regulating hyperactivity.8 Based on these findings, Quist and Kennedy9 claimed that serotonergic substances show widely variable correlations with several ADHD symptoms, similar to the results of previous studies on serotonin-related substances in the blood, urine, or cerebrospinal fluid of ADHD patients. The 5-HTR2A is a post-synaptic receptor, and a 5-HTR2A antagonist reduced the hyperactivity induced by amphetamine, a representative substance of the dopaminergic system.10 Furthermore, 5-HT neurons innervate dopamine neurons either directly, via postsynaptic 5-HTR2A on the dopamine neuron, or indirectly, via 5-HTR2A receptors on GABA interneurons. When serotonin is released onto the postsynaptic 5-HTR2A, it acts like a brake on dopamine release, so the dopamine neuron is inhibited.11 On the basis of these reports, researchers studying the molecular genetics of ADHD became interested in the interaction between 5-HTR2A and the dopaminergic system. The 5-HTR2A gene is located on chromosome 13q14-q21, and the polymorphisms in this gene most frequently found in the general population are T102C, A-1438G, and His452Tyr. Among them, A-1438G is a mutation in the gene promoter region,12 while His452Tyr is located at the intracellular C-terminal end of the protein, and the secondary structure of the receptor protein may be altered by this mutation.13 Furthermore, T102C and A-1438G have been reported to show strong linkage disequilibrium.14 There have been a number of studies involving ADHD and 5-HTR2A gene polymorphisms. One such study, by Quist et al.,15 revealed selective transmission of the 452Tyr allele of the His452Tyr polymorphism, which was not observed for the T102C polymorphism. However, a study by Hawi et al.16 reported that the 452His allele was selectively transmitted instead. Furthermore, Zoroglu et al.17 reported no significant results for either the T102C or the A-1438G polymorphism. On the other hand, selective transmission of the T102C polymorphism has been reported in Chinese Han children with ADHD.18 Thus, a number of studies addressing the involvement of 5-HTR2A polymorphisms in ADHD have been conducted, but their results are not yet clear. In particular, many of these studies involve non-Asians, such as Caucasians; other than the two studies conducted in China, few investigations have targeted Asian populations. Therefore, additional studies within the Korean population are necessary. Furthermore, few of the previous studies have reported on the relationship between neuropsychological or behavioral characteristics and genotype in ADHD patients. Therefore, in this study we aimed to identify the association between the 5-HTR2A gene polymorphism and ADHD in Korean patients using a case-control association analysis and a family-based TDT, as well as to investigate differences in neuropsychological and behavioral characteristics depending on the genotypes or alleles of the ADHD patients.
Research subjects

ADHD outpatients who visited the child and adolescent psychiatric departments of 4 hospitals were selected for the study based on the following criteria: 1) age over 5 years; 2) a diagnosis of ADHD using the DSM-IV19 diagnostic criteria and semi-structured interviews; and 3) a total score on the Parent ADHD Rating Scale (Korean ADHD Rating Scale; K-ARS)20 over the 90th percentile cutoff point. Patients with the following conditions were excluded from the study: 1) an IQ of 70 or less; 2) a congenital genetic disorder; 3) a history of acquired brain injury, such as cerebral palsy; 4) a seizure disorder or other neurological disease; 5) a developmental disability, such as autism; and 6) schizophrenia, bipolar disorder, other childhood psychosis, Tourette's syndrome, speech disorder, or severe learning disability. 189 ADHD patients who satisfied these selection and exclusion criteria and agreed to cooperate in the study, along with 113 pairs of biological parents of these patients who were able to cooperate in the blood sampling and other research, were selected as the study subjects. Because 3 patients did not receive results from the genetic analysis due to an error during the analysis, they were excluded from the association analysis; therefore, a total of 186 patients were assessed. As the normal control group, 150 students were selected from one elementary school in Seoul and another in Jeonju after excluding children according to the following criteria: 1) major medical, neurological, or psychiatric diseases, as determined through parent surveys; 2) serious behavioral problems, as determined through telephone consultations with their teachers; 3) a total K-ARS score above the 90th percentile cutoff point; and 4) an IQ of 70 or less (overall intelligence was estimated through the vocabulary and block design subtests of the KEDI-WISC intelligence assessment).21 This study was approved by the Bioethics Committee of the College of Medicine at Chungbuk National University. A written description of the research protocol was provided to the parents and children who decided to participate in the study, and written consent forms were obtained.

Clinical and neuropsychological assessments

The clinical and neuropsychological assessment tools used in this study were as follows.

Kiddie-Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version-Korean Version (K-SADS-PL-K)

This is a semi-structured interview tool designed to assess the current and lifetime morbidity of 32 child and adolescent psychiatric disorders and the severity of their symptoms based on DSM-IV diagnostic criteria.22 A version of this tool has been translated into Korean. Using this tool, we examined the diagnoses of the ADHD children, their subtype classifications, and the presence of any coexisting diseases.

Korean Educational Development Institute-Wechsler Intelligence Scale for Children (KEDI-WISC)23

As an intelligence assessment tool for children and adolescents, it was modified and supplemented from the American WISC-R so that items thought to involve cultural differences were adapted to Korean society. This study used 3 variables: verbal IQ, performance IQ, and full-scale IQ.

Korean ADHD Rating Scale (K-ARS)20

This tool has been designed to assess ADHD symptoms in school-age children, and a Korean version has been developed and standardized.
This instrument consists of a total of 18 questions and is arranged such that the total score for the odd-numbered questions measures inattention symptoms, while the total score for the even-numbered questions measures hyperactivity-impulsivity symptoms.

Korean Child Behavior Checklist (K-CBCL)

This tool, originally developed by Achenbach et al.,24 measures parents' observations of various aspects of their children's behaviors and has been adapted and standardized for the Korean population. The K-CBCL consists of 2 types of scales, assessing social competence and problem behavior. The problem behavior scale consists of 13 subscales, which can be divided into internalizing problems and externalizing problems subscales. In this study, we used scores from 3 scales: the internalizing problems subscale, the externalizing problems subscale, and the overall problem behavior scale.

Continuous Performance Test (CPT)

The CPT is a very important tool for assessing attention; in this study we used the ADHD Diagnostic System,25 which was developed in Korea. This system consists of a visual CPT and an auditory CPT. The following indicators were calculated: omission errors, commission errors, mean reaction time, and the standard deviation of the reaction time. We used all of these indicators in this study.

DNA extraction and storage

Approximately 10 cc of whole blood was collected from each subject in every group (patient, parent, and control groups) who participated in the genetic analysis, and the blood samples were stored at -20℃. Genomic DNA was extracted from frozen blood using the G-DEX II Genomic DNA Extraction Kit (Intron, Korea).

Analysis of candidate genes

The 5-HTR2A gene T102C polymorphism was analyzed using a chip-based MALDI-TOF mass spectrometry platform (Sequenom, Inc., CA), following the general test method described in the basic protocol provided by the manufacturer.

Homogeneous MassEXTEND (hME)

The PCR products were treated with 0.3 U of shrimp alkaline phosphatase for 20 minutes at 37℃, after which the enzymatic activity was inactivated for 5 minutes. The hME reaction was performed in a 9 µL reaction mixture containing the hME enzyme (Thermosequenase, GE Healthcare, UK), ACT termination mix, and 5 µM extension primers (T102C-E: 5'-AGA AGT GTT AGC TTC TCC; G861C-E: 5'-AAT CCG GAT CTC CTG TGT ATG T). The primer extension reaction was carried out over 55 cycles, with an initial 2 minutes of denaturation at 94℃ followed by cycles of 5 seconds at 94℃, 5 seconds at 52℃, and 5 seconds at 72℃. The reaction products were desalted using SpectroCLEAN (Sequenom, Inc., CA), and a SpectroJET (Sequenom, Inc., CA) was used to spot a 384-well SpectroCHIP (Sequenom, Inc., CA). The prepared SpectroCHIP was analyzed using the automated MALDI-TOF MassARRAY system (Bruker-Sequenom, CA). Following automated peak calling, samples with bad calls that could be confirmed by eye were further analyzed.

Statistical analysis

To evaluate the demographic and clinical characteristics of the study subjects, Student's t-test was performed to compare the ages and IQs of the case and control groups, and a χ² (chi-square) test was performed to compare the sexes. In addition, Student's t-test was used to analyze the results of the clinical scales and neuropsychological tests according to the genotypes of the case group.
Furthermore, a χ² test was performed for the case-control association analysis, whereas for the family-based association analysis a TDT was performed using McNemar's χ² test under the hypothesis that specific alleles are transmitted preferentially. Only cases in which the complete patient-father-mother trio participated were included in these analyses. SPSS 10.0 for Windows was used for the statistical analyses, and the significance level was set at p<0.05.

Demographic and clinical characteristics of the subject groups

There was no significant difference between the ages of the case (9.2±2.3 years) and control (9.4±0.6 years) groups (Table 1). With respect to the sex ratios, there were higher proportions of males in both groups (87.8% and 87.3%, respectively). The overall IQ of the case group was 105.0±14.6, which was not significantly different from that of the control group (100.5±8.5). The comorbid disorders of the case group were as follows: oppositional defiant disorder (9.5%) > anxiety disorder (3.2%) = enuresis (3.2%) > mood disorder (1.6%). A total of 5 subjects presented with more than one comorbid disorder, including 2 cases with both oppositional defiant disorder and anxiety disorder, 2 cases with both oppositional defiant disorder and enuresis, and 1 case with both conduct disorder and mood disorder.

Case-control association analysis results

Hardy-Weinberg equilibrium analysis of the genotype distributions of the target gene revealed that equilibrium was maintained in both the case (χ²=2.61, p=0.106) and control (χ²=0.75, p=0.387) groups. The frequency distribution of the 3 genotypes between the entire ADHD patient group and the control group did not reveal any statistical difference (p=0.078), though the T/C type tended to appear more frequently in the patient group. The frequency distributions according to the presence of the T allele were also examined, and no statistical difference was detected between the case and control groups. However, significant differences were detected between the groups with (78.5% vs. 68.0%) and without (21.5% vs. 32.0%) the C allele (χ²=4.73, p=0.030) (Table 2). In addition, an association analysis was conducted between the control group and the combined-type ADHD subgroup (n=113), the ADHD subtype showing the highest frequency. The results indicated significant differences (χ²=6.02, p=0.049) in the frequency distributions of the 3 genotypes between the case group (T/T 19.5%, T/C 59.3%, C/C 21.2%) and the control group (T/T 32.0%, T/C 46.0%, C/C 22.0%). Furthermore, comparison of the distributions according to alleles also showed significant differences (χ²=5.18, p=0.023) between the groups with (80.5% vs. 68.0%) and without (19.5% vs. 32.0%) the C allele. However, there were no significant differences in allele frequency (p=0.181) or in classification according to the presence of the T allele (p=0.882) (Table 3).

Results of TDT analysis for families of ADHD patients

To analyze the results of the TDT, Hardy-Weinberg equilibrium analyses were first performed separately for the genotype distributions of the ADHD patient, father, and mother groups, revealing that Hardy-Weinberg equilibrium was maintained in each case (patient group: χ²=1.65, p=0.199; father group: χ²=0.94, p=0.333; mother group: χ²=1.65, p=0.199). A TDT analysis was then conducted on the entire group of patient-parent trios, but selective transmission of a specific allele was not observed (p=1.000) (Table 4); the McNemar computation underlying this test is sketched below.
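For illustration, the TDT reduces to McNemar's χ² on counts of heterozygous parents who transmit versus do not transmit a given allele to the affected child. The sketch below uses hypothetical counts, not the study's actual Table 4 values; they are chosen to be balanced, reproducing a null result like the p=1.000 reported above.

```python
from scipy.stats import chi2

def tdt_mcnemar(b, c):
    """TDT as McNemar's chi-square test.

    b: heterozygous (T/C) parents transmitting the C allele
    c: heterozygous (T/C) parents transmitting the T allele
    Under the null of no preferential transmission,
    (b - c)^2 / (b + c) follows chi-square with 1 degree of freedom.
    """
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

# Hypothetical, perfectly balanced transmission counts:
stat, p = tdt_mcnemar(b=50, c=50)
print(f"TDT chi-square = {stat:.3f}, p = {p:.3f}")  # -> 0.000, 1.000
```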
Likewise, no selective transmission of a specific allele was observed in the trios (n=66) of the combined-type ADHD patient families (p=0.718).

CPT and K-CBCL results in the patient group according to the presence of the C allele

Based on the earlier finding of significant differences in the genotype distributions of the case and control groups according to the presence of the C allele, as revealed by the case-control analysis, we compared the CPT and K-CBCL results according to the presence of the C allele within the ADHD patient group. The age difference between the groups with and without the C allele was not significant (p=0.243). Moreover, there were no significant differences between the two groups in total IQ (p=0.874), verbal IQ (p=0.742), or performance IQ (p=0.333). However, the group with the C allele presented significantly higher scores for the mean reaction time (t=2.956, p=0.004) and the standard deviation of the reaction time (t=2.654, p=0.009) of the auditory CPT compared with the group without the C allele, whereas the scores for omission errors (t=-2.539, p=0.012) and commission errors (t=-2.820, p=0.007) on the visual CPT were significantly lower in the group with the C allele. Furthermore, the group with the C allele showed a significantly lower score on the externalizing symptoms subscale (t=-2.316, p=0.022) of the K-CBCL (Table 5).

DISCUSSION

With respect to the comorbid disorders in the subject group of this study, oppositional defiant disorder was observed at a frequency of 9.8%, whereas the other 4 disorders (anxiety disorder, enuresis, mood disorder, and conduct disorder) were found at frequencies between 1.1% and 3.2%. We believe that the results of the present study adequately reflect the genetic effects of ADHD, because the frequency of comorbid disorders in our study population was comparatively lower than in previous genetic studies of 5-HT2A in ADHD.26,27 The most significant results of this study came from the association analysis, which revealed a significantly higher proportion of genotypes including the C allele (C/C and T/C) in the case group than in the control group. Furthermore, this difference was more noticeable when the case group was limited to the combined-type patients rather than the entire ADHD group, and the significant difference in the distribution of the 3 genotypes was observed in the combined-type patients only. Todd et al. reported that DSM-IV combined-type ADHD might be a genetically homogeneous subgroup.28 Furthermore, in terms of the sibling risk ratio, the risk of combined-type ADHD was slightly higher than that of broadly defined ADHD.29 Therefore, the results for combined-type ADHD in our study demonstrate the importance of the C allele even more remarkably. Most of the previous studies involving the T102C polymorphism were family-based, whereas few case-control studies have investigated this polymorphism. However, in a group of Chinese Han subjects, Li et al.17 reported results notably similar to those of our study: the combined-type genotype distributions of their case and control groups were 22.3% vs. 33.5% for the T/T type and 64.0% vs. 47.3% for the T/C type, respectively, and thus similar genotype distributions were observed.
Table 5. Comparisons of the scores of the CPT and K-CBCL tests between ADHD probands with and without the C allele of the T102C polymorphism of the 5-HT2A receptor gene.

The studies conducted on Caucasian subjects,14,16 in contrast, did not suggest any significant results. Both the previous studies and our present study suggest that the T102C polymorphism of the 5-HTR2A gene shows significant differences between ADHD cases and controls, but only in northeast Asian populations. No selective transmission of the T or C allele within families was observed in the TDT performed in the present study. A review of previous studies shows that significant selective transmission of the T or C allele has also not been observed in Caucasian subjects; however, selective transmission of the His452 allele of the His452Tyr polymorphism within families has been reported.14,15 In contrast, a study by Li et al. detected selective transmission of the C allele in families of female combined-type patients using the TDT (195 trios). Similarly, in the present study, the TDT conducted on the trios (n=13) of the entire female ADHD group revealed a tendency toward transmission of the C allele, although the result was not statistically significant (p=0.08). However, not only was the presence of the C allele important for the differences in distribution between the case and control groups, but the groups with and without the C allele also showed different CPT and K-CBCL results within the ADHD patient group. Therefore, caution is necessary in interpreting these results. First, the group without the C allele exhibited significantly higher omission and commission errors in visual attention compared with the group with the C allele, and their externalizing symptoms on the K-CBCL were also significantly higher. In this regard, Bjork et al.30 conducted a study on normal subjects without ADHD and reported CPT results with regard to the T102C polymorphism: the visual commission error score from the CPT was significantly higher for the T/T genotype than for the T/C and C/C genotypes. Although these researchers used unaffected individuals as subjects, their results were similar to those of our study in terms of visual attention. What is even more significant in the present study is that the mean reaction time and the standard deviation of the reaction time for the auditory CPT were higher in the group with the C allele than in the group without it. There have been various studies focusing on modality-specific attention problems, such as visual attention and auditory attention. For example, Cooley and Morris31 claimed that attention is specific to each sensory system, i.e., the visual, auditory, and verbal systems. Bedi et al.32 noted that among normal children, those who display 'visual distractibility' show a significant correlation between visual distractibility and the inattentiveness reported by teachers, whereas those who display auditory distractibility show significant correlations between auditory distractibility and academic achievement. Moreover, there have been a number of reports that some ADHD children are classified as having 'central auditory processing disorder'.33,34 In particular, neurophysiological studies on the relationship between serotonergic substances and auditory attentiveness are in the spotlight.
Research on the association between serotonergic substances and auditory attentiveness, using a method measuring auditory selective attention during tryptophan depletion tests, revealed that serotonin is associated with the modulation of auditory attention.35 Therefore, our findings most likely suggest that the presence of the C allele in ADHD causes variation in the serotonergic system that negatively affects auditory attention in particular. In other words, we think that the T102C polymorphism is associated with the modality-specific attention problems observed in ADHD. However, there may be another reason for such complex results. As stated in the introduction, if the dopaminergic and noradrenergic systems directly impact the manifestation of ADHD, the serotonergic 5-HTR2A gene may not have a direct effect but rather affect the manifestation of ADHD by modulating the dopaminergic system, which may have resulted in these complex findings. Considering clinical treatment in conjunction with the results of this study, it should be noted that the mechanism of action of atypical antipsychotics, another choice in drug treatment for children and adolescents, involves not only the dopaminergic system but also the serotonergic system. For instance, the typical mechanism of action of risperidone is to block DR2 and 5-HTR2A, so risperidone is classified as a serotonin-dopamine antagonist. Additionally, aripiprazole, which is known to have both dopaminergic partial agonist effects and 5-HTR2A blocking effects, has recently been shown to be slightly effective in treating ADHD.36,37 This finding suggests that interactions between the serotonergic and dopaminergic systems are very important in ADHD. Of course, not all children with ADHD benefit from treatment with risperidone or aripiprazole compared with methylphenidate, but a subgroup with a far superior response to these drugs may exist, and the C allele may be one of the markers for such a subgroup. There were several limitations to this study. First, there were a total of 189 patients and 115 trio subjects in the patient and family groups, which is significantly lower than the ideal study population size. The use of a small number of subjects has also caused problems in previous similar studies, as parental cooperation is not easy to obtain when study subjects are children rather than adults. Although 4 university hospitals participated in the study together, a true multi-center study that includes more hospitals will be necessary in the future. Second, of the three genetic polymorphisms that have been reported in 5-HTR2A, only T102C was examined in this study; the possibility of obtaining a different result cannot be ruled out if a candidate gene is selected from a different genetic locus. Third, the only neuropsychological test selected in the present study was the CPT. If we had conducted more elaborate neuropsychological testing by performing other important tests in addition to the CPT, we might have been able to identify differences in the neuropsychological test results according to each genotype more impartially.
Despite these limitations, this study is significant in the following respects: it conducted a case-control study and a family-based association study of the 5-HTR2A gene polymorphism, which is central to serotonergic transmission and has rarely been examined in Korean ADHD children; and it found significant differences depending on the presence of the C allele, with the patient group showing differences in modality-specific attention depending on the presence of the C allele. These results may be useful for future studies addressing ADHD treatment methods and for selecting drugs that affect the interactions between the dopaminergic and serotonergic systems or the noradrenergic and serotonergic systems. It is our hope that the limitations of this study will be overcome in the future and that there will be breakthroughs in elucidating the causes of ADHD as well as in improving treatment options, early intervention, subtype detection, and diagnosis related to this disease through genetic research on a wider variety of components involved in the serotonergic system.
Pluripotency factors regulate the onset of Hox cluster activation in the early embryo Pluripotent cells are a transient population of the mammalian embryo dependent on transcription factors, such as OCT4 and NANOG, which maintain pluripotency while suppressing lineage specification. However, these factors are also expressed during early phases of differentiation, and their role in the transition from pluripotency to lineage specification is largely unknown. We found that pluripotency factors play a dual role in regulating key lineage specifiers, initially repressing their expression and later being required for their proper activation. We show that Oct4 is necessary for activation of HoxB genes during differentiation of embryonic stem cells and in the embryo. In addition, we show that the HoxB cluster is coordinately regulated by OCT4 binding sites located at the 3′ end of the cluster. Our results show that core pluripotency factors are not limited to maintaining the precommitted epiblast but are also necessary for the proper deployment of subsequent developmental programs. INTRODUCTION Pluripotency, the ability of a cell to give rise to derivatives of all embryonic germ layers, occurs in cultured embryonic stem (ES) cells and for a brief period during development of the mammalian embryo. A small group of transcription factors, octamer-binding transcription factor 4 (OCT4), NANOG, and SOX2, controls this state both in vivo and in culture by regulating a large battery of downstream target genes (1). During preimplantation stages of mammalian embryos, these factors are expressed in the epiblast of the blastocyst, which shares various molecular features with ES cells, among them the expression of the core pluripotency factors. Progression from pluripotency toward differentiation requires the dismantling of the pluripotency regulatory network, leading to the expression of lineage determination genes and turning on of specific developmental pathways. However, the expression of pluripotency factors beyond the blastocyst stage suggests roles not directly related to pluripotency maintenance (2). Oct4 (official gene symbol Pou5f1) is continuously expressed up to embryonic day (E) 8.5, initially throughout the epiblast and subsequently showing progressive restriction to the posterior part of the embryo (3). Nanog is reexpressed at E5.5, but only in the posterior-proximal region, where it controls development of the primordial germ cells (4) and is turned off by E7.5 (5). Some studies have suggested roles of these factors beyond pluripotency. For example, Oct4 has been found to promote mesoendodermal differentiation (6,7) and cardiomyocyte fate (8), while Nanog has been proved to regulate primitive hematopoiesis (9). Nevertheless, loss-of-function approaches to investigating the role of Oct4 and Nanog at these stages have proved difficult because preimplantation lethality precludes analysis of later phenotypes (1). To overcome early lethality, conditional Oct4 mutants have been analyzed at early postimplantation stages. However, loss of Oct4 at these stages leads to tissue disorganization and proliferation defects at gastrulation, which could obscure potential lineage-specific defects (10,11). Therefore, we still lack a complete understanding of the roles of pluripotency factors during later stages of development, as well as how pluripotency and differentiation programs are coordinated in the embryo. In this work, we aimed to understand the role of Oct4 and Nanog beyond pluripotency. 
We have characterized the transcriptional changes caused by gain of function of these factors and determined that they regulate many developmental regulators in a dual fashion, repressing their expression at E7.5 and activating them at E9.5. Among them, we have used as a paradigm the regulation of Hox genes by Oct4. Hox genes are a large conserved family of transcription factors that specify cellular identities along the anterior-posterior axis throughout metazoan evolution (12). They are organized in clusters along the chromosome that determine their temporal and spatial activity (13). By using an inducible Oct4 loss-of-function model, we have determined that it is required for proper activation of the HoxB cluster. We have identified functional OCT4 binding sites in the regulatory regions of the cluster and demonstrated that these regions are essential for proper HoxB gene expression. Stage-dependent regulation of developmental genes by pluripotency factors We used doxycycline (dox)-inducible transgenic mouse models providing controlled Oct4 or Nanog expression in postimplantation embryos (14,15). These mice carry two independent alleles: on the one hand, an insertion of the cDNA for Oct4 or Nanog driven by an rtTA-responsive promoter (see below) at the permissive Col1a1 locus and, on the other, the transcriptional transactivator rtTA, which is only active if bound by dox, inserted at the Rosa26 locus. Adding dox to the drinking water of pregnant mice results in the activation of the transgenes in the embryos in a temporally controlled manner. We chose two different time windows for induction of Oct4 and Nanog: from E4.5 to E7.5 and from E6.5 to E9.5 (Fig. 1A), thus maintaining expression beyond the point when endogenous gene activity is turned off. Robust expression of both transgenes was obtained at E7.5 and E9.5 (fig. S1A), with higher levels in the neural tube and the mesoderm (fig. S1B). Expression levels of Oct4 or Nanog in treated embryos were comparable to or even lower than endogenous levels in E14 or R1 (16) ES cells (fig. S1C). We analyzed the transcriptomes of embryos from untreated and dox-treated females by RNA sequencing (RNA-seq) and compared gene expression changes between stages and models (data file S1). More than 50% of genes differentially expressed upon Oct4 expression up to E7.5 also changed when Nanog was induced in the same time window (fig. S1D) and included major developmental regulators. However, this proportion halved when we compared changes occurring in E9.5 embryos (24%). Similarly, 23% of genes deregulated by Oct4 at E7.5 also changed at E9.5. As for Nanog, 36% of genes changing at E7.5 were shared with Oct4, 16% at E9.5, and only 14% were common at both stages in Nanog-expressing embryos (fig. S1D). Core pluripotency factors activate each other's expression (1,17), and we observed positive cross-regulation of Oct4 and Nanog at E7.5 (data file S1), but not at E9.5 (data file S1; fig. S1A). Furthermore, we did not observe up-regulation of other pluripotency factors, such as Sox2, upon Oct4 or Nanog expression. Therefore, there is no overall activation of the embryonic pluripotency program in the gastrulating embryo driven by these factors. We performed unsupervised hierarchical clustering of the data using genes that were differentially expressed in at least one condition (4090 genes; data file S2). Most of the resulting clusters (Fig.
1B) show a stronger tendency for up-regulation or down-regulation in only one condition (e.g., clusters #1 to #5; fig. S2A), confirming the largely independent and stage-specific effects of Oct4 and Nanog expression. Functional annotation of Gene Ontology terms showed that most clusters were enriched for genes involved in development and transcription, with some exceptions such as cluster #5, which is enriched for cell cycle genes, and cluster #7, which includes genes involved in lipid metabolism (fig. S2B and data file S3). Widespread regulation of Hox genes by pluripotency factors The large number of Hox genes in this last group (18 genes; data file S2) prompted us to examine the response of all 39 Hox genes to Oct4 and Nanog (Fig. 2A and data files S1 and S2). Nanog induction up to E7.5 down-regulated the expression of 14 Hox genes, while it up-regulated the expression of 3. Its expression up to E9.5 only up-regulated the expression of 5 Hox genes. On the other hand, Oct4 significantly down-regulated 23 Hox genes when expressed up to E7.5, and up-regulated 24 when expressed up to E9.5 (Fig. 2A). It is noteworthy that in the case of Oct4, no Hox gene was up-regulated at E7.5 or down-regulated at E9.5. Oct4 affects the expression of Hox genes from all four clusters (HoxA-D), but excludes most of the posterior Hox genes from paralog groups 10 to 13. Previous work has shown that Oct4 delays the activation of posterior Hox genes in trunk progenitors at later embryonic stages (20), suggesting that Oct4 could have opposite roles in the regulation of anterior and posterior Hox genes during development (21). These results indicate that endogenous Oct4 might be regulating Hox genes during postimplantation development, which would require them to be coexpressed at these stages. To address whether this was the case, we analyzed published single-cell expression data in gastrulating embryos (E6.5 to E8.5 embryos) from the mouse gastrulation atlas (22). We found that the majority of cells expressing Oct4 also express Hox genes; at E8.0, approximately 80% of Oct4-positive cells also express at least one of the Hox genes examined (fig. S3A). Pearson correlation of single-cell RNA-seq expression data showed that the highest correlation between Oct4 and Hox genes occurs from E7.75 to E8.25 (fig. S3B). Correlation at these time points is positive, consistent with the up-regulation of Hox genes by Oct4 overexpression in the late time window, but is negative at earlier stages when OCT4 would be repressing Hox gene expression. Analysis of the single-cell RNA-seq data for Oct4 and Hoxb1 expression in different cell types at E8.0, the time point where they show the highest correlation, identified the highest levels of both genes in mesodermal derivatives and a near-complete overlap between them (fig. S3, C and D). We next analyzed Oct4 mRNA levels in Hox-positive cells and observed that Oct4 is expressed at similar levels in these populations, compared to all Oct4-positive cells from the same stages (fig. S3E). This rules out the possibility that Hox genes are only expressed in cells with low levels of Oct4, which would be turning it off as a prerequisite for proper lineage commitment. The results obtained from the analysis of scRNA-seq data in postimplantation embryos point toward a strong correlation between Oct4 and Hox gene expression throughout the developmental stages that we have examined.
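The percentage-of-cells and correlation computations described above can be summarized in a few lines of code. Below is a minimal sketch, assuming a cells-by-genes matrix of normalized single-cell expression values in a pandas DataFrame; the gene set, the >0 expression threshold, and the function name are illustrative and not taken from the published analysis code:

```python
import pandas as pd

def coexpression_summary(expr: pd.DataFrame, hox_genes: list) -> dict:
    """Fraction of Oct4-positive cells that also express at least one Hox
    gene, plus the Pearson correlation of Oct4 with each Hox gene.

    expr: cells x genes matrix of normalized expression values;
    a gene is called 'expressed' in a cell when its value is > 0.
    """
    oct4_pos = expr["Pou5f1"] > 0                    # Oct4-positive cells
    any_hox = (expr[hox_genes] > 0).any(axis=1)      # cells with any Hox gene on
    frac = (oct4_pos & any_hox).sum() / oct4_pos.sum()

    corr = {g: expr["Pou5f1"].corr(expr[g]) for g in hox_genes}  # Pearson r
    return {"frac_oct4_cells_with_hox": float(frac), "pearson_r": corr}

# Toy example with 3 cells and 3 genes
toy = pd.DataFrame({"Pou5f1": [5.0, 2.0, 0.0],
                    "Hoxb1": [1.0, 0.0, 3.0],
                    "Hoxb4": [0.0, 2.0, 0.0]})
print(coexpression_summary(toy, ["Hoxb1", "Hoxb4"]))
```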
To determine when the switch from Hox gene repression to activation takes place, we induced Oct4 for 1.5-day periods between E5.5 and E8.0, harvesting embryos between E7.0 and E9.5 (Fig. 2B). We examined the expression of Hoxa1 and three HoxB cluster genes (Hoxb1, Hoxb4, and Hoxb9) by reverse transcription quantitative polymerase chain reaction (RT-qPCR) using genotype-matched embryos from non-dox-treated females as controls. Consistent with the transcriptomic data, we observed a switch from repression to activation. However, the switch occurred earlier for Hoxa1 and Hoxb1 than for Hoxb4 and Hoxb9 (Fig. 2C), resembling the collinear activation of the clusters in the early embryo (23). We next examined Oct4-induced changes to HoxB gene expression patterns by in situ hybridization of E7.0 to E7.5 and E9.5 embryos exposed to dox for 3 days. At E7.0 to E7.5, expression of Hoxb1 and Hoxb4 was down-regulated in the posterior region of the embryo, as predicted from the transcriptomic analysis (Fig. 3, A and C), and at E9.5, we observed gain of expression for HoxB genes in several domains of the embryo (Fig. 3, A, B, and D). As by this stage Oct4 is no longer expressed in Hox-positive territories, these results show that Oct4 can regulate multiple Hox genes even if expressed out of its endogenous context. Hoxb1 was the most altered, with a shift along the anterior-posterior axis in the neural tube (Fig. 3, B, arrowhead, and D) together with up-regulation or persistence of expression in presumptive rhombomere 6 territory (Fig. 3B, asterisk). Expression of Hoxb4 was posteriorized in the neural tube (Fig. 3, B, arrowhead, and D), showing an irregular and patchy pattern (Fig. 3B, bracket). However, its expression in the somatic mesoderm shifted anteriorly (Fig. 3D). As for Hoxb9, there is a change in the anterior limit of expression in the neural tube (Fig. 3B, white arrowhead) compared with that in the paraxial mesoderm, which also shows an irregular pattern (Fig. 3B, black arrowhead). Most notably, all three genes examined showed patches of expression in the anterior neural tube (Fig. 3A and fig. S4A), a territory devoid of Hox expression at all developmental stages (23). In situ hybridization in adjacent sections (fig. S4, B and C) revealed that these patches correspond to Oct4-expressing vesicle-like structures, where the anterior marker Otx2 was lost and Hox genes were expressed in various combinations (fig. S4C, arrowheads, and D, arrows). These results suggest a coordinated response of the HoxB cluster to Oct4 gain of function, leading to its activation in Hox-free domains such as the forebrain. On the other hand, Nanog caused no obvious changes in Hox gene expression in E9.5 embryos. Oct4 is necessary for the correct activation of Hox genes To definitively establish the role of Oct4 in the regulation of Hox genes under endogenous conditions, we analyzed its requirement both in vivo and in ES cell differentiation assays. First, we used a 4-hydroxytamoxifen (4-OHT) inducible Oct4 loss-of-function mouse model. These mice carry a floxed Oct4 allele together with an inducible Cre driven by the ROSA26 promoter (R26CreERT2) (24). To prevent embryo lethality at preimplantation or early postimplantation stages (10, 24), we deleted Oct4 by administering a single dose of 4-OHT at E6.5 (Fig. 4A).
Embryos recovered at E7.5 and E8.5 from treated mothers showed no obvious morphological defects, while at E9.5, we observed a partially penetrant phenotype with craniofacial and trunk defects, as has been previously described (11). We subclassified E9.5-treated embryos into those showing a mild (open neural tube but normal posterior trunk) or a severe (anterior malformations and posterior truncations with impaired somitogenesis) phenotype. Quantification of Oct4 expression levels by RT-qPCR at these stages confirms that its expression is gradually lost (Fig. 4B), which suggests that deletion is not complete at the earlier stages examined or that we are observing prolonged stability of Oct4 mRNA because of its intrinsic half-life. However, when we analyzed HoxB genes (Hoxb1, Hoxb4, and Hoxb9), their expression was reduced in Oct4 loss-of-function embryos as compared to controls (non-4-OHT treated) at all stages (Fig. 4B). We next examined HoxB gene expression in control and Oct4 loss-of-function embryos by whole mount in situ hybridization. We observed a down-regulation of Hoxb1 and Hoxb4 expression in both mild and severe E9.5 Oct4-deleted embryos (Fig. 4C and fig. S5, A to C). However, we did not detect changes in Hoxb9 expression by in situ hybridization. To determine how changes in Oct4 affect HoxB gene expression during differentiation, we used an ES cell model where we could modulate Oct4 expression in an inducible fashion. The ZHBTc4 ES cell line has both copies of endogenous Oct4 deleted and harbors a tetracycline (tet)-dependent Oct4 transgene (Tet-Off), where addition of the drug leads to the quick down-regulation of Oct4 (6). We differentiated ZHBTc4 cells toward mesodermal-like (via Wnt activation) or anterior neural-like (hindbrain, via retinoic acid induction) fates (25), under three different experimental conditions. In the first case, ZHBTc4 cells were treated with tet 3 days after starting the differentiation process (fig. S5D). This mimics the endogenous dynamics of Oct4, which is strongly down-regulated from day 3 to day 4 in both differentiation protocols (fig. S5E), and thus, we considered this condition equivalent to the wild-type behavior (Oct4 WT). In the second, cells were treated with tet from day 0 and therefore were devoid of Oct4 during the whole differentiation process (Oct4 LoF; fig. S5D), but not during the previous pluripotent phase. Last, we also analyzed untreated ZHBTc4 cells that express Oct4 throughout pluripotency and the differentiation process (Oct4 GoF; fig. S5D). Under these conditions, we observed the expected up-regulation of Hox genes during differentiation, with Hoxb1 showing an earlier peak in expression than Hoxb4 or Hoxb9 (Fig. 4D). Gain of function of Oct4 led to an increase in Hox gene expression, although not robustly, except for the expression of Hoxb4 at day 6 during both mesodermal and neural differentiation.
On the other hand, loss of function of Oct4 caused a failure to properly activate Hox genes during differentiation (Fig. 4D). It is noteworthy that we observed these effects during both Wnt-dependent (mesoderm) and Wnt-independent (hindbrain) differentiation protocols. This is relevant because we identified different Wnt genes as putative targets of Oct4 (see above), and a possibility was that the effect we observed of Oct4 on Hox genes was not direct but mediated by Wnts, which are known to activate Hox gene expression in the early embryo (26). Together, these experiments strongly suggest that Oct4 is required for the correct initiation of the expression of genes from the HoxB cluster, once differentiation from the pluripotent epiblast has begun. Global control of the HoxB cluster by OCT4 To investigate how pluripotency factors regulate Hox genes, we examined previously published ChIP-seq binding profiles of OCT4 and NANOG in mouse ES cells (27). We detected some weak binding of these factors within the HoxA, HoxB, and HoxC clusters, but observed very prominent peaks at the anterior ends of these three clusters (Fig. 5A and fig. S6A). As for the HoxD cluster, various peaks were found both at the anterior end and within the cluster. In the HoxB cluster, both OCT4 and NANOG bind proximally to the Hoxb1 promoter (P-site) and to a distal region (D-site) approximately 9 kb downstream of its transcriptional start site (Fig. 5A). These sites have been shown to bind OCT4 during ES cell differentiation (28) and are bound both in ES cells and epiblast-like cells in the transition from naïve to primed pluripotency (16). We confirmed that these sites were occupied by OCT4 in ES cells by ChIP-qPCR (Fig. 5B), using a region from the Nanog promoter shown to be bound by OCT4 (27,29) as a positive control and other unrelated genomic regions as negative controls (fig. S6B). ChIP in E9.5 embryos showed no binding, but after dox administration, we observed binding at both the P-site and D-site (Fig. 5B), demonstrating that these sites are occupied by OCT4 upon induction of its expression. The response of multiple Hox genes to Oct4 that we observed in the RNA-seq dataset and in whole mount in situ hybridizations suggested the existence of global mechanisms of Hox cluster regulation by pluripotency factors. To address whether these regions could be acting as common regulatory elements for the cluster, we examined their interaction profile by circular chromatin conformation capture followed by high-throughput sequencing (4C-seq). We designed viewpoints for both the P-site and D-site and carried out 4C-seq in mouse ES cells, where OCT4 is present and HoxB genes are not expressed, as well as in dox-treated and dox-untreated E9.5 Oct4 tg embryos (Fig. 5C). Normalized reads were used to fit a distance-decreasing monotone function, to take into account that nearby fragments will randomly interact more frequently, and contacts that deviated significantly from the normal distribution were identified. The chromatin structure surrounding the anterior end of the HoxB cluster is relatively stable independently of its expression (Fig. 5C), as has been shown for other viewpoints in the cluster (30). Interaction occurs on both sides of the viewpoints: toward the HoxB cluster itself, with a strong limit near Hoxb13, and outside the cluster toward the telomeric region (Fig. 5C). However, interactions are differently distributed around the viewpoint in ES cells and embryos.
In ES cells, there are more interactions toward the Hoxb13 region, possibly reflecting the closed conformation of the cluster mediated by poised promoters (31), whereas in E9.5 embryos, interactions increase toward the telomeric gene desert defined by Skap1 (Fig. 5C), whose expression at E7.0 to E7.75 is limited to embryonic blood progenitors (32). This difference in interactions might reflect the active state of the HoxB cluster in the embryo, where distal regulatory elements located in this region (33) are recruited to define its correct expression, as is also the case for the HoxA cluster (26). Our observations are also in line with recent results showing that the HoxA and HoxD clusters are organized in compact domains in ES cells that open up during differentiation (34). When Oct4 is induced in E9.5 embryos, previously unidentified contacts are established from both the P-site and D-site toward the cluster (Fig. 5C, asterisks), accompanied by a reduction in the interactions with the Hoxb13 domain (Fig. 5C, dashed boxes). We can conclude that these regions at the telomeric end of the HoxB cluster, which are bound by pluripotency factors, establish intracluster interactions in both active and inactive states. Furthermore, the presence of OCT4 in E9.5 embryos leads to a reorganization of the local architecture of the HoxB cluster. Deletion of the distal OCT4 site disrupts the pattern of HoxB expression during differentiation To complement these observations, we tested the necessity of the OCT4-bound regions described above in the regulation of the HoxB cluster by deleting them in ES cells via CRISPR-Cas9-mediated genome editing. We examined the genomic regions covered by ChIP-seq peaks, finding that the proximal site (P) contains one consensus OCT4 binding site, while the distal site (D) contains at least two (fig. S7A). In the case of the P-site, this consensus lies within the previously described proximal Hoxb1 autoregulatory element (35) and in very close proximity to the promoter. In addition, this region was shown to be bound by SOX/OCT heterodimers, which are required for the maximal transcriptional activity of Hoxb1 (36). Given the difficulty of deleting this site without compromising other known regulatory inputs on Hoxb1, we decided to analyze only the effect of deleting the D-site. This region does not map to any other known Hoxb1 regulatory elements, such as the two described 3′ retinoic acid response elements (fig. S7A) (37, 38). We generated two independent ES cell clones deleted for the D-site (clones #30 and #57; fig. S7A) and analyzed changes in HoxB gene expression during their differentiation toward mesoderm-like or anterior neural-like (hindbrain) fates (fig. S7B) (25). Comparison of parental with HoxB D-site-deleted ES cells along differentiation showed a similar trend for each of the Hox genes examined independently of the differentiation protocol, and a comparable behavior in the two independent clones (Fig. 6A). Hoxb1 is down-regulated as compared to controls at later stages of differentiation. Hoxb4 does not show significant changes along differentiation. Last, Hoxb9 is consistently activated in the deleted cell lines throughout the differentiation window analyzed (Fig. 6A). To analyze the effect of the deletion in vivo, we used the HoxB D-site-deleted ES cells to generate mouse lines. Homozygous mice survive to term, which did not come as a surprise, as deletion of the entire HoxB cluster in homozygosity does not cause embryonic lethality (39).
We examined the expression of Hoxb1 and Hoxb4 at E7.5, a time when anterior HoxB genes and Oct4 are still codetected (fig. S3, A and B). RT-qPCR showed down-regulation of Hoxb1 and up-regulation of Hoxb4 in deleted embryos (Fig. 6B). This further confirmed the different roles of Oct4 at this stage, when it would be necessary to achieve proper expression of Hoxb1 but would at the same time be lowering levels of Hoxb4. Whole mount in situ hybridization at the early head-fold stage showed no clear changes in Hoxb1, but an expansion of the Hoxb4 expression domain at the posterior part of the embryo (Fig. 6, C and D). We also examined expression of HoxB genes in E9.5 embryos from the HoxB D-site-deleted line and did not observe major changes except for overall lower levels of Hoxb4 expression (fig. S7, C and D). Therefore, these results suggest that at early stages of expression, OCT4 is necessary to fine-tune expression of HoxB genes in their endogenous domains. DISCUSSION It is generally assumed that pluripotency factors act to restrict lineage decisions before gastrulation. However, Oct4 has been shown to participate in several later developmental decisions in the mouse, including primitive endoderm development (40,41), lineage priming (24,28,42), primitive streak proliferation (11), regulation of trunk length (20), and the formation of cranial neural crest (43). Furthermore, recent results have shown how the lack of Oct4 in early gastrulating embryos results in a blockade of epithelial-to-mesenchymal transition in the posterior epiblast, thus disrupting proper axis formation (10). Both single-cell RNA-seq data and in situ detection in mouse show that Oct4 and Hox genes are coexpressed in cells of the gastrulating embryo, from the onset of Hox gene expression up to E8.5 (3,32). We propose that at these early stages, Oct4 plays a dual role, first maintaining Hox genes silent before lineage commitment, and later being required for their proper activation. This behavior is specific to Oct4; in the case of Nanog, we only observe initial repression of Hox genes, which agrees with the described mutual cross-repression of Nanog and Hoxa1 to differentially regulate a common set of downstream target genes involved in early phases of lineage commitment (44). OCT4 switches from a repressor to an activator at the onset of gastrulation, when Hox genes become activated and when OCT4 is still widely expressed, but the exact timing of this event is yet to be determined. OCT4 associates with both activator and repressor complexes through the recruitment of different cofactors. For example, OCT4 recruits ERG-associated protein with SET domain (ESET) to silence trophoblast-associated genes (45,46) but can also bind SALL4 or NURD in a histone deacetylase complex (47). On the other hand, the interaction of OCT4 with WDR5 induces transcription of key self-renewal regulators (48). In addition, other protein partners are key for specific roles of OCT4, such as the recruitment of BRG1, which is essential for its pioneer activity (49), or its cooperation with OTX2 at the transition between naïve and primed states of pluripotency (16). Thus, it is tempting to speculate that an exchange of cofactors would explain the bimodal function of OCT4 in the regulation of Hox genes. In addition, the function of OCT4 on some of its target genes is not a simple on/off binary system but depends on its own levels (6).
This could also potentially explain our observation that at early developmental stages, OCT4 represses genes that will later be up-regulated, when OCT4 levels are lower. The capacity of a single factor to exert opposite transcriptional activities allows for a rapid switch in the expression of its targets. This is needed for the correct establishment of developmental processes and is critical immediately after the loss of pluripotency, when cells need to make fate decisions in a short time window. Further studies using protein dynamics to complement our studies on gene expression will provide better resolution of OCT4's role in Hox gene expression. The concerted response of Hox genes, together with the chromatin interactions established by the bound regions we see in the HoxB cluster, suggests that OCT4 globally regulates Hox clusters and forms part of the complex regulatory apparatus that ensures proper Hox gene expression (12,26). Furthermore, when we examine the expression of anterior, middle, and posterior HoxB genes in gain- and loss-of-function models, we observe a collinear response to Oct4, in line with recent findings of the repression of most posterior 5′ Hox genes by OCT4 (20,21). It is also interesting to note that pluripotency factors have been implicated in the development of the neural crest of amphibians (50) and mammals (43), a multipotent population of cells that is likewise patterned by Hox genes. Therefore, we find multiple instances and cell populations during vertebrate development (gastrulation, trunk extension, and the neural crest) where pluripotency factors (and more specifically POU5-like factors such as OCT4) could be regulating Hox genes. Moreover, our expression data indicate that other patterning genes respond in a similar fashion, suggesting that OCT4 and other pluripotency factors mediate a switch from repression to activation of an array of developmental regulators at the time of lineage decisions. The early activation of Hox genes is dependent on several pathways and factors, such as retinoic acid, fibroblast growth factors (FGFs), Wnts, GDF11, and CDX transcription factors (21). The results we present here suggest that OCT4 is also essential to trigger correct Hox gene activation at a specific time and in specific cell populations. However, we cannot rule out that OCT4 could be priming or maintaining these regions open for activation by other transcription factors, as has been described during reprogramming (51). Our data using loss-of-function models are compatible with a pioneering role of OCT4 on Hox early enhancers. On the other hand, when we use gain-of-function models, the results suggest a direct activation of Hox genes by OCT4. These two possibilities are not mutually exclusive and, certainly, OCT4 binding could favor the later recruitment and activity of other factors. For example, the HoxB P-site is embedded within the Hoxb1 autoregulatory element that drives expression in rhombomere 4 of the hindbrain (35). This region binds SOX and OCT proteins that are necessary for the optimal response to transcriptional activation by HOX/PBX heterodimers (36). Thus, the early binding of OCT4, even before Hoxb1 is expressed, could facilitate later recruitment of HOXB1 to nearby sequences. In agreement with this hypothesis, Hoxb1 expression in rhombomere 4 is greatly diminished in Oct4 mutant embryos.
In summary, initial lineage specification involves not only dismantling of the core pluripotency gene regulatory network (2) but also a switch in function of key factors such as OCT4 from repressors to activators that would supervise the transition from pluripotency to lineage determination. Our results provide new insight into temporally dynamic roles for factors such as OCT4, which, beyond their well-described function in pluripotency, are directly responsible for regulating genes associated with differentiation programs. Animal models Mouse lines used in this study were housed and maintained in the animal facility at the Centro Nacional de Investigaciones Cardiovasculares (Madrid, Spain) in accordance with national and European legislation. Procedures were approved by the CNIC Animal Welfare Ethics Committee and by the Area of Animal Protection of the Regional Government of Madrid (ref. PROEX 196/14). Double-homozygote transgenic males of the Oct4/rtTA (R26-M2rtTA;Col1a1-tetO-Oct4) (14) or Nanog/rtTA (R26-M2rtTA;Col1a1-tetO-Nanog) (15) mouse lines were mated with CD1 females, which were treated with dox (0.2 or 1 mg/ml) in the drinking water to induce the Oct4 or Nanog transgene, respectively, in embryos. For Oct4 transgene induction in E7.5 embryos to be used for in situ hybridization, a single 100-µl intraperitoneal injection of dox (25 µg/µl) was administered to pregnant females at E5.5, followed by dox administration (0.5 mg/ml) in drinking water. Nontreated mice of the same genotype were used as controls. Double-homozygote R26CreERT2;Oct4 LoxP/LoxP (24) mice were mated, and females were treated at E6.5 by administering a single dose of 4-OHT (5 mg, at 25 µg/µl) by gavage. Mouse lines deleted for the Oct4 distal site adjacent to Hoxb1 were generated by blastocyst injection of mutated ES cells (see below) following standard procedures (52) and genotyped using the primers specified in data file S4. RNA sequencing RNA-seq was performed with three biological replicates, each consisting of pools of 8 to 12 E7.5 embryos or 3 E9.5 double-heterozygous embryos obtained from untreated (control) or dox-treated females. Levels of Oct4 or Nanog overexpression were tested by RT-qPCR for each independent litter before RNA-seq. Likewise, three biological replicates of E14 ES cells were used for RNA-seq. Single-end sequencing was performed by the CNIC Genomics Unit using a GAIIx sequencer. Adapters were removed with Cutadapt v1.14, and sequences were mapped and quantified using RSEM v1.2.20 against the transcriptome set from Mouse Genome Reference NCBIM37 and Ensembl Gene Build version 67. Differentially expressed genes between groups were identified using the limma Bioconductor package. Only genes with P < 0.05 after adjustment through the Benjamini-Hochberg procedure were considered significant. Clustering analysis was conducted for all genes differentially expressed between the induced and control conditions in any of the four conditions. Overrepresented biological categories were identified using DAVID v6.8 (53). Lists of genes located close to OCT4- and NANOG-bound genomic regions were generated from published ChIP-seq datasets (18,19). Each ChIP-seq peak was assigned to the single nearest gene in a 100-kb window. RNA-seq data are available at the NCBI Gene Expression Omnibus (GEO) database under accession number GSE94954. Single-cell gene expression data from Pijuan-Sala et al. (22) were analyzed to assess coexpression of Oct4 and Hox genes.
A gene is considered to be expressed in a cell when its expression level is >0 fragments per million. RT-qPCR assays Total RNA from single embryos from E8.0 onward, embryo pools up to E7.5, or ES cells directly lysed in their wells, was extracted with the RNeasy kit (Qiagen) and digested with deoxyribonuclease I (Qiagen) to remove genomic DNA. Total RNA (0.5 to 1.0 µg) was reverse transcribed using the Quantitech Reverse kit (Applied Biosystems). qPCR was performed with SYBR Green Master Mix (Applied Biosystems) on an AB 7900-Fast-384 machine. qPCR primers are listed in data file S4. Expression values were normalized to the expression of Actb and Ywhaz (whose expression as measured in our RNA-seq data did not change upon Oct4 or Nanog induction) using the comparative CT method (54), and SDs were calculated and plotted using Prism 7.0 software (GraphPad). All assays were performed in triplicate. In situ hybridization In situ hybridization in whole mount embryos or sections was performed using digoxigenin-labeled probes as described (55). Probes for Hoxb1, Hoxb4, Hoxb9, and Otx2 were generated by PCR (primers listed in data file S4), and the probe for Oct4 was provided by T. Rodriguez (Imperial College London). Early embryos were staged according to Forlani et al. (56). Quantification of in situ hybridization staining was carried out using Fiji to measure the area and length of the positive domain of Hox gene expression, in both the neural tube and the somites. Values were normalized by the total length or area of the embryo from the otic vesicle to the end of the tail. 4C sequencing 4C was performed as previously described (30,57) on two replicates of pools of 60 to 70 E9.5 embryos or 1 × 10⁶ to 2 × 10⁶ G4 ES cells. Samples were cross-linked with 2% paraformaldehyde, frozen in liquid nitrogen, and stored at −80°C. Chromatin was digested with Dpn II (New England BioLabs) followed by Nla III (New England BioLabs), and ligated with T4 DNA Ligase (Promega). For all experiments, 0.5 to 1 µg of the resulting 4C template was used for the subsequent PCR (primers listed in data file S4). 4C libraries were sequenced (single end) at the CNIC Genomics Unit using an Illumina HiSeq 2500 sequencer. Sequences were mapped and quantified using RSEM v1.2.20 against the Mouse Genome Reference NCBIM37. Reads located in fragments flanked by two restriction sites of the same enzyme, in fragments smaller than 40 base pairs (bp), or within a window of 10 kb around the viewpoint were filtered out. Mapped reads were converted to reads per first-enzyme fragment ends and smoothed using a 30-fragment running mean window algorithm. Smoothed scores from each experiment were then normalized to the total number of reads before visualization. To calculate the frequency of captured sites per window, Fastq files were demultiplexed using Cutadapt with the viewpoint sequences as indexes. Potential Illumina adaptor contaminants and small chimeric reads were removed. Processed reads were assigned to their corresponding genomic fragment after a virtual digestion of the reference genome with the first and second restriction enzymes. Reads located in fragments within 5 kb of the viewpoint were filtered out. Quantification was performed considering each fragment end as one capture site if one or more sequences mapped to it. The number of capture sites was summarized per 30-fragment window.
The frequency of capture sites per window was used to fit a distance-decreasing monotone function, and z scores were calculated from its residuals using a modified version of FourCSeq (58). Contacts were considered significant where the z score was >2 in both replicates and deviated significantly (adjusted P < 0.05) from the normal cumulative distribution in at least one of the replicates. 4C-seq data are available at the NCBI GEO database under accession number GSE94954. Chromatin immunoprecipitation ChIP was performed using 10 E9.5 embryos or 1 × 10⁶ ES cells per experiment. After recovery, embryos were treated with collagenase type I (Stemcell Technologies, 07902) at 0.125% for 1 hour at 37°C. Then, embryos were disaggregated using a pipette and washed with cold phosphate-buffered saline. Samples were fixed, and protein-DNA complexes were cross-linked by treatment with 1% formaldehyde (Pierce, 289069) for 15 min with rocking at room temperature. To stop fixation, glycine (Nzytech, MBO1401) was added to a final concentration of 125 mM for 10 min. Next, ChIP was performed using the ChIP-IT High Sensitivity kit (Active Motif, 53040), following the manufacturer's instructions. DNA was sheared into fragments ranging from 200 to 1000 bp using a sonicator (Diagenode Bioruptor Water Bath Sonicator, 30 s on/30 s off for 30 min). Immunoprecipitations were carried out using a rabbit polyclonal anti-OCT4 antibody (Abcam, ab19857), and an anti-rabbit immunoglobulin G polyclonal antibody (Abcam, ab171870) was used as a negative control. Enrichment was measured by qPCR. A fragment from the Nanog promoter was used as a positive control (27,29), and genomic fragments from the loci of Anks1b, Smg6, and Tiam1 were used as negative controls (59) after checking that they did not contain OCT4-bound peaks (27). qPCR primers used are listed in data file S4. For anterior neural (hindbrain) and paraxial mesoderm differentiation, cells were treated as described (25). Briefly, cells were grown in monolayer on Corning p24 plates with CellBIND surface, with 0.1% gelatin (Sigma-Aldrich) added 30 min before passing, in N2B27 media supplemented with basic FGF (10 ng/ml) (R&D Systems) for 3 days (D1 to D3), and were then transferred into different media depending on the differentiation process. To induce anterior neural identity, 10 nM RA (Sigma-Aldrich) was added from D3 to D5. To induce mesodermal differentiation, the cells were treated with 5 µM CHIR99021 from D3 to D5. Cells were collected at each time point by adding lysis buffer directly to the wells. ZHBTc4 cells were treated with tet (1 µg/ml) (Sigma-Aldrich) from D0 or from D3, or left untreated, to modulate Oct4 expression. Statistical analysis No blinding or randomization method was used for mouse experiments, and sample size was not predetermined. Statistical tests used are described above where relevant and in the figure legends.
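The contact-calling logic described above (a distance-decreasing monotone background fit, z scores from its residuals, and normal-distribution P values with multiple-testing adjustment) can be sketched compactly. The single-replicate Python re-implementation below is an illustrative approximation of, not the actual code from, the modified FourCSeq pipeline: isotonic regression stands in for the monotone fit, and all function and variable names are ours. Combining replicates (z > 2 in both, adjusted P < 0.05 in at least one) would follow the rule stated above.

```python
import numpy as np
from scipy import stats
from sklearn.isotonic import IsotonicRegression

def call_contacts(distance, frequency, z_cutoff=2.0, alpha=0.05):
    """Flag 30-fragment windows whose capture-site frequency deviates from
    the distance-decreasing background trend (single replicate).

    distance  : 1D array, distance of each window from the viewpoint
    frequency : 1D array, capture-site frequency per window
    """
    # Monotone decreasing background: expected frequency vs. distance
    iso = IsotonicRegression(increasing=False, out_of_bounds="clip")
    expected = iso.fit_transform(distance, frequency)

    residuals = frequency - expected
    z = (residuals - residuals.mean()) / residuals.std(ddof=1)

    # One-sided P values from the normal cumulative distribution,
    # Benjamini-Hochberg adjusted
    p = stats.norm.sf(z)
    n = len(p)
    order = np.argsort(p)
    padj = np.empty_like(p)
    padj[order] = np.minimum.accumulate(
        (p[order] * n / np.arange(1, n + 1))[::-1])[::-1]
    padj = np.clip(padj, 0, 1)

    return (z > z_cutoff) & (padj < alpha)

# Toy usage: decaying background plus one spiked window
rng = np.random.default_rng(1)
dist = np.arange(1, 101, dtype=float)
freq = 50 / dist + rng.normal(0, 0.5, size=100)
freq[10] += 10.0                                # a genuine contact
print(np.where(call_contacts(dist, freq))[0])   # expected to flag window 10
```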
Transcription start site signal profiling improves transposable element RNA expression analysis at locus-level The transcriptional activity of Transposable Elements (TEs) has been implicated in numerous pathological processes, including neurodegenerative diseases such as amyotrophic lateral sclerosis and frontotemporal lobar degeneration. TE expression analysis from short-read sequencing technologies is, however, challenging due to the multitude of similar sequences derived from singular TE subfamilies and the exaptation of TEs within longer coding or non-coding RNAs. Specialised tools have been developed to quantify the expression of TEs that either rely on probabilistic re-distribution of multimapper count fractions or allow for discarding multimappers altogether. Until now, benchmarking across those tools was largely limited to aggregated expression estimates over whole TE subfamilies. Here, we compared the performance of recently published tools (SQuIRE, TElocal, SalmonTE) with simplistic quantification strategies (featureCounts in unique, fraction and random modes) at the individual locus level. Using simulated datasets, we examined the false discovery rate and the primary drivers of false-positive hits in the optimal quantification strategy. Our findings reveal a number of false discoveries that exceeds the total number of correctly recovered active loci for all the quantification strategies, including the best performing tool, TElocal. As a remedy, filtering based on a minimum number of read counts or baseMean expression improves the F1 score and decreases the number of false positives. Finally, we demonstrate that additional profiling of Transcription Start Site (TSS) mapping statistics (using a k-means clustering approach) significantly improves the performance of TElocal while reporting a reliable set of detected and differentially expressed TEs in human simulated RNA-seq data. Introduction The dysregulation of Transposable elements (TEs) has been associated with many phenotypes and disorders such as ageing (Andrenacci et al., 2020; Gorbunova et al., 2021), neurodegenerative diseases (Jacob-Hirsch et al., 2018; Savage et al., 2019) and cancers (Jansz and Faulkner, 2021; Grundy et al., 2022). These findings fuel the interest in profiling the repeatome on a global scale in related or similar physiologies. For instance, TE transcriptional profiling led to the formulation of the "retrotransposon storm" hypothesis of age-dependent neurodegeneration due to a global derepression of TEs (Dubnau, 2018). These discoveries also opened new therapeutic avenues targeting the activity of TEs by applying viral suppressants (Gold et al., 2019; Tam et al., 2019). TEs are repetitive DNA segments that have the ability to move and replicate in the genome and occupy large fractions of mammalian genomes. At least 45% and 37.5% of the human and mouse genome, respectively, is composed of TE DNA sequences (Pace and Feschotte, 2007). Consequently, the computational analysis for the detection and differential expression (DE) of TEs faces significant challenges due to a high false discovery rate (FDR). The repetitiveness of TEs leads to the generation of multiple identical or highly similar reads that can be attributed back to multiple genomic loci, i.e. multimappers.
Moreover, many TEs are annotated in intronic regions, which makes it difficult to distinguish between autonomous TE transcription and exaptation events like TE exonization in coding transcripts (Zemojtel et al., 2007; Lin et al., 2008; Schmitz and Brosius, 2011; Park et al., 2012; Davis et al., 2017; Deininger et al., 2017; Gonçalves et al., 2017). Both of these challenges can exacerbate one another, as expression of a coding transcript containing an exapted TE sequence might be reflected in multimappers. To resolve these challenges, the expression analysis of TEs at the subfamily level has become a popular strategy. However, in some cases, the activity of a single locus could also be the main driver of "subfamily"-level overexpression and hence the primary cause of pathology. Such singular active loci could promote tumorigenesis via the regulation of oncogenes (Babaian et al., 2016; Jang et al., 2019). Further evidence for this comes from discordant epigenetic profiles of various normal and tumour tissues, where only some TE loci were demethylated while most of the loci within the same subfamilies remained repressed (Ewing et al., 2020). Regardless of the quantification level (locus or subfamily), two major approaches are usually undertaken to deal with multimappers: 1) incorporate them and distribute their total counts or fractions between the putative origin entities; or 2) discard the multimappers altogether. Depending on the tool, the former strategy may overinflate the expression estimates for subfamilies, while the latter may result in an estimate of mappability rather than of expression for the subfamilies, especially for the evolutionarily younger ones (Lanciano and Cristofari, 2020). One major strategy for incorporating multimappers in the TE expression analysis, irrespective of the analysis level (locus or subfamily), is to leverage the Expectation-Maximization (EM) algorithm in the quantification step. This method relies on the iterative redistribution of count fractions across putative read-origin loci/consensus sequences, which may [SalmonTE, SQuIRE (Yang et al., 2019)] or may not [TEtranscript (Jin et al., 2015), TElocal (https://github.com/mhammell-laboratory/TElocal)] include the number of uniquely mapping reads in the estimation for each entity. Inclusion or exclusion of unique mappers depends on the basic underlying assumption about TE-derived reads: 1) loci/subfamilies that already have unique mappers unambiguously derived from them are more likely to be the source of the multimappers (SQuIRE, SalmonTE); or 2) younger subfamilies and their individual copies (loci) have higher similarity and hence will be primarily represented by multimappers (TEtranscript, TElocal). While benchmarking across the publicly available tools has previously been reported (Teissandier et al., 2019; Schwarz et al., 2022), an effort to propose a systematic downstream strategy for improving the performance of existing methods is still lacking, particularly at the locus level. In this short report, we evaluated 8 TE quantification pipelines (4 tools) based on simulated (synthetic) RNA-seq data from human and mouse. We tested the most recent and popular quantification strategies with a focus on locus-specific detection and differential expression in a dataset simulated for both gene and TE expression. Hence, we included featureCounts within Rsubread (Liao et al., 2014), SalmonTE, TElocal within the TEToolkit suite, and SQuIRE.
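To make the EM idea concrete, the toy Python sketch below redistributes multimapper counts across candidate loci in proportion to iteratively re-estimated locus abundances; it is a didactic simplification and does not reproduce the exact model of any of the tools named above.

```python
import numpy as np

def em_multimapper_counts(alignments, n_loci, n_iter=50):
    """Toy EM redistribution of multimapping reads across candidate loci.

    alignments : list of integer arrays; alignments[r] holds the candidate
                 locus indices for read r (a single entry = unique mapper).
    Returns the estimated counts per locus.
    """
    abundance = np.full(n_loci, 1.0 / n_loci)    # uniform starting abundances
    counts = np.zeros(n_loci)
    for _ in range(n_iter):
        counts = np.zeros(n_loci)
        for loci in alignments:
            w = abundance[loci]
            counts[loci] += w / w.sum()          # E-step: fractional assignment
        abundance = counts / counts.sum()        # M-step: re-estimate abundances
    return counts

# Three reads: one unique to locus 0, two shared between loci 0 and 1
reads = [np.array([0]), np.array([0, 1]), np.array([0, 1])]
print(em_multimapper_counts(reads, n_loci=2).round(2))
```

In this toy case, the unique read progressively pulls the shared reads toward locus 0, which illustrates the core assumption behind the unique-mapper-informed reassignment of tools such as SQuIRE and SalmonTE.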
We aimed to identify the best performing tool using simulated RNA-seq data and to subsequently improve the FDR and F1 score by utilising the mapping statistics around the Transcription Start Site (TSS) of TEs. Among the tools considered here, TElocal produced the best results on our simulated RNA-seq data. We demonstrated that the choice of counts and baseMean expression cutoffs is critical for reducing false-positive hits. Furthermore, k-means clustering based on the signals around the TSS of TEs aided us in filtering out a substantial number of false positives. In a nutshell, we propose additional TSS profiling downstream of TElocal, along with visual inspection of genomic regions, to significantly improve TE expression analysis at the locus level. Tested strategies and tools We chose the widely accepted strategies for TE quantification that allow quantification at the locus level [e.g. SQuIRE (v.0.9.9.92) and TElocal (v.0.1.0)] or allow for building a custom library [SalmonTE (v.0.4)]. We excluded tools that, at the moment of benchmarking, had limited use restricted to a single class of elements and no custom database-building functionality [REdiscoverTE (Kong et al., 2019)] and/or did not have developer support at the time [Telescope (Bendall et al., 2019)]. In addition to the EM mode, SQuIRE and TElocal have a unique mode as an additional quantification option that we also tested. SalmonTE relies on quasi-mapping with Salmon, for which custom databases were built using the default "index" command and a fasta file containing all TE instances [from the mm9 (mouse) and GRCh38 (human) reference genomes]; the EM step applies to the sets of reads mapping to identical sets of target TEs. However, SalmonTE does not have a specific strategy to deal with reads that span both a gene and a TE. TElocal and SQuIRE rely on RNA-seq read alignments, e.g. produced by STAR; SQuIRE by design relies on mapping with STAR, hence we chose STAR (v.2.7.5a) (Dobin et al., 2013) and used the same alignment files for quantification. SQuIRE and TElocal prioritise read assignment to genic coding regions over the TEs to account for genic reads incorporating TEs within them. In addition, three simplistic quantification strategies were assessed using the featureCounts function of the rSubread R package (v.1.34.7), which relies on the provided alignment files. We leveraged the inbuilt strategies for dealing with multimappers, i.e., exclude multimappers, distribute fractions of counts evenly, or assign them randomly. Transposable element detection benchmarking The main parameters we relied upon for comparing the performance of different strategies were the FDR and F1 score. Identified TEs were considered True Positives if they were actually present in the simulation and were assigned more counts than the detection threshold tested (0, 5, 10, 20, 30, 50, 70, or 100 raw counts). For the human stranded simulation, we also tested length filters, which exclude elements below 50, 100, 150, 200, 250 or 300 bp. Detection filtering cutoffs used the raw counts assigned per element after read mapping. Transposable element differential expression benchmarking We used the FDR and F1 score for the assessment of differential expression of TEs (DE-TEs). Identified DE-TEs were considered True Positives if they 1) were simulated to be DE, 2) showed the same direction of expression change as simulated, 3) had a baseMean expression above a filter threshold, and 4) had padj < 0.01 and |log2FC| ≥ 2.
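The TP/FP bookkeeping described above reduces to a few lines of code. The following Python sketch applies a detection cutoff and computes FDR and F1 against a simulated ground truth; the locus names and the helper function are illustrative, not part of the benchmarking code:

```python
def detection_metrics(estimated_counts, truth, cutoff=5):
    """FDR and F1 for TE detection against a simulated ground truth.

    estimated_counts : dict mapping locus -> raw counts from a quantifier
    truth            : set of loci actually simulated as expressed
    """
    detected = {locus for locus, c in estimated_counts.items() if c > cutoff}
    tp = len(detected & truth)   # detected and truly expressed
    fp = len(detected - truth)   # detected but not simulated
    fn = len(truth - detected)   # simulated but missed

    fdr = fp / (tp + fp) if (tp + fp) else 0.0
    precision = 1.0 - fdr
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return fdr, f1

# Toy example: two true loci, quantifier reports three above the cutoff
est = {"L1HS_chr1_1000": 40, "AluY_chr2_500": 12, "SVA_D_chr3_900": 9}
print(detection_metrics(est, truth={"L1HS_chr1_1000", "AluY_chr2_500"}))
```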
For the human simulation, we also tested the length filters. Differential expression detection used DESeq2-normalized baseMean values for filtering after running the DE analysis. Transcription start site profiling We estimated the coverage for a window of 400 bp around the TSS and used it to further split putatively active TEs into True or False Positives. Coverage was calculated from alignment files using deeptools (v.3.5.1) (Ramírez et al., 2016) bamCoverage ("--binSize 1 --normalizeUsing RPKM"). We used all putatively active loci passing the threshold of five counts, irrespective of their status (True or False), and further aggregated coverage statistics with deeptools computeMatrix ("--beforeRegionStartLength 200 --referencePoint TSS"). K-means clustering ("--silhouette") was further applied to the coverage statistics matrix using deeptools plotHeatmap. The silhouette score is a metric of the goodness of a clustering; it ranges from −1 to +1, where a high value indicates that a region is well matched to its own cluster. K values between 3 and 8 were tested to optimise the number of filtered FP hits using average silhouette scores, the F1 score, and the percentages of FPs and TPs. Visual inspection of each cluster's TSS profile was used to assign the clusters to the True or False category. If a reliable peak of averaged coverage fell within the window downstream of the TSS, the cluster was assigned as True; if coverage was even or peaked upstream of the TSS, the cluster was assigned as False. K = 2 was excluded both because it is insufficient to differentiate between our minimal expectation of three different TSS profile shapes described above and because of its confirmed poor performance on the trial sample. FN was assessed as the number of actual TPs in False clusters. FDR and F1 scores were calculated and compared to the performance statistics of the expression-based filtering. Only the TEs in the identified True clusters were retained for the DE analysis for the calculation of the FDR. This was achieved by assigning 0 counts per sample to those elements that fell outside the True clusters. Further quantitative and qualitative estimates were obtained as described in the previous method section. FIGURE 1 Benchmarking of TE quantification tools on the human model, stranded experiment. (A) A general overview of the simulation setup and the strategies used in benchmarking. For the human simulation, 2,000 TEs over 200 bp long and the top 13,000 genes expressed in substantia nigra were simulated in stranded and unstranded experiments using Polyester 1.22.0. The resulting simulated sequencing data were processed using 3 EM-based tools (both in EM and no-EM modes, where permissible) and 3 modes of featureCounts. (B) TE Detection FDR for different detection cutoffs using the tested tools. TElocal in unique mode (TElocal_UM) outperformed other strategies, closely followed by TElocal in EM mode (TElocal_MM); however, even with higher cutoffs, FDR reached 26%. (C) TE Differential Expression detection FDR for different expression cutoffs. (D,E) Length distribution of True Positive (TP) and False Positive (FP) hits for TElocal in UM mode at detection cutoffs 5 (D) and 50 (E). (F) Family Composition of total FP hits (Total) and FP hits overlapping the simulated expressed genes (Overlap Genes) for TElocal_UM at detection cutoffs 5 and 50. Only a minority of the FPs at both cutoffs can be explained by misattribution of the genic reads (2091/13301 for cutoff = 5 and 887/3705 for cutoff = 50).
(G) Family Composition of TP hits categorized by total and overlap genes. FC_MM_F, featureCounts using multimappers in "fraction" mode; FC_MM_R, featureCounts using multimappers in "random" mode; FC_UM, featureCounts using unique mappers only; SalTE, SalmonTE; SQuIRE_EM, SQuIRE in EM mode; SQuIRE_UM, SQuIRE in unique mode; TElocal_EM, TElocal in EM mode; TElocal_UM, TElocal in unique mode. Results To determine the performance of Transposable Element (TE) quantification pipelines, we commenced our study by simulating both stranded and unstranded paired-end sequencing reads for the human substantia nigra and the mouse forebrain (see Methods, Figure 1A; Supplementary Figure S1A; Supplementary Table S1). We assessed the detection and differential expression (DE-TE) performance of all the pipelines using the false discovery rate (FDR) and F1 score. Improvement of transposable elements characterization using expression thresholds We observed that all the pipelines employed in this study, when using default parameters, performed poorly due to the high number of false positives (FPs) for both stranded and unstranded human RNA-seq data (Figures 1B,C; Supplementary Figure S2B). For instance, using human stranded simulated data, without applying any filtering to the putatively active TEs, we found an FDR range of 71%-86% and 40%-59% for detection and DE-TEs, respectively (Figures 1B,C; Supplementary Table S1A). When we applied a detection cutoff based on the counts (for detection) or baseMean (for DE-TEs), FDR decreased for all the pipelines. TElocal surpassed the other methods, with TElocal in unique mode (TElocal_UM) exhibiting the lowest FDR, followed by TElocal in multimapping mode (TElocal_MM). Next, we focused on the characterization of the FP hits using several strategies to mitigate their numbers in the best performing pipeline (TElocal_UM). We hypothesised that the two major sources of the FPs could be related to either mapping errors or annotation errors. To test their relevance, we focused on characterising FP content, length distribution, and overlap with the simulated genes. For the human simulation and detection with TElocal_UM, we observed shorter elements among the FPs, with lengths below the minimal length cutoff used for the simulated elements (TPs) (Figures 1D,E; Supplementary Figure S2C). Additionally, we observed much higher enrichment for longer elements in FPs compared to TPs, with a secondary peak at 6 kbp that corresponds to the average length of L1HS elements. We then tested TE length as a filter in order to reduce the FPs that possibly arise from the misattribution of reads to the shorter elements (Supplementary Figures S2A,E); nevertheless, FDR did not significantly improve for either of the detection cutoffs (5 and 50 counts or baseMean). When the composition of FP hits for both cutoffs (5 and 50 counts, 200 bp length) was examined, L1 elements were the dominant FPs, followed by Alu and SVA elements (Figure 1F; Supplementary Figure S2D). The general composition of the FP hits at the detection cutoff of five was strongly enriched in L1P and L1HS elements, as well as SVAs (Supplementary Tables S2A,B). While expression was simulated for only 491 SVA loci for stranded data, TElocal_UM detected 1455 SVA loci as putatively active, including 67 loci for SVA_B, which had no simulated loci, and 911 loci for SVA_D, which had only 4 loci simulated. Similarly, far fewer loci for L1 elements were simulated as compared to the number of loci detected as putatively active.
Only a minority of the total FPs (16%–24% and 18%–29% for the stranded and unstranded simulations, respectively) could be explained by misattribution of reads derived from the overlapping simulated genes (Figure 1F; Supplementary Figure S2D). A similar distribution was observed for the TPs (Figure 1G). We obtained similar results for the mouse simulated RNA-seq dataset (Supplementary Figures S2A, S1C,D,F,G). Detection of short SINE elements such as B2 and B4 was impaired with the best performing pipeline, suggesting a possible bias in detecting these elements (Supplementary Figures S1D,G). Consistent with the human simulation, only a minor fraction of FP hits could be explained by the misattribution of reads derived from the simulated genes (~9% of FP hits at cutoff 5).

Improvement of transposable element characterisation using transcription start site profiling

As the majority of FP hits were potentially derived from mapping errors, we aimed to profile mapping statistics over the putatively active elements as a means to filter out false hits. TEs vary greatly in size; we therefore focused on profiling a window of 200 bp up- and downstream of the TSS (see Methods). This strategy would also potentially allow us to account for FPs derived from misattribution of reads from longer transcripts to the exapted TEs within them. The theoretical profile of an autonomously expressed TE (defined here as independently expressed, as opposed to exapted) would show a mapping peak downstream of its annotated TSS. An exapted TE within a longer transcript would have evenly distributed coverage both upstream and downstream of its annotated TSS, or a minor drop-off in coverage downstream of the TSS in highly repetitive regions. In the case of erroneous mapping of reads derived from related elements, we expect coverage to be shallow and scarce across the examined locus. Hence, it is theoretically possible to separate false hits from truly independently expressed elements based on their mapping statistics from RNA-seq data. To this end, we used k-means clustering to keep the clusters of TEs that showed an enrichment downstream of the annotated TSS. For instance, an enrichment profile using all the detected TEs from the human stranded simulated RNA-seq data showed a mapping peak downstream of the annotated TSS (Supplementary Figure S3). To separate the background clusters (potential FPs), we first assessed silhouette scores for k = 3 to 8 to find the optimal value for k-means clustering applied to the TSS profiles (Figure 2A; Supplementary Figure S4A; Supplementary Figures S5A,F). Based on this, we chose the two best k values for downstream filtering: 5 and 6 for the human simulation, and 7 and 8 for the mouse simulation. Using k-means profiling on the human stranded simulation data, we observed that the detection performance improved greatly.

FIGURE 2 TSS profiling improves detection and differential expression detection for the human stranded simulation. (A) Average silhouette scores for different k values per sample; the best silhouette scores for most samples are reached with k = 5 and k = 6. (B,C) F1 score and False Positive proportion improvement with TSS profiling at the detection level.
Both parameters improve significantly (Wilcoxon signed-rank test, p < 0.001) when TSS profiling is applied to elements retained with detection cutoff 5 (E5_K5 and E5_K6), as compared to only filtering by a low detection cutoff (5 counts, E5) or an increased detection cutoff (50 counts, E50). (D,E) Family composition of FP and TP hits categorized by total and overlap genes after TSS profiling. (F,G) F1 score and FDR improvement for differential expression detection with TSS profiling.

The FDR values dropped from 54.9% (counts cutoff 5) to 11.2% (counts cutoff 5, k = 5) and 10.6% (counts cutoff 5, k = 6). This improvement was achieved by reducing the number of FPs more than 10-fold without losing many TP hits, as the F1 score increased significantly from 61% (counts cutoff 5) to 86% (counts cutoff 5, k = 5) and 87% (counts cutoff 5, k = 6) (Supplementary Table S3C; Figures 2B,C). Importantly, the FDR obtained with the k-means approach at a cutoff of 5 was ~3-fold lower than the FDR at a cutoff of 50 without the k-means approach (29.9%). Comparing the family composition of FPs and TPs before TSS profiling (Figures 1F,G) and after TSS profiling (Figures 2D,E) revealed that TSS profiling reduces false positive TE hits irrespective of gene overlap and without losing TPs. We further tested the outcomes of the DE analysis performed on the putatively active TEs present within the filtered clusters only. We found that TSS profiling significantly reduced the FDR (k = 6, FDR = 1.7%) and increased the F1 score (k = 6, F1 = 0.79) compared to using only a baseMean expression filter of 5 (FDR = 33%, F1 = 0.61) or 50 (FDR = 10%, F1 = 0.73) (Figures 2F,G; Supplementary Table S3I). We then compared the expression levels of the differentially expressed elements detected with only a high expression cutoff against the TSS profiling approach. TSS profiling allowed the detection of some lowly expressed elements that would otherwise be filtered out by a higher expression cutoff; the overall mean expression of the detectable elements was nevertheless higher after TSS profiling (Supplementary Figure S3D). Similarly, detection was improved for the human unstranded simulation: the FDR dropped from 58.7% (counts cutoff 5) and 37.5% (counts cutoff 50) to 11.2% (counts cutoff 5, k = 5) and 10.5% (counts cutoff 5, k = 6) (Supplementary Figures S4B,C; Supplementary Table S3D). TSS profiling also improved the DE detection FDR (k = 6, FDR = 29%) compared to applying a low expression cutoff (baseMean = 5, FDR = 35%); however, it did not outperform a higher expression cutoff (baseMean = 50, FDR = 14%) (Supplementary Figures S4D,E; Supplementary Table S3J). Application of TSS profiling and clustering as a filtering measure improved the detection outcome for the simulated stranded RNA-seq data from mice (FDR = 18.1% at both k = 7 and k = 8) compared to the basic detection cutoff of 5 (FDR = 60%); however, the results were comparable to the higher expression cutoff filter (counts cutoff 50, FDR = 18.4%) (Supplementary Figure S5C; Supplementary Table S3G). The F1 score also improved with TSS profiling (F1 = 0.88 at counts cutoff 5 and k = 7 or 8) or a high detection cutoff (F1 = 0.88 at counts cutoff 50) compared to the basic detection cutoff (counts cutoff 5, F1 = 0.55; Supplementary Figure S5B; Supplementary Table S3).
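The clustering itself was run inside deeptools plotHeatmap; for readers who want to reproduce the k selection and cluster assignment outside deeptools, the hedged Python sketch below clusters per-locus TSS coverage profiles, scores each k by average silhouette, and flags clusters whose mean profile peaks downstream of the TSS. The matrix layout (1 bp bins with the TSS at column 200) and the automatic peak rule, which stands in for the paper's visual inspection, are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_tss_profiles(mat: np.ndarray, ks=range(3, 9), seed=0):
    """mat: loci x 400 coverage matrix, 1 bp bins, TSS at column 200."""
    # Normalise each profile so clustering captures shape, not depth
    norm = mat / (mat.sum(axis=1, keepdims=True) + 1e-9)
    best = None
    for k in ks:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(norm)
        score = silhouette_score(norm, km.labels_)
        if best is None or score > best[0]:
            best = (score, k, km)
    score, k, km = best
    # Call a cluster "True" if its mean profile peaks downstream of the TSS
    true_clusters = [
        c for c in range(k)
        if norm[km.labels_ == c].mean(axis=0).argmax() >= 200
    ]
    return km.labels_, true_clusters, k, score
```

Loci assigned to clusters outside `true_clusters` would then be zeroed out before the DE analysis, as described in the Methods.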
At the level of detecting DE-TEs, TSS profiling was outperformed by filtering with the high expression cutoff (baseMean = 50, FDR = 5.4%, F1 = 0.85) (Supplementary Figure S5D; Supplementary Table S3K). Similar to what we observed for the human stranded simulation, TSS profiling allowed us to detect some lowly expressed TEs, and the overall mean expression of these elements was higher than that of the elements retained with the high expression cutoff filtering (Supplementary Figure S5E). Finally, we obtained comparable results for the simulated unstranded RNA-seq data from mice (Supplementary Figures S5F–I; Supplementary Tables S3H,L).

Discussion

Detecting the expression of TEs at the individual locus level remains a challenging task that involves the choice of methods, parameters and downstream filtering criteria. To address this, we first benchmarked various quantification strategies using simulated short-read RNA-seq data from human and mouse. In general, all the pipelines used in this study performed poorly if no filtering was applied. TElocal compared favourably to the other methods and worked slightly better for stranded than for unstranded paired-end reads. Filtering on a minimum number of read counts is an important parameter to consider, as we saw a significant decrease in the number of false positives when increasing the mapped-read cutoff. While exploring the source of false positives, we found that only a small fraction of the TE false positives overlapped with the simulated gene coordinates. The low rate of false positives attributable to genic reads, together with the high presence of unsimulated loci in the putatively active dataset, suggests that most false positive hits derive from erroneous mapping. Following these leads, we observed that such errors directly lead to false identification of whole subfamilies of elements as active, affecting the analysis quality at the whole-subfamily level. We did not find any significant relation between the false positive hits and the length of TEs. However, there was an overrepresentation of L1HS loci identified as false positives. It has previously been proposed that younger active mobile elements have relatively fewer variants, which makes them challenging to characterise at the individual locus level with current technology (Criscione et al., 2014; Jin et al., 2015). As shown before (Teissandier et al., 2019; O'Neill et al., 2020), younger L1 elements in particular have among the worst mapping rates of all TEs examined in the human repetitome when short-read paired-end sequencing is applied. To further reduce the false positive hits, we profiled TSS mapping signals using a k-means clustering approach. In the past, k-means clustering has been used on TSS chromatin or expression data to identify ubiquitous and/or tissue-specific patterns (Shimokawa et al., 2007; Cui et al., 2017). We hypothesised that false positive and true positive TEs would have different patterns of mapping signal distribution around the TSS, and we therefore aimed to separate them using k-means clustering. Indeed, for the human genome assembly, our results showed a significant decrease in false positive hits and an increase in the overall F1 score, especially using the stranded human simulated RNA-seq data (F1 > 85%).
However, the k-means clustering approach failed to significantly improve the results in the mouse assembly, as we obtained an F1 score similar to simply increasing the count or baseMean expression cutoff. This result was not surprising, as the mouse genome is known to be more permissive for retrotransposition than the human genome; it therefore has a more complex TE landscape and annotation (Bourque et al., 2018). This finding, however, suggests that one needs to adapt the stringency of the filtering parameters to the repetitive complexity of the organism. Our study has some limitations due to the use of simulated RNA-seq data and the availability of limited resources. First, we relied only on one tool for simulation. We cannot exclude the possibility that multiple simulations using different tools could lead to inconsistent results. For example, a recent study used the same R package polyester (although a different version) for simulating most of their RNA-seq reads (Schwarz et al., 2022). The authors simulated only the TE loci but in high numbers, whereas we simulated the reads for both genes and TEs, with the majority of reads derived from the genes rather than the TEs, to incorporate more biological information. In contrast to our results, they found SalmonTE to be the best performing tool for the detection, quantification and DE of TEs. Notably, the authors relied on modifying TEtranscripts to quantify TEs at the locus level instead of using TElocal explicitly, which in principle should produce the same results. Another possible limitation of our study is that we employed only one clustering method. In the future, it would be essential to compare the performance of other clustering algorithms (e.g., hierarchical and fuzzy c-means clustering) across different TSS window sizes in numerous species. We therefore recommend adapting the specific parameters according to the organism being studied and its respective repetitome qualities, such as transpositional activity, TE age and dominating TE species, as well as the expected proportion of TE transcripts in the whole transcriptome. Nevertheless, in agreement with the previous study (Schwarz et al., 2022), we observed that slight modifications, like a stringent filtering cutoff on counts or baseMean expression, could improve the outcome of existing methods, especially using human stranded paired-end RNA-seq data. Additionally, we showed that TSS profiling of TEs significantly reduced the number of false positives. While further work is required to automate a robust pipeline, we envision that our study will serve as a reference guide to improve TE expression analysis at the locus level (Liao et al., 2019).

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
2022-10-23T15:26:08.781Z
2022-10-21T00:00:00.000
{ "year": 2022, "sha1": "efbf8ade6947f41626ba93bbe65b286c1fba2a7f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "14b4bb9370ddd328455f3d9c66591c52286e4734", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
62363
pes2o/s2orc
v3-fos-license
Efficient Stepwise Selection in Decomposable Models

In this paper, we present an efficient way of performing stepwise selection in the class of decomposable models. The main contribution of the paper is a simple characterization of the edges that can be added to a decomposable model while keeping the resulting model decomposable, and an efficient algorithm for enumerating all such edges for a given model in essentially O(1) time per edge. We also discuss how backward selection can be performed efficiently using our data structures. We also analyze the complexity of the complete stepwise selection procedure, including the complexity of choosing which of the eligible edges to add to (or delete from) the current model, with the aim of minimizing the Kullback-Leibler distance of the resulting model from the saturated model for the data.

Introduction

Undirected graphical models have become increasingly popular in areas such as information retrieval, statistical natural language processing, and vision, where they are often referred to as maximum entropy or Gibbs models, and are viewed as having various representational and statistical advantages. New tools for model selection and parameter estimation are being developed by researchers in these areas [PPL97, Hin99, ZWM97]. (Part of this work was done while the author was visiting Bell Laboratories.) General undirected models, however, have some serious disadvantages; in particular, they require an invocation of Iterative Proportional Fitting (or related iterative algorithms) to find maximum likelihood estimates, even in the case of fully-observed graphs. As the inner loop of more general parameter estimation or model selection procedures (e.g., the M step of an EM algorithm), these iterative algorithms can impose serious bottlenecks. Decomposable models are a restricted family of undirected graphical models that have a number of appealing features: (1) maximum likelihood estimates can be calculated analytically from marginal probabilities, obviating the need for Iterative Proportional Fitting; (2) closed form expressions for test statistics can be found; and (3) there are several useful links to directed models (every decomposable model has a representation as either an undirected or a directed model), inference algorithms (decomposable models are equivalent to triangulated graphs), and acyclic database schemes [BFMY83]. Decomposable models would therefore seem to provide a usefully constrained representation in which model selection and parameter estimation methods can be deployed in large-scale problems. Moreover, mixtures of decomposable models provide a natural upgrade path if the representational strictures of decomposable models are considered too severe. Although decomposable models are a subclass of undirected graphical models, the problem of finding the optimal decomposable model for a given data sample is known to be intractable, and heuristic search techniques are generally used [Edw95]. Most procedures are based on some combination of (i) forward selection, in which we start with a small model and add edges as long as an appropriate score function increases [Hec98], and (ii) backward selection, where, starting with a larger model, edges are deleted from the model. Since the intervening models encountered in the search must also be decomposable, care must be taken that deletion or addition of edges does not result in a non-decomposable model.
Backward selection procedures for decomposable models are well known in the literature [Wer76, Lau96], but efficient forward selection procedures have not yet been developed. One of the goals of the current paper is to fill this gap. This paper is a theoretical paper that makes two main contributions. First, we provide a simple characterization of the edges that can be added to a decomposable model (or, equivalently, the chordal graph corresponding to the model) while resulting in another decomposable model. Second, based on this characterization, we present an efficient algorithm for enumerating all such edges for the current model in O(n^2) time, where n is the number of attributes. We provide a careful analysis of the running time complexity of the overall forward selection procedure, including the time taken for choosing which of the eligible edges to add to the current model. We use the minimization of KL divergence as our metric, but the results we present can be extended to any other locally computable metric (e.g., [GG99]). Though our main focus is the new forward selection procedure, we also show that the algorithms are easily extended to backward selection or to a combination of forward and backward selection. The techniques and data structures we propose also naturally extend to the problem of finding strongly decomposable models in mixed graphs. The remainder of the paper is organized as follows. In Section 2, we derive a simple characterization of the edges that can be added to a chordal graph while maintaining its chordality. In Section 3, we describe the data structure that we use for finding such edges efficiently and discuss how it is maintained in the presence of additions to the underlying model graph. In Section 4, we analyze the overall complexity of the stepwise selection procedures for the KL divergence metric. In Section 5, we briefly discuss how the data structures can be extended for doing backward selection, as might be required for a procedure that alternates between forward and backward selection. In Section 6, we discuss how these algorithms can be extended to the case of mixed graphs and strong decomposability. We conclude with Section 7.

Figure 1: Structure of the subgraph induced by v_a, v_b and S_ab.

2 Characterizing Edges Eligible for Stepwise Selection

There is a classical characterization of the edges that can be deleted from a chordal graph such that the resulting graph remains chordal:

Theorem 2.1 [Wer76, Lau96] Given a chordal graph G, an edge can be deleted from the graph while maintaining its chordality iff the edge belongs to exactly one of the maximal cliques of the graph.

To complement this result, we propose the following characterization of the edges that can be added to a chordal graph without violating its chordality (the proof is given in Appendix A):

Theorem 2.2 Given a chordal graph G = (V, E), an edge (v_a, v_b) ∉ E can be added to the graph while maintaining its chordality iff it satisfies the following properties:
1. There exists a subset of nodes S_ab ⊆ V − {v_a, v_b} such that (v_a, u), (v_b, u) ∈ E for all u ∈ S_ab; i.e., v_a and v_b are connected to all vertices in S_ab.
2. The set S_ab is the minimal separator for v_a and v_b in G (note that, since G is chordal, this implies that S_ab is a clique); i.e., removing the nodes in S_ab and all their incident edges from G separates v_a and v_b, and no proper subset of S_ab has this property.
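For intuition, the two conditions can be tested directly on small graphs. Every v_a–v_b separator must contain every common neighbour of v_a and v_b (each common neighbour forms a two-edge path between them), and in a chordal graph the common neighbourhood of a non-adjacent pair is automatically a clique (two non-adjacent common neighbours would create a chordless 4-cycle); so the test reduces to asking whether the common neighbourhood separates v_a from v_b. The following Python sketch, using networkx (the function names are ours, not the paper's), implements this test, the Theorem 2.1 deletion test, and a brute-force chordality oracle for cross-checking.

```python
import networkx as nx

def addable(G: nx.Graph, va, vb) -> bool:
    """Theorem 2.2: (va, vb) can be added to chordal G iff the common
    neighbourhood of va and vb separates va from vb (see the reduction
    in the text above)."""
    if va == vb or G.has_edge(va, vb):
        return False
    S = set(G[va]) & set(G[vb])        # candidate separator S_ab
    H = G.copy()
    H.remove_nodes_from(S)
    return not nx.has_path(H, va, vb)  # S_ab must disconnect va from vb

def deletable(G: nx.Graph, va, vb) -> bool:
    """Theorem 2.1: (va, vb) can be deleted iff it lies in exactly one
    maximal clique of G."""
    n = sum(1 for c in nx.find_cliques(G) if va in c and vb in c)
    return n == 1

def addable_oracle(G: nx.Graph, va, vb) -> bool:
    """Brute force: add the edge and test chordality directly."""
    H = G.copy()
    H.add_edge(va, vb)
    return nx.is_chordal(H)
```

The oracle re-tests chordality from scratch for every candidate edge, whereas the clique graph data structures developed below amortize the enumeration to essentially O(1) time per edge.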
For efficient enumeration of the eligible edges, we maintain two auxiliary data structures: (i) a clique graph [GHP95] corresponding to the current chordal graph, and (ii) a boolean n × n matrix, indexed by the attributes of the data, which records whether each pair of nodes is eligible for addition.

Definition and Properties

Definition: A clique graph of a chordal graph G = (V, E), denoted by CG_G, has the maximal cliques of the chordal graph as its vertices, and has the property that, given any two maximal cliques C_1 and C_2, there is an edge between these two cliques iff C_1 ∩ C_2 separates the node sets C_1 \ (C_1 ∩ C_2) and C_2 \ (C_1 ∩ C_2). Note that this usage of the term "clique graph" is non-standard; we are following the terminology of [GHP95]. We will note some properties of clique graphs.

Update Algorithm

Let G = (V, E) be the original chordal graph and let CG_G be the clique graph. Let the new edge that is added be (v_a, v_b). Also, let (C_a, C_b) be the corresponding edge in the clique graph (the edge from which this pair of nodes was obtained) and let S_ab = C_a ∩ C_b. Finally, let G′ and CG_G′ be the new model and clique graphs. Addition of the edge (v_a, v_b) creates a new maximal clique C_ab = S_ab + v_a + v_b, as shown in Figure 2. It is possible that C_a ⊂ C_ab (or C_b ⊂ C_ab), in which case C_a (C_b) will not be a maximal clique in the new chordal graph. We assume for now that this does not happen. In that case, adding the edge (v_a, v_b) to G results in a partial clique graph structure as shown in Figure 2. Note that the new clique will be connected to C_a and C_b in CG_G′. After this node has been created and added to the clique graph, the update algorithm has to delete those edges from the clique graph which do not satisfy the clique graph property (Section 3.1.1). It is easy to see that no new edges between already existing maximal cliques will have to be added. Given this, the update algorithm is as follows (see Appendix B for the correctness proof):
1. Let G″ = G − S_ab. Find all nodes that are connected to v_a and to v_b in G″ and maintain this information in two arrays indexed by the vertices of G.
2. Deciding whether to keep an edge (C_1, C_2) ∈ CG_G in CG_G′: let S_12 = C_1 ∩ C_2. If S_12 ≠ S_ab, then keep this edge. Otherwise, consider the graph G − S_12 (= G″). If v_a is connected to C_1 \ S_12 in this graph and v_b is connected to C_2 \ S_12, or vice versa, do not keep this edge in CG_G′; otherwise, keep it. This check can be performed in O(1) time using the arrays constructed in Step 1.

Complexity of Overall Forward Selection Procedure

The overall forward selection procedure has two parts: (1) enumerating all edges that can be added to the current model, and (2) choosing which of the eligible edges to add. For the KL divergence metric, the cost of the second part is dominated by computing marginal entropies. Consider a clique C′ adjacent to the new clique C_ab in the clique graph, with separator S′; the following cases bound the number of new entropies required after adding (v_a, v_b):
• S′ ⊂ S_ab + v_a: The edge (C′, C_a) belongs to CG_G′ and also to CG_G. This implies that the pairs of nodes (v_x, v_y), with v_x ∈ C′ \ S′ and v_y ∈ (S_ab + v_a) \ S′, were eligible for addition in G as well, and as such the entropies needed for these pairs have already been computed. Thus, the only new entropies that need to be computed are for pairs of nodes (v_x, v_b) with v_x ∈ C′ \ S′, and again we only need to compute H(S′ + v_x) and H(S′ + v_x + v_b).
• S′ = S_ab + v_b or S′ ⊂ S_ab + v_b: The analysis for these two cases is similar. Thus, the only new entropies that need to be computed are those corresponding to pairs of nodes of the form (v_a, v_x) or (v_b, v_x), and we need to compute at most two entropies for every such pair.
Since there are at most (n − n_a) + (n − n_b) such pairs, the total number of new entropies that need to be computed is at most 2(n − n_a) + 2(n − n_b). ∎

Note that this theorem assumes that all the entropies required for forward selection in the current model have already been computed, and hence it does not apply to the very first step. For example, if we start with the null model (the empty model graph), then we need to compute n(n − 1)/2 entropies to decide which edge to add in the first step.

Backward Selection

In this section, we briefly outline how to extend our data structures for doing backward selection, as might be required for a procedure which alternates between forward selection and backward selection. Details are deferred to the full version of the paper. For efficient enumeration of edges eligible for deletion and update of the clique graph data structures, we need to make two changes:
• A matrix of size n × n, indexed by the vertices of the model, is maintained for the enumeration of edges eligible for deletion. The entry in the matrix corresponding to a pair of nodes (u, v) tells us whether the edge is present in the model and, if so, whether it is eligible for deletion.
• The binary index on the separators is augmented to keep the intersection sets for every possible pair of maximal cliques. This information is required to "re-insert" those edges in the clique graph that might have been deleted in the reverse step of adding this edge to the graph.
The algorithms for clique graph update described earlier have to be modified to maintain these data structures as well, but these updates can also be performed in O(n^2) time during both backward selection and forward selection. Also, it can be proved that:

Theorem 5.1 The number of new entropies that need to be computed after deleting an edge from the underlying model graph (i.e., after performing one backward selection step) is at most (|S_ab| − 1), and those are all of the form H(S_ab \ {v_x}), for v_x ∈ S_ab.

6 Mixed Graphs

The results in this paper extend readily to the case of mixed graphs and strong decomposability, via the following theorem:

Theorem 6.1 [Lau96] Given a strongly decomposable mixed graph G = (V, E), with V = V_d ∪ V_c, where V_c is the subset of vertices corresponding to the continuous variables and V_d is the subset of vertices corresponding to the discrete variables, then the graph

Decomposable models possess several important characteristics that make them an appealing class of statistical models, as has been observed in applied contexts ranging from word sense disambiguation [BW94]

A Proof of Theorem 2.2

The proof of correctness of Theorem 2.2 involves two parts: (i) proving that the graph created after adding such an edge is chordal, and (ii) proving that for any chordal graph G′ = (V, E′) with E ⊂ E′, G′ can be obtained starting from G by repeated application of this theorem. Before we proceed with this proof, we will need some definitions:

Definition: An elimination order on a graph G is a permutation of its vertices.

Definition: Given an elimination order v_1, ..., v_n, the elimination scheme for this order is as follows: starting with v_1, remove v_1 from the graph and add all edges needed to make the neighbourhood of v_1 a clique; then proceed to v_2 in the resulting graph, and so on. The order is perfect if this scheme adds no new edges.

Now consider the following partial elimination order for G: ..., v_a, n_1, ..., n_l, s_1, ..., s_k, v_b. We claim that this order can be achieved by the LexBFS algorithm, starting with v_b, if the ties are broken correctly, and hence that it is part of a perfect elimination order.
The node just preceding v_b can be any neighbour of v_b, and hence we can choose it to be s_k. Now note that s_{k−1} is connected to both s_k and v_b, and as such we can break the next tie in its favour irrespective of the rest of the nodes, and so on for s_{k−2}, ..., s_1. After this is done, the rest of the neighbours of v_b will follow in some order (this order is not relevant to us, and without loss of generality we can assume it to be n_1, ..., n_l). This partial elimination order can be replaced by ..., n_1, ..., n_l, v_a, s_1, ..., s_k, v_b without compromising the "perfectness" of the elimination order, since it does not add any new edges unless the earlier elimination order was not perfect. Finally, this elimination order is perfect for G′ as well, and hence G′ is chordal.

Proof: This theorem follows from the proof that the backward selection procedure is exhaustive [FL89].

Proof: Consider the graph G − S_12 (Figure 4). Since S_12 is a separator for the node sets C_1 \ S_12 and C_2 \ S_12, there will be at least two connected components in this graph. Let P_1 consist of all the nodes reachable from some node in C_1 \ S_12, let P_2 consist of the nodes reachable from some node in C_2 \ S_12, and let P_3 consist of the rest of the nodes. So C_1 \ S_12 ⊂ P_1 and C_2 \ S_12 ⊂ P_2. Clearly, unless v_a ∈ P_1 and v_b ∈ P_2, or v_a ∈ P_2 and v_b ∈ P_1, the addition of the edge (v_a, v_b) does not affect the edge (C_1, C_2). Now, assume v_a ∈ P_1 and v_b ∈ P_2. In that case, S_12 separates v_a and v_b, and hence S_ab ⊆ S_12 (this follows since S_ab is the minimal separator for the nodes v_a and v_b). But, since v_a is reachable from C_1 \ S_12 in G − S_12 and v_b is reachable from C_2 \ S_12 in G − S_12, S_ab separates v_a and v_b only if S_ab also separates C_1 \ S_12 and C_2 \ S_12. Hence, S_12 ⊆ S_ab. Therefore, S_12 = S_ab, which contradicts our assumption. ∎

Theorem B.2 Given an edge (C_1, C_2) ∈ CG_G with separator S_12 = C_1 ∩ C_2, if S_12 = S_ab, then this edge is not in CG_G′ only if v_a is connected to C_1 \ S_12 and v_b is connected to C_2 \ S_12 in G − S_ab, or vice versa.

Proof: Let P_1, P_2 and P_3 be defined as in the preceding proof. This theorem follows from the observation that unless v_a ∈ P_1 and v_b ∈ P_2 (or vice versa), the addition of the edge (v_a, v_b) does not have any effect on the separability of C_1 \ S_12 and C_2 \ S_12 by S_12. ∎

B.2 Correctness of Step 3: We use the following property of clique graphs in proving the correctness of this step:
2013-01-10T08:23:22.000Z
2001-08-02T00:00:00.000
{ "year": 2001, "sha1": "4694f6d4d704436fece4fb422c8eac14763dd658", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a1dd8b93d144d5ba8b35193c6197d0b5e0bc9762", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
3718946
pes2o/s2orc
v3-fos-license
Characterisation and deployment of an immobilised pH sensor spot towards surface ocean pH measurements

Highlights:
• Immobilised pH sensor spot characterised over a pH range 7.8–8.2.
• Response time of 50 s at 25 °C.
• Temperature and salinity dependence investigated.
• Deployed as an autonomous underway sensor.
• Achieved shipboard precision of 0.0074 pH in the Southern Ocean, over one month.

Optodes are well suited for this purpose because of their simple design and low power requirements. The technology is increasingly used for oceanic dissolved oxygen measurements. We present a detailed method on the use of immobilised fluorescence indicator spots to determine pH in ocean waters across the pH range 7.6–8.2. We characterised the temperature (−0.046 pH/°C from 5 to 25 °C) and salinity dependences (−0.01 pH/psu over 5–35), and performed a preliminary investigation into the influence of chlorophyll on the pH measurement. The apparent pKa of the sensor spots was 6.93 at 20 °C. A drift of 0.00014 R (ca. 0.0004 pH, at 25 °C, salinity 35) was observed over a 3 day period in a laboratory-based drift experiment. We achieved a precision of 0.0074 pH units, and observed a drift of 0.06 pH units, during a test deployment of 5 week duration in the Southern Ocean as an underway surface ocean sensor, which was corrected for using certified reference materials. The temperature and salinity dependences were accounted for with the algorithm R = (0.00034 − 0.17·pH + 0.15·S² + 0.0067·T − 0.0084·S)·1.075. This study provides a first step towards a pH optode system suitable for autonomous deployment. The use of a short duration, low power illumination (LED current 0.2 mA, 5 µs illumination time) improved the lifetime and precision of the spot. Further improvements to the pH indicator spot operations include regular application of certified reference materials for drift correction and cross-calibration against a spectrophotometric pH system. Desirable future developments should involve novel fluorescent spots with improved response time and apparent pKa values closer to the pH of surface ocean waters.

Introduction

During the period 2002–2011, global average atmospheric carbon dioxide (CO2) concentrations increased by ~2.0 ppm per year, the highest rate of increase since monitoring began in the 1950s [1]. Atmospheric CO2 concentrations are expected to continue to rise, with the ocean absorbing ca. 24% of the anthropogenically emitted CO2 [2,3]. The current increase in atmospheric CO2 is causing a surface ocean pH decrease of ~0.002 pH units per year [4–6], and a long-term pH decrease from a pre-industrial pH of 8.25 to today's pH of 8.1 [7]. The decrease in surface ocean pH has been observed at time series stations in both the North Atlantic and North Pacific Oceans [4]. Under the IPCC business-as-usual CO2 emission scenario (IS92a) [8], Caldeira and Wickett have predicted further surface ocean decreases of up to 0.8 pH units by the year 2300 [9]. To monitor ocean pH and determine potential effects on ecosystems, in situ pH sensors are desirable. In this paper, we present an optode pH sensor based on fluorescent lifetime detection for high resolution autonomous monitoring of surface ocean waters. The pH sensor was deployed as an autonomous shipboard system for surface water measurements in the Southern Ocean.
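For orientation, pH is recovered from a measured ratio R at known temperature and salinity by inverting the calibration function. A minimal Python sketch of that inversion is given below, using the algorithm as reconstructed in the abstract above; the exact functional form and coefficients should be verified against the original publication, so treat this purely as an illustration of the root-finding step.

```python
from scipy.optimize import brentq

def r_from_ph(ph: float, T: float, S: float) -> float:
    # Calibration from the abstract as reconstructed above (assumed form).
    return (0.00034 - 0.17 * ph + 0.15 * S**2 + 0.0067 * T - 0.0084 * S) * 1.075

def ph_from_r(r: float, T: float, S: float, lo: float = 5.0, hi: float = 10.0) -> float:
    # R is strictly decreasing in pH here (dR/dpH = -0.17 * 1.075), so a
    # bracketing root-finder recovers pH uniquely within [lo, hi].
    return brentq(lambda ph: r_from_ph(ph, T, S) - r, lo, hi)
```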
The Southern Ocean is an important sink for CO2 [10], due to low water temperatures and deep water formation, and is likely to suffer detrimental ecosystem effects as a result of ocean acidification [11]. The research cruise was a good platform to assess the suitability of the pH sensor for open ocean measurements, and marked a first step towards its application as an in situ sensor. The research cruise was undertaken as part of the United Kingdom Ocean Acidification (UKOA) programme, which investigated the effects of pCO2 gradients in surface waters on biogeochemical processes, calcification and ecosystem functioning. Dissolved inorganic carbon (DIC) and total alkalinity (TA) measurements were undertaken, providing multiple opportunities for sensor validation.

pH sensors for oceanic waters

Optodes consist of a pH sensitive compound immobilised in a support matrix (termed a sensor spot), typically placed at the end of a waveguide or fibre optic cable, which provides a channel for the excitation and emission light to travel [12]. pH optodes are not unique to environmental sensing; oxygen optodes are based on the same measurement principle and have been deployed regularly for water column observations on CTD (conductivity, temperature and depth) rosette frames and Argo profiling floats, thereby demonstrating the great potential of the optode technology [13]. However, pH optodes have thus far only been deployed in situ in sediments, yielding a precision of 0.0057 pH [14]. The motivation for characterising an optode pH sensor for deployment in open ocean waters using fluorescent lifetime detection was based on the perceived advantages of this approach over other technologies, recently reviewed in detail by Rérolle [15]. Other widely used techniques for pH measurements in seawater include potentiometric and spectrophotometric approaches [15]. Potentiometric pH systems are highly portable, with a measurement rate of 1 Hz [16], a precision of 0.003 pH units in the laboratory [17] and a shipboard accuracy of 0.01 pH [18]. However, prolonged measurements in high ionic strength solutions lead to inexact determination of the liquid junction and reference potentials of glass electrodes [16], resulting in measurement drift (0.05 pH month⁻¹) [18] and significant systematic errors [19]. Electrode drift can be tackled by regular recalibration using spectrophotometric measurements (monthly for the individual electrode reference potential, and daily for electrode intercept potentials) [20] or calibration with pH buffers [21]. With seawater solution calibrations, pH electrodes have been successfully deployed on research cruises in estuarine and coastal environments with a precision of ca. 0.004 pH [22,23], and in situ in highly dynamic hydrothermal vent environments with a precision of ±0.06 pH [24]. Recently developed ion-selective field-effect transistors (ISFETs) are a major advance in potentiometric pH measurements and have been successfully tested in seawater [25], allowing analysis of up to ca. 20 samples per minute [26] with a precision of 0.005 pH [27]. The sensors are known to drift between 0.03 and 0.05 pH upon initial deployment, though the exact magnitude of the drift will depend on the ISFET materials and packaging [28]. Long conditioning periods (1.5 months) prior to calibration can reduce this drift [25], and each sensor requires a full individual calibration prior to deployment [29].
Spectrophotometry is particularly well suited to seawater pH measurements and has been widely implemented in situ [15]; recent examples include the SAMI-pH instrument [30] and a high precision microfluidic system [31]. Spectrophotometric pH measurements have improved since their initial deployments in seawater, with precisions recorded as low as 0.0007 pH [32,33] and several systems achieving a precision of ca. 0.001 pH units [34–38]. The use of wet chemical spectrophotometry requires indicator storage, as well as valves and pumps to propel sample and indicator solutions through the system. The reagents may have limited lifetimes (~1 year) and specific storage requirements (e.g., exclusion of UV), while bubbles and particles introduced into the fluidic system can interfere with the quality of the pH measurement [39]. Despite these potential issues, the spectrophotometric SAMI-pH has been successfully deployed in situ for more than 2 years [40], and a microfluidic system with a measurement frequency of 0.5 Hz [35] has been demonstrated. An ideal ocean pH sensor combines the limited calibration requirements of the spectrophotometric sensors with the simplicity and lack of mechanical components of the potentiometric sensors. Optodes are a newly emerging technology designed with both of these intended advantages.

Immobilised sensing spots

The pH sensing spots contain a pH-sensitive fluorescent dye (indicator) immobilised in a gas impermeable membrane attached to a support matrix. The practical measurement range is within ±1.5 pH units of the pKa value of the indicator [41], where the pKa is the isoelectric point at which the concentration of the acidic form of the indicator equals that of the basic form. The sensor spot's indicator pKa can be altered with different immobilisation approaches, such as the type of membrane and support matrix used [12]. Immobilisation in more hydrophilic membranes such as cellulose results in smaller pKa changes compared to more hydrophobic membranes such as polyurethane hydrogels [42]. Crosslinking within the membrane may change the pore size, while charges in the membrane can affect both the sensitivity and the pKa [43]. The immobilisation technique also needs to be considered: ionic associations between the membrane and the pH indicator 8-hydroxypyrene-1,3,6-trisulfonic acid (HPTS), for example, increase the apparent pKa of HPTS, whereas the use of more covalent interactions lowers the apparent pKa [44]. For applications in surface ocean samples, the pKa of the immobilised indicator should be close to pH 7.7 to cover the oceanic pH range. The majority of immobilised pH indicators are based on fluorescein [45] and pyranine derivatives such as HPTS, due to their stability upon immobilisation and a free pKa of ca. 7.3, which allows for measurements between pH 6 and 9 [44]. The pH of the sample solution determines the fluorescence emission of the pH indicator; the protonated and deprotonated forms of the indicator dye fluoresce at different wavelengths. Methods based on measurements of fluorescence intensity alone have several inherent problems, such as sensitivity to light source fluctuations, background light and solution turbidity [46,47]. The additional immobilisation of a pH-insensitive fluorescent reference dye alongside the indicator dye allows measurement by dual lifetime referencing (DLR) or intensity ratiometric methods, thereby reducing the problems of intensity-only measurements [46,48].
Intensity ratiometric methods utilise the intensity ratio between the indicator and reference dye emissions, requiring that the reference dye has similar optical properties to the indicator [41]. For DLR, the reference dye must be pH-insensitive, have similar or overlapping excitation and emission frequencies to the pH indicator dye, and a longer luminescent decay time. There are two main DLR techniques: frequency-domain (f-DLR) and time-domain (t-DLR) [49]. The f-DLR technique continuously illuminates the spot with amplitude modulated light and uses the phase-angle between the excitation and the dye fluorescence [50] to determine pH. The t-DLR technique takes the ratio of two "windows" of measurement [46]: one during the excitation of the spot with the light source and one during the fluorescence decay [48]. t-DLR is a well-established methodology [46] which gives an instant visual response to pH changes. Both f-DLR and t-DLR techniques are unaffected by light intensity fluctuations, optical alignment changes, and luminophore concentration [46]. In this study, we apply the t-DLR method to the sensing spots. The pH sensor spots provide reproducible results [51] without the need for moving components, fragile electrodes or wet chemical reagents. The sensing spots are small (7 mm diameter) and require limited maintenance. Accuracy is improved by regular measurements of a certified reference material (CRM) to determine potential drift (see Section 2). The spots can be used directly, with no sub-sampling of discrete solutions, and the technique has a measurement time (5–200 s; see Table 1) comparable to in situ spectrophotometric methods (ca. 60 s [32] to 180 s [33]). Response times are diffusion controlled and are proportional to the thickness of the membranes used (1–20 µm). Recent work with immobilised pH sensors in marine sediments has demonstrated a less favourable precision (from repeated measurements of CRMs) [52] compared with spectrophotometric measurements: an average of 0.02 pH [53] versus 0.001 pH [31], respectively, discussed further below. Ambient light and excitation light exposure cause the indicator molecules to bleach and thereby become less sensitive for pH measurements, eventually requiring replacement of the spot. If the reference dye and the pH indicator bleach at different rates, drift will be observed, affecting the accuracy of the pH measurements. This light sensitivity has limited the application of pH sensing spots to marine sediments (Table 1) [14,51,53,54]. Advances in oxygen optode technologies [13], and their use throughout the water column, indicate the potential suitability of optodes for oceanic water pH measurements. In this work, we have characterised the pH sensor-spot technique for seawater application and present the first open ocean application. We first investigate the temperature and salinity dependences of the pH optode, then constrain the response time and provide an initial estimate of the longevity of the spot. We further discuss a test deployment in the Southern Ocean and subsequently evaluate our system with respect to potential applications.

Table 1. Details of reported spot-based pH optodes, where DHFA = 2′,7′-dihexyl-5(6)-N-octadecyl-carboxamidofluorescein; DHFAE = 2′,7′-dihexyl-5(6)-N-octadecyl-carboxamidofluorescein ethyl ester; DHPDS = disodium 6,8-dihydroxy-1,3-pyrenedisulfonate; HPTS = 8-hydroxypyrene-1,3,6-trisulfonic acid; DSLR = digital single-lens reflex camera; CTAB = cetrimonium bromide; R = fluorescence ratio.
Additional instrumentation and reference materials for pH measurements

A glass electrode (ROSS Ultra® combination pH electrode with epoxy body) with a benchtop pH meter (Thermo Scientific Orion 3-Star) was used for reference potentiometric pH determinations (precision of 0.01 pH). The glass pH electrode used internal temperature compensation derived from the Nernst equation [57]. National Institute of Standards and Technology (NIST) pH buffers (pH 4, 6, 7, 10; Sigma-Aldrich) and a Certified Reference Material (CRM) tris (2-amino-2-hydroxymethyl-propane-1,3-diol) buffer in artificial seawater (Batch 10, pH_tot 8.0924, salinity 35, 25 °C) from Prof. A. Dickson at Scripps Institute of Oceanography (USA) were used to calibrate the pH glass electrode for low and high salinity measurements, respectively. A lab-on-a-chip microfluidic spectrophotometric pH sensor with thymol blue indicator (indicator concentration of 2 mmol L⁻¹, precision of 0.001 pH) was used as a reference for the higher salinity pH analyses. The thymol blue extinction coefficients were determined in the laboratory (e1 = 0.0072, e2 = 2.3, e3 = 0.18) and the indicator's dissociation constant (pK2 = 8.5293) was taken from Zhang and Byrne [58]. The reader is referred to Rérolle et al. (2013) for further detail on the instrument and analytical approach [15]. The temperature of the samples during laboratory measurements was controlled using a water bath (Fluke Hart Scientific 7012, ±0.1 °C). All pH values reported in this paper are on the total pH scale. The temperature of the solutions was checked prior to measurement with a DT-612 dual input K-type thermometer (ATP, ±0.1 °C).

pH buffer solutions

The pH optode sensor was characterised at different pH values, salinities and temperatures. Non-equimolar tris pH buffers were prepared in artificial seawater according to Pratt [59]. For this purpose, 1 mol kg⁻¹ magnesium chloride (MgCl2) and calcium chloride (CaCl2) solutions were calibrated using gravimetric Mohr titrations [60]. Hydrochloric acid (HCl, 1 mol kg⁻¹) was calibrated using a gravimetric borax titration [61]. Other salts (sodium chloride, sodium sulfate, tris, and potassium chloride) were dried at 110 °C for 1 h prior to weighing. Deionised water (MilliQ, Millipore, >18.2 MΩ cm) was used to prepare and dilute all solutions. All chemicals used in the preparation of artificial seawater were of analytical grade from Sigma-Aldrich. Stock buffer solutions (50 ml) over a pH range of 7.0–8.3 were made by combining tris salt (0.08 mol kg(H2O)⁻¹), hydrochloric acid and small amounts of sodium chloride with deionised water. The pH was altered by adjusting the ratio of acidic to basic tris (HCl to tris salt), keeping the concentration of tris constant (0.08 mol kg(H2O)⁻¹) and varying the HCl concentration. The small amount of sodium chloride in the stock buffer was varied to account for the changing ionic strength contribution from the HCl. These 50 ml buffer solutions were then made up to the desired salinity with 25 ml of stock artificial seawater, in turn made up from sodium chloride, sodium sulfate, magnesium chloride, calcium chloride and potassium chloride. To study the temperature dependence of the sensor over the temperature range 5–25 °C, an array of 8 tris buffers with pH ranging from 7 to 8.3 was prepared at a fixed ionic strength of 0.7 M. The pKa of tris has a strong temperature dependence (0.03 pH °C⁻¹) [62].
The pH range of the buffers was therefore selected to obtain the desired pH range of 7.6–8.3 over the varying temperature range of the experiment. To study the salinity dependence of the optodes, an array of 7 tris buffers with pH values ranging from 7.8 to 8.2 and salinities of 5, 25 and 35 was prepared, following the method of Pratt [59]. 200 mL batches of lower salinity (5 and 25) artificial stock seawater without HCl and tris were prepared by dilution of concentrated artificial seawater (142 ml and 16 ml of the salinity 35 seawater made up to 200 ml in deionised water). The stock buffer solutions were the same as for salinity 35 seawater, regardless of the final salinity desired for the buffer solution. The 25 ml of stock seawater was added to the buffer solutions to make up the analysis solutions. The ratio between the salts was kept constant, and only the ratio of salt to water was varied to provide the different salinities.

pH optode hardware

A 5 mm diameter blue light emitting diode (LED, 470 nm, Farnell) with excitation filters (SemRock single bandpass filter 475 nm and SemRock short pass edge filter 532 nm) was used at low intensity (0.2 mA, 0.72 mW) to excite the reference and indicator dyes within the immobilised sensor spot for periods of 5 µs, thus minimising bleaching. The blue excitation light was reflected off a dichroic beam splitter (SemRock single edge BrightLine, 560 nm) through a fibre optic cable (600 µm diameter multimode optical fibre, Thorlabs, 3.8 mm diameter tubing, 1 m length) to the pH sensitive spot. The excited fluorophores subsequently emitted a fluorescence signal (630 nm) that decayed over time. The red light passed through the dichroic beam splitter and three emission filters (SemRock single bandpass filter 609 nm, SemRock short pass edge filter 785 nm and SemRock long-pass edge filter 568 nm) before entering the detector. Due to the low level of light involved, the detector was a photomultiplier tube (PMT; Hamamatsu). The spot interrogation system used a field-programmable gate array (Xilinx Spartan-3 XC3S400-5PQ208C FPGA) to control the PMT and LED. Custom-made electronics (Sensoptics Ltd, SGS 42000) were used with dedicated software (Sensoptics Photon Counter V1.2) to record the fluorescence decay curve. A diagram of the hardware setup is shown in Fig. 1. The sensor spot (PreSens, non-invasive pH spot, SP-HP5-D7) was glued to a clear poly(methyl methacrylate) (PMMA) disc using silicone rubber glue (RS silicone rubber glue, 692-542). Tests showed a negligible autofluorescence effect from the PMMA and glue. The set-up was left to dry for 2–3 days in the dark before being fastened using epoxy glue (Intertronic, OPT500149) to a polyether ether ketone (PEEK) head attached to the fibre optic cable. The fibre optic cable was wrapped in a rubber coating to minimise light loss and ambient light penetration, and connected directly to the PMT at the distal end of the fibre. The PEEK head with the sensor attached was stored in artificial seawater (at the specific salinity under investigation) to soak for at least half a day prior to measurements, as recommended by Stahl and co-workers [55]. Preconditioning allowed the sensor to adjust to the salinity/ionic strength of the measurement solution and prevented leaching of indicator and reference dyes during measurements, which (if it occurred during the measurement) would result in signal reduction [55].
This also allows any chemicals from the glues and epoxy to leach out prior to characterisation and deployment. To minimise photobleaching of the indicators by ambient light, the sensor was covered with thick, dark "blackout" material and the laboratory work was performed in a dark room. Field measurements were undertaken with the pH optode positioned in a custom-made dark chamber.

Analytical protocol for pH measurements

pH indicators are weak acids and bases and hence have the capacity to respond to changes in solution pH through proton exchange. In high pH solutions, the indicator donates protons and takes its basic form; in low pH solutions, the indicator accepts protons and takes its acidic form. The different forms of the indicator fluoresce differently, allowing the pH of the solution to be determined. To produce a single pH data point, the light source (a blue light emitting diode, LED) is pulsed in a square wave mode, with the LED on for 5 µs (t_ex) and the LED off for 20.5 µs (t_em). The LED light excites the pH indicator, causing the dye molecules to fluoresce. When the LED is off, the fluorescence gradually decays. The fluorescence decay is recorded in 100 ns bins for 20.5 µs. This is repeated 19,608 times and averaged to give an average fluorescence decay profile every 0.5 s. Higher precision and an improved signal to noise ratio are achieved by recording each sample for 200 s and integrating the profiles at 10 s intervals. With the LED on, the emission (t_ex) is a combination of fluorescence from the pH sensitive (A_pH) and reference fluorophores (A_ref-ex), shown in Fig. 2 with blue and red lines respectively. The variation in fluorescence recorded in the first 5 µs is dominated by the indicator dye and is related to the pH of the solution. We assumed that the integrated fluorescence intensity when the LED is off (t_em) is entirely derived from the decay of the pH-insensitive reference dye emission (A_ref-em), due to the shorter decay lifetime of the pH-sensitive fluorophore. The intensity over the first 5 µs of the profile (the excitation period, when the LED is on) is summed for each profile and referred to as t_ex. The intensity over the last 20.5 µs (the emission period, when the LED is off) is summed and referred to as t_em. The ratio of t_ex over t_em, R = t_ex/t_em (Equation (1)), is converted to pH using Equation (2), which includes terms (a–g) for the temperature and salinity dependence (specified in Section 3.6). The apparent pKa (henceforth denoted pKa′) of the indicator is the pKa where concentrations are used without the relevant activity coefficients to correct for the non-ideality of real solutions. The apparent pKa therefore displays not only temperature dependence, but also dependence on factors that affect the activity coefficients, such as ionic strength [63]. Without detailed knowledge of the indicator, the relevant activity coefficients could not be used when determining the pKa, and therefore pKa′ was used in this study. To determine the pKa′ of the pH indicator, and therefore the measurement range of the sensor, the optode was immersed in tris/2-aminopyridine buffer solutions with a salinity of 5, prepared according to Section 2.1, with pH values ranging from 9 to 5. The pKa′ was determined using Equations (3) [64] and (4) [65], according to the method of Salgado and Vargas-Hernández [66] (see Fig. 3).
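To make the t-DLR windowing concrete, the sketch below computes R from one averaged decay profile, assuming the profile is stored as 100 ns bins spanning the 5 µs excitation window followed by the 20.5 µs emission window; the array layout and function name are our assumptions, not the instrument's data format.

```python
import numpy as np

BIN_NS = 100                    # 100 ns per bin
EX_BINS = 5_000 // BIN_NS       # 5 us excitation window  -> 50 bins
EM_BINS = 20_500 // BIN_NS      # 20.5 us emission window -> 205 bins

def tdlr_ratio(profile: np.ndarray) -> float:
    """t-DLR ratio R = t_ex / t_em for one averaged decay profile.

    profile: photon counts in 100 ns bins; the first 50 bins cover the
    LED-on (excitation) window, the remaining 205 bins the LED-off
    (emission/decay) window.
    """
    assert profile.size == EX_BINS + EM_BINS
    t_ex = profile[:EX_BINS].sum()   # indicator + reference fluorescence
    t_em = profile[EX_BINS:].sum()   # reference-dye decay only (assumed)
    return t_ex / t_em
```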
The term A in Equation (3) and Fig. 3 is the fluorescence intensity produced at a specific solution pH, A_Basic is the maximum fluorescence output at pH 9, corresponding to the conjugate base form of the indicator (A⁻), and A_Acidic is the minimum fluorescence output at pH 5, corresponding to the acidic form (HA). The total indicator concentration at basic pH and maximum fluorescence is proportional to (A_Basic − A_Acidic). The concentration of the basic form of the indicator [A⁻] at any pH is proportional to (A − A_Acidic). The concentration of the acid form [HA] at any pH is proportional to (A_Basic − A_Acidic) − (A − A_Acidic) = (A_Basic − A). The ratio of the abundances of the acidic and basic forms of the indicator is equal to one when pKa′ = pH, as indicated by Equation (4): pH = pKa′ + log10([A⁻]/[HA]) = pKa′ + log10((A − A_Acidic)/(A_Basic − A)).

To investigate the effects of temperature variations on the pH measurement, the set of tris pH buffers (as in Section 2.2) at salinity 35 was equilibrated in the water bath for a period of 15 min, followed by pH measurements using both the reference glass electrode (as detailed in Section 2.1) and the sensing spot. The buffers remained in the water bath (as above, Fluke Hart Scientific 7012) during the measurements. The temperatures used for the calibration procedure were 5, 10, 15 and 25 °C. Temperatures of the samples were verified with a DT-612 thermometer (as above, ATP, ±0.1 °C). Each sample was measured three times and the results averaged (see Fig. 4). If the pH recorded by the reference glass electrode over the three repeats deviated by more than 0.003 pH units, the results were not averaged, and the samples outside the deviation limit (0.003 pH) were treated as separate samples. This deviation limit (±0.003 pH) is equivalent to the temperature-induced pH change in the tris buffer arising from the uncertainty in the thermometer measurement (±0.1 °C). To investigate the effects of salinity on the pH spot, the set of buffer solutions prepared at different salinities (as in Section 2.2) was equilibrated in the water bath at 25 °C for a period of 15 min, followed by pH measurements using both the reference glass electrode (as detailed in Section 2.1) and the sensing spot. Each sample was measured three times and averaged (see Fig. 5). Samples of salinity 35 were also measured with the spectrophotometric pH system (Section 2.1). The temperature was maintained at 25 °C, with the samples incubated for 15 min prior to measurement. To study the response time of the sensor, repeated alternating measurements were made of two tris buffered seawater solutions (salinity 35) with pH values of 7.2 and 8.5 at 25 °C; these pH values are at the limits of the intended measurement range. The measurements were recorded for a period of 200 s before the optode was rinsed with deionised water and transferred into the next solution; this process was repeated 25 times. The time required for the optode to reach 97% of its final stable R (t97) from first being placed in the solution is quoted as the response time, similar to the method of Tengberg and co-workers [67]. The precision of the pH optode measurements was determined from analysis of the pH CRM tris buffer and a CRM for Total Alkalinity (TA)/Dissolved Inorganic Carbon (DIC) (Prof. A. Dickson, Scripps) at 25 °C [68].
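The pKa′ extraction above amounts to fitting a sigmoid to the titration data and reading off its midpoint. A minimal Python sketch of such a fit is given below; the Boltzmann-style parameterisation (with an adjustable width, which reduces to the ideal Henderson-Hasselbalch form of Equation (4) when width = 1) and the variable names are our assumptions, not the exact procedure of [66].

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(ph, a_acidic, a_basic, pka, width):
    """Sigmoidal indicator response A(pH) between its acidic and basic plateaus."""
    return a_acidic + (a_basic - a_acidic) / (1.0 + 10 ** ((pka - ph) / width))

def fit_pka(ph: np.ndarray, signal: np.ndarray) -> float:
    """Fit the titration curve and return the apparent pKa (the midpoint)."""
    p0 = [signal.min(), signal.max(), float(np.median(ph)), 1.0]
    popt, _ = curve_fit(boltzmann, ph, signal, p0=p0)
    return popt[2]
```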
A 200 s measurement was taken before and after the continuous illumination to assess the change in the response. This was performed in a water bath at 25 °C, in artificial (salinity 35) seawater. The sample was not changed between the measurements and the continuous illumination. The continuous excitation of the spot for one hour amounted to 720 × 10^6 LED cycles. In a typical pH measurement, the spot is only excited for one fifth of the total measurement, and the hour-long continuous illumination was hence equivalent to 7200 decay profiles, 367 measurements of 50 s (the response time) or 92 measurements assuming a 200 s measurement time. [Fig. 4 caption: Response of the pH spot to temperature variations. Samples of tris-buffered artificial seawater were analysed at temperatures between 5 °C and 25 °C; reference pH was determined using a glass pH electrode calibrated with tris-buffered CRM [79]. Where error bars cannot be seen, they are smaller than the marker.] An additional test was performed with the optode recording 200 s measurements with short illuminations, as opposed to continuous illumination. The spot was set to run for 3 days in buffered artificial seawater in a sealed container to maintain a constant pH of 8.1. The spot response was recorded for 200 s every 15 min, for 254 measurements.

An investigation into the effect of chlorophyll-a on the measured R was undertaken using Emiliania huxleyi (obtained from the Roscoff culture collection (RCC), strain number RCC1228). The E. huxleyi was cultured at 16 °C under 100 µE light and diluted to specific chlorophyll-a concentrations (0.13, 0.68, 1.02, 2.05, 3.42 and 6.84 µg L−1) with salinity 35 seawater. The pH of the seawater was measured with the reference glass electrode prior to measurement with the optode. Measurement of the solutions was performed in the dark, at 25 °C in a water bath, for both the electrode and the optode. The pH was not constant during the characterisation, and its effect on R was removed by normalising the ratio (R) to a pH of 8.09, where pH_soln is the pH of the solution determined by electrode, R is the measured ratio and R_n is the normalised ratio.

Cruise deployment details

The sensor was deployed aboard the R.R.S. James Clark Ross in the Southern Ocean in January–February 2013 (cruise JR274) as part of the UKOA Programme (http://www.surfaceoa.org.uk). Over the period January 22 to 26, 2013, pH measurements were undertaken along a diagonal transect north of South Georgia (54–49°S, 38–40°W). The pH sensor was placed in the ship's main laboratory, connected to the continuous underway seawater supply with an intake at ca. 7 m depth, and measurements were conducted without filtration. Temperature and salinity in the underway seawater supply were measured using a thermosalinograph (SeaBird Electronics, Inc., SBE 45) fitted in the preparation laboratory. Samples for DIC and TA were collected at hourly intervals along this transect. Measurements of pH_tot from the DIC/TA CRMs were undertaken at the halfway point of each spot deployment on the cruise; two spots were used across the cruise, with the spot changed on 23/01/13. pH was calculated from certified values of DIC and TA using CO2SYS [69] (Matlab v2.1), with carbonate dissociation constants from Roy (1993) [70], sulphate dissociation constants from Dickson (1990) [71] and borate dissociation constants from Lee (2010) [72].
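A sketch of the same CO2SYS-style calculation of pH from certified TA/DIC, here using the Python port PyCO2SYS rather than the Matlab CO2SYS v2.1 used in the study. The option numbers follow the CO2SYS convention (Roy 1993 carbonic acid constants = option 1, Dickson 1990 bisulfate = option 1, Lee 2010 total borate = option 2) but should be checked against the PyCO2SYS documentation; the TA/DIC values are hypothetical placeholders, not the CRM's certified values.

```python
import PyCO2SYS as pyco2

results = pyco2.sys(
    par1=2300.0, par1_type=1,   # total alkalinity, umol/kg (placeholder)
    par2=2050.0, par2_type=2,   # DIC, umol/kg (placeholder)
    salinity=35.0, temperature=25.0,
    opt_k_carbonic=1,           # Roy et al. (1993)
    opt_k_bisulfate=1,          # Dickson (1990)
    opt_total_borate=2,         # Lee et al. (2010)
)
print(results["pH_total"])      # pH on the total scale at 25 degC
```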
The pH calculated from the DIC and TA CRM measurements was compared with the sensor-measured pH, and any drift from the certified value was corrected using the ratio between the measured and the calculated value. This calculation has an estimated error of ±0.0062, which is comparable to the current precision of the pH optode.

pH sensor spot characterisation

To determine the pKa′ of the pH indicator, the optode was immersed in buffer solutions ranging from pH 9 to pH 5. This produced a sigmoidal fluorescence-ratio (R) response (Fig. 3), arising from the varying fluorescence as the immobilised indicator transitioned from the basic to the acidic form. The signal flattened at the acidic and basic ends of the sigmoid as the proton donation and acceptance capacity of the indicator molecules became exhausted. The value of R increased with increasing pH, indicating that the basic form of the indicator fluoresces more intensely than the acidic form. The most sensitive region for pH observations is where the change in R per unit change in pH is greatest. This occurs in the middle of the sigmoidal fit and demonstrates the viability of the sensor for seawater measurements (average surface ocean pH range ~7.9 to 8.2) [7]. A sigmoidal response to pH variations has also been reported for other pH optode systems with immobilised indicators [49,53,65,73,74]. The obtained pKa′ value was 6.93 at 20 °C, determined using the approach detailed in Section 2.4 and denoted by the green dotted line in Fig. 3. Immobilisation of the fluorescent compound in the sensor spot caused the pKa′ to be lower than that of the free form of HPTS (7.3) [74], and similar to the immobilised values reported by Hakonen [44]. This pKa′ is lower than the pH of seawater, but the measurement range is considered to be within ±1.5 pH units of pKa′ and hence covers the typical surface ocean pH range. Nevertheless, precision and accuracy may deteriorate at the edges of this range, i.e. near pH ca. 5.4 and 8.4. Temperature and salinity effects on the pKa′ of the free (non-immobilised) pH indicator have been evaluated by other workers [75,76], and pH calibrations with immobilised sensors have been reported, although none specifically for open ocean use. This study's novel application of the optode and immobilised sensor requires characterisation at temperatures and salinities relevant to these environments.

Temperature dependence

The temperature of artificial seawater with tris pH buffer was varied between 5 °C and 25 °C to quantify the temperature effect on the fluorescent indicator response. An increase in R was observed when solutions of the same pH were analysed at increasing temperatures (Fig. 4). The overall temperature dependence, determined from the gradient of the linear regression, was −0.046 pH °C−1 over the range 5–25 °C. The increase in R with temperature can be attributed to a decreased quantum yield at higher temperatures for both fluorophores, from increased internal and external conversion of the fluorescence energy, in addition to the temperature influence on the pKa′ of the indicator [53]. The decreased quantum yield has a minor effect on the pH-sensitive fluorophore owing to its short decay time, but the longer lifetime of the pH-insensitive dye means a larger effect is seen when integrating the t_ex time window. The observed temperature dependence of the pH spot is similar to that reported for an immobilised sensor spot by Schroeder et al. [53], but smaller than that reported by Kirkbright et al. [77].
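As a worked illustration of the spot characterisation above, the sketch below fits a sigmoidal (Boltzmann-type) curve to R-versus-pH data and reads off pKa′ as the inflection point. The data values and the choice of a Boltzmann functional form are assumptions for illustration, not the authors' fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration points spanning the pH 5-9 titration range.
ph = np.array([5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0])
R  = np.array([0.42, 0.45, 0.55, 0.75, 1.00, 1.20, 1.31, 1.36, 1.38])

def boltzmann(ph, R_acidic, R_basic, pKa, width):
    """Sigmoid between the acidic plateau (R_acidic) and the basic
    plateau (R_basic); pKa' is the inflection point."""
    return R_acidic + (R_basic - R_acidic) / (1 + 10 ** ((pKa - ph) / width))

popt, _ = curve_fit(boltzmann, ph, R, p0=[0.4, 1.4, 7.0, 1.0])
print(f"pKa' ~ {popt[2]:.2f}")   # inflection point of the fitted sigmoid
```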
The temperature dependence requires accurate recording of temperature and a correction of the final pH readings. Hakonen et al. [78] calibrated at two temperatures, 25 °C and 15 °C, and extrapolated with a linear correlation to extend the system for use at lower temperatures, finding an error of 0.01 pH units as a result of sharp temperature gradients. Schröder found pH errors of ca. 0.03 pH °C−1 and did not correct for temperature variations smaller than 5 °C [53]. An alternative approach that avoids the complications of temperature corrections is to analyse pH in samples held at a constant temperature [14,54,56]. With the ultimate aim of deploying our pH optodes in situ, the latter two approaches (measuring at a constant temperature, or applying no correction for deviations of less than 5 °C) were deemed unsuitable. Temperature was instead included in the algorithm that converts R to in situ pH, with measurements over a wider range of temperatures than Hakonen et al. in order to better characterise the dependence, with the ultimate aim of correcting for temperature-induced variations (Section 3.6).

Salinity dependence

Solutions of artificial seawater (S = 35) with tris pH buffer were diluted to salinities of 5 and 25 (see Section 2.2) and analysed at a constant temperature of 25 °C (Fig. 5). There was a significant difference between the R values at salinity 35 and those at salinities 25 and 5 (Student's t-test: t = −2.765, n = 11, two-tailed p = 0.0184 and t = 12.875, n = 12, two-tailed p = 2.2 × 10−8, respectively). The value of R increased with salinity, with an overall dependence of −0.01 pH psu−1, similar to the dependence reported by Schroeder et al. [53] and to the pH-salinity error of Hakonen et al. [78] (0.008 pH). The increase in R with salinity is due to changes in the indicator pKa′ caused by surface–solution interactions between the spot and the surrounding buffer solution [79]. Theoretical considerations (Debye–Hückel) indicate that an increase in ionic strength is accompanied by a decrease in the apparent pKa′ [74,75]; consequently, at constant pH there is an increasing concentration of the conjugate base form of the indicator, which causes the increase in R with salinity (Fig. 5). The lack of a significant difference between the lower salinity solutions (5 and 25), as shown in Fig. 5, indicates the presence of a secondary process. The ionic strength within the microenvironment of the spot is not simply a consequence of the ionic strength of the external solution; charged molecules in the membrane and surrounding the indicator dye also affect the pKa′ and thereby influence R. The pH at the surface of the optode (pH_surf) will differ from that of the bulk sample solution (pH_bulk) and is controlled by the surface potential (ψ), which is determined by the ionic species present at the interface between the optode surface and the bulk solution [79]. The charges in the membrane allow the surface potential to remain largely unaffected by variations in the ionic strength of the sample solutions, stabilising the observed pH_surf. However, at full seawater salinity (ca. 35), the sample ionic strength is greater than the apparent ionic strength at the surface. The bulk ionic strength therefore influences the surface potential, causing the apparent indicator pKa′ to decrease and R to shift to higher values [79]. Typical seawater salinities (ca. 35) thus cause a much larger change in R than brackish and estuarine salinities (e.g. 5–25). This indicates that the foil could be used without salinity correction in low-salinity environments.
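The significance tests quoted above are standard two-sample t-tests; a minimal sketch follows. The arrays are hypothetical R values standing in for two of the salinity treatments, not the study's data (the paper reports t = −2.765, p = 0.0184 for one of its own comparisons, n = 11).

```python
import numpy as np
from scipy import stats

# Hypothetical optode ratios at two salinities (placeholders).
R_s25 = np.array([1.095, 1.092, 1.098, 1.094, 1.091, 1.096])
R_s35 = np.array([1.101, 1.104, 1.099, 1.103, 1.105])

# Two-sample Student's t-test, two-tailed p-value by default.
t_stat, p_two_tailed = stats.ttest_ind(R_s25, R_s35)
print(t_stat, p_two_tailed)
```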
Schröder [53] did not compensate for salinity effects on the optode response because the changes observed were within the desired accuracy (0.02 pH units), while Hakonen et al. corrected for salinity deviations using log-linear transformations. In order to obtain high-quality surface ocean pH measurements for monitoring changes in the oceanic carbonate system, the shift in R resulting from salinity/high ionic strength is included in the pH determinations (Section 3.6).

Chlorophyll influence

Fluorescent compounds present in seawater can potentially influence the sensor response by increasing the fluorescence counts in the t_em portion of the fluorescence decay curve. An investigation into chlorophyll-a interference was undertaken, involving pH analyses of solutions with increasing concentrations of chlorophyll-a from the coccolithophore Emiliania huxleyi. The pH of the solutions was not controlled, and normalisation was performed (see Section 2) to determine whether there was an influence from the increasing chlorophyll. A decrease in R_n (up to 0.15 units, ca. 0.9 pH units) was observed at enhanced chlorophyll concentrations (between 2.05 and 6.5 µg L−1). These concentrations are significantly higher than generally observed in the open ocean (ca. 0.1–1 µg L−1; www.earthobservatory.nasa.gov) but may be encountered in coastal waters. At lower chlorophyll concentrations (0–1 µg L−1), the response showed a smaller increase in R_n (up to 0.05 units, ca. 0.3 pH units). The complex sensor response to chlorophyll fluorescence did not allow it to be included in a dependency algorithm (Equation (5)); elimination of chlorophyll fluorescence was therefore deemed more appropriate. This could be achieved by manufacturing a blackout layer on the surface of the optode, although this would increase the response time of the sensor, or by removing phytoplankton cells through on-line filtration of the seawater prior to analysis. In waters below the sun-lit layer of the ocean, where there is no phytoplankton, this chlorophyll interference is not relevant.

Multi-linear regression for fluorescent signal conversion to pH

In order to convert the fluorescent signals obtained by the optode sensor to pH, a stepwise multi-linear regression was performed on data from the temperature and salinity investigations (n = 120). The regression yielded the coefficient values for Equation (2), where T is temperature in °C, S is salinity, and R is the ratio determined from the optode output, giving the fitted algorithm as Equation (5):

R = 0.00034 − 0.17·pH + 0.15·S² + 0.0067·T − 0.0084·S·1.075   (5)

The equation was derived for the pH range 7.6–8.3, a temperature range of 5–25 °C and a salinity range of 5–35. It has an adjusted r² of 0.973 (n = 120) and a standard error in pH of 0.005. The temperature and salinity data were obtained using a single spot. Differences between spots may arise during the manufacturing process, which might create an offset when the above equation is used, but should not affect the temperature and salinity dependence determined here. To account for these offsets, CRM measurements should be used for calibration. As the chemistry of the commercial optode spots is assumed to be stable and repeatable, the temperature and salinity dependence calibration need only be performed for one spot (as in this study) and may then be applied to others.
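A minimal sketch of how such a conversion can be implemented: an ordinary least-squares fit of R against the printed basis terms, followed by numerical inversion of the fitted model to recover pH from a measured R at known T and S. The synthetic data and the exact basis (constant, pH, S², T, S) are assumptions for illustration, since the printed form of the equation is partly garbled in the source.

```python
import numpy as np

# Synthetic calibration set standing in for the study's n = 120 points.
rng = np.random.default_rng(0)
ph = rng.uniform(7.6, 8.3, 120)
T  = rng.uniform(5.0, 25.0, 120)
S  = rng.uniform(5.0, 35.0, 120)
R  = 0.9 - 0.17 * ph + 0.0067 * T - 0.0084 * S + rng.normal(0, 0.003, 120)

# Design matrix with the assumed basis terms: 1, pH, S^2, T, S.
X = np.column_stack([np.ones_like(ph), ph, S**2, T, S])
coef, *_ = np.linalg.lstsq(X, R, rcond=None)

def ph_from_R(R_meas, T_meas, S_meas, c=coef):
    """Invert the fitted linear model for pH at known T and S."""
    baseline = c[0] + c[2] * S_meas**2 + c[3] * T_meas + c[4] * S_meas
    return (R_meas - baseline) / c[1]

print(ph_from_R(R[0], T[0], S[0]))  # should be close to ph[0]
```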
A future re-calibration using a spectrophotometric system would allow improved accuracy in the characterisation of the temperature and salinity dependence, owing to the greater precision of the spectrophotometric system compared with the potentiometric pH measurements. This approach would be similar to the use of a correction coefficient by Yang et al. [80], who applied this to broadband spectrometer measurements, thereby relating the data back to a narrowband calibration. This is expected to improve the accuracy and precision of the sensor spot.

Metrology

Results from the response time experiment are presented in Fig. 6. The upper line of measurements represents pH 8.5 (R ≈ 1.13) and the lower line represents pH 7.2 (R ≈ 0.73). The points in between represent the equilibration of the spot in the solution. The time required for the optode to reach 97% of its final stable R (t_97) was 50 s. This is comparable to results from similar sensors in marine sediments (Table 1 [14,51,54]; 5–200 s). The standard deviation of the ratio from the mean for each solution had an average value of 0.003 (n = 18), equivalent to 0.03 pH units (Fig. 6). Comparison between measurements of salinity 35 tris buffers at 25 °C with the spectrophotometric system and the optode, before temperature and salinity correction, showed good agreement (Fig. 7), with no statistically significant difference (Student's pairwise t-test: t = 0.737, df = 14, two-tailed p = 0.473). The optode sensor algorithm (Equation (5)), however, yielded pH values that were 0.103 ± 0.03 pH units higher in the range 7.5–8.3 than the spectrophotometric sensor. A final calibration step involving a CRM is consequently still required for the optode spot prior to use.

An experiment with continuous illumination of the spot was undertaken to give an indication of the number of measurements a single spot could make before bleaching of the indicator dye significantly affects the quality of the determination of R. A change in the ratio of <1% was observed between measurements taken before and after 1 h of continuous illumination, similar to that reported by other workers [14]. This indicates that the foil is stable for a minimum of 92 continuous measurements of 200 s. These estimates are lower limits on the lifetime of the spot because of the constant illumination, which is not the normal mode of use of the foil. The sensor drift was also evaluated over three days of consecutive 200 s measurements (Fig. 8).
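The t_97 response-time metric used in this section can be computed directly from a step-response time series; the sketch below finds t_97 as the first time the ratio covers 97% of the step to its final stable value. The example series is synthetic (a first-order response with an assumed 15 s time constant), not the study's data.

```python
import numpy as np

def t97(times, R, R_final=None, frac=0.97):
    """First time R has covered `frac` of the step from R[0] to its
    final stable value (the t_97 response time)."""
    if R_final is None:
        R_final = R[-20:].mean()          # estimate the final plateau
    target = R[0] + frac * (R_final - R[0])
    if R_final >= R[0]:
        idx = np.argmax(R >= target)      # first index past the target
    else:
        idx = np.argmax(R <= target)
    return times[idx]

# Synthetic first-order step from pH 7.2 (R ~ 0.73) to pH 8.5 (R ~ 1.13).
t = np.arange(0, 200, 0.5)
R = 1.13 + (0.73 - 1.13) * np.exp(-t / 15.0)
print(t97(t, R))   # ~3.5 time constants, i.e. about 52-53 s here
```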
Cruise data

Over 3000 data points were collected on a research cruise in the Southern Ocean, involving the use of two spots (~1500 data points each). The first spot was replaced during the cruise owing to concerns about light exposure, and consequent photobleaching, during maintenance of another system coupled to the same underway seawater supply. Certified reference material was analysed at the halfway point of the run of each spot to correct for manufacturing offsets and drift, as discussed above. The total drift of the spot measurements over the course of the cruise, for both spots combined, was 0.06 pH units before correction using the CRM measurement. This drift exceeds the error in the pH calculated from the CRM values (±0.0062). The shipboard precision (0.0074 pH, n = 10) was comparable to that of ISFET sensors [27]. The underway observations of surface ocean pH_tot across the Southern Ocean transect were in the range 7.90–8.39, a larger range than that reported by Bellerby (pH 8.04–8.28 in situ) [81]. These observations reflect the range of environments visited during the cruise, which included regions with high primary productivity and consequently enhanced CO2 uptake and elevated pH values (e.g. pH 8.3–8.4 north of South Georgia, ca. 52°S, 38°W; see Fig. 9). The enhanced chlorophyll-a concentrations encountered in the Southern Ocean (maximum observed ca. 11.2 µg L−1) may have caused a decrease in R, and in future on-line filtration prior to pH analysis should preferably be undertaken. Such intense blooms are less common in other open ocean regions owing to a lack of macronutrients to support enhanced phytoplankton growth.

Current limitations and future directions

The pH spots provided continuous underway data in a challenging ocean region for which there is a lack of carbonate chemistry data. The day-to-day operation of the sensor was simple and trouble-free. The precision of the spot in its current configuration is 0.0074 pH (n = 10), comparable to ISFET sensors but inferior to spectrophotometric pH systems. One limitation of the optode sensor is the observed drift in response over several weeks at sea. The precision and drift issues are likely caused by the low apparent pKa′ of the indicator (6.93 at 20 °C) relative to the intended open ocean seawater range of pH 7.8–8.3, and by the increased chlorophyll concentrations. To overcome these issues, it is advisable to investigate alternative indicator dyes with more basic apparent pKa′ values and potentially greater stability. Schröder et al. [53] investigated two potential immobilised indicators, both carboxyfluorescein derivatives, which showed temperature and salinity dependences similar to those observed in this study but had pKa′ values (8.16 and 8.57) better suited to seawater measurements. Furthermore, the use of a spectrophotometric sensor in the calibration procedure would improve the precision of the reference pH measurements, thereby moving the precision and accuracy of the optode closer to those of spectrophotometric systems. Efforts to develop improved pH spots for ocean measurements are currently underway in academic-industry collaborations as part of international research projects (e.g. SenseOcean and Atlantos, European Union funded), and novel pH optodes will emerge over the coming years. Further development of the optode presented here should focus on the use of a spectrophotometric system (instead of a pH glass electrode) in calibrations, to reduce the pH error in the optode algorithm, and on more detailed investigation of the chlorophyll dependence of R, particularly at low chlorophyll concentrations. While more frequent use of CRM materials should be undertaken to correct for and remove drift in the system during deployment, application of an indicator with a pKa′ closer to average seawater pH (8.1) may largely eliminate this requirement. In future, pH optode deployments (with the spot used in this study or others) should utilise a high-performance temperature sensor alongside the optode to provide a more realistic determination of the measurement temperature for the conversion from R to pH. In this study, we used filters, a dichroic beam splitter and a PMT for the more specialised laboratory experiments and the shipboard deployment, to investigate the effectiveness of the t-DLR technique and to employ low-intensity excitation levels.
However, for future in situ deployments, the use of a photodiode as the detection system and a reduced number of filters should be investigated, in order to simplify the system and allow more widespread application, as with the oxygen optodes.

Conclusion

This work has evaluated an optode pH sensor for measurement of ocean pH. We investigated the temperature and salinity dependences, metrology and longevity of a commercially available pH sensor spot across the pH range 7.6–8.2. The lifetime of the spot was improved by the use of low optical power for excitation and repeated short illumination times, while the response time of the spot was 50 s. The temperature dependence (−0.046 pH °C−1 from 5 to 25 °C) and salinity dependence (−0.01 pH psu−1 over salinity 5–35) were accounted for using a calibration algorithm. This simplicity is an advantage compared with the individual calibrations required for ISFET and glass pH electrodes. The algorithm was tested through deployment as an underway sensor in the Southern Ocean, which displayed strong pH, chlorophyll and temperature gradients. A precision of 0.0074 pH was observed at sea, but the optode demonstrated a drift of 0.06 pH over the period of the cruise (4 weeks), which was corrected for using CRM measurements. In a lab-based experiment we found a drift of only 0.00014 R over a 3-day period (ca. 0.0004 pH, at 25 °C, salinity 35), suggesting that with further improvements to the deployment system, drift may not be a significant issue. We suggest further investigation into alternative pH indicators with pKa′ values more suited to surface ocean measurements, to improve the precision and limit any potential drift. With regular CRM calibrations, the spot characterised in this study is suitable for coastal deployments, where pH precision requirements are lower. Optode technology is still in its infancy, and this study, along with the now widespread use of oxygen optodes, demonstrates the potential of this technology. Ongoing developments in spot technology aim to deliver optode pH sensors for oceanic deployment over the coming years.

Acknowledgements

We thank those aboard the R.R.S. James Clark Ross for their support during cruise JR274. The Analytical Chemistry Trust Fund, funded by the Royal Society of Chemistry, and the Natural Environment Research Council (NE/I019638/1) supported this work. The Natural Environment Research Council, as part of the UK Ocean Acidification Programme (NE/H017348/1), funded the R.R.S. James Clark Ross JR274 cruise. We would also like to thank two anonymous reviewers for their constructive comments, which greatly improved this manuscript.
Ultrasound-controlled drug release and drug activation for cancer therapy

Abstract

Traditional chemotherapy suffers from severe toxicity and side effects that limit its maximum application in cancer therapy. To overcome this challenge, an ideal treatment strategy would be to selectively control the release or regulate the activity of drugs to minimize undesirable toxicity. Recently, ultrasound (US)-responsive drug delivery systems (DDSs) have attracted significant attention due to the non-invasiveness, high tissue penetration depth, and spatiotemporal controllability of US. Moreover, US-induced mechanical force has been proven to be a robust method to site-selectively rearrange or cleave bonds in mechanochemistry. This review describes US-activated DDSs from the fundamentals and aims to present a comprehensive summary of the current understanding of US-responsive DDSs for controlled drug release and drug activation. First, we summarize the typical mechanisms for US-responsive drug release and drug activation. Second, the main factors affecting the ultrasonic responsiveness of drug carriers are outlined. Furthermore, representative examples of US-controlled drug release and drug activation are discussed, emphasizing their novelty and design principles. Finally, the challenges and an outlook on this promising therapeutic strategy are discussed.

INTRODUCTION

Efficient and low-toxicity cancer therapy has been a goal of the scientific community for several decades. Over the past few decades, rapid advances in nanotechnology have led to various active/passive targeted nano drug delivery systems (DDSs) being applied to increase drug accumulation in tumors while decreasing the cytotoxicity of drugs to normal cells. [6] Nevertheless, the inefficiency of enhanced permeability and retention effects and the randomness of ligand-receptor interactions make it difficult to achieve tumor-specific targeted enrichment. [7] Nor does Fick's law, which governs the diffusion of drugs, direct them only to designated cells, tissues, or organs. [8] Moreover, there is growing recognition in clinical and preclinical research that cancer patients need not zero-order, sustained drug release, but an on-demand supply of therapeutic agents. [9] Therefore, DDSs with dosage, temporal, and spatial control have been exploited vigorously in recent years, which not only significantly heightens the selectivity of drugs but also effectively reduces their side effects. [10] In general, responsive DDSs are either sensitive to characteristic endogenous stimuli, such as pH, redox, enzymes, or ATP, or respond to external physical stimuli, including temperature, electromagnetic fields, light, or ultrasound (US). [11,12] Among them, US has many fascinating features for biomedical applications, including non-invasiveness, non-ionizing radiation, high tissue penetration depth, and spatiotemporal controllability. [13,14] In particular, US is considered an excellent tool for remotely controlling drug release and activation, thereby preventing pharmacological toxicity from causing unnecessary damage to healthy tissues (Figure 1). [15] Conceptually speaking, US is a mechanical wave of periodic vibration with frequencies beyond the range of human hearing (>20 kHz). [16] In medicine, US with frequencies between 20 and 100 kHz is defined as low-frequency ultrasound (LFUS), which is usually used to initiate sonophoresis and to induce leakage from drug carriers through mechanical destruction.
[17,18] In contrast, high-frequency ultrasound (HFUS), with frequencies greater than 1 MHz, can reach deep into the tumor region by penetrating the skin and most tissues with a focused beam. [19,20] Meanwhile, HFUS can also promote cellular uptake of therapeutics by temporarily increasing cell membrane permeability, and can further induce localized mild heating to trigger drug release from the vehicle. [21,22] More notably, and most recently, polymer mechanochemistry utilizes the mechanical forces produced by US to regulate drug activity by breaking or rearranging labile bonds at the intended sites, offering broad prospects for precise drug activation. [23] This review summarizes recent progress in the selective control of drug release and drug activation by US for cancer therapy. First, we outline the typical mechanisms by which US effects drug release and drug activation, including mechanical and thermal effects. Second, the main factors affecting the ultrasonic responsiveness of drug carriers are summarized, namely bond strength, molecular weight and degree of polymerization, molecular weight distribution, polymer shape and structure, supramolecular assembly, and heterogeneous interfaces. Furthermore, representative examples of US-controlled drug release and drug activation are discussed, with an emphasis on their novelty and design principles. Finally, the challenges and perspectives in developing US-controlled DDSs for cancer treatment are provided.

MECHANISM OF US-CONTROLLED DRUG RELEASE AND DRUG ACTIVATION

US-responsive DDSs require biocompatible materials as carriers capable of releasing or activating therapeutic drugs through specific protonation, hydrolysis, phase transition, or molecular or supramolecular conformational changes. During this process, the pressure oscillation generated by US perturbs the steady state of the drug carrier, and the accompanying mechanical and thermal effects are often the more significant drivers of further conformational change.

Mechanical effect

The US-induced mechanical effect derives from stable cavitation, generated by the continuous oscillation of microbubbles, or from inertial cavitation, formed by the rapid growth and bursting of microbubbles. [24] Stable cavitation is generally caused by low-amplitude (intensity or power) US. The continuous oscillation of the microbubbles creates velocities in the fluid that induce shear stress, destroying the carrier and releasing the encapsulated drug. [25] At the same time, it transiently forms pores in the cell membrane, allowing the released drugs to flow into the cell. [26] Inertial cavitation occurs when the intensity of the applied US is high enough. Specifically, the collapse of microbubbles produces shock waves with amplitudes above 10,000 atmospheres. [27] Although the duration of the blast wave is short, the resulting pressure gradient is sufficient to damage drug carriers of low mechanical strength, allowing them to release their cargos. [28] In addition, microbubbles collapsing near an interface experience inhomogeneity, leading to the formation of high-speed microjets. [29] The shock waves resulting from microjets can enhance the permeability of cell membranes and blood vessels. [30,31] Notably, the mechanical stimulation of blood vessels by cavitation is also conducive to transporting drugs across the blood-brain barrier (BBB) for treating diseases of the central nervous system.
[32] Instead of releasing a physically entrapped drug through mechanical destruction, US can activate drugs by breaking mechanochemically labile bonds within the carriers. In polymer mechanochemistry, collapsing microbubbles generated by acoustic cavitation create a mechanical elongational flow, stretching polymer chains and eventually inducing their breakage. [33] Thus, the chemical properties of some polymers can be modulated at the molecular level. [34] The concept originated from the discovery of mechanical degradation of polymers by Staudinger et al. in the 1930s. [35-38] Later, in 1989, Langer et al. found that US could prompt the release of small molecules from long chains of macromolecules. [39]

Figure 1: Schematic diagram of US-controlled drug release and drug activation. US-induced thermal and mechanical effects control drug release and drug activation (inner ring); the structure-activity relationship of polymers with respect to ultrasonic responsiveness (outer ring).

Simply speaking, a mechano-sensitive polymer requires the introduction of a mechanophore, a force-sensitive molecular unit with a mechanically unstable bond, a strained ring, or an isomerizable bond, into the polymer chain. [40] As Moore et al. reported in 2005, functionalized polyethylene glycol (PEG) was selectively cleaved at the weak azo bond in the chain center in response to US. [41] Similarly, the peroxide (O─O) bond, coordination bonds, and the disulfide (S─S) bond, with their low bond dissociation energies, are also prone to cleavage when subjected to sustained molecular strain. [42-44] In contrast, strong bonds with high dissociation energies, such as carbon-carbon (C─C), carbon-oxygen (C─O), and carbon-nitrogen (C─N) bonds, are challenging to cleave by ultrasonic mechanical force (sonomechanical force). [45]

Thermal effect

During ultrasonic wave propagation in a medium, part of the acoustic energy is absorbed by the medium and converted into heat. [46] In addition, acoustic cavitation can produce two other forms of thermal effect. One is the continuous thermal effect arising from the sustained oscillation of cavitation bubbles, which causes thermal energy deposition in the sound field region. [47] The other is the instantaneous thermal effect, in which the sudden collapse of a cavitation bubble results in local overheating. [48] Temperature elevation usually occurs when the parameters of focused ultrasound (FUS) or high-intensity focused ultrasound (HIFU) are set to moderate sound pressures, prolonged irradiation times, and high duty cycles. Considering the accidental damage to surrounding normal cells caused by long-term hyperthermia, US-responsive thermosensitive DDSs should remain stable at physiological temperature (~37 °C) while rapidly releasing their drugs within the tumor (~40 to 42 °C) heated locally by US. [49] This requires that at least one component of the carrier material changes rapidly and non-linearly with increasing temperature. Such DDSs typically comprise liposomes, nanoparticles (NPs), or polymer micelles exhibiting lower critical solution temperatures. [50-52] In the case of liposomes, acoustic-thermal sensitivity generally results from phase transitions of the lipid composition or conformational changes of the lipid bilayer. [53,54] For example, Dreher et al. synthesized a low-temperature-sensitive liposome (LTSL) containing lysolecithin lipid for delivery of the chemotherapeutic drug doxorubicin (DOX).
[55] Upon irradiation by HIFU, the LTSL dissolved once the temperature exceeded its phase transition temperature (~40 to 42 °C). As a result, after HIFU irradiation the DOX concentration in tumors treated with LTSL was 3.4-fold higher than that without HIFU.

FACTORS AFFECTING THE ULTRASONIC RESPONSIVENESS

US-responsive DDSs should be as sensitive to the external US as possible to mitigate undesirable tissue bioeffects caused by sound pressure, acoustic cavitation, and acoustic heating. [56,57] Thus, a detailed understanding of the fundamental factors that control the ultrasonic sensitivity of carrier materials is essential to broaden their biomedical applications. This section discusses the structure-activity relationships of bond strength, molecular weight and degree of polymerization, molecular weight distribution, polymer shape and structure, supramolecular assembly, and heterogeneous interfaces with US responsiveness.

Bond strength

Mechanical bond fracture is the source of mechanoresponsive-polymer failure, so it is necessary to explore the mechanochemical behavior of chemical bonds with various bond strengths. [58] Representatively, employing the force response of non-scissile gem-dichlorocyclopropane (gDCC) mechanophores embedded in the polymer, the relative mechanical strengths of a series of scissile weak bonds were compared, including the C─N bond (24-30 kcal/mol) of azodialkylnitrile, the C─S bond (71-74 kcal/mol) of thioether, and the C─O bond (52-54 kcal/mol) of benzyl phenyl ether (Figure 2A). [59] In this work, the degree of activation of the non-scissile gDCC mechanophores provides a criterion for measuring the mechanical strength of these weak bonds under US. In simple terms, the fewer ring-opening events that occur in the gDCC mechanophores along the backbone when triggered by pulsed US, the more vulnerable the weak bonds are to breakage. Notably, a statistical analysis of the amount of ring opening of the gDCC mechanophore showed that the mechanical strengths of the three weak bonds rank as C─N (weakest) < C─S < C─O (strongest), which is not consistent with their thermodynamic strengths. The rehybridization caused by C─O bond scission degrades the mechanochemical coupling, which is the main reason for its increased mechanical strength. Thus, the choice of mechanophore should consider not only thermodynamic strength but also, comprehensively, the consequences of secondary structural changes.

Molecular weight and degree of polymerization

The attached polymer chain is a crucial part of transferring mechanical force to the mechanophore. [60] Early studies indicated that chain length was a key element determining the breakdown of polymer chains. [61] In general, the chain length of a polymer depends on its molecular weight and degree of polymerization. To choose the better descriptor of mechanochemical transduction in polymers, Moore et al. synthesized five kinds of acrylate monomers with different ester substituents that could serve as side-chain units (Figure 2B). [62] Since different compositions or structures of side chains can produce polymers with similar molecular weights but varying degrees of polymerization, the mechanochemical effects of molecular weight and degree of polymerization could be appraised independently.
Owing to its US-induced mechanochemical ring-opening transformation, accompanied by a change in absorption, spiropyran was chosen as an auxiliary for quantifying mechanochemical activity in this study. [63] For each of the five different polymer chains (i.e., PMA, PEA, PnBA, PiBA, and PtBA), the activation rate increased linearly with increasing molecular weight upon US irradiation. Meanwhile, polymer chains with the same molecular weight exhibited quite different levels of activation. In comparison, polymer chains with the same degree of polymerization but diverse molecular weights exhibited similar ultrasonic transduction efficiencies, suggesting that the degree of polymerization, rather than molecular weight, is the better descriptor of US-induced mechanochemical transduction.

Molecular weight distribution

Ordinarily, ultrasonic degradation of a homopolymer in solution initiates at or near the center of the chain, where stress tends to accumulate under elongational flow. [64] Hence, the mechanophore is usually placed in the middle of the chain to equalize the molecular weight on either side. This in turn raises the question of how the molecular weight distribution (i.e., polydispersity index, PDI) affects mechanochemical activity. To this end, Craig et al. proposed a coumarin dimer (CD) mechanophore that produces a fluorescent coumarin chain-end polymer, allowing the mechanochemical activation efficiency to be quantified. [65] As expected, polymers with high PDI values have a significant likelihood of carrying off-centered mechanophores, which readily leads to random breakage of molecular chains at non-specific sites. A higher proportion of fractures occurred at the CD when the molecular weight distribution was narrower, presumably because a lower PDI value better satisfies the condition that the mechanophore is located at the chain center (Figure 2C). Additionally, as the proportion of CD units at the chain center increased, the activation efficiency increased in parallel, indicating that the distance between the mechanophore and the chain center is one of the independent variables governing the activation efficiency. The results can also be explained by the force being greatest near the center of the polymer chain and attenuating with distance from the center. [66,67]

Figure 2: Factors affecting ultrasonic responsiveness of carrier materials. (A) Effect of bond strength on mechanochemical activation. Reproduced with permission. [59] Copyright 2015, American Chemical Society. (B) (i) Relationship of molecular weight and (ii) degree of polymerization to mechanochemical activation of polymers. Reproduced with permission. [62] Copyright 2016, American Chemical Society. (C) Influence of molecular weight distribution on scission of the mechanophore. Reproduced with permission. [65] Copyright 2015, Royal Society of Chemistry. (D) A general depiction of chain breaks in polymers of different shapes. Reproduced with permission. [71] Copyright 2014, American Chemical Society. (E) Molecular structure affects polymer mechanochemistry. Reproduced with permission. [74] Copyright 2013, Springer Nature. (F) Ultrasonic response of symmetric copolymer nanomicelles. Reproduced with permission. [80] Copyright 2015, American Chemical Society. (G) Response of polymer-grafted NPs to mechanical activation. Reproduced with permission. [86] Copyright 2014, American Chemical Society.
Therefore, molecular weight distribution is another subtle but important consideration in designing and fabricating mechanochemically responsive systems.

Polymer shape and structure

A change in polymer shape brings with it a change in molecular weight distribution, which affects the US-induced chain-breaking rate. For example, star-shaped polymers show greater shear stability than linear polymers of the same molecular weight. [68,69] This is attributed to the star polymers' low effective molecular weight: their multibranched structure offsets part of the molecular weight, resulting in a low chain-fracturing rate. [70] Similarly, Boydston et al. synthesized a series of well-defined linear and three-arm polymers and demonstrated that the fracture rate of the mechanophore is determined by the effective molecular weight rather than the total molecular weight (Figure 2D). [71] Thus, it is not surprising that the activation rates of the three-arm star polymer and its linear analogs are consistent, even though the molecular weight of the star polymer is 1.5 times that of the linear polymer. Theoretically, the sonomechanical effect on a mechanophore embedded in the polymer backbone hinges not only on the shape of the polymer chain but also on its structure. [72,73] In a related study, single-molecule force spectroscopy was used to directly quantify the ring-opening forces of gem-dibromo- and -dichlorocyclopropanes embedded along the backbones of cis-polynorbornene (PNB) and cis-polybutadiene (PB) (Figure 2E). [74] The critical force for isomerization on the PNB scaffold was decreased by about one-third compared with the PB framework, owing to the more efficient mechanochemical coupling of the PNB backbone, which allows it to act as a lever to enhance polymer mechanochemistry.

Supramolecular assembly

US activation of mechanophores is often unsatisfactory because of the high threshold molecular weight for initiating mechanochemical activation. [75,76] The conventional method of improving mechanophore activation is to increase the effective molecular weight of the polymer chain so as to raise the degree of polymerization on both sides of the mechanophore, an approach hampered by the complexity of the synthetic methods. It has been reported that a mechanophore in the entangled or swollen state has higher mechanochemical activity than in the stretched form. [77-79] Inspired by this, Du et al. assembled amphiphilic symmetric triblock polymers ([spiropyran-(tBA88-b-NIPAM62)2], P2) into micelles, with entangled chains as the core and stretched chains as the outer shell, through supramolecular self-assembly (Figure 2F). [80] Owing to the fundamental differences in the physicochemical properties of the micellized polymer chains, the sonomechanical force transmitted along the polymer chain undergoes more favorable mechanochemical transduction, entirely different from that of its dissolved counterpart. By characterizing the UV-Vis absorption of the merocyanine formed from the spiropyran, they observed that the mechanical response of the P2 micelles was five times higher than that in the stretched state. Moreover, the mechanical responsiveness improved dramatically with an increasing degree of micellization. The reason could be that an increasing degree of micellization brings the spiropyran closer to the solidified state, which has better mechanical sensitivity than the dissolved state.
In light of this study, a mechanophore can display high mechanochemical activity even in chains of relatively low molecular weight. In addition, recent studies have found that temperature and solvent are two major regulatory factors affecting ultrasonic responsiveness, as they determine the thermodynamic state (metastable or stable) of the nanoassembly. [81,82] In general, metastable polymer micelles assembled in a specific solvent have better ultrasonic responsiveness than their stable counterparts. Also, as the self-assembly temperature approaches the glass transition temperature, the ultrasonic responsiveness gradually increases owing to the improved mobility of the polymer chain. In another study, Heuts et al. synthesized block copolymers consisting of a long hydrophilic poly(acrylic acid) (PAA) and a short hydrophobic poly(butyl acrylate) (PBA) linked by an anthracene-maleimide Diels-Alder (DA) adduct. [83] Unlike the symmetric block polymers described above, the short PBA chain makes it difficult to form serviceable entanglements in the self-assembled micelle core. [84] Even so, the hydrophilic PAA chains were still cleaved from the block copolymers after sonication. This could be due to the formation of micellar cores with a much higher viscosity than water through the non-covalent interactions between PAA and PBA, thus enlarging the contour length of the supramolecular aggregates and increasing the area over which the elongational force acts. The effectiveness of relatively short PBA chains indicates that these DA adducts do not need to be entangled before activation. Indeed, since micellar aggregation is the driving force for mechanochemical activation, some of the mechanophores cannot be cleaved if the polymer is not assembled into a micellar structure. Consequently, supramolecular scaffolds such as micelles are practical tools for enhancing the mechanochemical activity of mechanophores.

Heterogeneous interface

So far, the exploration of mechanophores has focused on homogeneous systems in which bond breakage occurs at the center of the polymer chain. [85] Considering the growing application of composite materials in biomedical fields, mechanochemical behavior at heterogeneous interfaces should not be neglected. Using the cycloadduct of maleimide-anthracene (MA) as a model mechanophore, Moore et al. prepared MA-anchored poly(methyl acrylate) (PMA)-grafted silica NPs (SiO2NPs-MA-PMA) to verify the selective activation of mechanophores at heterogeneous interfaces (Figure 2G). [86] By synthesizing a series of PMA chains with different molecular weights, they found that activation of the polymer-bound mechanophores is still molecular weight dependent. Also, the threshold molecular weights for activating SiO2NPs-MA-PMA at the heterogeneous interface are similar to those of their homopolymer counterparts. Comparison of the morphology of SiO2NPs-MA-PMA before and after US showed that the interface between the polymer and the NPs was clearly separated after the mechanophore was activated. Besides, the NPs changed from hexagonal to irregular shapes, likely because the PMA chain fracture affected the ester groups anchored on the SiO2 NPs. More importantly, the results provide a theoretical basis for extending polymer mechanochemistry studies to the nanoscale domain. After ascertaining the activation potential of the mechanophore at heterogeneous interfaces, the researchers investigated the effect of graft density on the mechanochemical activity.
[87] Based on the MA mechanophore, SiO2NPs-MA-PMA with grafting densities of 0.27, 0.18, and 0.05 chains/nm² were synthesized. Notably, for the same molecular weight, the system with the lowest graft density exhibited the most robust mechanochemical activity, while the system with the highest graft density showed the lowest activation rate. Moreover, when the graft density decreased by 37%, the mechanochemically activated retro-cycloaddition was markedly accelerated, equivalent to an additional increase of 10 kDa in molecular weight. In other words, the threshold molecular weight associated with mechanical activation decreases as the graft density decreases, in sharp contrast to the studies mentioned above. The difference is that the polymer in this work is in a highly stretched state. A further increase in graft density enhances interactions between adjacent chains, including interchain entanglement, which dilutes the distribution of force along a given polymer chain, thereby preventing the chain from breaking. [88] Overall, graft density is a double-edged sword that needs to be considered comprehensively, and the stretched state of the polymer chain may ultimately dominate the mechanochemical activity at the heterogeneous interface.

US-CONTROLLED DRUG RELEASE

With the growing understanding of the factors affecting the ultrasonic responsiveness of carrier materials, a wide variety of US-responsive DDSs have been successfully constructed. [89] Encapsulating drugs in these DDSs shields them from the surrounding physiological environment so that they act only at specific sites under the control of US. This section focuses on nano-micro systems triggered by US for controlled drug release, including microbubbles, liposomes, and silicon-based NPs.

Microbubbles

Microbubbles are gas-filled spheres (1-8 μm) dispersed in an aqueous medium that have been extensively used as contrast agents for US imaging. [90,91] More importantly, the collapse of microbubbles can be used to control drug release by reducing the cavitation threshold. [92] When stimulated by a near-resonant US frequency, the microbubble oscillates like a cavitation bubble and may ultimately burst. [93] Similar to cavitation nuclei, microbubbles can strengthen energy deposition in tissues and cells, temporarily disrupting cell membrane integrity through sonoporation and thereby facilitating intracellular drug transport. [94] A representative example of microbubbles in drug delivery is the enhancement of targeted treatment of brain tumors. The existence of the blood-brain barrier (BBB) makes it difficult for drugs to spread through the bloodstream to the central nervous system. [95] Fortunately, microbubbles can open the BBB without damage by loosening the tight junctions of capillary endothelial cells under acoustic cavitation. [96] For instance, Yeh et al. designed a microbubble (DOX-SPIO-MB) containing iron oxide NPs and DOX for imaging-guided treatment of brain tumors. [97] The superparamagnetic and acoustic properties of DOX-SPIO-MB enable the system to guide tumor therapy in real time through magnetic resonance (MR) and US imaging. To improve the delivery efficiency of DOX to cerebral blood vessels, a brain-targeting ligand was attached to the surface of the microbubbles.
[98] The results showed that the BBB was successfully opened at the same time as acoustic cavitation induced drug release, which expedited drug delivery and enhanced the therapeutic effect. Apart from delivering chemotherapeutic agents, microbubbles can also be used to load photosensitizers for synergistic chemotherapy and photodynamic therapy (PDT). It has been reported that reactive oxygen species (ROS) produced by PDT can promote intracellular drug transport, accelerate drug release, and elevate cytotoxicity. [99-101] Kim et al. fabricated a composite system (DOX-NPs/Ce6-MB) consisting of DOX wrapped in human serum albumin NPs and chlorin e6 (Ce6) encapsulated in microbubbles, in which the NPs were bound to the surface of the microbubbles for US-triggered local drug delivery (Figure 3A). [102] Once US was applied, the microjets generated by the explosion of the microbubbles boosted the penetration of the locally released NPs into the tumor. Under subsequent laser irradiation, Ce6 induced abundant ROS production in tumor cells for synergistically enhanced chemotherapy in anti-tumor therapy.

Liposomes

The relatively large size of microbubbles leads to a short lifespan, which makes them unsuitable for tumor targeting and retention. [103,104] Encouragingly, benefiting from the advantages of NPs, such as small size, large specific surface area, good pharmacokinetics, excellent targeting ability, and accessible surface functionalization, various nano-DDSs with controllable drug release are highly favored. [105-107] Among them, the US-responsive thermosensitive liposome (TSL) is a typical representative. For example, Dai et al. transformed ceramic liposomes into HIFU-responsive liposomes (HTSC) by using sol-gel reactions and self-assembly techniques (Figure 3B). [108] Owing to the surface coating of organosiloxane, HTSC has better structural stability than conventional TSL, preventing early leakage of the drug. Under the high temperature generated by HIFU irradiation, the cumulative release of DOX encapsulated in the HTSC reached 90%, strikingly suppressing tumor growth. For non-thermoresponsive liposomes, Yechezkel et al. demonstrated the feasibility of controlling drug release using LFUS-induced mechanical effects. [109] They first confirmed a positive correlation between liposomal drug release and ultrasonic amplitude. When the irradiation amplitude was in the range of 0-1.3 W/cm², the cumulative drug release showed a relatively slow linear increase. Once the irradiation amplitude was raised further, the drug release rate improved markedly, by about four times, which can be interpreted as the onset of inertial cavitation. Notably, the drug release curves for continuous and pulsed LFUS irradiation were roughly the same for the same exposure time, implying that drug release depended only on the actual exposure time. Afterward, they further verified the effectiveness of cisplatin-loaded liposomes for local drug delivery in tumors. [110] The results revealed that nearly 70% of the cisplatin was released in tumors exposed to LFUS, but only 3% drug release was observed in the group without LFUS. Since multi-interval US helps avoid the associated thermal damage and is superior to a single continuous US exposure, the study supports the clinical application of US-based drug delivery strategies.

Figure 3: Representative examples of US-controlled drug release. (A) US-triggered drug release based on the DOX-NPs/Ce6-MBs complex. Reproduced with permission. [102] Copyright 2018, Elsevier. (B) Drug release from HTSC under HIFU sonication. Reproduced with permission. [108] Copyright 2015, American Chemical Society. (C) Drug release from polymer-grafted MSNs triggered by US heating. Reproduced with permission. [113] Copyright 2015, American Chemical Society. (D) HIFU-induced silica shell cracking of CPT/PFOB@SNCs for controlled release of CPT. Reproduced with permission. [116] Copyright 2014, John Wiley & Sons. (E) US-responsive polymersomes for facilely controlled drug delivery. Reproduced with permission. [119] Copyright 2020, Elsevier.

Silicon-based NPs

Compared with liposome-based DDSs, mesoporous silica NPs (MSNs) have more robust structural stability under physiological conditions. [111] Nevertheless, the weak affinity between hydrophobic drugs and the silanol groups in the mesoporous channels may lead to low drug loading and premature drug leakage. [112] 2-Tetrahydropyranyl methacrylate (THPMA) is a hydrophobic monomer with an unstable acetal group that can be cleaved to hydrophilic methacrylic acid (MAA) by acoustic heating. Vallet-Regí et al. coupled a THPMA-containing copolymer, p(MEO2MA-co-THPMA) (poly(2-(2-methoxyethoxy)ethyl methacrylate-co-THPMA)), to the surface of MSNs as a gatekeeper to control drug release (Figure 3C). [113] In the typical physiological environment, the copolymer adopts a collapsed state on the surface of the MSNs, and the drugs are completely trapped in the micropores. Upon HFUS irradiation, the copolymer is cleaved into p(MEO2MA-MAA) and tetrahydropyranol, becoming more hydrophilic, opening the gates and releasing the captured DOX. Beyond the gating strategy, Chen et al. prepared an organic-inorganic hybrid framework to intensify the interactions between the mesoporous organosilica and drug molecules. [114] Various organic molecules can be integrated into the silica (SiO2) skeleton through non-covalent interactions, including electrostatic and hydrophobic interactions. For example, MSNs with phenyl-conjugated hybridization strongly adsorb hydrophobic paclitaxel (PTX), leading to high drug loading. [115] Upon external US irradiation, the non-covalent interactions between the drug and the carrier are easily disrupted, accelerating drug release. It is worth noting that this drug release strategy is reversible, and controlled pulsatile drug release can be achieved by switching HIFU ON and OFF. Owing to its high mechanical strength and heat resistance, SiO2 can be used both as a skeleton to load drugs and as a covering layer to prevent premature drug leakage. For example, Shi et al. manufactured a multifunctional nano-DDS for HIFU-triggered drug release and synergistic tumor ablation (Figure 3D). [116] The chemotherapeutic drug camptothecin (CPT) and US-sensitive perflubron (PFOB) were encapsulated in poly(lactic-co-glycolic acid) (PLGA) nanocapsules. An ultrathin SiO2 shell was then added to prevent CPT from escaping before reaching the tumor. Once the nanosystem is subjected to HIFU, the encapsulated PFOB is transformed into microbubbles by acoustic droplet vaporization. The resulting microbubbles effectively reinforce the acoustic cavitation and directly break the SiO2 coating through the mechanical effect, promoting the explosive release of CPT.
Polymer-based NPs

Polymer-based NPs have been widely used for remote control of drug release in cancer therapy due to their excellent physiological stability and ultrasonic responsiveness. Among them, polymer vesicles and micelles are two typical representatives. [117] Earlier, Du et al. reported a polymer vesicle composed of poly(ethylene oxide)-block-poly[2-(diethylamino)ethyl methacrylate-stat-(2-tetrahydrofuranyloxy)ethyl methacrylate]. [118] After US irradiation, the size of these vesicles was significantly reduced due to the rapid rupture and recombination of the vesicle membrane. The 1H NMR spectra showed that physical, but not chemical, changes took place during the reassembly process. In a follow-up study, they developed a novel US-responsive vesicle for controlled drug release via self-assembly of the block copolymer poly(ethylene oxide)-block-poly[(2-(diethylamino)ethyl methacrylate)-stat-(methoxyethyl methacrylate)] (Figure 3E). [119] The results showed that the release of DOX from the nanovesicles around the nucleus was markedly accelerated under sonication, effectively inhibiting tumor growth in mice (tumor mass decreased by 95%) and minimizing systemic side effects. In comparison, unlike the physical rearrangement of such vesicles, the breakdown of intermolecular hydrogen bonds in vesicles assembled from ultrathin multiblock copolyamides also resulted in the release of hydrophobic drugs trapped in their cavities. [120] Owing to their enhanced thermodynamic stability, polymer micelles have a more robust structure than their vesicle counterparts. As described in the previous section, HIFU can convert hydrophobic THPMA units into hydrophilic MAA. Based on this, Zhao et al. prepared a polymer nanomicelle with poly(THPMA) as the core and poly(ethylene oxide) as the shell, which triggered the release of its molecular cargo through phase transformation under US irradiation. [121] Additionally, Xia et al. introduced mechanically unstable ester bonds into polymer micelles as US-responsive mechanophores. [122] Under HIFU irradiation, the polymer micelles can be hydrolyzed to trigger the release of pyrene payloads, laying the foundation for US-responsive DDSs based on chemical bond transformation. Notably, thermosensitive liposomes modified with the copolymer poly(N-isopropylmethacrylamide-co-N-isopropylacrylamide) can release more than 60% of the encapsulated anticancer drug upon US irradiation, which was attributed to the local hyperthermia (42 °C) caused by the collapse of acoustic cavitation bubbles. [123]

US-CONTROLLED DRUG ACTIVATION

Although flourishing progress has been achieved, conventional methods to control drug release still face several challenges, such as early drug leakage, high toxicity, and poor therapeutic efficacy. To this end, controlled drug activation strategies have recently been exploited to overcome these bottlenecks. This section focuses on the latest advances in the use of sonomechanical force to break covalent or non-covalent bonds for controlled drug activation.

Covalent bond scission for drug activation

In polymer mechanochemistry, the scission of intramolecular covalent bonds changes the chemical properties of the molecule itself. [124] Applying this concept to drug activation systems, we recently realized sonomechanical-force-induced small-molecule drug activation through the cleavage of covalent bonds within a polymer.
[23] In brief, two polymers centered on mechanically unstable S─S bonds were synthesized following the principles of mechanochemistry (Figure 4A). One was the polymer P_UMB, labeled with fluorescent umbelliferone (UMB), and the other was the polymer P_CPT, carrying the anticancer drug CPT. In theory, the functional central components of the polymer are inactivated because of the steric hindrance of the long polymer arms. Upon ultrasonication, the inactive components in P_UMB and P_CPT are activated by an intramolecular 5-exo-trig cyclization triggered by mechanical scission of the S─S bond. The results showed that the cell survival rate of the P_CPT-treated group was inversely proportional to the duration of US irradiation, indicating that the sonomechanical treatment was positively correlated with the degree of CPT activation. Likewise, an S─S bond embedded in the center of a polymer chain can elicit a retro-Diels-Alder reaction under sonomechanical force, achieving the effective release and activation of conjugated furosemide and DOX (Figure 4B). [125] To demonstrate the universality of the strategy, the activatable candidates were further extended to a variety of amino- or hydroxy-terminated drugs, including CPT, N-butyl-4-hydroxy-1,8-naphthalimide (NAP), gemcitabine (GEM) and UMB, by connecting the drug molecule to a β-carbonate linker adjacent to the mechanically activated S─S bond (Figure 4C). [126] The indicative fluorescence showed that the functional molecules were successfully released and activated under US irradiation, confirming the feasibility and universality of regulating drug activity through chemical bond transformation. Most recently, we further explored the effect of β-carbonate and β-carbamate linkers on the mechanochemical responsiveness of disulfide-centered polymers (Figure 4D). [127]

FIGURE 4 Representative examples of US-controlled covalent bond scission for drug activation. (A) Ultrasonic cracking of the polymer centered on the S─S bond to activate the embedded CPT. Reproduced with permission. [23] Copyright 2021, Springer Nature. (B) US-induced fracture of polymer mechanochemical S─S bonds to activate drugs. Reproduced with permission. [125] Copyright 2020, American Chemical Society. (C) Mechanochemical activation of disulfide-based multifunctional polymers for theranostic drug release. Reproduced with permission. [126] Copyright 2020, Royal Society of Chemistry. (D) Mechanochemical activation of the fluorophore from disulfide-centered polymers. Reproduced with permission. [127] Copyright 2021, Chinese Chemical Society.

The results showed that hydroxy-substituted NAP was effectively released from its β-carbonate connectors within two days after US treatment. In contrast, amino-substituted NAP was released considerably more slowly from its β-carbamate linkers over several weeks, reflecting the relatively poor leaving-group properties of the respective amines. This study advances the exploration of force-induced therapeutics with different release rates for further biomedical applications.
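The reported proportionality between sonication time and the degree of CPT activation is consistent with simple first-order scission kinetics. The sketch below is purely illustrative and is not taken from refs. [23,125-127]; the rate constant k is a hypothetical value.

```python
import math

# Illustrative first-order model of sonomechanical prodrug activation:
# fraction_active(t) = 1 - exp(-k * t). The rate constant k is hypothetical
# and not a value reported in refs. [23, 125-127].

def fraction_activated(t_min: float, k_per_min: float = 0.05) -> float:
    """Fraction of S-S-centered polymer chains cleaved after t_min of US."""
    return 1.0 - math.exp(-k_per_min * t_min)

for t in (0, 10, 30, 60, 120):
    print(f"{t:4d} min US -> {fraction_activated(t):.1%} drug activated")
```

Under such a model, longer sonication monotonically increases the activated fraction, matching the observed inverse relation between P_CPT-treated cell survival and irradiation time.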
Non-covalent bond scission for drug activation

The scission of covalent bonds by sonomechanical force relies on the destruction of strong carrier-cargo covalent interactions. In addition to covalent bonds, non-covalent interactions are ubiquitous and offer an alternative for constructing US-sensitive drug activation systems. [128] From this perspective, we proposed two strategies for activating drugs through the controlled breakage of non-covalent bonds within macromolecules or nanocomponents. [23] The first approach relies on the selective recognition by an RNA aptamer (APT) of its target molecule. As discussed above, the polymerization of aptamers (P_APT) to high molecular weight provides a mechanically sensitive polymer carrier that binds the antibiotics and thereby inhibits their activity (Figure 5A). The application of sonomechanical force can impair these non-covalent interactions and further break part of the covalent bonds of the RNA backbone, thereby releasing and activating the drug molecule. Inspired by the pharmacological mechanism of vancomycin (Van), the second strategy relies on the supramolecular binding of Van to its target peptide sequence (DADA). First, we constructed a polymer-NP (PN) assembly by attaching DADA-linked gold NPs (Au_DADA) to Van-terminated polymers (P_Van) via reciprocal hydrogen bonding. The results indicated that sonomechanical force could selectively break down the multiple hydrogen bonds in the PN structures to release and activate the drug. Furthermore, to achieve a more effective drug activation response, Van-coated NPs (Au_Van) were synthesized and then assembled with Au_DADA into an NP-NP (NN) morphology (Figure 5B). Strikingly, we found that, upon US treatment, the minimum inhibitory concentration (MIC) of NN was almost the same as that of free Van and much lower than that of PN, indicating that the NP-based system has high ultrasonic sensitivity for drug activation. In another study, Schmidt et al. reported the uptake and release of non-covalently encapsulated drugs in octahedral Pd cages with polymer chains anchored at each vertex (Figure 5C). [129] The progesterone or ibuprofen was packaged and deactivated in a hydrophobic nano-cavity of the supramolecular container. Since the star-shaped structure is susceptible to shear force, the encapsulated drug was wholly released when the coordination nanocages were dissociated by US. The successful construction of non-covalent US-activated systems based on small-molecule drugs also paves the way for the treatment of cancer and other diseases. In addition to drug activation, several studies have addressed the regulation of biomacromolecule activity through US-induced non-covalent bond dissociation; the disorders and abnormalities of such biomacromolecules are closely associated with Alzheimer's disease, cardiovascular disease, and cancer. [130]

FIGURE 5 (A) Reproduced with permission. [23] Copyright 2021, Springer Nature. (B) Au NPs, DADA, and Van were assembled into nanoparticle-nanoparticle systems to activate the antibacterial properties of Van. Reproduced with permission. [23] Copyright 2021, Springer Nature. (C) Mechanochemical release of non-covalently bound drugs from polymer-grafted supramolecular cages. Reproduced with permission. [129] Copyright 2021, John Wiley & Sons. (D) US-induced unfolding of GFP modified by a supercharged polypeptide. Reproduced with permission. [131] Copyright 2020, John Wiley & Sons. (E) US induces reversible NP disaggregation leading to thrombin release for catalysis. Reproduced with permission. [132] Copyright 2021, John Wiley & Sons. (F) US-regulated dehybridization of metallo-base-paired DNA structures. Reproduced with permission. [133] Copyright 2021, Royal Society of Chemistry.
Recently, we demonstrated the first example of US-controlled functional change of green fluorescent protein (GFP), quenching its fluorescence without changing the secondary structure of the protein (Figure 5D). [131] After that, we designed two non-covalent systems in which the catalytic activity of thrombin can be specifically activated by US. [132] In this study, experimental US (20 kHz) and clinical focused US (5 MHz) were used to selectively destroy the non-covalent interaction between an aptamer and thrombin, restoring thrombin activity and catalyzing the conversion of fibrinogen to fibrin. More importantly, the ultrasonic response of the NP-based system was completely reversible (Figure 5E). Thus, multiple cycles of US-induced "inhibition-activation" of the catalytic activity of thrombin could be realized. In addition, we recently reported the use of US to reversibly dehybridize metallo-base-paired DNA structures, which provides a new strategy for remotely regulating the transformation and dynamic assembly of DNA structures (Figure 5F). [133]

CONCLUSION AND PROSPECTS

With the development of US-responsive DDSs, US has gradually evolved into an on-demand tool for the remote control of drug release in cancer therapy. Moreover, recent advances in US-responsive DDSs have shown their great potential for the treatment of neurodegenerative diseases, diabetes, thrombosis, and COVID-19. [134] More importantly, sonomechanical forces have been shown to be capable of drug activation by selectively splitting mechanochemical bonds. An overview of representative stimuli-responsive systems containing US-sensitive components for controlling cargo release and activation is presented in Table 1. Without a doubt, these efforts are made possible by a thorough understanding of the various factors that influence the mechanochemical activity of carrier materials, which can help to accelerate the design and facilitate potential applications of US-controlled DDSs.

TABLE 1 Summary of the US-sensitive components, action mechanisms, and ultrasonication parameters of the different US-responsive systems discussed in the text.

Despite the unique advantages of using US to control drug release and drug activation, several challenges still need to be overcome before further clinical application. One of the most critical issues is biosafety. As mentioned above, the extensive interplay of US with biological tissues may lead to undesirable safety risks. To this end, the FDA has formulated two criteria to quantify the tissue effects induced by US, namely the thermal index (TI) and the mechanical index (MI). [135] The former represents the ratio of the applied acoustic power to the power required to raise the tissue temperature by 1 °C, while the latter combines ultrasonic parameters such as frequency and peak rarefactional pressure. On these grounds, current efforts should focus on using US with high frequency, low amplitude, short irradiation time, and few duty cycles to achieve drug release and drug activation safely. From this standpoint, the best option could be clinically available medical US equipment. For example, clinically available HIFU has recently been shown to promote mechanochemical conversion, underlining its clinical translational potential for US-induced drug activation. [136] Nevertheless, the optimal choice of ultrasonic parameters remains unsettled, given that the aim is to maximize the mechanical effects while minimizing adverse bioeffects.
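As a rough illustration of how MI constrains parameter selection, the sketch below uses the standard definition of the mechanical index, the derated peak rarefactional pressure (MPa) divided by the square root of the center frequency (MHz); the pressure values and the diagnostic ceiling of MI ≤ 1.9 are used here purely for illustration and should be checked against the applicable regulatory guidance for any therapeutic setting.

```python
import math

# Mechanical index as commonly defined for diagnostic US:
# MI = derated peak rarefactional pressure (MPa) / sqrt(center frequency (MHz)).
# The FDA diagnostic-imaging ceiling (MI <= 1.9) is used purely for illustration.

def mechanical_index(peak_neg_pressure_mpa: float, freq_mhz: float) -> float:
    return peak_neg_pressure_mpa / math.sqrt(freq_mhz)

# Raising the frequency at fixed pressure lowers MI, which is one reason the
# text above favors high-frequency, low-amplitude exposures for safety.
for f_mhz in (0.5, 1.0, 5.0):
    mi = mechanical_index(1.5, f_mhz)
    print(f"{f_mhz:.1f} MHz at 1.5 MPa -> MI = {mi:.2f} "
          f"({'within' if mi <= 1.9 else 'above'} the 1.9 diagnostic ceiling)")
```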
Therefore, developing safe and universal US guidelines for controlled drug release and drug activation remains a subject of future research. Given that most of the currently reported US-responsive DDSs are proof-of-concept studies, emphasis should next be placed on bridging the gap from bench to bedside. Routinely, the construction of sonomechanical-force-activated DDSs relies on polymer chains, and their further application is restricted by complex synthesis steps, low mechanochemical sensitivity, and poor drug delivery efficiency. As discussed above, several polymer-NP composite systems have been established and their effective mechanochemical activation demonstrated in vitro. One alternative direction could be the integration of mechanochemical polymers with nanomaterials to broaden their biomedical applications. Furthermore, highly aggregated NPs such as nanomicelles have higher mechanochemical activity than their stretched polymer counterparts. Combined with the advantage of their small size, we suppose that NPs could offer great potential for US-induced drug delivery in the clinic. Considering that nanomaterials have good biocompatibility, high drug loading, and accessible surface functionalization, novel US-responsive nano-DDSs with simple preparation processes and controllable release or activation of their cargo under clinically compatible US should be further developed. Concomitantly, new force-sensitive mechanophores should be explored and discovered to cross translational-medicine boundaries and cater to diverse clinical demands. In summary, compared with traditional controlled drug release strategies, research on US-sensitive DDSs, especially sonomechanical drug activation platforms, is still in its infancy. Nonetheless, their superior properties and clinical translational potential make them of considerable value for wide biomedical application. We believe that, with the continuous optimization of US-sensitive DDSs, the side effects of chemotherapy drugs can be reduced as much as possible. In the near future, we anticipate that more attempts based on US activation strategies will prove successful and encouraging for cancer treatment, providing valuable references for improving drug efficacy and reducing associated side effects.

CONFLICT OF INTEREST

There are no conflicts to declare.
One-Pot Biocatalytic In Vivo Methylation-Hydroamination of Bioderived Lignin Monomers to Generate a Key Precursor to L-DOPA

Abstract Electron-rich phenolic substrates can be derived from the depolymerisation of lignin feedstocks. Direct biotransformations of the hydroxycinnamic acid monomers obtained can be exploited to produce high-value chemicals, such as α-amino acids; however, the reaction is often hampered by chemical autooxidation in alkaline or harsh reaction media. Regioselective O-methyltransferases (OMTs) are ubiquitous enzymes in natural secondary metabolic pathways that utilise the expensive co-substrate S-adenosyl-L-methionine (SAM) as the methylating reagent, altering the physicochemical properties of the hydroxycinnamic acids. In this study, we engineered an OMT to accept a variety of electron-rich phenolic substrates, modified a commercial E. coli strain BL21 (DE3) to regenerate SAM in vivo, and combined it with an engineered ammonia lyase in a one-pot, two-whole-cell enzyme cascade to produce the L-DOPA precursor L-veratrylglycine from lignin-derived ferulic acid.

General methods

All commercially available reagents, analytical standards and solvents were purchased from Merck KGaA (Darmstadt, Germany), Alfa Aesar (Morecambe, England), VWR (Lutterworth, England) or Fluorochem Ltd (Hadfield, UK) and used without further purification. Escherichia coli DH5α and BL21 (DE3) cells, NEBuilder® HiFi DNA Assembly Master Mix, Q5® High-Fidelity 2X Master Mix, 1 kb Plus DNA Ladder and all restriction enzymes were purchased from New England Biolabs (Ipswich, MA, USA). Oligonucleotides were synthesized at Eurofins Genomics (Ebersberg, Germany). Plasmid DNA miniprep, DNA gel extraction and PCR purification kits were purchased from Qiagen (Düsseldorf, Germany). HPLC filter vials (0.45 µm PVDF, with a pre-slit cap) were bought from Thomson (California, USA). The expression vector pET-28b was purchased from Novagen (Darmstadt, Germany) and was used for gene expression. Plasmids pKD46 and pKD3 were obtained from the E. coli Genetic Resources at Yale CGSC. The genes EjOMT and Cr-metE were codon optimized and purchased from Life Technologies Limited (Paisley, UK) as GeneArt Strings™ DNA fragments. Plasmid DNA was transformed into E. coli cells via the heat-shock method or electroporated using a Gene Pulser Xcell Microbial System (Bio-Rad) in 0.1 cm cuvettes set to 1.80 kV. Plasmid constructs and PCR amplicons were sequence-verified at Eurofins MWG Operon (Ebersberg, Germany). Plasmid maps were constructed and visualised using SnapGene (San Diego, USA).

PCR thermocycling conditions

PCR amplifications were performed using the Q5® High-Fidelity 2X Master Mix protocol. 1 All primers were designed to have an annealing temperature of 60 °C. The following thermocycling conditions were used: (1) 98 °C for 1 min; (2) 25 cycles of 98 °C for 30 s, 60 °C for 30 s, and 72 °C for 20-30 seconds/kb (adjusted accordingly); and (3) 72 °C for 5 min. The amplified DNA was purified either from an agarose gel or using a Qiagen PCR purification kit.
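The cycling program above is easy to parameterize by amplicon length. The snippet below is a hypothetical illustration of that arithmetic (the function and parameter names are ours, not part of any kit protocol), scaling the 72 °C extension step at roughly 30 s/kb.

```python
# Hypothetical sketch of the thermocycling program described above, with the
# 72 °C extension step scaled at ~30 s/kb ("adjusted accordingly" in the text).

def q5_program(amplicon_kb: float, ext_s_per_kb: int = 30) -> list:
    """Return (step, temperature °C, duration s) tuples for one PCR run."""
    ext_s = max(20, round(amplicon_kb * ext_s_per_kb))
    return (
        [("initial denaturation", 98, 60)]
        + 25 * [("melt", 98, 30), ("anneal", 60, 30), ("extend", 72, ext_s)]
        + [("final extension", 72, 300)]
    )

# e.g. a 1.2 kb amplicon gets a 36 s extension step:
for step, temp_c, secs in q5_program(amplicon_kb=1.2)[:4]:
    print(f"{step:20s} {temp_c} °C for {secs} s")
```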
Codon optimized EjOMT DNA sequence

The protein sequence ID for Eriobotrya japonica O-methyltransferase is UniProt A0A1B4Z3W1 (GenBank BAV54107.1). The lower-case sequence represents the DNA base overhangs that anneal to the DNA fragment from the plasmid vector pET28b cut with NdeI/XhoI restriction enzymes. The two constructs were assembled using NEBuilder® HiFi DNA Assembly Master Mix following the manufacturer's protocol 2 and subsequently transformed into commercial DH5α E. coli cells and plated onto kanamycin plates.

Construction of plasmids pJG-OMT1-5

Plasmids pJG-OMT1-5 were constructed from the backbone of the commercial pACYC-Duet plasmid, with the chloramphenicol resistance marker substituted by an ampicillin resistance marker to yield pJG-OMT.

pJG-OMT1

Plasmid pJG-OMT was cut using the restriction enzymes NdeI/XhoI. The metK gene was cloned from an E. coli K-12 strain using the following primers (the lower-case sequence represents the DNA base overhangs):

MetK Fw: ttaagtataagaaggagatatacatATGGCAAAACACCTTTTTACGTCCG
MetK Rv: gcggtttctttaccagactcgagTTACTTCAGACCGGCAGCATCG

The cloned metK gene and the cut pJG-OMT vector were assembled following the NEBuilder® HiFi DNA Assembly Master Mix protocol. 2 Finally, multiple cloning site 1 (MCS1) was deleted via the inverse PCR method 3 using the following primers:

ΔMCS1 Rv: AGGGAGAGCGTCGAGATCC
ΔMCS1 Fw: TTGTACACGGCCGCATAATC

pJG-OMT2

The mtnN gene (GenBank U24438.1) and luxS gene (GenBank AAC75734.1) were cloned from an E. coli K-12 strain using the following primers (RBS underlined):

MtnN Fw: GAGATATACCATGAAAATCGGCATCATTGGTGCA
MtnN Rv: TAACAACGGCATTATATATCTCCTTAAGCTTTTAGCCATGTGCAAGT
LuxS Fw: GGAGATATATAATGCCGTTGTTAGATAGCTTCACAG
LuxS Rv: GCGGCCGCCTAGATGTGCAGTTCCTGCAACTTC

Plasmid pJG-OMT was amplified using the following primers:

Vector Fw: CATCTAGGCGGCCGCATAATGCTTAAG
Vector Rv: CCGATTTTCATGGTATATCTCCTTATTAAAGTTAAACAAAATTATTTCTACAGGGG

The amplicons were assembled following the NEBuilder® HiFi DNA Assembly Master Mix protocol. 2 Finally, multiple cloning site 2 (MCS2) was deleted via the inverse PCR method 3 using the following primers:

ΔMCS2 Rv: CCGCTGAGCAATAACTAGC
ΔMCS2 Fw: GATTATGCGGCCGTGTACAA

pJG-OMT3

The lower-case sequence represents the DNA base overhangs that anneal to the DNA fragment from plasmid pJG-OMT2 cut with NdeI/XhoI restriction enzymes. The codon-optimised Cr-metE gene and the cut pJG-OMT2 vector were assembled following the NEBuilder® HiFi DNA Assembly Master Mix protocol. 2

pJG-OMT4

The second multiple cloning site (MCS2) of the pJG-OMT2 vector was cut using NdeI/XhoI restriction enzymes. The Ec metK gene was subcloned into pJG-OMT2 using the same protocol as in the construction of pJG-OMT1 (see above).

pJG-OMT5

A point mutation (I303V) was introduced into the Ec-metK gene using the QuikChange method 4 and the following primers, with the mutation in bold:

MetK I303V Fw: GGTTTCCTACGCAGTAGGCGTGGCTGAACC
MetK I303V Rv: GGTTCAGCCACGCCTACTGCGTAGGAAACC

Afterward, 10 U of DpnI was added to the reaction mixture, which was incubated for 1 h at 37 °C to digest the parental DNA, and 2 µL was used to transform E. coli DH5α cells.
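As a quick sanity check on primer design, note that the lower-case overhangs above do not anneal to the template in the first cycles, so only the upper-case portion sets the annealing temperature. The snippet below is a hypothetical helper (ours, not from the protocol) applying the crude Wallace rule, Tm ≈ 2(A+T) + 4(G+C); for oligos of this length the rule overestimates, and nearest-neighbor calculators should be used for real design.

```python
# Hypothetical helper: crude Wallace-rule Tm for the annealing (upper-case)
# portion of a cloning primer. Only a rough guide for oligos > ~14 nt;
# nearest-neighbor models are preferred for actual design work.

def wallace_tm(primer: str) -> int:
    anneal = "".join(b for b in primer if b.isupper())  # overhangs are lower-case
    at = sum(anneal.count(base) for base in "AT")
    gc = sum(anneal.count(base) for base in "GC")
    return 2 * at + 4 * gc  # °C

metk_fw = "ttaagtataagaaggagatatacatATGGCAAAACACCTTTTTACGTCCG"
print(f"MetK Fw annealing Tm ~ {wallace_tm(metk_fw)} °C (Wallace estimate)")
```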
Deletion of the metJ gene in E. coli BL21 (DE3) cells

The disruption of the methionine repressor protein MetJ was based on the λ Red homologous recombination procedure for creating a knock-out mutant as described by Datsenko et al., 5 with several modifications. A DNA fragment containing a selectable antibiotic resistance gene (chloramphenicol) was amplified by PCR using plasmid pKD3 as a template and primers that contained 50 homologous base pair extensions. Arabinose-induced BL21 (DE3) cells containing the λ Red recombinase expression plasmid pKD46 were electroporated with 100 ng of the amplified DNA (pretreated with DpnI), left to incubate in an orbital shaker at 37 °C for 2 h, and subsequently plated on LB agar in the presence of 30 µg mL-1 of chloramphenicol. Colony PCR was performed to confirm the incorporation of the chloramphenicol resistance gene into the E. coli BL21 (DE3) genome.

Protein expression and purification

Plasmid pET28b-EjOMT was transformed into E. coli BL21 (DE3). A fresh colony was used to inoculate LB medium (3 mL) containing kanamycin (50 µg mL-1). This freshly prepared overnight culture was grown at 200 rpm at 37 °C and was used to inoculate 500 mL of LB medium supplemented with kanamycin (50 µg mL-1) in a 2 L baffled flask at 200 rpm at 37 °C. Recombinant protein expression was induced by adding isopropyl-β-D-1-thiogalactopyranoside (IPTG) (0.5 mM final) when the OD600 reached 0.8-1.0. The cell cultures were then incubated at 18 °C for 18 h. The cells were harvested by centrifugation at 4 °C (3,250 g, 20 min), resuspended (1 g in 10 mL) in lysis buffer (50 mM Tris-HCl, 5 mM imidazole, pH 7.0) and lysed in an ice bath by ultra-sonication with a Soniprep 150 (20 s on, 20 s off, for 20 cycles, at 30% amplitude). After centrifugation (4 °C, 16,000 g, 20 min) the clarified lysate was used for protein purification on a Ni-NTA agarose column. The bound enzyme was washed with 20 mL of wash buffer (50 mM Tris-HCl, 30 mM imidazole, pH 8.0) and eluted with 50 mM Tris-HCl, 250 mM imidazole at pH 7.0. The collected fractions were concentrated in Vivaspin™ spin filter columns (10,000 MWCO). The purified enzyme was washed several times and buffer-exchanged into 50 mM potassium phosphate buffer at pH 7.4. The purity was analysed by SDS/PAGE; the protein was more than 95% pure, and the protein stock concentration was determined by the Bradford assay using bovine serum albumin as standard.

Homology model of EjOMT

YASARA (version 18.4.24) was used for energy minimization. An overlay of the lowest-energy homology model and the LnCa9OMT structure was performed and visualized with the PyMOL Molecular Graphics System (Schrödinger, LLC).

Analytical scale biotransformations and LC-MS analysis

Unless otherwise specified, all assays were performed in 2 mL Eppendorf tubes at 30 °C in biotransformation buffer (50 mM KPi, pH 7.4). To 1 mM substrate 1a-23a (from a 50 mM DMSO stock solution) were added 2 mM S-adenosyl-L-methionine disulfate tosylate (from a 50 mM stock solution) and purified EjOMT (1 mg mL-1, purified as described above) in a final volume of 0.5 mL. After 18 h, the reaction mixture was quenched by adding 0.5 mL of MeOH and centrifuged at 13,000 rpm for 5 min to pellet protein debris. Finally, 400 µL of supernatant was passed through a Thomson HPLC filter vial (0.45 µm PVDF).
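The Bradford quantitation mentioned above boils down to a linear standard curve. The snippet below is a hypothetical illustration; the absorbance readings are invented for demonstration and are not data from this study.

```python
import numpy as np

# Hypothetical Bradford standard curve: fit A595 vs. BSA concentration and
# interpolate an unknown. All numbers below are illustrative, not measured.

bsa_ug_ml = np.array([0.0, 125.0, 250.0, 500.0, 1000.0])  # BSA standards
a595 = np.array([0.00, 0.11, 0.21, 0.40, 0.78])           # invented readings

slope, intercept = np.polyfit(bsa_ug_ml, a595, 1)         # linear fit

def protein_conc(a595_sample: float, dilution: float = 1.0) -> float:
    """Protein concentration (µg/mL) back-calculated from a sample A595."""
    return dilution * (a595_sample - intercept) / slope

print(f"A595 = 0.33 at 10x dilution -> {protein_conc(0.33, 10):.0f} µg/mL stock")
```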
Whole cell biocatalysis

E. coli strains were made electrocompetent according to published protocols. 6 100 µL of electrocompetent cells was co-transformed via electroporation with 1 µL (60 ng µL-1) of the EjOMT mutant I133S/L138V/L342V plasmid and 1 µL (60 ng µL-1) of pJG-OMT5. The cells were treated with 1 mL of SOC medium, incubated in an orbital shaker for 1 h at 37 °C, and plated on LB plates containing the appropriate antibiotics. For example, BL21 (DE3) ΔmetJ::cam cells electroporated with the EjOMT mutant I133S/L138V/L342V and pJG-OMT5 plasmids were plated onto agar plates containing 25 µg mL-1 carbenicillin, 15 µg mL-1 kanamycin and 15 µg mL-1 chloramphenicol. 3 mL of LB medium was inoculated with a single colony and left to grow overnight at 37 °C. The seed cultures were then used to inoculate 500 mL of LB medium in 2 L baffled flasks with the appropriate antibiotics and cultivated at 37 °C in an orbital shaker (180 rpm). Once the OD600 reached 0.8, the cells were induced with 0.5 mM IPTG, the temperature was reduced to 18 °C, and the cultures were left for 18 h. The cell culture was harvested by pelleting at 4,000 rpm and washed twice with fresh LB medium (25 mL). The resting E. coli cells (3.6 g) were resuspended in M9 medium (500 mL) and placed in a 2 L Erlenmeyer flask containing 0.5 mM IPTG with the appropriate antibiotics for plasmid maintenance. DL-Methionine powder was added directly to the flask at a final concentration of 10 mM, 1.25 mL of ferulic acid substrate in DMSO (from a 2 M stock solution; 5 mM final concentration) was added, and the flask was placed in an orbital shaker at 30 °C (180 rpm). The whole-cell biotransformation was monitored by periodically removing 0.5 mL of cell culture, quenching the reaction with 0.5 mL of methanol, and centrifuging at 13,000 rpm to remove cell debris. The supernatant was passed through a Thomson filter vial (0.45 µm PVDF) and analysed by HPLC.

Codon optimized Cr-MetE DNA sequence (protein sequence UniProt ID: Q42699)

1H and 13C NMR spectra of isolated L-veratrylglycine

HRMS spectra of isolated L-veratrylglycine
Historicizing Indian psychiatry

Our historical endeavour to map Indian psychiatry has largely remained linear, positivistic and evolutionary. Whether it starts from ancient times or modern, it shows our past as a tale of victory for western science, without questioning the borrowed paradigm. The use of historical methods for serious enquiry into psychiatry has been ignored. The emergence of a new genre of historicism that is critical of both colonialism and psychiatry as a universal science has raised hopes of critically reviewing the emergence of psychiatric knowledge.

INTRODUCTION

The history of psychiatry in India has been interpreted in only one way. The development of modern psychiatry has always been seen, especially in colonial and post-colonial times, through the prism of a western science which makes evolutionary and linear progress and where we are perpetually located in a situation of lack. This reductionist and positivistic way of interpreting the past needs to be questioned. To explore how the past of Indian psychiatry is represented, the interdisciplinary methods of social science are required, and historical texts from colonial and post-colonial times need to be gleaned. In the recent critique of the history of science (particularly in the colonies, where 'sciences' arrived as a tool for expansion of the empire and for governing the colonized), researchers have raised crucial questions about the omnipotence of colonial science. [1-4] Analytical tools adopted in these studies challenge the 'reality' claimed in the historical narratives of the colonial rulers and expose the underlying assumptions.

HISTORICISM AND FOUCAULT'S WORK

It is important to say something about historicism first, and about the rupture brought to the history of psychiatry by Michel Foucault's work. 5 This will help us to understand better the emergence of current scholarship. Historicism is a critical movement that insists on the prime importance of historical context to the interpretation of texts of all kinds [emphasis mine]. It has developed as a reaction to the practice of deducing truth from first principles about how people must organize themselves socially and politically. Natural laws governing human behaviour at all times are formulated, and cultures evaluated by the degree to which they are appropriate to this ideal pattern. Historicists oppose this tradition, which, primarily associated with the Enlightenment, stretches in different versions from the seventeenth-century natural-law theorists to the sophistications of Kant and Hegel. Historicists maintain that human nature is too variable for such legislation to be universally applicable. Historicists work to evolve a model for apprehending cultural and social diversity that is different from the scientific, law-governed paradigm of the Enlightenment. 6 In this sense, the new historical awareness means acknowledging the autonomy of the past in the present. From the South Asian perspective, the social scientist Dipesh Chakrabarty has dealt with the issue of historicism critically in the context of theory. Here Chakrabarty proposes a different dimension to historicism, influenced by Foucault. He wrote:

Scholars contemplating the subjects called 'Indian history' have often relived, as it were, the old passions of the 'struggles of the Enlightenment with superstition' that Hegel writes about in his Phenomenology.
They have assumed that for India to function as a nation based on the institutions of science, democracy, citizenship, and social justice, 'reason' had to prevail over all that was 'irrational' and 'superstitious' among its citizens. Historicism has been a very close ally of such thought… How would history, the rational-secular discipline, understand and represent such practices? … But sympathetic or not, these accounts all foreground a separation - a subject-object distinction - between the academic observer-subject and the 'superstitious' persons serving as the object of the study. 7

The history of psychiatry never remained the same after Michel Foucault. All the work done before him can be clubbed together as various kinds of evolutionary histories that described asylum practices and the institutional growth of what was later to become the discipline of psychiatry. By seriously questioning these kinds of histories, Foucault opened up possibilities for a new range of studies on mental hospitals and psychiatry. Foucault was keen to see how discourses* in a specific context produce the epistemes that govern our thinking in a particular way. Though Foucault generated hostility among mainstream medical historians, his views were received positively by a section of continental philosophers and social scientists who could see the radicalism with which he related power to a set of discursive practices that depended on the omnipotence of Enlightenment and Reason. It is useful to look at the context of the emergence of Foucault's major work on madness. What makes Foucault's work on the history of madness so special is his attempt to write the history of both concepts and institutions in a way that blurs the distinction between the two. 9 Foucault focused on what goes on instead of looking at the rationally conceived object of knowledge.

RECENT INSIGHTS ABOUT OUR PAST

The new historiography that has emerged over the past two decades on science and medicine, particularly on psychiatry and psychoanalysis in India, has examined colonialism in all its complexity. Ashis Nandy 10 was the first to analyse the psychology of colonialism in India provocatively without following a historical method. He borrowed the interpretative tools of psychoanalysis and 'focused on the living traditions, emphasizing the dialectic between the classical, the pure and the high-status on the one hand, and the folksy, hybrid and the low-brow on the other'. 10 His work is no less seminal than that of Frantz Fanon, 11 the psychiatrist and revolutionary who focused on the concept of negritude and analysed colonial power in Algeria. Nandy 10 looked at that form of colonization 'which at least six generations of the Third World have learnt to view as a prerequisite for their liberation. This colonialism colonizes minds in addition to bodies and releases forces within colonized societies to alter their cultural priorities once for all' 10 [emphasis mine]. Gyan Prakash 4 has offered a radical analysis of colonialism and modernity which goes beyond the colonial/national binary and explores how scientific reason worked out a form of governance, evoking a new imagination of a modern nation-state that mimicked the European model. He said:

Compelled to use universal reason as a particular means of rule, the British positioned modernity in colonial India as an uncanny double, not a copy, of the European original - it was almost the same but not quite.
In the colonial context universal claims of science always had to be represented, imposed, and translated into other terms. This was not because Western culture was difficult to reproduce, but because it was dislocated by its functioning as a form of alien power and thus was forced to adopt other guises and languages. 4

Prakash has innovatively worked out his analysis using Foucauldian tools to bring out how the project of Indian science was, from the very beginning, problematized. He has discussed in detail the issue of the translation of science and shown the existence of a critical discourse that was aware of the power differentials in this process. He has argued that in Indian modernity we are negotiating the polarities of secular and religious, community and state, science and culture, whose hybridization has formed the stuff of its historical existence. In the domain of the history of psychiatry in India too, we observe the emergence of this critical historiography. Waltraud Ernst, a psychologist turned social historian, has done major work 12-15 to analyse the issues of early institutionalization, racism, gender and colonial structures. Christiane Hartnack's work has looked into the context of the emergence of psychoanalysis under colonial dominance and the Indian response. 16 Her critical analysis centred on Girindrasekhar Basu, the first psychoanalyst outside the western world, and other scholars proposing new theories of the Indian psyche. Girindrasekhar has attracted scholarly attention recently after Nandy's insightful and imaginative analyses. 17,18 While in 'Savage Freud' his reading of Girindrasekhar's life delved into an internal critique of Indian psychiatry, the other article used the symbol of Girindrasekhar's oeuvre as a critique of modern historiography. Kakar 19 has also observed how Girindrasekhar modified the Freudian version of psychoanalysis through his innovative practice. Basu 20 followed up Nandy's analyses and interpreted Girindrasekhar's Bengali works to bring out his critique of the received science. Two studies have looked critically into the issue of cannabis use and colonial power. Mills 21 meticulously screened the colonial records of asylums for Indian patients in the nineteenth century, focusing specifically on cannabis use. His analysis questioned the scientific hypothesis of 'hemp insanity' and showed how colonial power influenced this postulation. The other study, by Basu, 22 focused on the questionnaire used by the Indian Hemp Drugs Commission and analysed the evidence given by indigenous medical practitioners, showing how the history of 'cannabis psychosis' was constructed. There is thus a growing corpus of contemporary research on Indian science and medicine that has opened up new possibilities to re-examine the concept of a 'humanitarian', 'universal' science. These possibilities help us to see our specificity and conceptual growth, which make our history not a simple tale of progress but a complex discourse. Studies in the history of psychiatry have raised doubts about the success of the Enlightenment project and explored conceptual critiques from historical contexts.

TEXTS FROM COLONIAL INDIA

In an editorial in the Indian Medical Gazette 23 the medical progress of the nineteenth century was reviewed with a brief passage on 'Lunatic Asylums', which said:
It is not, however, till within recent years that we find proper arrangements for the due care of the insane and for the teaching and training of the asylum attendants. We have on previous occasions referred to the changes which are about to take place in the management of the asylums of India, and with the new century we have every reason to expect that a new era is dawning for the insane in India. 23

Lodge Patch reviewed nineteenth-century Indian psychiatry historically, following the development of the Punjab Mental Hospital from 1840 to 1930. 24 His views are typically colonial: Indians are transformed out of their ignorance and superstition through custodial care in asylums. He is astonished that Indians are not interested in the salvation of the mental health of their country, though he does not mention how Indians view mental salvation according to different traditions. He reduced all this to 'their belief on spirit'! To him it was a humanitarian task to impose the European model for 'development'. Berkeley-Hill 25 in his autobiography wrote a chapter on the 'Ranchi European Mental Hospital'. His narrative described the internal contradictions of colonial administration and presented him as a major protagonist in developing Indian psychiatry on the latest scientific advancements. His history talked of his struggles with the higher colonial authority to bring in modern changes. In the epilogue of his book, he wrote: …

Berkeley-Hill was a founder member of the Indian Psychoanalytical Association and a prolific writer. But his writings strongly supported colonial ideology, and he used psychoanalysis as a tool of governance. 26 Girindrasekhar Bose's article 27 was written more as a scientific review of the past 25 years' work in psychology than as a historical narrative. It, however, reflects the contours of a nationalistic science. Colonial reports on asylums and articles in medical journals follow a similar pattern of narrative strategies. All present a view of Indian psychiatry that is becoming modern and civilized, emerging out of the status of a 'savage' society.

TEXTS FROM POSTCOLONIAL INDIA

The Indian Journal of Neurology and Psychiatry (later the Indian Journal of Psychiatry) in its fourth year of publication carried a long article on the history of psychiatry in India and Pakistan. 28 Varma was the first Assistant Superintendent of the Indian Mental Hospital at Kanke, Ranchi. He wrote a comprehensive chronological account of the changes brought about in the Indian psychiatric institutions in Bengal, Bihar, Orissa, Madras, Calicut, Waltair, Mysore, Bombay, Sindh, Assam, the Central Provinces, Punjab and Amritsar. Varma neither questioned the paradigms of western science nor looked critically at whether independent India would need more hospitals or any alternative models. In his presidential address at the Annual Conference of the Indian Psychiatric Society, Venkoba Rao deliberated on ancient Indian thoughts on psychiatry. 29 He attempted to relocate our psychiatric past from the pre-Vedic period to the post-Vedic Ayurvedic treatises, to show that it is in continuation with a uniform pattern all over the world.
He tried his best to fit many indigenous categories to western ones, but he never posed the problem to the practitioners of modern Indian psychiatry as to why and how we became alienated from such ancient learning. Two years later Rao 30 would again talk about the sacred Hindu text, the Gita, in relation to the mental sciences and rediscover parallels with western psychiatry. He did not pause to consider that the atma and kaya duality of the Gita might be different from the Cartesian mind/body split, which is foundational to modern philosophy and cognitive science. He also ignored Girindrasekhar Bose's significant contributions to Indian philosophy. 31 In both articles he did not question how these ancient sciences struggled for their existence against the western concept of mind and its disorders. What are their current forms? Rather, a seamless ancient past is invoked more as a nostalgic memory, whose remnants we cherish in the contemporary psychiatric discourse. Somasundaram's article 32 on the Indian Lunacy Act, 1912 is another example of a linear historical narrative: it lamented why it took so long to change a colonial law, but failed to notice that Section 377 of the Indian Penal Code, which criminalizes homosexuality, has not been deleted. In a brief article, Michel Weiss 33 pointed out the colonial hegemony over indigenous practices in nineteenth-century India. He observed that:

Competition between the Western and more popular indigenous medical practices was intense, and toward the end of the 19th century, the British asylum superintendents tended to look with increasing condescension and outright disdain at indigenous practices… This attitude…sanctioned the predominance of Western values in education and of English over Indian languages and culture. 33

Perhaps it was not until Chakraborty 34 that the conceptual issue of judging our psychiatric knowledge through history and culture was raised. Brief in its scope, the article eruditely dealt with the complexity produced by discourses on colonialism, culture and psychiatry, and posed new theoretical questions.

CONCLUSION

The ahistorical and acultural understanding has prevented us from exploring the conceptual issues that are specific to Indian psychiatry. We are still struggling to erase the lack in Indian psychiatry in relation to the Euro-American one. Different perspectives of historicism have helped us to look at the epistemological struggles that took place at various levels of the discourses of Indian psychiatry; Girindrasekhar Basu is one example. This critical investigation hints at the forgotten or silenced knowledge of Indian psychiatry that contributes to its conceptual development. If we need to know why the outcome of treatment of schizophrenia is better in our country than in the developed ones, then we not only have to study the history and culture of schizophrenia in India, but also how schizophrenia is constructed in our society. Perhaps it is not too late to develop theoretical concepts of Indian psychiatry by exploring the interdisciplinary historical studies visible on the horizon.

*This is derived from the word discourse as used by Foucault. He seeks to account for the creation of objects within discourse 'by relating them to the body of rules that enable them to form as objects', and which thus constitute 'the conditions of their historical appearance'. This is significant in that he stresses the constitutive role of discursive practices in forming and determining objects, rather than the converse. 8
Mitochondrial Ubiquinone Homologues, Superoxide Radical Generation, and Longevity in Different Mammalian Species*

Abstract Rates of mitochondrial superoxide anion radical (O2•−) generation are known to be inversely correlated with the maximum life span potential of different mammalian species. The objective of this study was to understand the possible mechanism(s) underlying such variations in the rate of O2•− generation. The hypothesis that the relative amounts of the ubiquinone or coenzyme Q (CoQ) homologues, CoQ9 and CoQ10, are related to the rate of O2•− generation was tested. A comparison of nine different mammalian species, namely mouse, rat, guinea pig, rabbit, pig, goat, sheep, cow, and horse, which vary from 3.5 to 46 years in their maximum longevity, indicated that the rate of O2•− generation in cardiac submitochondrial particles (SMPs) was directly related to the relative amount of CoQ9 and inversely related to the amount of CoQ10 extractable from their cardiac mitochondria. To directly test the relationship between CoQ homologues and the rate of O2•− generation, rat heart SMPs, naturally containing mainly CoQ9, and cow heart SMPs, with a high natural CoQ10 content, were chosen for depletion/reconstitution experiments. Repeated extraction of rat heart SMPs with pentane exponentially depleted both CoQ homologues, while the corresponding rates of O2•− generation and oxygen consumption were lowered linearly. Reconstitution of both rat and cow heart SMPs with different amounts of CoQ9 or CoQ10 caused an initial increase in the rates of O2•− generation, followed by a plateau at high concentrations. Within the physiological range of CoQ concentrations, there were no differences in the rates of O2•− generation between SMPs reconstituted with CoQ9 or CoQ10. Only at concentrations considerably higher than the physiological level did the SMPs reconstituted with CoQ9 exhibit higher rates of O2•− generation than those obtained with CoQ10. These in vitro findings do not support the hypothesis that differences in the distribution of CoQ homologues are responsible for the variations in the rates of mitochondrial O2•− generation in different mammalian species.

A current hypothesis of aging postulates that oxidative stress/damage is a major causal factor in the attrition of functional capacity occurring during the aging process (1-6). The basic tenet of this hypothesis is that there is an intrinsic imbalance between the reactive oxygen species (ROS),1 which are incessantly generated in aerobic cells, and the antioxidative defenses against them, thereby resulting in the accrual of steady-state levels of oxidative molecular damage. The direct evidence in support of this hypothesis is that the augmentation of antioxidative defenses by simultaneous overexpression of Cu/Zn superoxide dismutase, which converts superoxide anion radicals (O2•−) into H2O2, and catalase, which removes H2O2, retards the age-associated increase in the levels of molecular oxidative damage and extends the life span of Drosophila melanogaster by one-third (7,8). Although there are several intracellular loci for the generation of O2•− (the first molecule in the ROS series), it is widely accepted that the mitochondrial electron transport chain is the main source of O2•−
(9,10). Previous studies in this laboratory have indicated that the rate of mitochondrial O2•− generation varies greatly, even in the same type of tissue, among different mammalian species and is inversely related to the maximum life span potential (MLSP) of the species (11,12). The inverse relationship between the rate of O2•− generation and MLSP was found to hold in a sample of mammalian species as well as in a group of dipteran insect species (11-13). The question that arose out of these studies, and that is also the subject of this investigation, is: what is the underlying mechanism for the variations in the rates of mitochondrial O2•− generation in different species? Although opinions vary (14), a number of experimental studies in the literature suggest that ubiquinones modulate the rate of mitochondrial O2•−/H2O2 generation (10,15-18). Ubiquinone (2,3-dimethoxy-5-methyl-6-multiprenyl-1,4-benzoquinone), or coenzyme Q (CoQ), is a quinone derivative with a chain of 1-12 isoprene units in the different homologue forms (CoQn) occurring in nature. Relatively short-lived mammalian species such as the mouse and the rat primarily contain CoQ9, whereas the larger, long-lived mammals such as man predominantly exhibit CoQ10 (19). The present study tests the hypothesis that variations in the rate of O2•− generation by cardiac submitochondrial particles (SMPs) in different mammalian species are related to the relative CoQ9 and/or CoQ10 content. The hypothesis was prompted by the fact that the longevity of non-primate mammalian species tends to be inversely correlated with the rate of mitochondrial O2•− generation and directly correlated with body mass.

Animals-Hearts were obtained from mouse (Swiss), rat (Harlan Sprague Dawley), guinea pig (Hartley Albino), rabbit (New Zealand White), pig (Yorkshire), goat (Angora), sheep (Rambouillet), cow (Holstein), and horse (mixed), which range from 3.5 to 46 years in MLSP (21,29). All the animals were young, healthy, sexually mature adult males. The approximate ages of the animals were: mouse, rat, and guinea pig, 4 months; rabbit, 7 months; pig, 6-7 months; goat and sheep, 1 year; cow and horse, 3 years. In the smaller animals the entire heart was processed; however, for the pig, cow, and horse the hearts were cut into smaller pieces, and representative samples were selected. The values for the rates of O2•− generation in different species are partially based on the results of previous studies in this laboratory (12,22). MLSP values for the different species, obtained from the literature (20,21), in years, are: mouse, 3.5; rat, 4.5; guinea pig, 7.5; rabbit, 13; goat, 18; sheep, 20; pig, 27; cow, 30; horse, 46.

Isolation of Mitochondria and Preparation of SMPs-Mitochondria were isolated by differential centrifugation as described by Arcos et al. (23). Briefly, pieces of the heart were homogenized in 10 volumes (w/v) of isolation buffer containing 180 mM KCl, 0.5% bovine serum albumin, 10 mM MOPS, 10 mM EGTA-Tris base, pH 7.2, at 4 °C. The homogenate was centrifuged at 1,000 × g for 10 min, and the supernatant was recentrifuged at 17,500 × g. The resulting mitochondrial pellet was washed and resuspended in 0.25 M sucrose, 1 mM EGTA, 10 mM MOPS, pH 7.2. To prepare SMPs, the mitochondrial pellet was resuspended in 30 mM potassium phosphate buffer, pH 7.0, and sonicated three times, each consisting of a 30-s pulse burst, at 1-min intervals at 4 °C.
The sonicated mitochondria were centrifuged at 8,250 × g for 10 min to remove the unbroken organelles; the supernatant was recentrifuged at 80,000 × g for 45 min, and the resulting pellet was washed and resuspended in 0.1 M phosphate buffer, pH 7.4, as described previously (12).

Extraction and Quantitation of Coenzyme Q-CoQ was extracted from mitochondria using a hexane:ethanol mixture as described by Takada et al. (24). Briefly, 50 µL of mitochondrial suspension, containing ~100 µg of protein, and 50 µL of double-distilled H2O were mixed with 750 µL of hexane:ethanol (5:2) for 1 min using a vortex mixer. The mixture was centrifuged for 3 min at 1,200 × g, and 450 µL of the hexane layer was collected, dried under helium, and dissolved in 100 µL of ethanol. Quantitation of ubiquinones was performed by HPLC by the method of Katayama et al. (25). The ethanol extract (10-20 µL) was chromatographed on a reverse-phase C18 HPLC column (25.0 × 0.46 cm, 5 µm, Supelco), using a mobile phase consisting of 0.7% NaClO4 in ethanol:methanol:70% HClO4 (900:100:1) at a flow rate of 1.2 mL/min. The detectors consisted of an ESA Coulochem II electrochemical detector and a Waters Associates Model 440 absorbance detector set at a wavelength of 280 nm. The settings of the electrochemical detector were as follows: guard cell (upstream of the injector) at +200 mV, conditioning cell at −550 mV (downstream of the column), followed by the analytical cell at +150 mV. The concentrations of ubiquinones were estimated by comparison of the peak areas with those of standard solutions of known concentration.

Coenzyme Q Depletion and Repletion-Submitochondrial particles were depleted of native CoQ by pentane extraction, as described by Maguire et al. (26), and selectively repleted with exogenous CoQ9 or CoQ10. Aliquots of SMPs (100 µL containing ~250 µg of protein) were freeze-dried and extracted six times, each for 45 min at 4 °C, with 1 mL of pentane containing 15 µM α-tocopherol (for the last extraction, pure pentane was used). The pentane layer was removed by centrifugation and discarded. In some cases the pentane layer was collected, brought to dryness under a stream of helium, and resuspended in 100 µL of ethanol, and the CoQ content was measured by HPLC. Various amounts of CoQ9 or CoQ10 were added to pentane-extracted and/or freeze-dried SMPs, dried under helium, resuspended in 100 mM potassium phosphate buffer (pH 7.4), and sonicated for up to 3 s in a Branson 2200 sonicator. CoQ depletion and the incorporation of exogenous CoQ9 or CoQ10 into SMP membranes were confirmed by hexane:ethanol extraction and HPLC analysis, as described above.

Measurement of Superoxide Anion Radical Generation-O2•− generation by SMPs was measured as the superoxide dismutase-inhibitable reduction of acetylated ferricytochrome c (27), as described previously (12). The reaction mixture contained 10 µM acetylated ferricytochrome c, 6 µM rotenone, 1.2 µM antimycin A, 100 units of superoxide dismutase/mL (in the reference cuvette), and 10-100 µg of SMP protein in 100 mM potassium phosphate buffer, pH 7.4. The reaction was started by adding 7.5 mM succinate, and the reduction of acetylated ferricytochrome c was followed at 550 nm.
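Converting the A550 slope from this assay into a specific rate requires the reduced-minus-oxidized extinction coefficient of cytochrome c at 550 nm. The sketch below is our illustration, assuming the commonly used value of ~21 mM-1 cm-1 and a 1-cm path; the coefficient actually applied should be taken from the cited assay (ref. 27).

```python
# Sketch: convert the SOD-inhibitable A550 slope into an O2.- generation rate.
# Assumes a 1-cm light path and a reduced-minus-oxidized extinction coefficient
# of ~21 mM^-1 cm^-1 for cytochrome c at 550 nm (an assumed, commonly used value).

EPSILON_MM_CM = 21.0  # mM^-1 cm^-1; assumption, check ref. 27 for the exact value

def superoxide_rate(dA550_per_min: float, volume_ml: float,
                    protein_mg: float, path_cm: float = 1.0) -> float:
    """SOD-inhibitable O2.- generation, in nmol/min/mg SMP protein."""
    mM_per_min = dA550_per_min / (EPSILON_MM_CM * path_cm)  # cyt c reduced
    return mM_per_min * volume_ml * 1000.0 / protein_mg     # mM -> nmol

# e.g. a slope of 0.010 A/min in a 1 mL cuvette with 50 µg SMP protein:
print(f"{superoxide_rate(0.010, 1.0, 0.05):.1f} nmol O2.-/min/mg")
```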
Measurement of Oxygen Consumption-The rate of respiration of submitochondrial particles was measured polarographically using a Clark-type electrode at 37 °C. The incubation mixture, used to measure state 4 respiration, consisted of buffer (154 mM KCl, 3 mM MgCl2, 10 mM KPO4, 0.1 mM EGTA, pH 7.4) and 30-100 µg of SMP protein; 7 mM succinate and/or 7 mM NADH were used as substrates, and 1.2 µM antimycin A, 6 µM rotenone, or 0.5 mM TTFA were employed as specific respiratory inhibitors.

RESULTS

Variations in the Distribution of CoQ Homologues in Mitochondria of Different Species-Comparisons of the concentrations of CoQ9 and CoQ10 extracted from heart mitochondria were made in nine different mammalian species, namely mouse, rat, guinea pig, rabbit, pig, goat, sheep, cow, and horse. The data, presented in Table I and Fig. 1, indicate that both the total and the relative concentrations of CoQ9 and CoQ10 in heart mitochondria vary greatly among species. The total concentration of mitochondrial CoQ, i.e. CoQ9 + CoQ10, varied about 2-fold in different species, with the rank order: … Although all nine species examined in this study contained both CoQ9 and CoQ10, the ratio of CoQ10/CoQ9 varied >600-fold. In species such as the mouse and the rat, almost 90% of mitochondrial CoQ occurred as CoQ9, while in the guinea pig CoQ9 and CoQ10 were present in roughly equal amounts. In mitochondria from rabbit, pig, goat, sheep, cow, and horse, CoQ10 was the predominant form, with CoQ9 constituting ~1.3 to 4.0% of the total CoQ content.

Correlation between CoQ and Superoxide Anion Radical Generation-To determine the relationship between mitochondrial CoQ content and the rate of O2•− generation in different species, the amounts of CoQ9 and CoQ10 were plotted against the average rates of O2•− generation by SMPs, partially determined in the context of previous studies (12). As shown in Fig. 1A, the amount of CoQ9 was directly correlated, and that of CoQ10 (Fig. 1B) was inversely correlated, with the rate of O2•− generation.

Effect of CoQ Depletion on O2•− Generation in Rat Heart SMPs-Repeated extraction with pentane was found to exponentially deplete the amount of native CoQ9 from the rat heart SMPs (Fig. 2, inset); the amount remaining after six serial extractions was about 4.5% of the total amount extractable by hexane. In contrast, apparently owing to the much lower natural content of CoQ10, only three extractions with pentane were sufficient to deplete SMPs of CoQ10 to a level below the detection threshold of 0.2 µM (i.e. 0.015 nmol/mg of SMP protein). To determine the effect of the pentane extractions on the functional state of the SMPs, rates of oxygen consumption and O2•− generation were determined after each extraction procedure. The rate of succinate-supplemented oxygen consumption was highest in the unextracted SMPs, decreasing linearly following each extraction procedure and reaching 25% of the initial value after seven successive extractions (Fig. 2). Addition of antimycin A and TTFA greatly reduced (to <2%) the rate of oxygen consumption by the depleted SMPs, whereas rotenone had no effect, indicating that the O2 consumption observed was specifically due to succinate oxidase activity. NADH did not, in most instances, stimulate the rate of oxygen consumption by the depleted SMPs. A similar study was conducted on the effect of the various pentane extractions on the rate of O2•− generation by the SMPs. Again, the rate of O2•− generation was highest in the unextracted SMPs and progressively declined with each sequential pentane extraction, reaching 45% of the control value after six extraction procedures (Fig. 3).
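The exponential depletion can be summarized by a single per-extraction retention factor. The back-of-envelope sketch below is ours: it infers that factor from the ~4.5% of CoQ9 remaining after six extractions reported above, rather than fitting the actual data in Fig. 2.

```python
# Back-of-envelope sketch: per-extraction CoQ retention r implied by
# exponential depletion, Q_n = Q_0 * r**n. The 4.5%-after-six-extractions
# figure is from the text; r is inferred here, not fitted to Fig. 2.

def retention_per_extraction(fraction_left: float, n_extractions: int) -> float:
    return fraction_left ** (1.0 / n_extractions)

r = retention_per_extraction(0.045, 6)
print(f"r = {r:.2f}; each pentane extraction removes ~{1 - r:.0%} of CoQ9")
for n in range(7):
    print(f"after {n} extractions: {100 * r ** n:5.1f}% CoQ9 remaining")
```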
Overall, the results of the depletion experiments indicated that even after six or seven serial extractions with pentane the SMPs exhibited succinate oxidase activity and were able to generate O2•−, albeit at rates lower than those of the unprocessed SMPs. Effects of Reconstitution of Rat Heart SMPs with CoQ Homologues—Rat heart SMPs that had been extracted with pentane six times, as described above, were reconstituted with different amounts of CoQ9 and CoQ10. Reconstitution with increasing amounts of CoQ9 or CoQ10 caused an initial steep increase in the succinate-supported rate of oxygen consumption, which was followed by a plateau (Fig. 4B, inset). No significant differences in the rates of oxygen consumption were observed between the SMPs reconstituted with equal amounts of CoQ9 or CoQ10. Reconstitution of the depleted rat heart SMPs with increasing amounts of CoQ9 resulted in an initial sharp rise in the rate of O2•− generation, followed by a more gradual increase; at the highest concentrations of repleted CoQ9, the rates exceeded those obtained with equal amounts of CoQ10 (Fig. 4A, inset). The differences between CoQ9- and CoQ10-reconstituted SMPs emerged only at concentrations considerably greater than the in vivo level (Table I).

FIG. 2. Rates of oxygen consumption in rat heart SMPs after partial depletion of CoQ. Freeze-dried SMPs were depleted of their native CoQ homologues by repeated pentane extractions. After each extraction procedure, the rate of oxygen consumption was measured polarographically with a Clark-type electrode using 7 mM succinate as a substrate. The inset shows the SMP CoQ content remaining after each extraction with pentane, determined in hexane:ethanol extracts by HPLC. Data are mean ± S.E. of three independent experiments; S.E. is not shown in the inset.

FIG. 3. Rates of O2•− generation in rat heart SMPs following CoQ depletion. Freeze-dried SMPs were depleted of their native CoQ homologues by repeated pentane extractions. After each extraction procedure, the rate of O2•− generation was measured as superoxide dismutase-inhibitable reduction of acetylated ferricytochrome c as described in the legend of Fig. 1. The CoQ content of SMPs after successive extractions is shown in Fig. 2.

Effects of Reconstitution with CoQ9 and CoQ10 on Bovine Heart SMPs—In contrast to the rat, bovine cardiac SMPs contain a relatively high amount of CoQ10 and a small amount of CoQ9 (see Table I). Depletion of bovine SMPs by six serial extractions with pentane removed 96% of the CoQ10 and virtually all of the detectable CoQ9. Reconstitution of these depleted SMPs with varying concentrations of CoQ9 or CoQ10 revealed different patterns for the two homologues with respect to the rates of oxygen consumption and O2•− generation. Augmentation of SMPs with relatively low amounts of CoQ9 or CoQ10 caused a sharp increase in both the rate of oxygen consumption (Fig. 5B, inset) and O2•− generation (Fig. 5A, inset), but at higher concentrations these rates leveled off. Within the physiological range of CoQ content (Table I), there were no differences in the rates of O2•− generation between SMPs reconstituted with CoQ9 or CoQ10 (Fig. 5A, inset). However, the maximal rates of O2•− generation were greater for CoQ9 than for CoQ10. For example, in bovine SMPs reconstituted with 50 nmol of CoQ9, the rate of O2•− generation was 35% greater than in SMPs reconstituted with an equal amount of CoQ10 (Fig. 5A).
To further determine whether CoQ9 and CoQ10 content above the in vivo level had a different effect on the rate of O2•− generation, freeze-dried unextracted bovine SMPs were augmented with CoQ9 or CoQ10. As shown in Fig. 6, the rates of O2•− generation were stimulated to a greater extent by the addition of CoQ9 than of CoQ10.

FIG. 4. Rates of oxygen consumption and O2•− generation in CoQ-depleted/reconstituted rat heart SMPs. Freeze-dried SMPs were depleted of native CoQ homologues by six repeated pentane extractions and reconstituted with specific amounts of CoQ9 or CoQ10 in pentane. The reconstituted SMPs were dried and suspended in phosphate buffer, and rates of O2•− generation, shown in A, were measured as superoxide dismutase-inhibitable reduction of acetylated ferricytochrome c. Rates of oxygen consumption, shown in B, were determined polarographically with a Clark-type electrode using 7 mM succinate as a substrate. The insets depict the relationship between the rates of O2•− generation and oxygen consumption by SMPs and CoQ concentrations within the physiological range. Data are mean ± S.E. of three independent experiments.

FIG. 5. Rates of oxygen consumption and O2•− generation in CoQ-depleted/reconstituted bovine heart SMPs. Freeze-dried SMPs were depleted of native CoQ by six repeated extractions with pentane and reconstituted with specific amounts of CoQ homologues as described in the legend of Fig. 4.

Results of this study should be interpreted in light of the fact that the preparatory procedures involving freeze-drying and pentane extraction, although widely employed (15-17, 26, 27), have an irreversible effect on the functional state of SMPs. For example, freeze-drying of SMPs, followed by depletion and reconstitution with the original (natural) amount of CoQ, does not fully restore the initial activities (28), although the residual alterations are relatively minor. Nevertheless, the relative length of the polyisoprenoid chain, and the resultant effects on the hydrophobicity of the molecule, have been shown to affect the location of the molecule within the phospholipid bilayer of the cell membrane. Although the relative positions of CoQ9 and CoQ10 in the phospholipid bilayer have not been precisely determined, the CoQ homologues with relatively short polyisoprenoid chains are believed to lie closer to the surface of the bilayer, whereas the long-chained ones are thought to be nearer to the center of the bilayer (29-32). For example, studies by Kagan et al. (33) have shown that short-chain ubiquinols are relatively more efficient in inhibiting Fe2+-ascorbate-induced lipid peroxidation, suggesting that the polyisoprenoid chain length affects the interaction between the quinols and the ROS present in the aqueous phase. Studies by Matsura et al. (34) on rat and guinea pig hepatocytes also indicate major differences in antioxidant efficiency between reduced CoQ9 and reduced CoQ10. In response to the hydrophilic radical initiator 2,2'-azobis(2-amidinopropane) dihydrochloride, CoQ9 was found to be preferentially oxidized compared with the CoQ10 homologue, and thus may be more accessible to ROS present in the surrounding aqueous phase. This mechanism, albeit hypothetical, may underlie the relatively higher rates of O2•− generation in highly CoQ9-rich SMPs observed in this study. Functional differences between CoQ9 and CoQ10 have also been reported by Edlund et al.
(35), who found that treatment of mumps virus-infected cultured neurons with CoQ10 protected the cells from degeneration, whereas no effects were observed in response to CoQ9 treatment. Results of the present in vitro studies demonstrate that CoQ9 and CoQ10 behave similarly within the physiological range of concentrations, and that the differences in the rates of O2•− generation observed among species cannot be explained on the basis of relative CoQ9 or CoQ10 content.
Inflammatory, anti-inflammatory and regulatory cytokines in relatively healthy lung tissue as an essential part of the local immune system

Background. The innate and adaptive immune systems in the lungs are maintained not only by immune cells but also by non-immune tissue structures, which locally provide wide intercellular communication networks and regulate the local tissue immune response.

Aims. The aim of this study was to determine the appearance and distribution of inflammatory, anti-inflammatory and regulatory cytokines in relatively healthy lung tissue samples.

Material and Methods. We evaluated lung tissue specimens obtained from 49 relatively healthy study subjects aged 9-95 years. Tissue samples were examined by hematoxylin and eosin staining. Interleukin-1 (IL-1), interleukin-4 (IL-4), interleukin-6 (IL-6), interleukin-7 (IL-7), and interleukin-10 (IL-10) were detected by an immunohistochemistry (IMH) method. The number of positive structures was counted semiquantitatively by microscopy. Non-parametric tests were used to analyse the data.

Results. IL-1-positive cells were mostly found in the bronchial cartilage and alveolar epithelium. Immunoreactive lung macrophages were also found. IL-4-, IL-6-, IL-7-, and IL-10-containing cells were found in the bronchial epithelium as well (in addition to the structures listed above). The number of positive structures varied from occasional to moderate but was graded higher in cartilage. Overall, fewer IL-1-positive cells and more IL-10-positive cells were found. Almost no structures positive for any of the examined cytokines were found in connective tissue and bronchial glands.

Conclusions. Relatively healthy lung tissue exhibits anti-inflammatory response patterns. The cytokine distribution and appearance suggest persistent stimulation of cytokine expression in lung tissue and indicate the presence of local regulatory and modulating patterns. The pronounced cytokine distribution in bronchial cartilage suggests the involvement of a compensatory local immune response in the supporting tissue.

INTRODUCTION

The complexity of the lung immune system extends beyond immunocompetent cells and their wide intercellular signalling networks. Lung tissue immunity harbours the structures needed for pathogen recognition while also maintaining various types of immune response and tissue repair. In general, the innate and adaptive immune system of the lung in its steady state exhibits a generally anti-inflammatory environment 1. Failure of immune response regulation might disrupt normal lung tissue architecture and structure through minor and major structural changes 2.

The barrier functions of epithelial tissue in the lung environment are essentially responsible for a major portion of tissue homeostasis 3. Epithelial tissue works as an outline barrier in the respiratory system 3,4. Apart from true immune cells, epithelial tissue and its communication with other tissue structures design and shape the signalling pathways that produce local immunity 5. We can highlight a novel understanding of the immune system in which immunogenic properties have been described in cells other than immunocompetent cells. Interestingly, epithelial tissue is among the structures that possess immunogenicity 6,7.

The airway epithelium works as a physical barrier in the lungs to maintain effective mucociliary clearance 8,9. Mucus covers the upper surface of the epithelium and actively serves as an environment capable of protecting the airways from possible inhaled harm 8,10.
Macrophages resident in non-immune tissue participate in tissue-specific homeostasis. Interestingly, tissue-specific macrophages, able to secrete an extensive range of biological molecules, provide unique landscapes in both steady and active states, mirroring the ontogenetic and microenvironmental background related to their role in shaping local homeostasis 11,12. Moreover, tissue macrophages residing in certain organ-specific environments display high plasticity and may shape their cellular identity 13.

The interleukin-1 (IL-1) family has been found to have auto-inflammatory and autoimmune properties; IL-1 itself is a pro-inflammatory cytokine. IL-1 steers the innate and adaptive immune system towards the initiation and amplification of the immune response into inflammation and regulates tissue damage. The recruitment of macrophages in response to the presence of IL-1 in tissue is a local immune response 14.

IL-4 is a key cytokine in the development of allergic inflammation and an important local factor participating in cell signalling upon allergen exposure 15. IL-4 increases fibroblast proliferation and the secretion of extracellular matrix components, leading to fibrosis 16. IL-4 also strongly promotes the pathogenesis of both allergy and asthma 17.

IL-6 is a pro-inflammatory cytokine; however, important anti-inflammatory and regenerative properties of IL-6 have also been described 18. Furthermore, the IL-6-mediated profibrotic properties, acting through the direct activation of fibroblasts and numerous indirect mechanisms, have been reviewed 16,19.

IL-7 is also a cytokine with pleiotropic functions; however, IL-7 mostly functions as an immunoregulatory cytokine. Extensive research has established IL-7 as a regulator of overall lymphocyte development, maintenance and homeostasis 20. IL-7 increases lymphoid cell lineage repopulation and generation, serving as a lymphoid regenerative factor 21. IL-7 is involved in the early development and further maintenance of T cells 17,21.

Immunity concepts have been revisited with interest when considering the possible non-immune cell response to various inflammatory mediators, as well as the production of their own pro-inflammatory, regulatory and anti-inflammatory mediators, not only by macrophages and surface structures but also deeper in the tissue underlying the epithelium.

Thus, the aim of the present study was to perform a routine histological analysis and determine the appearance and relative distribution of interleukins (IL)-1, IL-4, IL-6, IL-7, and IL-10 in relatively normal lung tissue material.

Patients

We evaluated lung tissue material obtained during post mortem autopsies of forty-nine relatively healthy study subjects aged 9 to 95 years. The diagnoses of the study subjects included heart disease (mostly sudden cardiac death) and death by either intentional self-harm (suicide) or unintentional major injury due to trauma. Tissues with surgically and histologically clear resection margins from suspected cancer patients were included in this study. Patients with severe respiratory conditions and failure, with any history of acute or severe chronic respiratory pathology, or treated with medications that could confound the results of this study, as well as tissue samples with pathological findings on routine histological analysis, were not included in this study.
All authors hereby declare that all experiments were examined and approved by the appropriate ethics committee and were therefore performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki. This study was approved by the Ethical Committee of Pauls Stradins Clinical University Hospital dated January 23, 2013.

Methodology

Soft tissue pieces with an approximate size of 1 cm3 were taken during the autopsy and fixed in a mixture of 2% formaldehyde and 0.2% picric acid in 0.1 M phosphate buffer (pH 7.2). Afterwards, the tissue samples were rinsed for 12 h in Tyrode solution containing 10% sucrose. The tissue specimens were then embedded in paraffin. Tissue sections six to seven micrometres (μm) thick were cut for staining. Routine histological staining with hematoxylin and eosin was performed for each case to obtain review photographs of each tissue sample. All tissue specimens were examined by bright-field microscopy.

From sixty-nine tissue samples in total, we excluded twenty specimens in which pathological findings (e.g., massive immune cell infiltration, bleeding, emphysema) were observed by bright-field microscopy despite a clear medical history.

We examined the appearance and distribution of cytokine-containing structures; the relative numbers of immunohistochemically positive structures in the epithelial tissue (specifically epithelial cells, both ciliated and secretory), connective tissue (fibroblasts), bronchial glands (both serous and mucous glandulocytes), bronchial cartilage (chondroblasts, chondrocytes), and alveolar epithelium, as well as cytokine-containing alveolar macrophages, were graded semi-quantitatively 23,24.

To perform the statistical analysis, we used non-parametric statistical methods.
• For descriptive statistics, the median (Mdn) value and interquartile range (IQR) were determined.
• The Wilcoxon matched-pairs signed-rank test was conducted to determine whether there was a difference in the number of positive structures of one examined cytokine (dependent variable) between two different tissue types (independent variables, i.e., related groups), and to evaluate the positive ranks, negative ranks and ties. The data are medians unless otherwise stated.
• Spearman's rank-order correlation was run to determine the relationship between the number of cytokine-containing structures in one tissue type and in a different tissue, as well as with age. We evaluated the correlation coefficient as follows: the closer rs (Spearman's rho) is to +1, the stronger the positive correlation, and the closer rs is to −1, the stronger the negative correlation. An |rs| value of 0.00-0.30 is regarded as a negligible correlation; 0.30-0.50 as low; 0.50-0.70 as moderate; 0.70-0.90 as strong; and 0.90-1.00 as a very high (very strong) correlation 25.

The statistical analysis was performed using the statistical program SPSS Statistics, version 22.0 (IBM Company, Chicago, USA). In all statistical analyses, P values < 0.05 were considered statistically significant.
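The two tests described above translate directly into a few lines of SciPy; this is an illustrative sketch only (the study itself used SPSS), and the toy scores and variable names are assumptions:

```python
# Illustrative sketch of the statistics described above, in Python/SciPy
# rather than the SPSS actually used; the paired toy scores are invented.
from scipy.stats import wilcoxon, spearmanr

# One semi-quantitative score per patient for two tissue types (toy data).
bronchial_cartilage = [2.0, 1.5, 2.0, 1.0, 2.5, 2.0]
bronchial_glands = [0.5, 0.0, 0.5, 0.5, 1.0, 0.0]

# Wilcoxon matched-pairs signed-rank test on scores from the same patients.
w_stat, w_p = wilcoxon(bronchial_cartilage, bronchial_glands)

# Spearman's rank-order correlation between the two tissue groups.
rho, rho_p = spearmanr(bronchial_cartilage, bronchial_glands)

def interpret(rho):
    """Interpretation bins for |rho| as defined in the Methods."""
    r = abs(rho)
    if r < 0.30:
        return "negligible"
    if r < 0.50:
        return "low"
    if r < 0.70:
        return "moderate"
    if r < 0.90:
        return "strong"
    return "very high"

print(f"Wilcoxon p = {w_p:.3f}; rho = {rho:.2f} ({interpret(rho)}), p = {rho_p:.3f}")
```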
Routine histology

Among the minor pathological findings, we observed emphysematous lung tissue regions no larger than one visual field of bright-field microscopy in two elderly patients. In the tissue material of three patients, increased smooth muscle cell proliferation was seen as a minor pathological finding in the wall of a conducting bronchus or bronchiole. Neoangiogenesis with a few small-calibre capillaries in the lung parenchyma was seen in one case. Sclerotic blood vessels were found in the tissue material of three elderly patients; however, these findings were rare across the whole material. We found hemosiderin-containing macrophages and dust-containing macrophages (Fig. 1a) in three patients. Interestingly, we found goblet cell hyperplasia in the tissue material of one patient, as a fragment of the lining epithelium not exceeding one visual field at magnification ×200. The goblet cell hyperplasia was seen in the lining epithelium covering the wall of a large-calibre bronchus, where normal ciliated cells were partially replaced by an abundance of goblet cells.

Immunohistochemistry

The number of IL-1-positive cells in the bronchial epithelium and connective tissue varied from no positive structures (0%) in the visual field to few-to-moderate (+/++ or 37.5%) IL-1-containing structures (Fig. 1b). Almost no structures positive for IL-1 (0%) were detected in bronchial glands. Surprisingly, IL-1-containing structures in the bronchial cartilage were found more frequently. The number of IL-1-positive alveolar epithelial cells varied from no positive structures in the visual field to a moderate number (++ or 50%). The count of IL-1-immunoreactive macrophages was also variable, numbering from occasional (0/+ or 12.5%) to almost numerous (+++ or 75%) IL-1-containing macrophages. In summary, more IL-1-positive tissue structures were found in bronchial cartilage and alveolar epithelium, ranging from occasional (0/+ or 12.5%) to few (+ or 25%) immunoreactive structures in the visual field. No positive (0%) to only a few (+ or 25%) positive cells were found in the bronchial epithelium, connective tissue and bronchial glands.

The appearance and distribution of IL-6 among the tissue groups varied, ranging from occasional (0/+ or 12.5%) structures in the connective tissue and bronchial glands to mostly few (+ or 25%) structures in the alveolar epithelium. IL-6-containing structures were more prominent in the bronchial epithelium (Fig. 1c) and in the bronchial cartilage, mostly ranging from few to moderate (+/++ or 37.5%). In addition, few to moderate IL-6-containing macrophages were found in the majority of the examined tissue specimens.

Similar findings were seen for the appearance and distribution of IL-7-positive structures (Fig. 1d), except for the IL-7-containing alveolar epithelium, where few to moderate (+/++ or 37.5%) IL-7-positive epithelial cells were found. Interestingly, moderate (++ or 50%) to numerous (+++ or 75%) IL-7-positive structures were found in bronchial cartilage, the highest grading among all the examined cytokines.

Overall, the numbers of IL-4-, IL-6-, IL-7-, and IL-10-positive structures were considerably higher in the cartilage, alveolar epithelium, and bronchial epithelium, with a more pronounced number of positive lung macrophages. No positive to few structures were found in the connective tissue and bronchial glands (Table 1).
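For reference, the grades quoted in parentheses throughout the Results map onto approximate percentages of positive structures per visual field; a small lookup sketch, in which the numeric scores are an assumed coding consistent with the medians reported in the statistics below:

```python
# Semi-quantitative grading scale used in the Results, with the percentage
# equivalents quoted in parentheses; the numeric scores are an assumed coding
# that matches the medians (0.5, 1.0, 1.5, 2.0) cited in the next section.
GRADE_TO_PERCENT = {
    "0": 0.0,      # no positive structures in the visual field
    "0/+": 12.5,   # occasional
    "+": 25.0,     # few
    "+/++": 37.5,  # few to moderate
    "++": 50.0,    # moderate
    "+++": 75.0,   # numerous
}
GRADE_TO_SCORE = {"0": 0.0, "0/+": 0.5, "+": 1.0, "+/++": 1.5, "++": 2.0, "+++": 3.0}

def score(grade):
    """Numeric score for a semi-quantitative grade (e.g. for computing medians)."""
    return GRADE_TO_SCORE[grade]

print(score("+/++"), GRADE_TO_PERCENT["+/++"])  # -> 1.5 37.5
```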
Statistical data

We found a statistically significant (P < 0.05) lower number of IL-1-positive structures in the bronchial glands (0), connective tissue and bronchial epithelium (0.5). Higher numbers of IL-1-containing structures were found in the bronchial cartilage, and more alveolar macrophages (1.0) were observed to contain IL-1. IL-1-positive cells in the bronchial glands were fewer than in the bronchial cartilage and fewer than the number of positive alveolar macrophages (Fig. 2).

A statistically significant (P < 0.05) lower number of IL-4-containing structures was found in the connective tissue, bronchial glands and bronchial cartilage (0.5); higher numbers of IL-4-positive structures were found in the alveolar epithelium (1.0) and bronchial epithelium, and there was a higher number of IL-4-containing macrophages (1.5). IL-4-positive cells in the bronchial cartilage were fewer than in the bronchial epithelium and fewer than the number of positive alveolar macrophages (Fig. 3).

Significantly (P < 0.05) fewer IL-6-positive structures were found in the bronchial glands (0) and connective tissue (0.5); more IL-6-positive structures were found in the bronchial epithelium, bronchial cartilage, and alveolar epithelium (1.0), as well as more IL-6-containing alveolar macrophages (1.5). IL-6-positive cells in the bronchial glands were fewer than in the alveolar epithelium and fewer than the number of positive alveolar macrophages (Fig. 4).

A statistically significant (P < 0.05) lower number of IL-7-containing structures was found in the connective tissue, bronchial glands (0.5) and alveolar epithelium (1.0). Higher numbers of IL-7-containing structures were found in the bronchial epithelium (1.5) and bronchial cartilage (2.0), together with a higher number of IL-7-positive alveolar macrophages (1.5). IL-7-positive cells in the bronchial glands were fewer than in the bronchial epithelium or bronchial cartilage and fewer than the number of positive alveolar macrophages (Fig. 5).

Similar to the IL-7 distribution among the examined tissues, a statistically significant (P < 0.05) lower number of IL-10-containing structures was found in the bronchial glands (0.5), connective tissue and alveolar epithelium (1.0). Higher numbers of IL-10-containing structures were found in the bronchial epithelium (1.5) and bronchial cartilage (2.0), together with a higher number of IL-10-positive alveolar macrophages (1.5). IL-10-positive cells in the bronchial glands and connective tissue were fewer than those in the bronchial epithelium and bronchial cartilage (Fig. 6).

A statistically significant (P < 0.05) positive correlation with a strong relationship was found mainly between the numbers of IL-1- and IL-10-positive structures in the alveolar epithelium, as well as between the numbers of IL-10-positive structures in the bronchial epithelium and bronchial glands and the numbers of structures positive for IL-4, IL-6 and IL-7. Interestingly, there were strong positive correlations between IL-1-positive structures in the alveolar epithelium and IL-10-positive structures in the alveolar epithelium (rs = 0.778, P < 0.001), the bronchial epithelium (rs = 0.715, P = 0.001), and the bronchial glands (rs = 0.711, P = 0.001), respectively (Table 2).

No statistically significant (P < 0.05) positive or negative correlations were found between the number of positive structures in any of the examined tissue groups and age.
DISCUSSION

Our study reveals a more pronounced number of IL-1-containing structures in bronchial cartilage, as well as a higher number of IL-1-containing alveolar macrophages; fewer IL-1-positive structures were found in the bronchial epithelium, connective tissue and alveolar epithelium. We also measured a strong correlation between the IL-1 content of the alveolar epithelium and other tissue groups containing predominantly IL-10, as well as IL-4, IL-6, and IL-7. Although we may speculate that IL-1 regulates all the other cytokines, the release of IL-1 is itself regulated by other pro-inflammatory, regulatory and anti-inflammatory cytokines; therefore, the causality of one cytokine affecting another must be estimated in the context of multiple factors. In summary, epithelial cells are both activated by and further release the following cytokines: IL-1α, IL-4, IL-6 and IL-10 3,5-7,26. As the first exposed surface, the airway epithelium actively regulates local immunity. Upon antigen exposure, activated epithelial cells might recruit locally residing immunocompetent cells. Epithelial cells use autocrine and paracrine signalling pathways to provide intercellular and intracellular communication 3,4. In summary, epithelial tissue serves as a first-line defence system of the innate immune system owing to its continuous exposure to airborne antigens, sensing potential danger and harm while initiating, regulating and suppressing adaptive immunity 3. Pro-inflammatory cytokines (IL-1, IL-6) and chemokines produced by epithelial cells recruit and activate immune cells in the submucosa 5. Importantly, IL-1α specifically boosts epithelial cell participation in the immune response through autocrine stimulation 8. However, to suppress the immune responses caused by the release of IL-1, the epithelium might release the anti-inflammatory cytokine IL-10 3. Therefore, we suggest that in relatively normal lung tissue the anti-inflammatory environment is produced by the down-regulation of pro-inflammatory cytokines.

We found a higher number of IL-4-containing structures in the bronchial epithelium and alveolar epithelium but a slightly lower number in the other tissue groups and in alveolar macrophages. In lung tissue, IL-4 upregulates adhesion molecules on the epithelium while regulating the inflammatory response, promotes increased airway responsiveness, and promotes goblet cell hyperplasia; moreover, enhanced immunoglobulin (Ig) E switching, mast cell involvement, and eosinophilia, as well as mucus hyperproduction and overall higher airway tissue responsiveness, have been described 8,17. Although we observed thickening of the basal membranes in some of the patients, generally no allergic changes were found in the tissue, suggesting that IL-4 production occurs for reasons other than allergic conditions.
Our study shows a more pronounced number of IL-6-containing structures in the alveolar epithelium, followed by the number of IL-6-positive structures in the bronchial cartilage and the IL-6-containing alveolar macrophages. IL-6 was found less often in the connective tissue and the bronchial glands, where rare structures and almost none were found, respectively. IL-6 is a pleiotropic cytokine controlling T cell infiltration into the lungs, increasing mucus secretion, up-regulating mast cell proliferation, and inducing contraction of lung smooth muscle cells. Importantly, IL-6 mediates the pro-inflammatory, pro-angiogenic and immunomodulatory roles of other cells 19. IL-6 is produced early during infection by uninfected rather than infected cells, suggesting a regulatory role 26. As we found, the major sources of IL-6 are the structures most exposed to airborne antigens, and the function of IL-6 may be better understood as an intermediate, regulatory step in subsequent signalling rather than as an initiating factor of immune response induction upon antigen stimulation.

A higher number of IL-7-containing structures was found in bronchial cartilage, followed by the bronchial epithelium and the IL-7-positive alveolar macrophages. Fewer IL-7-containing structures were found in the alveolar epithelium, connective tissue and bronchial glands. Although IL-7 is secreted mostly by immune cells, as well as by supporting cells affecting the development, differentiation and functioning of lymphocytes 21, our results suggest other, non-immune sources. Surprisingly, bronchial cartilage was the most pronounced source of IL-7 affecting local regulation. However, the role of IL-7 in local immunity is still unclear because studies describing this cytokine in tissues other than lymphoid tissue are scarce.

Our study shows that the predominant distribution of IL-10 is higher in bronchial cartilage, ranging from few to numerous IL-10-containing chondrocytes, followed by the IL-10-containing alveolar macrophages and the number of structures in the bronchial epithelium. Fewer IL-10-containing structures were found in the alveolar epithelium and bronchial glands. Few to moderate numbers of interleukin-containing alveolar macrophages were found, ranging from no positive alveolar macrophages for some cytokines to numerous IL-10-containing alveolar macrophages; a lower number of IL-1-containing alveolar macrophages was found. Interestingly, our results reveal that all structures first exposed to antigens from inhaled air (bronchial epithelium, alveolar epithelium, and alveolar macrophages) contained less IL-1 and more IL-10, directing their own activity in an anti-inflammatory manner. The anti-inflammatory response to initiated inflammation reflects the signalling balance between pro-inflammatory and anti-inflammatory cytokines, suggesting that cells capable of expressing IL-10 are powerful sources steering the immune response in the anti-inflammatory direction 27.
As we examined relatively normal tissue without any clinical or microscopic evidence of an acute inflammatory condition, our weak findings of IL-1-containing structures confirm the picture of relatively normal tissue that is nonetheless prepared to ignite a full-scale inflammatory immune response when required by antigen exposure. Alveolar macrophages, unique to lung tissue, play a pivotal role in maintaining local immunity. Moreover, alveolar macrophages elicit and further amplify the immune response 28,29. Importantly, when excited by an antigen, the amount of secreted anti-inflammatory cytokine IL-10 decreases, allowing the inflammatory response to predominate over the tissue anti-inflammatory response. Although epithelial tissue is also a source of cytokines, alveolar macrophages are overall more potent producers of inflammatory mediators than the epithelium 28. Antigen exposure causes an increase in the acute-phase inflammatory cytokines IL-1β and IL-6 produced by alveolar macrophages 6,7.

As an example of cells regulating the immune system, alveolar macrophages are constantly excited by direct exposure to inhaled antigens through breathing; however, the ensuing pro-inflammatory signalling is strictly controlled and regulated to protect lung tissue from the damage a full-scale immune response could cause. When not excited by incoming potentially harmful antigens, macrophages function in a relatively silent mode while remaining ready to be excited. In this state, macrophages suppress local immunity by producing low levels of cytokines and other signalling molecules 29. Although alveolar macrophages initiate and maintain the immune response in the inflammatory mode, contributing factors strongly affect their behaviour. Established intercommunication between alveolar macrophages and alveolar epithelial cells directly enhances immunomodulation in an immunosuppressive way 30; we found possible evidence of both paracrine and autocrine regulation, by neighbouring cells and by the cells themselves, respectively.
Although we did not analyse the cytokine content of the mucus covering the epithelium, we inspected the bronchial glands for all the examined cytokines. Except for a few cases in which a moderate number of interleukin-containing glandulocytes was found, we obtained an almost negative result, with either no positive cells or mostly rare interleukin-containing glandulocytes. Mucus secreted by both secretory cells and bronchial glands works as a complex defence system containing multiple protective molecules; we therefore hypothesize that glandulocytes in bronchial glands are not the primary source of the cytokines in mucus. Importantly, goblet cells, one of the secretory cell types, are resident and specific to the airway epithelium, where they actively participate in the innate and acquired immune responses ongoing throughout the nasal cavity, trachea, conducting bronchi and bronchioles 8,9. When induced by IL-13, IL-4, IL-6 and IL-7 in particular are secreted by goblet cells in significantly greater amounts than by ciliated cells, and these cytokines are expected to be found in the mucus layer as well. IL-1RA and IL-10 were also secreted by goblet cells in greater amounts at the apical surface and basolaterally, respectively 31. Goblet cells, like the other cell types in the tissue specimens processed for immunohistochemistry, were analysed by semi-quantitative grading and showed mostly low-grade staining. This partly fits our findings in the bronchial epithelium, where mostly up to a moderate number, but in some cases up to numerous, interleukin-containing structures were found.

We found rare to few interleukin-containing structures in the connective tissue, with a greater number of IL-10-containing fibroblasts. Of the many pro-inflammatory, modulating and anti-inflammatory cytokines, chemokines and cell adhesion molecules, lung fibroblasts are known to produce IL-1, IL-4, and IL-6. Airway smooth muscle cells have been described to secrete the cytokines IL-1, IL-6, and IL-10 (ref. 32). Following damage to their cellular structure, epithelial cells release IL-1α to trigger inflammatory responses in the deeper connective tissue fibroblasts 33. Although epithelial tissue as a lining structure is the more potent producer of cytokines, the underlying connective tissue shows low but appreciable numbers of interleukin-containing fibroblasts. A more prominent number of IL-10-containing fibroblasts was found compared with fibroblasts containing the other cytokines, suggesting that the anti-inflammatory environment extends deeper into the tissue.
Interestingly, we found a prominent source of IL-7 and IL-10 in the bronchial cartilage, where moderate to numerous hypertrophic chondrocytes, as well as chondroblasts, positive for IL-7 and IL-10 were found. Although early damage and pathogen recognition take place in the epithelial tissue and alveolar macrophages, other non-immune structures contribute greatly to the initiation, regulation and suppression of the immune response. The epithelium stimulates immune cells and ignites innate and adaptive immune signalling 3; moreover, alveolar macrophages readily initiate both local and systemic immune responses and recruit other immune mediators, leading to inflammation 28,29. However, the influence of deeper structures on initiating, regulating and suppressing immune responses is poorly evidenced. We suggest that the deeper tissue structures in the wall of a larger histological structure mostly regulate local immune responses by localizing pathological conditions (e.g., inflammation, infection, malignancies) to prevent their spread throughout the tissue.

We evaluated the relationship between the age of the individual and the number of interleukin-containing structures in all examined tissues, and no statistically significant (P < 0.05) correlations were found. Although cellular defects play an important role in the outcomes of aging, both local and systemic environments are reorganized towards inflammation 34. A persistent low-grade, chronic, systemic pro-inflammatory state described in aging is termed "inflammaging" or immunosenescence 35-37. A significant increase in IL-6 and IL-1β with increasing age has been observed, while IL-10 remained equivalent to measurements in younger study subjects; an overall basal increase of pro-inflammatory cytokines is observed with aging 38. Although we evaluated structures containing pro-inflammatory cytokines, we observed more IL-10-containing structures, suggesting that the predominance of anti-inflammation reflects controlling conditions that suppress and regulate the inflammatory environment in relatively healthy lungs regardless of age.

CONCLUSION

The predominance of IL-10-positive structures in all examined tissue groups indicates an anti-inflammatory response in relatively healthy lung tissue. The weak findings of IL-4-containing structures, except in structures more exposed to the outer environment, may reflect the absence of allergic changes in the tissue samples, mirroring ontogenetic aspects. The widespread findings of IL-6-positive structures suggest continuous stimulation of cytokine expression in lung tissue. The relative distribution of IL-7 in all tissue groups indicates local regulatory and modulating patterns involving non-immune cells and alveolar macrophages. The pronounced distribution of all the cytokines in bronchial cartilage suggests the involvement of a compensatory local immune response in the supporting tissues.

Table 1. Appearance and distribution of cytokines in 49 lung tissue specimens of relatively healthy humans.

Table 2. Summary of the Spearman's rank-order correlation analysis identifying strong relationships (0.70 < |rs| < 0.90) between the numbers of cytokine-containing structures in different tissues. The results are ordered in descending order of the correlation coefficient rs (Spearman's rho).
Physical activity and mortality in patients with dementia: 2009-2015 National Health Insurance Sharing Service data

The study aimed to investigate the survival rate of patients with dementia according to their level of physical activity and body mass index (BMI). A total of 5,789 patients with dementia were retrieved from the 2009-2015 National Health Insurance Sharing Service databases. Survival analysis was used to calculate the hazard ratio (HR) for physical activity and BMI. The study sample primarily comprised older adults (65-84 years old, 83.81%) and females (n = 3,865, 66.76%). Participants who engaged in physical activity had a lower mortality risk (HR = 0.91, p = 0.02). Compared to the underweight group, patients with dementia who had normal weight (HR = 0.86, p = 0.01), obesity (HR = 0.85, p = 0.03) or severe obesity (HR = 0.72, p = 0.02) demonstrated a lower mortality risk. This study emphasizes the significance of avoiding underweight and engaging in physical activity to reduce mortality risk in patients with dementia, highlighting the necessity for effective interventions.

Introduction

Dementia is a debilitating condition affecting millions of people worldwide. It is characterized by a decline in cognitive function and can have a significant impact on individuals' physical and mental well-being [1]. As the population ages, the prevalence of dementia is expected to rise, which highlights the need for effective interventions for patients with dementia [2].

Physical activity has been reported to be a potentially modifiable factor in determining the survival rates of patients with dementia [3]. Tolppanen, Solomon [4] found that higher levels of physical activity were associated with reduced progression of dementia among patients with Alzheimer's disease. In addition, Scherder, Van Paasschen [5] found that physical activity was associated with better executive function in elderly individuals with mild cognitive impairment, which is often a precursor to dementia. Another previous study reported that higher levels of physical activity over the life course were associated with better cognitive performance in old age and may help to protect against cognitive impairment and dementia [6]. A systematic review also found that exercise programs may improve cognitive function and activities of daily living for individuals with dementia [7]. While the evidence is limited, it suggests that exercise may be a promising intervention for individuals with dementia. However, although many studies report that physical activity reduces the progression of dementia and improves executive function in patients with dementia, research examining the association between physical activity and the risk of death in these patients is lacking. In other words, there is a lack of research reporting on whether physical activity can reduce mortality in patients with dementia.
In addition, body mass index (BMI) has also been reported as a potentially significant factor influencing the prevalence of dementia [7]. Albanese, Launer [8] reported that higher BMI was negatively associated with dementia prevalence. However, the association between late-life dementia and BMI is multifaceted and could be influenced by various factors, including age, sex, and the timing of BMI measurement. While both low and high BMI have been associated with an increased risk of dementia, high midlife BMI has been identified as a significant risk factor for dementia later in life [9-12]. Previous studies have extensively investigated the role of BMI as a risk factor for dementia in the general population [13,14]; in contrast, there has been limited focus on the relationship between BMI and outcomes among individuals already diagnosed with dementia. Further studies are needed to investigate the complex association between BMI and dementia in older adults.

Given the potential impact of physical activity and BMI on the survival rates of patients with dementia, further research in this area is needed. Therefore, this study aimed to investigate the survival rate of patients with dementia according to physical activity and BMI. Understanding the complex interplay between these factors could lead to the development of more effective interventions to improve health outcomes for patients with dementia. The hypotheses for this study are as follows: older adults with dementia who have high levels of physical activity are expected to exhibit a higher survival rate than those with low levels of physical activity; additionally, older adults with dementia who have a normal or higher BMI are anticipated to show a higher survival rate than those who are underweight.

Data sources

We utilized the 2009-2015 National Health Insurance Sharing Service (NHISS) databases provided by the National Health Insurance Corporation. NHISS, provided by the National Health Insurance Corporation in South Korea, offers the substantial advantage of a nationally representative large-scale dataset encompassing a wide array of health and medical information. NHISS is widely utilized to support healthcare policies and conduct academic research, boasting high levels of data reliability and quality [15]. The NHISS databases consist of health checkup cohort databases, sample cohort databases, senior cohort databases, customized research databases, health and disease indicators, and other data. In this study, we used the sample cohort database, which was collected from one million Korean residents eligible for health insurance and medical assistance. The sample cohort included the Long-Term Care, Birth and Death, Health Examination, and Eligibility and Insurance databases. The NHISS data are collected under Korean government oversight, encompassing mandatory insurance enrollment and medical records from healthcare institutions. Health authorities secure data for health statistics and research purposes through dedicated programs, utilizing healthcare information systems for the collection of medical records. The gathered information is anonymized or processed in accordance with personal information protection policies. More detailed information about the NHISS databases is available at https://nhiss.nhis.or.kr.
Access to the NHISS data proceeds as follows.
1. Registration: Register on the NHISS website to apply for data use. Enter personal information and complete the registration process.
2. Data Use Application: Apply for data use through the website's bulletin board or a designated procedure. The application must detail the purpose of the research, the types of data needed, and the period of use.
3. Ethics Review Approval: If the research involves human subjects, approval from the relevant Institutional Review Board (IRB) is required. A copy of the approval document should be included in the application process.
4. Review and Approval: The National Health Insurance Service reviews the application and the ethics review approval document, then decides on the approval for data use. Additional information may be requested during this process.
5. Data Use Fee Payment: If approved, a fee may be charged for some data. The fee varies depending on the type of data and the scope of use.
6. Data Provision: Once all procedures are completed and the fee is paid, the National Health Insurance Service provides access to the requested data. Data access is usually granted through permission to remotely access a computer within the National Health Insurance Service.
7. Data Use and Management: The provided data must be used only for the purposes specified in the application. Proper management is essential to ensure data security and privacy protection.
8. Results Reporting: After completing the research or project, reporting the results to the National Health Insurance Service may be required. The report ensures transparency in data use and serves as foundational material for future research.

A sample cohort from the NHISS database was used to investigate the effect of physical activity status, physical activity frequency, and BMI on patients with dementia. In this study, written informed consent from the participants was not sought, as the data utilized consisted of pre-existing, anonymized information. This study was approved by the Yonsei University Institutional Review Board (#1041849-202304-SB-062-01) and met the research exemption criteria of the participating institutions. Furthermore, our research exclusively involved adult participants, obviating the requirement for parental or guardian consent. We accessed the shared data for analysis from February 2023 to March 2023. The shared data had individual participant-identifiable information removed, and we conducted our analysis using assigned numerical identifiers to ensure participant anonymity. The study variables were extracted from the sample cohort by removing missing values and duplicate individual identification numbers. Lastly, we extracted patients with dementia from the long-term care database. As a result, a total of 5,789 patients were included in the final analytic dataset (Fig 1).
Independent variables

The independent variables were physical activity (high intensity, moderate intensity, or walking), the number of physical activities during the past week, age, and BMI. The physical activity variables were extracted from the Health Examination database and categorized into three groups: high intensity, referring to vigorous physical activity lasting 20 minutes or more per week; moderate intensity, indicating moderate physical activity lasting 30 minutes or more per week; and walking, consisting of walking for 30 minutes or more per week. Physical activity was coded as 1 when one or more types of physical activity were performed in the past week and 0 when none were performed. The number of physical activities was the sum of the high-intensity, moderate-intensity, and walking indicators (scores ranged from 0 to 2). Finally, the BMI value was extracted from the Health Examination database and coded into five groups using the Asia-Pacific BMI standard: underweight (BMI ≤ 18.4), normal weight (18.5 ≤ BMI ≤ 22.9), overweight (23.0 ≤ BMI ≤ 24.9), obesity (25.0 ≤ BMI ≤ 29.9), and severe obesity (BMI ≥ 30) [16].

Dependent variables

The death dates of the patients with dementia were extracted from the Birth and Death database. The death date variable was used to code deceased patients with dementia as 1 and surviving patients as 0. Additionally, the death date variable was coded monthly from 2009 to 2015 to investigate the survival curve.

Statistical analysis

Demographic characteristics were analyzed using descriptive statistics. Survival analysis was performed to investigate the effect of physical activity, age, and BMI on survival rates in patients with dementia. Kaplan-Meier analysis was used to plot the median survival time and cumulative survival probability for each patient group [17]. The Kaplan-Meier method calculates the cumulative survival probability based on the probability of an event occurring after a certain point in time. Differences in survival rates between groups of the independent variables were examined using the log-rank test [17]. Although the Kaplan-Meier method allows a visual comparison of differences in the likelihood of event occurrence between groups, it does not control for factors other than those specifically selected for analysis. Therefore, the Cox proportional hazards model was used for multivariate analysis to adjust for the study covariates and calculate hazard ratios (HRs) [17]. The Cox proportional hazards model can analyze the impact of various attributes on the occurrence of a specific event. Descriptive statistics and survival analyses were performed using SAS software, version 7.1 (SAS Institute Inc, Cary, NC, USA), provided on the virtual server of the NHISS.
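A minimal sketch of the variable coding and survival models described above, written in Python with pandas and the lifelines library in place of the SAS environment the study actually used; the cutoffs follow the text, while the column names and the toy cohort are assumptions:

```python
# Illustrative sketch of the coding and survival analysis described above,
# using pandas + lifelines instead of SAS; the eight-person cohort is invented.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

def bmi_group(bmi):
    """Asia-Pacific BMI categories as coded in the study."""
    if bmi <= 18.4:
        return "underweight"
    if bmi <= 22.9:
        return "normal"
    if bmi <= 24.9:
        return "overweight"
    if bmi <= 29.9:
        return "obesity"
    return "severe_obesity"

def activity_count(high, moderate, walking):
    """Sum of the three weekly activity indicators (each coded 0/1).

    Nominal range is 0-3; the study reports analysed scores of 0-2."""
    return int(high) + int(moderate) + int(walking)

# Toy cohort: follow-up in months over 2009-2015, death indicator 0/1.
df = pd.DataFrame({
    "months": [72, 18, 45, 60, 30, 66, 24, 72],
    "died":   [0, 1, 1, 0, 1, 0, 1, 0],
    "active": [1, 0, 1, 1, 1, 0, 0, 1],  # any physical activity last week
    "bmi":    [21.0, 17.5, 26.3, 31.2, 18.0, 23.5, 27.0, 18.2],
})
df["underweight"] = (df["bmi"].apply(bmi_group) == "underweight").astype(int)

# Kaplan-Meier estimate of the cumulative survival probability.
kmf = KaplanMeierFitter().fit(df["months"], event_observed=df["died"])

# Cox proportional hazards model; exp(coef) is the hazard ratio (HR).
cph = CoxPHFitter().fit(df[["months", "died", "active", "underweight"]],
                        duration_col="months", event_col="died")
print(cph.summary[["exp(coef)", "p"]])
```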
Results

Table 1 shows the frequency distribution of the demographic characteristics of the study sample. The majority of the study sample were older adults aged 65-84 years (n = 4,852, 83.81%) and female (n = 3,865, 66.76%), and almost half of the participants engaged in regular physical activity (n = 2,761, 47.69%). Among those who engaged in physical activity, the majority performed physical activity once or not at all during the past week. In terms of body weight, the participants were fairly evenly distributed across normal weight (n = 2,151, 37.16%), overweight (n = 1,255, 21.68%), and obesity (n = 1,644, 28.40%). Most of the participants were non-smokers (n = 5,284, 91.28%) and non-drinkers (n = 5,326, 92.00%). Among the chronic diseases, hypertension was the most common (n = 3,372, 58.25%).

Kaplan-Meier analysis and log-rank test

This study investigated the time of death of patients with dementia over a six-year dataset by plotting the Kaplan-Meier curve (Fig 2). The x-axis represents the death date for patients with dementia from 2009 to 2015, and the cumulative survival probability is plotted on the y-axis. A total of 5,789 patients were tracked, and 3,217 of them died between 2009 and 2015. Furthermore, this study estimated the time of death based on physical activity status, physical activity frequency, and BMI (Fig 3). The log-rank test revealed significant differences in physical activity status (p = 0.005), number of physical activities (p = 0.012), and BMI (p = 0.007).

Cox proportional hazard model

Table 2 shows the results of the Cox regression analysis, with physical activity and BMI as the independent variables. Participants who engaged in physical activity had a significantly lower mortality risk (HR = 0.91, p = 0.02, 95% confidence interval [CI] = 0.84-0.99) than those who did not. Regarding body weight, participants with normal weight had a significantly lower mortality risk (HR = 0.86, 95% CI = 0.76-0.97, p = 0.01) than those who were underweight. Additionally, participants with obesity (HR = 0.85, 95% CI = 0.74-0.98, p = 0.03) and severe obesity (HR = 0.72, 95% CI = 0.55-0.95, p = 0.02) had a significantly lower mortality risk than those who were underweight. None of the other variables included in the analysis showed a significant association with mortality.

Table 3 shows the subgroup analysis of the number of physical activities and mortality risk. Participants who engaged in 1 physical activity per week had a significantly lower mortality risk than those who did not (HR = 0.90, 95% CI = 0.82-0.99, p = 0.03). However, no significant association with mortality was observed for participants who engaged in 2 physical activities per week.

Discussion

This study investigated the association of physical activity and BMI with the survival rates of patients with dementia. A Cox regression analysis was performed and revealed that participants who engaged in physical activity had a significantly lower mortality risk than those who did not. Participants with normal weight had a significantly lower mortality risk than those who were underweight, and participants with obesity or severe obesity had a significantly lower mortality risk than those who were underweight. No significant association was observed for the other variables. These findings suggest that physical activity and body weight are important factors in determining the survival rates of patients with dementia.
The Cox regression analysis revealed that participants who engaged in physical activity had a significantly lower mortality risk than those who did not. Previous studies identified physical activity as a potential factor that may affect the progression of dementia [3,4]. Additionally, physical activity may improve cardiovascular function, reduce inflammation, and promote neuroplasticity, improving outcomes in patients with dementia [18]. These findings are consistent with current recommendations that physical activity is essential for maintaining overall health and reducing the risk of chronic diseases such as dementia [19]. Previous studies and current recommendations thus support our findings.

However, in our study, no significant difference was observed in the risk of death among participants who engaged in physical activity twice a week. In a study by Toots et al. [20], a high-intensity exercise program slowed the decline in ADL function and improved balance in patients with dementia, yet cognitive decline still occurred. Additionally, de Souto Barreto et al. [21] reported that the intervention group that received exercise showed a decrease in cognitive function compared with the group that received a music intervention and arts and crafts; they suggested that advanced age and excessive training might have contributed to this cognitive decline. Based on the evidence from these previous studies, the absence of a significant difference in the risk of death among participants who engaged in physical activity twice a week appears plausible.

This study found that BMI was associated with the survival rate of patients with dementia. Participants classified by BMI as normal weight, obese, or severely obese had a lower risk of death than underweight participants. These findings are consistent with previous research showing that being underweight is associated with an increased mortality risk in patients with dementia [22,23]. Weight loss may be a common symptom of dementia, which can further exacerbate the negative impact of low BMI on survival rates in patients with dementia [22]. Other studies reported that being overweight or obese in middle age was associated with an increased risk of dementia, but that being overweight or obese in later life did not increase this risk [10,12]. The association may be related to the impact of BMI on overall health and immune function. This study highlights the importance of body weight in determining survival rates in patients with dementia, suggesting that maintaining a healthy weight may result in lower mortality [8,24]. In addition, research suggests that being underweight, especially among older adults, may serve as a warning sign of compromised immune function and overall health. Winter et al. [25] found that both normal weight and higher BMI were associated with better health outcomes compared with underweight status among older adults. Thus, it is inferred that there was no discernible difference in outcomes between the commonly considered unhealthy high-BMI category and the normal-BMI category. This suggests that in older age groups, normal and higher BMI may not yield significantly different results, as both represent a healthier state than being underweight. As further evidence, the studies conducted by Sobów et al.
[26] provide compelling evidence that, within the context of mild cognitive impairment (MCI), the implications of BMI for health outcomes take on a different dimension. Specifically, these studies highlight that lower BMI and weight loss in individuals with MCI are associated with an increased risk of progression to dementia, including Alzheimer's disease. This suggests that while higher BMI may generally be associated with better health outcomes in the broader elderly population, the situation is more complex for those with MCI. The findings imply that for MCI patients, maintaining a stable weight or avoiding underweight status could be particularly crucial in mitigating the risk of developing dementia. Therefore, in the management and care of MCI patients, a nuanced approach to BMI and weight management is required, one that considers the unique risks and needs of this population.

Additionally, the control variables used for analysis in our study included age, sex, smoking, drinking, stroke, heart disease, hypertension, diabetes, and hyperlipidemia. Among them, only age showed a significant difference. In this study, based on previous research, age was classified into pre-older adult age, young-old + old-old, and oldest-old [27]. The risk of developing dementia tends to increase with age. According to Raz [28], the prefrontal cortex is the area with the greatest age-related vulnerability. Additionally, a study by Jorm and Jolley [29] reported that the incidence of dementia increases exponentially with age between 65 and 90 years, doubling approximately every 5 years. Consistent with these previous studies, the risk of death in dementia patients in this study increased in the 65-84 and over-85 age groups compared to the under-65 age group.

This study has several limitations. First, the physical activity variable captured only a limited time range (i.e., the past week). In addition, physical activity measures should encompass not only a wider time range but also specific details on how long the activities were maintained. Second, because the data used in this study were from 2009 to 2015, generalization to 2023 may be difficult. Accordingly, future studies should utilize recent data and include variables covering the intensity and duration as well as the time range of physical activity. Third, only BMI and physical activity were employed as key variables for analyzing the risk of mortality in patients with dementia. Relying solely on these two variables is insufficient to meet the study's validity requirements, because numerous other determinants associated with mortality risk in patients with dementia, including social and environmental determinants, were omitted. The study, structured around variables of interest based on previous research, thus excluded several decisive factors. Therefore, future research should incorporate the latest data, explore variables related to the intensity, duration, and temporal aspects of physical activity, and include social and environmental determinants associated with mortality risk in dementia patients. This approach will address the limitations of the current study and contribute to a more comprehensive understanding of the subject.
Conclusion

This study found that physical activity can reduce the risk of death in patients with dementia; however, no additional benefit was apparent for more frequent exercise. These results suggest that when implementing exercise programs for patients with dementia, it is advisable to tailor the exercise regimen to the individual patient's condition. Additionally, this study confirmed that among patients with dementia, those with normal weight, obesity, or severe obesity had a lower risk of death than those who were underweight. Based on these results, interventions are needed to adjust the diet of patients with dementia so as to maintain sufficient nutrition and body weight. The study findings would be useful when establishing treatment programs for dementia patients.

Fig 1. Cohort selection flow diagram. DB: Database, ID: Identification. https://doi.org/10.1371/journal.pone.0301035.g001
Light from a firefly at temperatures considerably higher and lower than normal

Bioluminescence emissions from a few species of fireflies have been studied at different temperatures. Variations in the flash-duration have been observed and interesting conclusions drawn in those studies. Here we investigate steady-state and pulsed emissions from male specimens of the Indian species Sclerotia substriata at temperatures considerably higher and lower than the ones at which they normally flash. When the temperature is raised to 34 °C, the peak wavelength gets red-shifted and the emitted pulses become the narrowest, broadening considerably thereafter for small increases in temperature; this probably indicates denaturation of the enzyme luciferase catalyzing the light-producing reaction. When the temperature is decreased to the region of 10.5-9 °C, the peak gets blue-shifted and the flash-duration increases abnormally with large fluctuation; this possibly implies cold denaturation of the luciferase. We conclude that the first, or hot, effect is very likely the reason for the species being dark-active on hot days, and the second, or cold, one is the probable reason for its disappearance at the onset of the winter. Our study infers that these two happenings determine the temperature-tolerance, which plays a major role in the selection of the habitat for the firefly.

The light of the firefly is the outcome of a very efficient reaction, called a chemiluminescent reaction. It is well known that oxygen is the biochemical trigger which excites the substrate luciferin and produces the photoemitter molecule oxyluciferin in the presence of ATP and Mg2+, the reaction being catalyzed by the enzyme luciferase. In the normal flashing state of a live firefly, visible light is produced as the excited-state oxyluciferin decays to the ground state via a pathway followed by molecules exhibiting phosphorescence 1. It has been shown that the pulses produced by the firefly are manifestations of an oscillating chemical reaction 2. Very recently, assuming the firefly lighting cycle to be a nonlinear oscillator with a robust periodic cycle, a low-dimensional nonlinear mathematical model based on the basic lighting mechanism of a firefly has been proposed 3.

A few studies have been carried out on the effect of temperature on in vivo emissions of fireflies. It has been observed that flash periods of four Luciola species of fireflies of Melanesia decrease with an increase in temperature 4. A significant negative correlation between the ambient temperature and the inter-flash interval has been observed in specimens of Luciola cruciata at five different sites in central Japan 5. For the Indian species Luciola praeusta at 20-40 °C, the pulse duration has been found to decrease approximately linearly with temperature for male specimens 6 and exponentially with temperature for female specimens 7. These imply that the speed of the light-producing reaction increases approximately linearly for males and exponentially for females of this species in this range of temperature. At the temperature of approximately 42 °C, the duration of a flash from this species becomes the minimum and thereafter increases considerably with a slight increase in the temperature, implying that denaturation of the enzyme occurs at this temperature optimum. The peak wavelength also shows a red shift of about 5 nm at this temperature.
On the other hand, at temperatures lower than approximately 21 °C, pulse-durations show a large, non-linear increase with lowering of the temperature 8, almost like the ones observed with fireflies positioned under a strong static magnetic field 9. Below 17 °C for females and 15 °C for males of this species, and from below the normal flashing temperature of about 28 °C for females and below about 22 °C for males of another Indian species, Asymmetricata circumdata, peaks in flashes get split into three, manifesting the three luminescent forms of the emitter oxyluciferin, with lifetimes of the order of milliseconds 1.

Sclerotia substriata is the fourth species of firefly found in India, after the summer species L. praeusta and A. circumdata and the winter one Diaphanes sp. This species has been identified by Dr. Lezley Ballantyne of Charles Sturt University, Australia. A male specimen of this species is shown in Fig. 1. It is found near the banks of two large ponds, separated by a lane, in the campus of Gauhati University in very small numbers from April to October.

Results and discussion

Steady-state measurements at different temperatures. Steady-state emission spectra recorded at high temperatures of 34, 35, 37, 39, 41, 43 and 45 °C, along with the one at the normal laboratory temperature of 28 °C, are shown in the wavelength scale in Fig. 2a and in the energy scale (eV) in Fig. 2b. Emission spectra at low temperatures of 11.5, 10.5, 9.5, 8.5, 7.5 and 6.5 °C, with the normal one at 28 °C, are presented in the wavelength scale in Fig. 2c and in the energy scale in Fig. 2d. A typical spectrum in the normal range of temperature for this species is asymmetric in nature. The wavelength peak appears at 558 nm, and the full width at half maximum (FWHM) is measured as 61 nm, spreading from 532.5 to 593.5 nm, at the laboratory temperature of 28 °C (Supplementary Table 1). It has been hypothesized that different species of fireflies emit light at slightly different wavelength peaks because of slight differences in their enzyme structures 10.

Changes in temperature from 20 up to 34 °C do not change the values of the emission peak (558 nm) and FWHM (61 nm). From 34 °C onwards, the peak begins to shift towards red; the lower and upper positions of the FWHM begin red-shifting as well, and the spread becomes broad. At 45 °C, the maximum employed temperature, which only a few of the specimens could withstand, the maximum peak shift of 40 nm with the maximum FWHM spread of 96 nm, ranging from 539 to 635 nm, is observed (Supplementary Table 1). At this temperature, the light organs of the surviving specimens exhibit continuous glows for a couple of minutes rather than on-off emissions, that is, flashes. We may consider this temperature as the maximum tolerable one for this firefly. Variations in emission wavelengths at different temperatures can be realized from the photographs of the lantern recorded at different temperatures, as shown in Fig. 2e. This change is found to be irreversible, as lowering of the temperature to a normal one does not reset the peak position. It has been reported that the emission peaks shifted prominently towards red for males of the species L. praeusta at 42 °C 7 and for the winter species Diaphanes sp. at 28 °C 11, with the conclusion that these changes imply denaturation of the firefly enzymes at or above these temperature optimums.
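As a minimal sketch of how a peak wavelength and FWHM such as those quoted above can be read off a sampled spectrum, the following Python snippet interpolates the half-maximum crossings; the synthetic arrays are stand-ins for an exported spectrometer trace, not the authors' data.

import numpy as np

def peak_and_fwhm(wl, intensity):
    """Peak wavelength and full width at half maximum of a single-peaked spectrum."""
    i_max = int(np.argmax(intensity))
    half = intensity[i_max] / 2.0
    # interpolate the half-maximum crossings on the rising and falling flanks
    left = np.interp(half, intensity[:i_max + 1], wl[:i_max + 1])
    right = np.interp(half, intensity[i_max:][::-1], wl[i_max:][::-1])
    return wl[i_max], right - left

# illustrative spectrum peaking near 558 nm with sigma chosen so FWHM is ~61 nm
wl = np.linspace(480, 680, 1000)
intensity = np.exp(-0.5 * ((wl - 558) / 26.0) ** 2)
peak, fwhm = peak_and_fwhm(wl, intensity)
print(f"peak = {peak:.1f} nm, FWHM = {fwhm:.1f} nm")  # ~558 nm and ~61 nm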
In the case of the presently studied species, as the peak-shift becomes distinct from 34 °C onwards, we could say that denaturation of the luciferase probably occurs at this temperature. However, at the maximum tolerable temperature of 45 °C, the change in the intracellular pH might be significant, adding to the red-shift of the spectra. A few reports have put forward propositions on the mechanism of the color change of the firefly luciferase in vitro at varying pH and temperature. It has been suggested that conformational changes of the firefly luciferase at different pHs could be the reason for producing red bioluminescence and changes of kinetics 12,13. In another report, rigidity of the pH-sensitive luciferase has been proposed to be the reason behind different emission colors 14. Partial denaturation by heat and low pH has been cited as the probable reason for the redshift in the bioluminescence spectra 15. Bioluminescence of the firefly Photinus pyralis has been shown to change its color at pH 6.8 from yellow-green at 15 °C to yellow at 25 °C and then to orange at 34 °C, which, in conjunction with a sharp lengthening of the decay time, pointed towards denaturation of the luciferase 16.

At temperatures below 28 °C, the values of the peak and FWHM of the spectra remain the same down to approximately 10.5 °C (Fig. 2c,d). Around this temperature, the peak of the spectrum starts shifting towards the shorter-wavelength side, with the FWHM becoming narrower. As the temperature is lowered to the minimum of 6.5 °C, at which specimens emit continuous faint light, the peak gets shifted to 553 nm with a minimum FWHM value of 57 nm (Supplementary Table 1). The maximum peak shift towards the lower-wavelength side, therefore, is 5 nm. Below 6.5 °C, no light emission from the specimens is observed. Changes towards the lower-wavelength side can even be noticed easily from the images of the lantern shown in Fig. 2e; this figure is thus photographic evidence of the change in the emitted wavelength peak when the temperature is changed to very low and very high values for this species. A general observation is that the emission intensity increases as the temperature is decreased from 28 °C; the intensity reaches a maximum value at approximately 10.5 °C, and thereafter decreases continuously down to 6.5 °C. The change in the peak value and the attainment of the maximum intensity imply that the enzyme luciferase of S. substriata possibly gets denatured at about 10.5 °C due to the cooling effect. The changes observed, however, are reversible; when the temperature is increased, the peak goes back to its normal position and intensity. The change in the emission peak towards red or blue with the change in temperature implies that the relative probabilities of transition from an upper level to the lower levels get affected, and the maximum probability at a high or a low temperature is no longer associated with the usual two levels giving rise to the 558 nm peak at 28 °C. We may now consider temperatures above 34 °C as high and below 10.5 °C as low for this species of firefly.

Figure 2. (a,c) Bioluminescence spectra in the wavelength scale: at 28 °C the peak appears at 558 nm with a FWHM of 61 nm, both unchanged up to 34 °C; with further heating the peak red-shifts and the FWHM broadens, reaching approximately 598 nm and 96 nm at 45 °C; at low temperatures both remain unchanged down to approximately 10.5 °C, below which the peak blue-shifts and the FWHM narrows to 553 nm and 57 nm at 6.5 °C. (b,d) Bioluminescence spectra in the energy scale: the peak energy, unchanged up to 34 °C, decreases slowly up to 41 °C and thereafter rapidly up to 45 °C. (e) Photographs of the firefly light-emitting organ at various temperatures: no color change at 25, 28 and 34 °C; yellow at 36 °C, yellow-orange at 39 and 42 °C, and red at 45 °C; tinges of green at 10.5 and 9 °C, becoming predominantly green at 6.5 °C, a visual indication of the shifts in the wavelength peak. (f) Measured peak energy data (mean ± SD) at various temperatures in the range of 6.5-45 °C; the standard deviations, that is, the fluctuations, are clearly much larger at both low and high temperatures than at normal temperatures.

It could be noticed in Fig. 2a,b that the luminescence intensity decreases as the temperature rises up to 34 °C, though the wavelength peak (558 nm) and FWHM (61 nm) remain unchanged in this range. A decrease in intensity with an increase in temperature has also been noticed in the cases of the other three Indian species of fireflies 7,11,17. With the increase in temperature, the rate of the biochemical reaction inside the firefly increases and the excitation-emission process becomes more frequent. Since the rate of the firefly chemiluminescence reaction increases, the flash duration decreases, which implies a decrease in the lifetime of the upper level of the excited-state oxyluciferin 6. As the steady-state luminescence intensity is proportional to the lifetime 18, the emission intensity decreases. Another possibility is that above 34 °C, because of the denaturation of the enzyme, only a small number of luciferin molecules take part in the luciferase-luciferin reaction, which results in the weak emission intensity.

Variations in the bioluminescence peak energies of this firefly over the temperature range of 6.5-45 °C are presented in Fig. 2f. At the normal laboratory temperature of 28 °C, the emission shows the bioluminescence peak energy at 2.2244 eV, which remains the same up to 34 °C (Supplementary Table 2, Fig. 2b,f). Above this temperature, the energy decreases slowly up to 41 °C, and thereafter rapidly, especially from 43 °C onwards, attaining a minimum value of 2.079 eV (598 nm) at 45 °C. At low temperatures, the peak energy does not change down to approximately 10.5 °C; thereafter it increases and attains the maximum value of about 2.2445 eV at 6.5 °C (Supplementary Table 2, Fig. 2d,f).

It could be observed from the wavelength spectra that the spectral shape changes with an increase in temperature above 34 °C (Fig. 2a,b), which becomes clear from 41 °C. The emission intensity decreases at high temperatures, and at 45 °C the spectrum nearly decomposes into two peaks.
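The energy-scale values quoted above follow from converting each wavelength data point via E = hc/λ, as the Methods section describes; a one-line Python sketch is shown below (hc ≈ 1239.84 eV·nm; the printed numbers are approximate and purely illustrative).

import numpy as np

HC_EV_NM = 1239.84193  # h*c expressed in eV·nm

def nm_to_ev(wavelength_nm):
    """Convert wavelengths in nm to photon energies in eV via E = hc/lambda."""
    return HC_EV_NM / np.asarray(wavelength_nm, dtype=float)

print(nm_to_ev([630, 598, 558, 540]))  # ~1.97, 2.07, 2.22, 2.30 eV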
To analyze the changes with temperature, we employ the technique of Gaussian curve fitting by assuming four peaks: two green components of 540 nm (2.2986 eV) and 558 nm (2.2244 eV), an orange component of 598 nm (2.0756 eV), and a red component of 630 nm (1.9702 eV). The Gaussian-fitted spectra reproduce the experimental spectrum, which explains the changes in the spectral shape very well. The Gaussian-fitted curves along with the experimental spectra recorded at 8.5, 10.5, 28, 34, 41, 43 and 45 °C are presented in Fig. 3a-g, and the variation of the intensity ratios of the Gaussian components is presented in Fig. 3h, with the values given in Supplementary Table 3. The intensity ratios I598/I558 and I630/I558 remain approximately equal from 25 to 34 °C, and increase exponentially thereafter with a further increase in temperature up to 45 °C. On the other hand, the intensity ratio I540/I558 changes linearly with a change in temperature. The gradual fall in the intensity of the green component of 558 nm and the rise in the orange and red intensities with the increase in temperature are clearly visible in the Gaussian-fitted spectra (Fig. 3a-g). Hence it may be inferred that the changes in spectral shape in the emission spectra at high temperatures are due to the rise in the intensity of the orange and red components. A study in vitro on bioluminescence emissions of the firefly Photinus pyralis has revealed that the intensity of the green component of energy 2.2 eV is the only temperature-sensitive component, while the red and orange ones of energies 1.9 eV and 2.0 eV are robust 16. In the present in vivo study, it is quite clear that the intensities of the orange and red regions are the temperature-dependent components above 34 °C. The peak of each Gaussian component is shifted towards red at high temperatures (Supplementary Fig. 2a) and towards blue at low temperatures, and the FWHM spread of each component increases slightly with an increase in temperature (Supplementary Fig. 2b), similar to the observation of the change in the bioluminescence color due to a change in the pH 19. It is worth mentioning here that in the temperature range of 15-34 °C at pH 6.2, 7.0, and 8.0, the luminescence energies and widths for the green, orange and red components in the Gaussian-fitted spectra of in vitro bioluminescence of P. pyralis do not show any significant temperature dependence. At temperatures below 28 °C in the present case, no significant change in the ratios I598/I558 and I630/I558 (Fig. 3h) is observed, which implies that the orange and red components are insensitive to low temperature.
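A hedged sketch of such a four-component Gaussian decomposition, here with scipy rather than the Origin software named in the Methods, is given below; the starting guesses and the synthetic spectrum are assumptions for illustration, not the authors' fitted values.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, sigma):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def four_gaussians(x, *p):
    # p packs (amp, center, sigma) for the 540, 558, 598 and 630 nm components
    return sum(gaussian(x, *p[3 * i:3 * i + 3]) for i in range(4))

# synthetic spectrum standing in for one recorded trace
p_true = [0.25, 540, 14, 1.0, 558, 24, 0.35, 598, 22, 0.15, 630, 20]
wl = np.linspace(480, 700, 1100)
rng = np.random.default_rng(0)
spectrum = four_gaussians(wl, *p_true) + 0.01 * rng.normal(size=wl.size)

# initial guesses (amp, center in nm, sigma in nm) per component
p0 = [0.2, 540, 15, 0.8, 558, 25, 0.3, 598, 25, 0.2, 630, 25]
popt, _ = curve_fit(four_gaussians, wl, spectrum, p0=p0)

amps = popt[0::3]
print({"I540/I558": amps[0] / amps[1],
       "I598/I558": amps[2] / amps[1],
       "I630/I558": amps[3] / amps[1]})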
Time-resolved measurements at different temperatures. Flashes emitted by this firefly at normal and high temperatures are presented in Fig. 4, and the data are given in Supplementary Data 1. Unlike males of L. praeusta, which usually emit three flashes in a particular sequence, this species emits a number of consecutive flashes at a regular interval of time, producing a long pulse train at normal temperatures (Fig. 4a-d, Supplementary Video 1). Flashes from a male S. substriata at a given temperature are found to be considerably broader than those from a male L. praeusta at that temperature. For example, in the normal flashing range, its durations of 187 and 147 ms (Supplementary Table 4) are distinctly longer than L. praeusta's 112 and 97 ms at 25 and 30 °C, respectively 6.

The longer flash durations of this species indicate that the chemiluminescence reaction lasts longer, and probably proceeds more slowly, than in L. praeusta. Though males of these two species are of approximately the same size, the lantern of S. substriata is smaller due to the existence of a small gap between the upper and lower segments, which does not exist in L. praeusta. This gap implies the presence of a lower concentration of the luciferin-luciferase combination in the lantern of the presently studied species. Therefore, we propose that the presence of a lower concentration of this combination in S. substriata, compared to L. praeusta, probably makes the speed of the reaction slower.

It is observed that the flashes at 40 °C are simple, not compound or combination ones, but they appear above the zero level of the oscilloscope. From 40 °C upwards, no single or 'clean' pulses are observed; the ones obtained are noisy and of irregular shapes. The minimum intensity of the flashes remains above the zero level, sometimes lying very close to it, resembling the flashes from the winter firefly Diaphanes sp. at its normal flashing temperatures 11 and from L. praeusta at low temperatures 8; the lantern appears to glow faintly even while one blinks at these high temperatures. This should be the reason for the intensity never really coming down to the zero level during the 'off' time above 40 °C. Effects of low temperature on the flashes of Diaphanes sp. and L. praeusta have been suggested to be due to the non-uniform proceeding of the neural activity releasing the neurotransmitter octopamine, which eventually affects the regular time-resolved flashes. In the present experiment also, the neural activity clearly gets affected at high temperatures. It is well known that after denaturation the number of reactants in a reaction decreases with an increase in temperature, resulting in the excitation of a smaller number of molecules to the upper level. In this case, when the temperature goes past 40 °C, the number of luciferin molecules reaching the excited level is probably much smaller, and because of this a low-intensity time-resolved emission profile with a noisy appearance is obtained. A similar observation has been made at 37-38 °C for the winter firefly Diaphanes sp. 11.

Flashes produced at low temperatures are presented in Fig. 5, and the data are given in Supplementary Data 2. It is easily noticeable that when the temperature is lowered from 20 °C, the duration of a single flash increases slowly, and the increment becomes sharp from approximately 10.5 °C. At 9 °C, the flashes become considerably broad, and in most of them the start- and end-points are not sharp or clear. Some of the specimens stop flashing at this temperature. At temperatures lower than this, the number of flashing specimens becomes even smaller and their flashes present distorted shapes. When the temperature is decreased further, from approximately 8.5 °C, the specimens emit a giant flash and thereafter emit glows, or continuous light of weak intensity. At 7.5 °C, eight such flashes from eight specimens could be recorded; below this temperature, no 'clean' single flash is observed. At 6.5 °C, all the insects stop flashing, and only glows could be noticed in their light organs. This weak continuous emission means that the respiration of the mitochondria, which densely pack the peripheral cytoplasm of the photocytes 20, is inhibited, and thereby the biochemical trigger oxygen is supplied continuously to the luciferin-containing organelles (peroxisomes).
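As an illustrative sketch, a flash duration can be pulled out of one saved oscilloscope trace as the width of the pulse at half its height above the baseline; the two-column CSV layout (time, PMT voltage) is an assumption about the exported files, not the authors' actual format.

import numpy as np

def flash_duration(t, v, frac=0.5):
    """Width of a single pulse at `frac` of its height above the baseline."""
    baseline = np.median(v)                   # the 'off' level of the trace
    thresh = baseline + frac * (v.max() - baseline)
    above = np.flatnonzero(v >= thresh)
    return t[above[-1]] - t[above[0]]         # first-to-last threshold crossing

# assumed layout: column 0 = time (s), column 1 = PMT voltage (V)
t, v = np.loadtxt("flash_trace.csv", delimiter=",", unpack=True)
print(f"duration = {1e3 * flash_duration(t, v):.0f} ms")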
Variations in the measured pulse duration with temperature over the range of 20-40 °C are shown in Fig. 6a. In the range of 20-34 °C, the changes in the pulse duration are found to be exponential. As already mentioned, over the temperature range of 20-40 °C the change in the single-pulse duration with temperature for the species L. praeusta was observed to be linear for males, indicating that the rate of the luciferin-luciferase reaction inside the firefly changed linearly with temperature 6, and exponential for females, suggesting an exponential increase in the reaction rate with temperature 7. The variation of the decay time of the in vitro bioluminescence of P. pyralis in the temperature range of 15-34 °C at pH 7.0 and 8.0 has been reported to be exponential 16. It is well established that the rate of an enzyme-catalyzed reaction increases exponentially with a simultaneous decrease in the active enzyme through thermal irreversible inactivation [21][22][23].

It is evident that the flash duration decreases continuously with an increase in temperature up to the temperature optimum of 34 °C, and thereafter increases for a further slight increase in temperature (Figs. 4, 6b). Recently, minimum flash durations for male and female fireflies of the species L. praeusta have been found at 42 °C and 41.5 °C, respectively, above which the durations increase noticeably with slight increases in temperature. Hence 42 °C and 41.5 °C have been considered the optimum temperatures for male and female fireflies of L. praeusta 7. In the present case, the flash duration becomes minimum, which implies the maximum rate of the luciferase-luciferin reaction, at 34 (± 0.5) °C. This consolidates the finding that this temperature is the optimum one for males of the presently studied species. This result is in good agreement with the steady-state emission spectrum, and the change in the flash duration is also found to be irreversible, like the change in the peak position. For the North American species P. pyralis, as mentioned already, probable denaturation or deactivation of the enzyme luciferase has been reported above 30 °C at pH 8.0, as the decay time of the in vitro luciferase-luciferin reaction increases sharply 16.

The variation in the measured single-pulse duration from 20 to 7.5 °C is presented in Fig. 6c. It can be easily observed that the flash duration increases with a decrease in temperature. It has already been hypothesized that low temperatures slow down the speed of the enzyme-catalyzed bioluminescence reaction, which makes the duration longer 6. In the range of temperature presented in the figure, the increase is exponential.
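A minimal sketch of fitting such an exponential trend is shown below; the (temperature, duration) pairs are assumed values that only loosely echo the quoted 187 ms at 25 °C and 147 ms at 30 °C, not the measured dataset.

import numpy as np
from scipy.optimize import curve_fit

def exp_model(T, a, b, c):
    # duration falls exponentially towards an offset c as T rises
    return a * np.exp(-b * T) + c

T = np.array([20.0, 25.0, 28.0, 30.0, 32.0, 34.0])
dur_ms = np.array([235.0, 187.0, 164.0, 147.0, 133.0, 121.0])  # illustrative
popt, _ = curve_fit(exp_model, T, dur_ms, p0=(700.0, 0.07, 50.0))
a, b, c = popt
print(f"duration(T) ~ {a:.0f}*exp(-{b:.3f}*T) + {c:.0f} ms over 20-34 C")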
A general observation is that the flashes appear at a regular flash-interval, and the interval increases with a decrease in the temperature (Figs. 5, 6d); but from 10.5 °C, and more prominently from 9 °C, irregularities occur in the flash-interval. Additionally, the changes in the pulse duration from 11.5 to 10.5 °C are very small compared to the ones recorded at other temperatures. Therefore, we could roughly say that 10.5 °C is the temperature down to which the stability of this firefly luciferase is sustained, and that it is lost at temperatures lower than this. This inference is supported by the values of the steady-state emission intensities (Fig. 2c) and the standard deviations of the flashes (Supplementary Table 5). The changes become marked at 9 °C. It has been reported that an enzyme becomes more unstable beyond the transition temperatures of the two states (folded/unfolded) due to both cooling and heating effects 24. Therefore, we speculate that 10.5-9 °C is the range of temperature where denaturation of the enzyme luciferase takes place for this species of firefly. However, it has to be mentioned here that there also exists the possibility of compartmentalization of the active site, which would give rise to the blue-shifted spectrum. As in the case of the steady-state emission, the changes in the pulsed emission are also reversible: the characteristic features come back on raising the temperature above this range.

We realize that the optimum temperature at which thermal effects become prominent can be determined precisely, as the flash duration becomes minimum at this temperature and then increases sharply for small increases beyond it. The steady-state spectra, showing a redshift at this temperature, consolidate this. On the other hand, because the flash duration along with its standard deviation goes on increasing as the temperature is lowered, exact determination of the optimum temperature producing marked changes in the emitted flashes is difficult. In the present case, at 10.5 °C the steady-state spectra manifest a blue-shift, which could possibly be an indication of cold denaturation. But at this temperature the increase in the standard deviation is not strikingly high, and the pulse broadening is only a little more marked in the flashes obtained. Hence stating that probable protein unfolding starts exactly at this temperature is difficult. It is at 9 °C that the duration and its fluctuation, that is, the standard deviation, increase sharply, which should be a confirmation of the instability of the enzyme. Therefore, instead of precisely pinpointing a temperature, it would be safe to indicate a small range of temperature, between 10.5 and 9 °C.

Average values of the inter-flash intervals of the firefly S. substriata at temperatures from 10.5 to 34 °C are shown in Fig. 6d. At the normal laboratory temperature of 28 °C, the flash interval is measured as 209 ms (Supplementary Table 6); the interval changes exponentially in the range of 20-34 °C. It is easily noticed that the flashes come closer to each other when the temperature rises (Fig. 4), and move away from each other as the temperature decreases from 28 °C (Fig. 5). This clearly points to the fact that the inter-flash interval decreases when the temperature increases, and vice versa. Above 34 °C, the interval no longer remains regular, some of the flashes being very closely spaced while others are very widely spaced, thus making a statistical analysis pointless. Similarly, below 10.5 °C, the flash intervals vary so much that their measurements have to be discarded. In an earlier communication, it has been shown that the average normal flash interval of the firefly decreases with an increase in temperature 25. In the present case, we propose that a change in temperature over the range of 9-34 °C affects the rate of the regular flow of oxygen to the light-producing organelles, and this changes the inter-flash interval.
Cold denaturation at low temperatures and heat denaturation at high temperatures are the probable reasons for the irregularities in the flash intervals below 10.5 °C and above 34 °C. Few proteins have been reported to undergo cold denaturation above the freezing point of water in the absence of denaturants: the yeast frataxin homologue 1 (Yfh1), the C-terminal domain of ribosomal protein L9, IscU, the scaffold protein for iron-sulfur (Fe-S) cluster biosynthesis in Escherichia coli, and the human immunodeficiency virus-1 (HIV-1) protease are the ones found to exhibit it [26][27][28][29]. We propose adding the firefly luciferase, in the body of a live firefly, to this list. Extraction of luciferase from the light organs of dead fireflies and determination of the relevant thermodynamic parameters at these temperatures would confirm this proposition; but that would be a study in vitro, with the real flashing situation no longer existing.

Choices of appearance-time and habitat. The red-shifting temperature of 34 °C for S. substriata comes in between the ones for L. praeusta (42 °C) and Diaphanes sp. (28 °C). These temperatures give the reason why L. praeusta fireflies can be active in the hot summer season just after sunset, when the ambient temperature remains high, while Diaphanes sp. fireflies can be active only in the cold winter season. As the species S. substriata becomes active usually 1 to 2 h after sunset, there is enough time for the ambient temperature to cool down to a few degrees below the temperature optimum. In a study on 55 species of North American (temperate zone) fireflies, it was found that, in general, 'dark-active' fireflies emit green bioluminescence and 'dusk-active' species emit yellow 30. The two broad groups of species, 'early-starting' (dusk-active) and 'late-starting' (dark-active), were defined as those that begin the flashing activity within 30 min after sunset and those that begin after this, respectively. Of the 32 dark-active species observed in that study, 23 were found to emit green light (λmax ≤ 558 nm), and 21 of the 23 dusk-active species emit yellow light (λmax ≥ 560 nm). The species S. substriata, barely falling in the first category, is generally observed to be dark-active. However, on the few cloudy or rainy summer days when the temperature at the time of sunset becomes 30 °C or less, one or two specimens of it have been sighted just about half an hour after sunset. That is, it becomes almost dusk-active, and there is little to differentiate the appearance times between this species and the dusk-active L. praeusta inhabiting the same locality. Generally, though, the majority of L. praeusta fireflies come out before sunset on any day, while the majority of S. substriata fireflies come out roughly 45 min after sunset on those days. Hence, we propose that the peak wavelength, though generally determining a firefly's active time, does not determine its time of coming out; the temperature indicating denaturation of the luciferase is the chief deciding factor. This temperature also explains the choice of habitat. In this locality, the temperature on the hottest days of the summer goes past the optimum denaturation temperature of 34 °C. Shade from the branches and leaves of the trees on the banks of the big ponds helps to keep the temperature within the temperature optimum of this firefly.
On hot days, these fireflies have been observed to start flashing 2-3 h, sometimes more than 3 h, after sunset, sitting on leaves or branches of trees. We hypothesize that the different enzyme structures of different species give rise to different temperatures of denaturation, along with different wavelength peaks, and that these are the reasons for their choice of different habitats and times of activeness. As this species finds temperatures above 30 °C difficult, and above 34 °C impossible, to bear, a question immediately arises: why, then, does it not choose a place where the temperature is cooler than this? The answer probably lies in the cold denaturation of the enzyme luciferase. Since this is indicated between 10.5 and 9 °C, the insect obviously has to maintain some margin. From observations over 3 years, we conclude that the normal flashing range of temperature for this species is 25-30 °C. It can be noticed that the change in the pulse duration is approximately linear from the high-temperature optimum of 34 °C down to 25 °C, and that the linearity is lost below 25 °C (Fig. 4b, Supplementary Table 4). We speculate that the loss of linearity, which implies a longer-lasting or probably slower-proceeding reaction, forces the firefly to stop flashing at temperatures lower than this. The other species, L. praeusta, which exists in this locality in plenty, also disappears at around the same time at the onset of the winter. The linearity in the duration change of the flashes for males of that species has also been found to be lost from approximately the same temperature 7. Determining the blue-shifting temperature for that species, or for that matter for all the species of the world, will clarify how much margin a summer species of firefly normally keeps before vanishing from our sight for the year. In late October, when this firefly disappears for the season, the minimum temperature in the region goes down to roughly 20 °C. Thus, this species needs temperatures about 10 °C above the low-temperature optimum to be found in the locality, and more than 24 °C after sunset to 'warm up' for flashing. As for the high-temperature part, L. praeusta fireflies come out just after sunset; the temperature at that time in this locality, though normally 32-33 °C, can even be about 35 °C for a day or two on the hottest days. That is, L. praeusta fireflies maintain a margin of 7 °C while S. substriata fireflies maintain a difference of at least 4 °C from their high-temperature optimums. Hence we put forward the general proposition that the high-temperature optimum decides the firefly's active time, and the high- and low-temperature optimums together decide its choice of habitat as far as temperature suitability is concerned.

Conclusion

When the temperature is increased to 34 °C, the emission spectrum shows a redshift of the wavelength peak, and the emitted pulses show the minimum width, which increases sharply for a further small increase in temperature. These probably indicate thermal denaturation of the enzyme luciferase. We propose that this is the reason for the species being dark-active in summer. On the other hand, the blue-shifting of the peak and the considerable broadening, with fluctuations, of the flashes at 10.5-9 °C point towards the possible occurrence of cold denaturation in this range of temperature. These two effects, apart from the availability of food, of course, explain why this species finds this particular place comfortable to dwell in.
We would need the enzyme structures vis-à-vis the emission wavelength peaks and the different active times of all the species of fireflies to draw the complete picture.

Methods

Sample collection. Before the experiment, a couple of adult male specimens of the firefly S. substriata were captured on the banks of two big ponds lying side by side in the campus of Gauhati University, about one and a half hours after sunset. These insects were observed to fly above the water near the bank and to rest on leaves of trees considerably above the ground, and whenever they came flying to a catchable height near the ground of the bank, they were caught. The collected specimens were then brought to the laboratory, approximately 1 km from the location. A strong and intensely flashing one was selected for experiments. This species has the characteristic that it emits light only when in motion, never when immobile! Therefore, arrangements had to be made to make the firefly move for recording spectra in both steady-state and flash emissions.

Recording of the firefly spectra. At the laboratory temperature. For recording emission spectra, an already calibrated high-resolution spectrometer (Ocean Optics HR4000) was used. The selected specimen was put inside a 0.5 ml micro-centrifuge tube. A piece of cotton was inserted in the tube so that the firefly could move only within a very small volume. A small hole of size nearly equal to that of the ends of a fiber-optic cable (cable number: QP200-2-UV-VIS, EOS-A6282132-2) was made on the conic surface of the tube. When the lantern of the inserted specimen faced the hole, the emitted light entered through the entrance face of the fiber and was collected by it. The Ocean Optics software SpectraSuite was used to visualize and record the spectra. The data obtained were saved in ProcSpec files. A temperature sensor of resolution 0.1 °C, made by using the IC LM35 connected to a digital multimeter (MASTECH MAS 830L), was used to note the temperature. The exact temperature at the location of the firefly was determined by inserting the sensor inside the centrifuge tube and waiting till the temperature became reasonably stable. The spectral analysis was carried out using Microsoft Origin 8.0. Spectra in the energy scale were plotted by converting each data point in the wavelength spectra to energy using the equation E = hc/λ.

For recording time profiles of flash emissions, a transparent cylindrical cavity (petri dish, 35 mm) was used. To ensure air flow through the cavity, a number of small holes were made on its flat and curved surfaces. A digital storage oscilloscope (Tektronix TDS 2022C) along with a photomultiplier tube (Hamamatsu H10722 with power supply C10709) was used to record the time-resolved spectra. The cavity was attached to the PMT using sellotape, with the surface without holes facing the light-entering face of the PMT. The specimen was allowed to move freely inside the cavity, and whenever it came to the location of the light-entering face of the PMT, flashes were observed and recorded on the DSO; these were saved on a USB device (HP v215b, 16 GB) as .CSV files. As the light intensity from this insect changed with time, the control voltage applied to the PMT was varied from 200 to 400 V to obtain good flash amplitudes. The same digital thermometer, kept attached to the cavity, was used to monitor the temperature.
Before recording the flashes, a calibration of the inside and outside temperatures of the cavity was carried out, which showed a difference of 1 °C, the inside one being higher during the lowering and lower during the raising of the temperature. The inside temperature, of course, was the one considered for the experiment. Time-resolved measurements, as in the case of the steady-state ones, were carried out with the help of Microsoft Origin 8.0.

At high temperatures. For producing high temperatures, a 1-2 kW fan heater (Orpat OEH-1260) was used. By varying the distance between the firefly and the heater, different temperatures were realized. The centrifuge tube was kept at a fixed position, and the heater was moved towards it to increase the temperature. After noting down the temperature inside the tube, the position of the heater was marked, and the sensor was removed and attached to the outside of the tube using sellotape. At each distance from the heater (Supplementary Table 7), the fluctuation in the temperature was found to be ± 0.2 °C. The insect was acclimatized for about 10 min at each stepwise increase of temperature. The maximum temperature of 45 °C used in the experiment was obtained at a distance of 52 cm from the location of the firefly. At this temperature, most of the specimens were found in a dying state, and a few died; the specimens displayed continuous glows in their light organs, not flashes, for a couple of seconds to a maximum of 5 min.

At low temperatures. For experiments at low temperatures, a specimen along with the same micro-centrifuge arrangement was brought near the window of the air conditioner (ORPAT). The specimen was acclimatized for about ten minutes at a certain low temperature, that is, at a certain distance from the window of the AC. The distance between the insect and the AC was decreased in steps to decrease the temperature at the location of the firefly (Supplementary Table 8). The minimum temperature obtained this way was 12 ± 1 °C, with a fluctuation of ± 0.2 °C. To decrease the temperature below 12 °C, a cardboard sheet of surface area 26.5 cm × 15 cm covered with aluminium foil was inclined to the horizontal surface (Supplementary Fig. 1). The cool air coming from the AC was reflected off the cardboard and moved towards the micro-centrifuge arrangement holding the sample. This reflected air lowered the temperature at the location of the firefly, and a minimum value of 9 ± 0.5 °C could be obtained. To lower the temperature further, one side of the reflector was closed with another cardboard sheet covered with aluminium foil of the same area, so that the diverging cool air coming from the AC was reflected at this second reflector, thereby lowering the temperature to a minimum value of 8 ± 0.5 °C. The temperature was lowered even further by closing the other side with another reflector. The reflectors on either side of the first reflector were held fixed by mechanical supports so that they did not fall due to the pressure of the air coming from the AC. The three reflectors and the horizontal surface, along with the window of the AC, formed a three-dimensional air-cooled chamber, similar to a triangular prism. Hence most of the cooled air was reflected and circulated inside the chamber, and the temperature dropped down to the minimum value of 6.5 °C. No further decrease in the temperature was possible.
This air-cooling chamber arrangement for lowering the temperature below 12 °C might appear not so robust, but it was quite effective, as the effect of temperature on the light emission from this species, just as from L. praeusta 6, was observed to be instantaneous. Spectra were recorded at each 0.5 °C fall in the temperature. From approximately 8.5 °C, spectra were recorded for each flash, as after producing one flash the fireflies' light organs produced only continuous glows. Different sets of specimens, 30 in each, were used in the steady-state and time-resolved experiments. After the experiments, the specimens were set free.

Curve fitting. To analyze the spectral changes, we carried out Gaussian curve fitting in Origin 8.0. We assumed four peaks, around 540 nm, 558 nm, 598 nm and 630 nm, to reproduce the experimental spectrum. Three parameters, viz. the peak, FWHM and area of each Gaussian component, were adjusted so that their summation spectrum coincided with the experimental spectrum. Most of the fittings gave correlation coefficients above 0.999, except a few in the range of 0.991-0.998. The lower correlation coefficients were due to the relatively low S/N ratio of the spectra recorded at the relatively high and low temperatures.

Data availability

Data supporting Fig. 3b-e are given in the Supplementary Information. The rest of the data supporting the findings of this study are with the first named author and can be made available upon request.
A comparative study of obstetric outcome of placenta previa in scarred versus unscarred uterus at a tertiary hospital, Kathmandu, Nepal

Background: Placenta previa is a life-threatening obstetric condition with several maternal and fetal complications. The objective of this study is to compare the maternal and fetal outcomes of placenta previa in scarred and unscarred uteri.

Methods: A retrospective case-control study was carried out on 85 cases of placenta previa in the department of obstetrics and gynecology, Paropakar Maternity and Women's Hospital (PMWH), Kathmandu, from April 2019 to May 2020, of which 46 had a scarred uterus and 39 an unscarred uterus.

Results: Sixty-one percent of patients were less than 30 years of age, 62% presented at a gestational age of 28 to 37 weeks, and 67% had a parity between 1 and 5. The frequency of placenta previa with a scarred uterus was 54% and with an unscarred uterus 46%. Eighty percent of cases with a scarred uterus had an anterior placenta compared to 33% of cases with an unscarred uterus (p = 0.009). Forty-two percent had grade 4 placenta previa on ultrasonography. Forty-five percent of patients with a scarred uterus had PPH compared to 23% in the unscarred group (p = 0.03). Malpresentation was found in 7 cases in the scarred group and in one case in the unscarred group. Cesarean hysterectomy was performed in 6 cases in the scarred category compared to 2 in the unscarred. Low birth weight was present in 28 cases in the scarred category compared to 15 cases in the unscarred category (p = 0.03).

Conclusions: This study concluded that fetal and maternal outcomes are worse for cases of placenta previa with a scarred uterus than with an unscarred uterus.

Previous cesarean section is a known risk factor for placenta previa. 9,10 The risk of placenta previa with a previous LSCS has been found to range between 3 and 10%. 10 Placenta previa is an obstetric complication that is potentially life-threatening to both the mother and the baby. 11 Maternal complications include antepartum hemorrhage, maternal anaemia, shock, and operative interventions such as caesarean section and hysterectomy, while fetal complications include intrauterine growth restriction, malpresentation, preterm delivery, intrauterine death and stillbirth. 12,13 Diagnosis of placenta previa can be made from the history, examination and investigations such as ultrasonography (transabdominal and transvaginal) and magnetic resonance imaging, or incidentally during an operation. The history may reveal painless bleeding in the late second and early third trimester, and examination of the abdomen reveals a soft, non-tender uterus. 14 On Leopold's maneuvers, fetal malpresentation may be present in 35 percent of cases. Accurate detection of placenta previa helps in proper management and the prevention of mortality and morbidity. 15,16

The objectives of the study are to compare the maternal and fetal outcomes of placenta previa in scarred and unscarred uteri and to determine the frequency of placenta previa in scarred and unscarred uteri.

METHODS

This retrospective case-control study was conducted at Paropakar Maternity and Women's Hospital (PMWH), Kathmandu, Nepal. Cases of placenta previa from April 2019 to May 2020 were studied.

Inclusion criteria

• Placenta previa with a period of gestation (POG) more than 28 weeks.

Exclusion criteria

• Pregnancy before 28 weeks
• APH other than placenta previa
• Multiple pregnancy
• Pregnant women with hypertension
• Pregnant women with gestational diabetes or other comorbid conditions.

A non-probability consecutive sampling technique was adopted for enrolling the patients. All types of placenta previa were included.
A total of 85 cases meeting the inclusion criteria were enrolled. Both booked and unbooked patients were enrolled. Diagnosis of placenta previa was made by transabdominal and transvaginal ultrasound. The cases were grouped into two categories.

Group A: 46 cases of placenta previa with a history of one or more previous caesarean sections or uterine surgery such as myomectomy, uterine rupture or uterine curettage.

Group B: 39 cases of placenta previa with no previous history of caesarean section or any uterine surgery such as curettage or myomectomy.

Statistical analysis

All patients with placenta previa who delivered at PMWH over the study period were enrolled and categorized into the two groups mentioned above. The two groups were compared with regard to maternal age, parity, fetal outcome and maternal morbidity and mortality. Data were collected using a proforma. The identity of the patients and the patients' records were kept confidential. The chi-square test and Student's t-test were used; a p value less than 0.05 was considered significant. Data entry was done using Microsoft Excel, and analysis was done using SPSS 16. Data are shown in the tables and charts below.

RESULTS

The incidence of placenta previa at Paropakar Maternity and Women's Hospital was 0.3% over the study period. There were 85 cases of placenta previa, of which 46 cases (54%) had a scarred uterus and 39 cases (46%) an unscarred uterus (Table 1). In this study there were more cases under the age of 25 in the unscarred category, 17 cases (20%), compared to 13 cases (15%) in the scarred category, whereas over the age of 35 there were more cases in the scarred category, 12 (14%), compared to 1 case (1.1%) in the unscarred category; this was statistically significant with a p value of 0.02 (Figure 1). There were 18 cases (21%) with parity 0 in the unscarred group compared to 9 cases (10%) in the scarred group, which was also statistically significant with a p value of 0.02. Most cases of placenta previa had a parity of 1 to 5, with a total of 57 cases (67%). A gestational age of less than 37 weeks was found in 53 cases (62%) of placenta previa (Table 2). A total of 37 of 46 cases (80%) with a scarred uterus had an anterior placenta, which was statistically significant with a p value of 0.009 (Figure 2). Most of the patients had grade 4 placenta previa on ultrasonography (42%) (Table 3). Forty-five percent of patients with a scarred uterus had PPH compared to 23% of patients with an unscarred uterus, with a p value of 0.03, which was statistically significant (Figure 3). The average blood loss in cases of PPH in the scarred group was 1845 ml compared to 1322 ml in the unscarred group. There were 6 cases in the scarred category with a morbidly adherent placenta compared to 2 cases in the unscarred category. Malpresentation was found in 7 cases in the scarred group compared to 1 case in the unscarred group, with a p value of 0.018, which was statistically significant. Caesarean hysterectomy was performed in 6 cases in the scarred category compared to 2 in the unscarred (Table 4). Preterm birth was found in 31 cases in the scarred category compared to 24 in the unscarred (Table 5). Low birth weight was present in 28 cases in the scarred category compared to 15 cases in the unscarred, with a p value of 0.03, which was statistically significant (Figure 4).
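As a minimal sketch of the chi-square comparisons used in the analysis, the PPH result above (45% of 46 scarred vs 23% of 39 unscarred) can be recast as a 2×2 table in Python; the cell counts are reconstructed from the quoted percentages, and correction=False gives the plain Pearson statistic.

from scipy.stats import chi2_contingency

#              PPH yes, PPH no
table = [[21, 25],    # scarred uterus (45% of 46)
         [9, 30]]     # unscarred uterus (23% of 39)
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p ~ 0.03, matching the reported value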
DISCUSSION

The present study was undertaken to determine the frequency of placenta previa in scarred and unscarred uteri and to compare the fetomaternal outcomes. The frequency of placenta previa with a scarred uterus in this study was 54% and with an unscarred uterus 46%, whereas in the study by Majeed T et al. the frequency of placenta previa in a previously scarred uterus was 67.54% and in a non-scarred uterus 32.45%. 17 In this study, 61% of cases presented before the age of 30 years, compared to 77% of cases reported by Faiz et al.

Malpresentation was found in 7 cases in the scarred group compared to 1 case in the unscarred group, with a p value of 0.018, which was statistically significant. This association may be explained by anterior placentation obstructing the engagement of the head in a scarred uterus. Perinatal mortality did not differ between the two groups. However, LBW was present in 60% of cases with a scarred uterus compared to 38% of cases with an unscarred uterus. This could be explained by placental bleeding leading to hypoxia and intrauterine growth retardation. 14

CONCLUSION

A scarred uterus caused by uterine interventions such as LSCS and MVA is associated with adverse fetomaternal outcomes. Reduction in the rate of these procedures, along with regular ANC visits, early diagnosis by USG and early planning of deliveries, will reduce the complications associated with placenta previa.
The education promise and its disillusion: Access barriers in early childhood institutions

Getting money and early childhood education into focus: the Heckman thesis
The idea that public expenditures for early childhood education and care (ECEC) are investments rather than costs, as they yield noticeable returns, has been pushed prominently by the American economist James Heckman and his colleague Flavio Cunha (Cunha & Heckman, 2007, 2008). As illustrated in the so-called Heckman curve (Figure 1), programs targeted towards the earliest years are supposed to be most promising regarding economic outcomes. This assumption is based on the notion that children's abilities develop in cumulative stages: Skills acquired early in childhood form the basis for later skill development and are therefore related to measurable subsequent socioeconomic effects in a skill- and knowledge-based economy.

Figure 1 source: Heckman, J. (2008). Early childhood education and care. The case for investing in disadvantaged young children. CESifo DICE Report, 6(2), p. 5. Retrieved from https://www.econstor.eu/bitstream/10419/166932/1/ifo-dice-report-v06-y2008-i2-p03-08.pdf. Copyright 2008 by ifo Institut.

Heckman's and Cunha's conclusions are drawn from programs like the Perry Preschool Project, which have shown significant impacts on the educational development of disadvantaged children. From this economic perspective, investing in young children, understood as human capital, is the most efficient educational intervention. In addition, public investments in ECEC are supposed to be efficient and effective if they prioritize high-quality education, especially for children in deprived families. A whole range of studies in different countries and contexts has supported this finding with regard to both sides of the coin: a) the relevance of quality for the educational development of children, especially those growing up in deprived circumstances, and b) the economic benefit of investments in high-quality early education, e.g. sufficient qualified staff (e.g. Burchinal, Kainz, Cai, et al., 2009; Lesemann, 2009; Pfeiffer & Reuß, 2008; Melhuish, Ereky-Stevens, & Petrogiannis, 2015). In the end, Heckman and colleagues have tied (early) education and public expenditure into a quite powerful relation. The thesis of "returns on investments" stimulated the academic as well as the political debate in the 2000s. Consequently, political awareness of the importance of high-quality ECEC rose.

The promise of social investments: the European scene
In Europe, Anthony Giddens and Gosta Esping-Andersen developed an investment strategy within welfare policy that also stresses the importance of activation and the promotion of human capital.
Similar to Heckman's economic approach, their main argument was to put new emphasis on social investments within the conceptual design of welfare states (Giddens, 1998). Child- and family-centered reforms, especially, should help to overcome social inequalities and guarantee economic growth in the long run. In this perspective, the child was conceptualized as a future citizen-worker who is capable of dealing with the challenges of a highly competitive knowledge economy (Esping-Andersen, 2002). Giddens and Esping-Andersen played a key role in defining the education promise in the context of European welfare states. Unlike Heckman, who had drawn his conclusions from US experiments with children from the most disadvantaged families, Esping-Andersen argued for an expansion of ECEC to all children, not only the underprivileged ones. Certainly, this would raise costs, but it could also secure the consent of middle-class families and voters (Esping-Andersen, 2008). The policy discourse around the 2000s incorporated the social investment strategy as a part of welfare state reforms. By that time, European countries were confronted with the critique of insufficient solutions for societal challenges, like demographic change or the rising labor market orientation of women. Politicians like Tony Blair introduced the idea of redesigning the welfare state into the political bodies of the European Union. To put this into practice, a massive expansion of ECEC services was considered necessary. At the same time, it should raise female labor market participation and help meet the risks of demographic change. Several political initiatives and measures were passed to promote such expansion efforts within the EU member states. Among the most important ones were the Barcelona targets from 2002 that implemented an EU benchmark: Member States should remove barriers to women's participation in the labor market and strive to meet the demand for childcare facilities and, in line with national supply requirements, provide childcare to at least 90% of children between the age of three and the compulsory school age and at least 33% of children under the age of three. (European Commission, 2013, p. 4) The Barcelona targets initiated various European ECEC reforms, many of them still ongoing. Just recently, the European Commission added the proposal of a quality framework for ECEC. This framework defines a range of quality goals, like the improvement of staff qualifications, access for disadvantaged children, or the further development of curricula for the early years (European Commission, 2014). However, the promise of investing in ECEC to receive economic benefits was translated into policymaking in many different ways. Due to different welfare state backgrounds and governance structures, some countries (like the Nordic, Baltic, and Balkan regions) were more successful in developing consistent policy measures and focusing on integrated systems for all children before school. Although to differing degrees, the problem of unequal access to high-quality services remains an unsolved issue for all countries.

Theoretical considerations: access barriers to ECEC systems
Regardless of whether you follow Heckman's focus on children from disadvantaged families or prefer Esping-Andersen's universalistic approach: for both strategies, disadvantaged children need access to ECEC, in Heckman's concept as the sole recipients of childcare programs, in Esping-Andersen's approach as a subgroup of all children entitled to ECEC.
This raises the question of access barriers, a question closely linked to issues of money and education. Access to ECEC means that parents, with reasonable effort and at affordable costs, can enroll their children in an ECEC setting that suits the child's development and meets the parents' needs (Friese, Lin, Forry, & Tout, 2017). Availability and affordability, one could say, are the main aspects of access. But the issue seems to be more complex, as Vandenbroeck and Lazzari (2014) argue: "Availability and affordability do not necessarily make provision accessible, as multiple obstacles may exclude children from poor and immigrant families, for example, language barriers, knowledge of bureaucratic procedures, waiting lists, or priorities set by management." (p. 331) Some of the potential barriers are regulated locally and, therefore, vary widely not just across but also within countries (Scholz, Erhard, Hahn, & Harring, 2019). Additionally, subtler and sometimes unconscious aspects, like professionals' attitudes towards children from disadvantaged backgrounds or culturally induced restraint towards families with a migration background, influence access probabilities. According to Vandenbroeck and Lazzari (2014), services need to be supportive and open to negotiating values, beliefs, and educational practices with families. Therefore, accessible ECEC services need to correspond to different demands and socio-cultural diversity, and they need to be embedded in a consistent policy that provides the necessary resources. Ensuring equal access, and thereby reducing access barriers, implies a well-functioning interplay of policy levels, responsible stakeholders, professionals, and families. Against the backdrop of the social investment promise, reducing access barriers should be a top priority on the political agenda and at the core of policy initiatives. However, using the example of Germany, we will show that this strategy is difficult to pursue in practice.

The case of Germany: the disillusion of the education promise
In Germany, two laws, namely the Day Care Development Act (Tagesbetreuungsausbaugesetz) from 2005 and the Childcare Funding Act (Kinderförderungsgesetz) from 2008, made far-reaching efforts to raise the ECEC participation rates of children under the age of three. In addition to the expectation of rising maternal employment, these policies were accompanied by remarkable hope for comprehensive effects in support of children from disadvantaged families (Bundesregierung & Regierungschefs der Länder, 2008). The expansion of services was successful, at least concerning general levels of attendance: in 2002, the ECEC attendance rate for children under the age of three had been 2.8 percent in Western Germany; by 2018 it had already reached 29.4 percent. Even in Eastern Germany, with its long-lasting tradition of institutional childcare dating from socialist times, the attendance rates continued to grow, from 36.7 percent (2002) to 51.5 percent (2018) (Statistisches Bundesamt, 2004). However, to date, the ECEC expansion has not led to a reduction of social disparities. Research shows that especially parents with higher education respond to scientific theses about the importance of early education for later success in life (Jessen, Schmitz, Spieß, & Waights, 2018), while the ECEC attendance rates of young children whose parents have low educational degrees did not increase significantly.
At present, only 15 percent of children under the age of three from these families are enrolled in ECEC, without any apparent increase over the last years (Rauschenbach & Meiner-Teubner, 2019). Given the fact that attendance rates in families with parental high school degrees are three times higher, the winner in the race for educational promotion is clear. It is not the child from low-income, low-educated families. After 15 years of German ECEC expansion, especially for children under the age of three, the results are quite sobering. We may have lots of evidence that children from disadvantaged families benefit from early daycare more than others (Smidt & Schmidt, 2014). But how can society use this potential if those children who would benefit the most hardly attend daycare at all? The Heckman curve reaches its limits when services aimed at the early years are mainly used by middle-class parents who have already optimized care and education for their children. The education promise cannot be realized if there are hardly any disadvantaged children in daycare.

Linking universalistic and targeted policies: a strategy for efficient and accessible ECEC
The questions arising from these observations therefore are: How can ECEC systems reach out to children from disadvantaged families? Is there a policy approach that uses state funds in a way more aligned with this goal? Past experience has shown that neither Heckman's targeted approach nor Esping-Andersen's universalist concept can solve the outlined problems alone. In a strictly targeted approach, services addressed to the poor often remain poor services. Lean state funding, in this case, does not allow for intensive support of children from disadvantaged families. On the other hand, a completely universalistic approach allows middle- and upper-class families to take the lead in the struggle for better ECEC services for their children. In our opinion, a combination of both approaches seems reasonable. This progressive universalism combines universal and targeted approaches with the intention "to realize high quality in provisions for all families with children, including poor and migrant families" (Geinger, van Haute, Roets, & Vandenbroeck, 2015, pp. 9-10). To realize that, all responsible governance actors at the federal, state, and local levels are jointly challenged to develop coherent policies. Initial approaches can be observed in Germany. Based on national entitlements to childcare for children between 1 and 6 years of age, cities like Mannheim, Munich, or Bremen have begun to finance their local daycare systems in ways that target regional disparities. As a consequence, daycare facilities in high-poverty neighborhoods are better equipped financially. To improve quality, they get extra funds to hire more, and more highly qualified, staff. Moreover, some cities directly address parents in these communities by offering them daycare places, hoping that a combination of better ECEC infrastructure and lower access barriers increases the participation of disadvantaged children. Progressive universalism requires investments across welfare sectors as well as the political responsibility to underline the importance of education as a public, and not just an economic, value. Due to the highly decentralized governance system in Germany, implementing such a sector-crossing strategy is challenging. Nevertheless, it is the state's responsibility, and no other stakeholder's, to invest in social infrastructure.
The current national law on quality development ('Gute-Kita-Gesetz') takes a first step: it defines quality goals and provides 5 billion Euros (BMFSFJ, 2019). But to increase the efficiency of this public expenditure, progressive universalism needs to be pushed further into political awareness and subsequent policymaking.
2020-07-10T02:25:18.937Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "4c722f9548ae8eedbf809e38b477d882a7d9f711", "oa_license": "CCBYNC", "oa_url": "https://www.econstor.eu/bitstream/10419/166932/1/ifo-dice-report-v06-y2008-i2-p03-08.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4c722f9548ae8eedbf809e38b477d882a7d9f711", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
104441632
pes2o/s2orc
v3-fos-license
Highly Active and Durable CuxAu(1–x) Ultrathin-Film Catalysts for Nitrate Electroreduction Synthesized by Surface-Limited Redox Replacement

CuxAu(1–x) bimetallic ultrathin-film catalysts for nitrate electroreduction have been synthesized using electrochemical atomic layer deposition by surface-limited redox replacement of an underpotentially deposited Pb layer. Controlled by the ratio of [Cu2+] ions and the [AuCl4–] complex in the deposition solution, the alloy film composition (atomic fraction, x in the range of 0.5–1) has been determined by X-ray photoelectron spectroscopy and indirectly estimated by anodic stripping voltammetry. The catalytic activity and durability of CuxAu(1–x) thin films, a Cu thin film, and bulk Cu have been studied by one- and multiple-cycle voltammetry. The synthesized CuxAu(1–x) thin films feature up to two times higher nitrate electroreduction activity in acidic solution compared to their bulk and thin-film Cu counterparts. The highest activity has been measured with a Cu0.70Au0.30 catalyst. Durability tests have demonstrated that Cu thin films undergo rapid deactivation, losing 65% of their peak activity over 92 cycles, whereas Cu0.70Au0.30 catalysts lose only 45% of their top performance. The significantly better durability of the alloy films can be attributed to effective resistance to poisoning and/or hindered dissolution of the Cu active centers. It has also been found that both CuxAu(1–x) and pure Cu thin films show the best electroreduction activity at the lowest pH.

INTRODUCTION
In the last few decades, nitrate pollution has gradually become a severe environmental problem.1 The undesired contamination with nitrates is largely a byproduct of fertilizers and of nuclear and animal waste.2 A high concentration of NO3− ions in drinking water poses a health hazard to human beings. It has been shown that these hazards could result in methaemoglobinaemia (blue-baby syndrome) or even, potentially, cancer.3,4 The global trend toward environmental compatibility has led to increasing demand for removing nitrates using environmentally friendly approaches that meet energy conservation standards.5 Several methods have been developed to collect or eliminate NO3− ions from aqueous media, including physicochemical means,6,7 biological denitrification,8 and electrochemical reduction approaches.9−11 Among these, reverse osmosis and ion exchange are the two major commercial ways to remove nitrate from drinking water.12 However, the post-treatment sludge generation and high membrane costs associated with these approaches significantly restrict the expansion of their application.5 Because of its controllable selectivity, environmental compatibility, and relatively low cost, the electroreduction of nitrate on a metallic or bimetallic catalyst has been considered the most promising technique for denitrification.13,14 An additional advantage of this approach is that, through application of different reduction routes, the NO3− ions can be not only eliminated but also converted into a variety of useful chemicals, such as ammonia and hydroxylamine. Electroreduction of nitrate in acidic solution has been investigated by many researchers. Dima et al.15 have reported comprehensively on the nitrate electroreduction activity of coinage metals and transition metals in acidic solution. It is concluded that Cu, known to produce ammonia as the main reduction product, is the most active catalyst among the coinage metals.15
Studies detailing the mechanism of nitrate electroreduction on poly- and single-crystalline Cu electrodes can also be found in the literature.16−18 Overall, an eight-electron reduction route with a standard reduction potential E0 = 0.80 V versus H+/H2 (normal hydrogen electrode, NHE), as presented below, is believed to be the dominating nitrate electroreduction process on Cu:

NO3− + 10H+ + 8e− → NH4+ + 3H2O (1)

In more detail, the mechanistic study identifies the above reduction process as a multiple-step process.15 As a result of the different steps of this process, a variety of intermediates such as NO and NO2− have been reported.15 Research on single-crystal faces of Cu with different orientations also suggested slight differences in the reduction mechanism and catalytic performance.19 Overall, the main drawbacks reported for using Cu as a catalyst for nitrate electroreduction are associated with its oxidative dissolution20 and surface poisoning/passivation, leading to catalyst loss and deactivation, respectively.15,18 Among the proposed reasons for catalyst deactivation, the spontaneous oxidization of Cu17 and the competitive adsorption of ions other than NO3− (mostly H+)15−17,21,22 are deemed key factors in the performance deterioration of monometallic Cu. To address these drawbacks, many research groups have synthesized bimetallic catalysts to boost the catalyst's activity and enhance its durability. Vorlop and Tacke23 first synthesized bimetallic catalysts consisting of both a noble metal and a promoter metal. The mechanism and performance of bimetallic catalysts like Pt−Cu and Pd−Cu have been studied by many groups.24−29 According to these studies, the main function of the noble metal is to stabilize the promoter metal, steer the overall catalytic selectivity, and provide sites for H adsorption.25 The general mechanism for nitrate electroreduction at a bimetallic catalyst in an acidic environment has been summarized as a sequence of surface reactions:30 H+ ions tend to discharge and adsorb preferentially on the noble-metal surface, and the adsorbed H atoms then engage in facilitating the nitrate reduction rather than poisoning the promoter metal. Generally, it is believed that the adsorbed H atoms can transfer to the O moieties of NO3− ions at the catalyst surface to form OH−, thus facilitating the entire reduction process.22 Although the impact of anions other than NO3− on catalyst activity has also been reported,17 most of the mechanistic studies suggest that optimal catalysts for nitrate electroreduction are bimetallic structures consisting of two different types of active sites: promoter-metal sites responsible for NO3− adsorption followed by reduction, and noble-metal sites providing for H adsorption, promoter stabilization, and product control. Several synthetic methods for mono- or bimetallic catalysts have been reported in the literature. Some groups employ electrodeposition methods,33−35 which lead to burying a large amount of noble metal under the active surface. Other groups synthesize nanoparticulate mono- or bimetallic catalysts.36,37 Most tests of these catalysts focus on the impact of intermetallic interactions on the nitrate electroreduction activity.38 The results suggest that the activity can be enhanced by improving the mixing homogeneity between the noble and promoter metals.38 However, the nanoparticle preparation method is often time-consuming and complex, which generally limits its application.
In this context, electrochemical atomic layer deposition (E-ALD) of ultrathin bimetallic films appears to be the optimal method because of its ability to maximize the utilization of the noble metal under strict deposition control, realizing continuous films as thin as a few monolayers (MLs).39 E-ALD utilizing a surface-limited redox replacement (SLRR) reaction has also been proven to provide the best homogeneity among all electrochemical deposition approaches.39,40 As recently summarized in a review article, our group has developed a one-cell approach for realizing the E-ALD by SLRR protocol, thus making it convenient for the synthesis of low-Pt-loading catalysts.39,41,42 Using that approach and replacing underpotentially deposited (UPD) Pb, Cu, and H layers, we successfully synthesized Pd,43 Pt,42,44,45 and Pt−Cu ultrathin films on flat and nanoporous gold surfaces.46,47 The main objective of this work is to investigate the functionality of gold as the noble-metal constituent in a bimetallic catalyst for nitrate electroreduction. The emphasis is on the evaluation of gold's ability to (i) stabilize copper, (ii) minimize copper poisoning, and (iii) facilitate the nitrate reduction reaction. In this work, CuxAu(1−x) thin-film catalysts were synthesized via the SLRR-based method, and chronopotentiometry was used to monitor the growth process. After the deposition, Pb UPD cyclic voltammetry (CV) was used to assess the surface roughness of the as-grown catalysts. Anodic stripping voltammetry (ASV) and X-ray photoelectron spectroscopy (XPS) were employed to confirm the atomic ratio (elemental content) of the prepared catalysts. The activity and durability of the Cu−Au catalysts were tested by cyclic voltammetry and chronoamperometry. The catalytic behavior of Cu−Au films with different compositions is also discussed in this paper. To assess the influence of the free H+ ion concentration, the pH was varied from 1.62 to 5.50 in the testing of the Cu−Au bimetallic and plain Cu catalysts. The behavior of bulk copper and Cu−Au in solutions of different pH was also studied.

RESULTS AND DISCUSSION
2.1. Cu−Au and Cu Ultrathin-Film Synthesis through SLRR on an Au Substrate. Polycrystalline Au electrodes were used as substrates for the CuxAu(1−x) and Cu ultrathin-film deposition. Pb UPD CV, known to produce signature-type curves for the different crystallographic orientations of Au, was employed to characterize the surface of the Au substrate before each experiment, as both the ultrathin-film growth and the nitrate reduction have shown dependence on the substrate crystallographic orientation.19 Figure 1 shows a representative Pb UPD curve on the Au(poly) electrodes used as substrates in this study. Reference to such curves for comparison can be found in other papers of our group as well.43,48 The distinct presence of a sharp and narrow split cathodic peak at 0.2 V versus Pb/Pb2+ suggests a dominating (111) orientation of the Au polycrystalline electrodes. There are some additional contributions of (100) and (110) shown by other peaks. The overall shape of the CV curve, in terms of the width and height of the peaks, ascertains the suitability of the Au electrodes as substrates for Cu−Au and Cu deposition. The growth of the Cu and Cu−Au bimetallic ultrathin films was conducted by an SLRR approach employing Pb UPD as a sacrificial layer, realized in a one-cell configuration.44,49
Thereby, for the Cu film, Cu2+ ions spontaneously replace the predeposited Pb UPD layer, whereas for the Cu−Au ultrathin film, the replacement is powered by both Cu2+ ions and the [AuCl4]− complex. To avoid further displacement of Cu by [AuCl4]− in the course of the alloy film deposition, it is essential to strictly control the cut-off potential (C-OP). Figure 2 shows a representative potential transient for a set of 15 "building block" SLRR cycles performed on Au(poly) electrodes to deposit a Cu−Au ultrathin film. The growth procedure is initiated from the open-circuit potential (OCP) of the substrate, followed by the application of a negative pulse to a potential of −0.850 V (−0.05 V vs Pb/Pb2+) for 1 s. Because of the substantial concentration difference between the Cu2+ ions, the AuCl4− complex, and the Pb2+ ions, only negligible amounts of Cu and Au are deposited during that 1 s of Pb UPD layer formation.39 Then, after the release of the potential control, the Cu2+ ions and AuCl4− replace the Pb atoms, causing a potential excursion back to OCP. The film thickness can be controlled by the number of administered SLRR events. In Figure 2, the OCP transients depict a replacement rate that slows down from the first to the third SLRR cycle, because of the substrate change (from Au to Cu), and then gradually levels off with an increase of the Cu−Au film thickness, suggesting dominant activation control of the exchange reaction.44,49 When it comes to the SLRR co-deposition of a Cu−Au bimetallic ultrathin film, it is very important to determine the cut-off potential at which the next Pb UPD layer formation needs to be applied after the completion of the previous SLRR cycle. The C-OP in the case of Au−Cu deposition must be negative of the OCP, so that any potential displacement of already deposited Cu atoms by AuCl4− species is strictly eliminated. A similar strategy was employed in another work of our group, whereby PtxCu1−x alloy thin films were deposited by SLRR in a one-cell setup.47 In the general case, when an alloy of Cu with a more-noble metal (Pt, Au) is deposited, the cut-off potential depends on the relative concentrations of the Cu2+ ions and the more-noble-metal complexes (like [AuCl4]−) in the solution. More specifically, the cut-off potentials at Cu/Au solution ratios of 4:1, 2:1, and 1:1 in the present work are set at −0.45, −0.41, and −0.35 V, respectively. Figure 3 shows the replacement process at a [Cu2+]/[AuCl4−] solution ratio of 1:1. In the low-sloped part of the curve between −0.4 and −0.2 V, where likely most of the redox replacement process takes place, one can clearly see regions with two different slopes. The first range of the curve, featuring a steeper slope, is most likely associated with the replacement of Pb by both Cu2+ and AuCl4−. The following second range features a slower potential change, likely associated with a process with impeded kinetics. Most likely, that process is the one mentioned above, in which already deposited Cu atoms are replaced by [AuCl4]− complex species. Should the latter process be enabled, the resultant deposit would be substantially richer in Au than the targeted Au/Cu atomic fraction ratio of the deposited alloy. Hence, the C-OP must be chosen negative of the potential indicated by the arrow in Figure 3.
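For illustration, the deposition sequence described in this section lends itself to simple automation. The sketch below is a schematic outline only, written against a hypothetical potentiostat driver rather than any vendor API; it captures the one-cell SLRR logic of a 1 s Pb UPD pulse at −0.850 V followed by an OCP relaxation that is monitored until the composition-dependent cut-off potential is reached.

```python
import time

# Cut-off potentials (V) used in this work for each [Cu2+]/[AuCl4-] solution ratio.
CUTOFF_V = {"4:1": -0.45, "2:1": -0.41, "1:1": -0.35}

def run_slrr_growth(potentiostat, solution_ratio, n_cycles=15,
                    pulse_v=-0.850, pulse_s=1.0, poll_s=0.05):
    """Schematic one-cell SLRR sequence. `potentiostat` stands for a hypothetical
    instrument driver exposing apply_potential(), release_potential(), read_ocp()."""
    c_op = CUTOFF_V[solution_ratio]
    for _ in range(n_cycles):
        # 1) Form the sacrificial Pb UPD layer with a short negative pulse.
        potentiostat.apply_potential(pulse_v)   # -0.850 V, i.e., -0.05 V vs Pb/Pb2+
        time.sleep(pulse_s)
        # 2) Release potential control; Cu2+ and AuCl4- now replace Pb at OCP.
        potentiostat.release_potential()
        # 3) Wait for the OCP to climb back up to the cut-off potential; stopping
        #    there prevents AuCl4- from displacing the freshly deposited Cu.
        while potentiostat.read_ocp() < c_op:
            time.sleep(poll_s)
```

Choosing the cut-off slightly negative of the arrested OCP, as argued above, is what keeps the alloy composition close to the targeted solution ratio.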
2.2. Composition Analysis of the Deposited Films by ASV. The composition of the prepared plain Cu and Cu−Au bimetallic thin films could be obtained by ASV. Although the stripping involves only Cu, the analysis of the stripping experiment, taking into account also the overall deposition charge, allows for a quantitative analysis of the amounts of Cu and Au deposited from the different solutions with molar ratios of [Cu2+]/[AuCl4−] = 1:1, 2:1, and 4:1. Figure 4 shows a plot of the stripping curves obtained by ASV for ultrathin Cu−Au films of different atomic compositions. On the basis of the Cu−Au phase diagram,50,51 it is expected that at room temperature all intended compositions would eventually form mixtures of three thermodynamically stable phases that can coexist, namely, Cu3Au, CuAu, and CuAu3. Owing to their drastically different Au contents, these phases dealloy at different critical potentials, as seen in earlier work of our group where Cu and Au were co-electrodeposited.46 Similarly to Cu−Au bulk alloys52 and Cu−Au thin films,46 the stripping curves of the CuxAu(1−x) ultrathin films synthesized in this work exhibit multistage dealloying. In Figure 4, the dealloying curves of the different alloy compositions feature a peak in the potential range 0.18−0.35 V versus the Cu/Cu2+ pseudoreference electrode (PRE), which corresponds to nonbonded or low-coordination Cu on surface sites. The stripping critical potential for nonbonded Cu appears far more positive than that of pure Cu metal, featuring a positive excursion of 0.2 V. The second peak is seen in the range 0.60−0.75 V. It is believed that this peak is associated with the dissolution of Cu3Au, based on previous results determining the critical potential of the Cu3Au bulk alloy.52 The third peak, which appears in the range 0.80−1.00 V, is associated with the stripping of CuAu and CuAu3. Each of the alloy ultrathin films has different contents of the single-phase Cu−Au alloys, based on the relative concentrations of Cu and Au. Consequently, alloys enriched in Cu, like those deposited at a [Cu2+]/[AuCl4−] solution ratio of 4:1, feature more Cu3Au, which is related to the second peak in the range 0.60−0.75 V. The alloys with more Au, such as Cu/Au = 2:1, contain more of the CuAu and CuAu3 phases, which show up in the third peak (0.80−1.00 V). Further analysis of the ASV curves (Figure 4) enables an estimate of the overall Cu content in the SLRR co-deposited Au−Cu alloys, which in turn can be used to further understand the relationship between the [Cu2+]/[AuCl4−] solution ratio and the composition of the accordingly deposited alloy. Table 1 summarizes the total dealloying charge, as well as the dealloying charges in the different potential ranges, determined by ASV for the Cu−Au ultrathin films with different compositions. All reported charges have been corrected with the background charge measured under identical conditions on a pure Au electrode. A closer look at the data in Table 1 suggests that with the increase of the Cu atomic fraction in the deposited alloy, more of the nonbonded or low-coordination alloyed Cu atoms take part in the dissolution process, resulting in most of the Cu being dissolved in the most negative potential range. Compared with our previous results on the dealloying of Cu−Au thin films synthesized by bulk co-deposition,46 the use of SLRR co-deposition in this work suggests the presence of more alloyed Cu, which upon dealloying (Figure 4 and Table 1) results in more charge in the second and third anodic potential ranges.
This phenomenon is possibly due to the enhanced homogeneity of the Cu−Au alloy layer deposited by SLRR through layer-by-layer deposition. A more detailed consideration and further analysis of the data presented in Table 1, along with accounting for the number of administered SLRR cycles and the charge density of one ML of Pb, which is the same on both Cu and Au polycrystalline electrodes,53 can be used to estimate quantitatively the Cu/Au atomic fraction ratio in the SLRR-deposited alloys. More specifically, such an estimate can be made knowing that 15 SLRR cycles will produce, on a one-at-a-time basis, a total of 15 MLs of Pb UPD with a total charge of 4500 μC cm−2 (300 μC cm−2 for 1 ML of Pb on Au(poly)).48,54 The Pb is then replaced at OCP by Cu2+ and AuCl4− at an assumed efficiency of 94%, as suggested by earlier experience with the growth of pure Cu by SLRR.55 This means that the total deposition charge for the growth of our alloy films in the 15-cycle run will be 4230 μC cm−2. For [Cu2+]/[AuCl4−] = 1:1, one measures a total Cu stripping charge of 1994 μC cm−2 (see Table 1), which is then corrected with the background charge (620 μC cm−2). As a result of this correction, presented in the first row of the respective cell in Table 1, one obtains a net Cu stripping charge of 1374 μC cm−2 (see the second row of the cell). Finally, assuming that the difference between the corrected total deposition charge (4230 μC cm−2) and the Cu stripping charge (1374 μC cm−2) is associated with the Au fraction of the alloy, and taking into account the different oxidation states of Cu and Au in the solution (2 and 3, respectively), one calculates the atomic fraction ratio Cu/Au in the analyzed alloy to be 1:1.4. More results summarizing the corrected total stripping charge and the partial charges in the different potential ranges for the Cu−Au alloys deposited from solutions with 2:1 and 4:1 Cu2+/[AuCl4]− ratios can also be found in Table 1. On the basis of the respective data and the above-described algorithm, the table presents the calculated Cu/Au atomic fraction ratios of the respective alloys as well.
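The charge-balance algorithm just described is easy to reproduce numerically. The following sketch is our own illustration of the calculation, using only the figures quoted above; it recovers the 1:1.4 Cu/Au atomic ratio reported for the 1:1 solution and applies equally to the 2:1 and 4:1 depositions with their stripping charges from Table 1.

```python
def cu_au_atomic_ratio(total_cu_stripping_q, background_q,
                       n_cycles=15, q_pb_ml=300.0, efficiency=0.94):
    """Estimate the Cu/Au atomic ratio of an SLRR-grown film from ASV data.
    All charges are in uC cm^-2; Cu strips as Cu2+ (2 e-) and Au is deposited
    from AuCl4- (3 e-), so atom counts scale as charge / valence."""
    total_deposition_q = n_cycles * q_pb_ml * efficiency  # 15 * 300 * 0.94 = 4230
    net_cu_q = total_cu_stripping_q - background_q        # background-corrected Cu charge
    au_q = total_deposition_q - net_cu_q                  # remainder attributed to Au
    return (net_cu_q / 2.0) / (au_q / 3.0)

# Figures quoted above for the 1:1 [Cu2+]/[AuCl4-] solution:
r = cu_au_atomic_ratio(total_cu_stripping_q=1994.0, background_q=620.0)
print(f"Cu:Au = 1:{1.0 / r:.1f}")  # prints Cu:Au = 1:1.4
```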
2.3. XPS Analysis of the Film Composition. The composition of the deposited films was also characterized independently by XPS. The Cu and Au contents in the alloy were determined with errors arising from the inhomogeneous distribution of the metals on the surface and from inaccurate relative sensitivity factors due to the oxidization of the metals on the surface, respectively. Compared with the electrochemical dealloying results, where the charges were determined with an error of about 5%, the XPS results are more accurate, with a percentage error of up to 1.00%. As seen in Table 2, the results obtained by both methods are reasonably close (compare directly the alloys deposited in the solution with the 1:1 [Cu2+]/[AuCl4−] ratio), and both suggest the presence of a higher atomic fraction of Au in comparison with the one that follows directly from the Cu/Au solution ratio. This finding persists as a trend in all other alloy compositions subjected to both characterization approaches, even though a direct comparison cannot be made because of the differences in the Cu/Au solution ratios. Therefore, it can be concluded that the Cu atomic fraction in the alloy is always lower than the one following from the specific solution composition. The observed common trend in the measured and estimated atomic fractions can be explained by the strong likelihood of a secondary galvanic displacement of already deposited Cu atoms by the substantially more noble Au ones. Such secondary displacement is readily enabled by the presence of the AuCl4− complex at potentials that are only slightly positive, so that the cut-off potential is inevitably chosen with considerable ambiguity. Overall, regardless of the encountered discrepancy between the solution and atomic fraction ratios of Cu/Au, the analyses of the ASV and XPS results indicate that the atomic composition is generally controllable by tuning the [Cu2+]/[AuCl4−] ratio in the growth solution.

2.4. Nitrate Electroreduction Activity Tests. In this work, bulk Cu, bulk Au, a Cu ultrathin film, and a Cu0.53Au0.47 ultrathin alloy film deposited in a solution with a Cu/Au ratio of 2:1 (both thin films produced by 15 SLRR cycles) were chosen for the assessment of catalytic activity in nitrate electroreduction tests in 10 mM HClO4 solutions with and without 10 mM HNO3. Figure 6 shows the nitrate electroreduction activity of the selected catalysts. The violet and green curves indicate that the ultrathin-film Cu shows a reduction activity similar to that of the bulk copper catalyst. Thus, the cathodic current exhibits a first peak at −0.75 V versus the mercury−mercurous sulfate electrode (MSE), ascribed elsewhere to the reduction of NO3− to NO2− for bulk copper and copper ultrathin films.16−18 Going negatively along the CV curve, one sees another peak at −1.00 V versus MSE. That peak is generally attributed to a further reduction of NO2− to ammonia or other products.15 The onset of the hydrogen evolution reaction (HER) occurs at −1.40 V. The red dotted (alloy) and green dotted (pure Cu) curves, both obtained without NO3− ions in the solution, are provided for comparison, with the catalysts basically exhibiting only HER activity. The orange curve shows no activity of pure Au toward nitrate electroreduction in a solution with an abundance of NO3− ions. This result is consistent with the pure Au activity tests reported in the literature.15,56,57 The reduction behavior of the Cu0.53Au0.47 ultrathin film in the presence of NO3− ions appears to be a one-step reduction process. More specifically, unlike for pure Cu and Cu ultrathin films,17,19,58 the reduction curve of the Cu−Au bimetallic catalyst features only one peak, at −1.05 V versus MSE. This peak is also about 50% higher in the same potential range than that of pure Cu. Negative of that range, the HER starts taking place on the alloy surface at the same potential as on pure Cu. So far, many researchers have shown that polycrystalline Au catalysts exhibit no activity in nitrate reduction scenarios.15,56,57 However, density functional theory (DFT) calculations have shown that H adsorption, albeit much weaker than on Pt and Pd,59 still takes place on Au. The Au atoms could then be considered optimal sites for H adsorption and further reaction, instead of centers that only hold H atoms strongly on the surface, like Pt or Pd. At the same time, Cu is among the best catalysts for nitrate electroreduction,25,58 although it has the disadvantages of undergoing spontaneous dissolution in acidic solution20 and of being prone to passivation over the reduction testing cycles.17,19 Therefore, owing to the dramatically different affinities of Au and Cu for nitrate reductive conversions, the Cu−Au bimetallic catalyst appears to support a reduction mechanism different from that of pure Cu. This alloy combination exhibits better activity than pure Cu and Cu ultrathin films in the potential range between −0.90 and −1.20 V (second peak position).
Possible reasons for this enhanced activity are as follows: (i) In the bimetallic catalysts, the Cu atoms can be perceived as sites for nitrate adsorption, whereas the Au atoms can act as sites for H+ adsorption in the same potential range. This is in agreement with the results in Figure 6, whereby the pure Au polarization curve (solid orange curve) suggests ongoing HER activity in the potential range of interest. In turn, this HER activity can provide a sufficient amount of adsorbed H as a powerful reducing agent for the nitrate reduction and for the reactivation of passivated Cu sites, and thus for the promotion of the entire reaction (eq 1). (ii) The lattice structure and electronic structure of a Cu−Au bimetallic catalyst differ from those of pure Cu, as directly manifested by a larger lattice parameter and a shift of the d-band center.60,61 Given that, the binding energy between the NO3− ions and the Cu−Au catalyst surface may become optimal and thus beneficial for the promotion of the reduction reaction. On the other hand, according to previous calculations, nitrate establishes a bidentate adsorption configuration on both Au and Cu,16,62 in which two oxygen atoms connect to the surface and the third oxygen atom is exposed to the solution; hydronium adsorbed on sites surrounded by NO3− ions could form a bond with the oxygen of a neighboring NO3− and thereby facilitate the nitrate reduction. As there is no comprehensive study on the application of the Cu−Au catalyst system to the nitrate electroreduction process, further reduction-product analysis and DFT calculations will be required in the future for a better understanding of this matter.

2.5. Effect of the Cu/Au Atomic Ratio on the Nitrate Electroreduction Activity. In the synthetic part of this work, routines were developed for the SLRR deposition of bimetallic Cu−Au catalysts with Cu atomic fractions from 0.50 to 0.90. Figure 7 summarizes the results on the influence of an increasing Cu amount on the activity of the Cu−Au alloy catalysts. It is clearly seen that all compositions containing between 0.50 and 0.80 Cu fractions exhibit similar catalytic activity, manifesting itself through a curve with one very dominant peak, as already presented in Figure 6 for the Cu0.53Au0.47 catalyst. On a relative basis, the maximum activity is obtained with the Cu0.70Au0.30 alloy. Higher or lower Cu content leads to relatively lower activity, thus making the Cu fraction/activity dependence look like a volcano-shaped relationship. With the increase of the Cu fraction in the Cu−Au catalysts, the activity decreases to a value the same as seen with the pure Cu catalyst. It is noteworthy that the transition to two-peak curves occurs when the atomic fraction of Cu is equal to or higher than 0.90. For Cu atomic fractions less than 0.50, the nitrate electroreduction activity drastically decreases to that of pure Au, which is catalytically inactive. A possible mechanism for this behavior is associated with the change of the atomic fraction ratio on the catalyst surface. When Cu is more abundant on the surface (more than a 0.90 atomic fraction), the Au sites are undoubtedly less accessible. In that case, the nitrate electroreduction activity curve should be practically identical to that of pure Cu.
In contrast, if the Cu atomic fraction in Cu−Au is less than 0.50, the Cu sites for nitrate adsorption will be limited on the catalyst surface, as the dominating phase or element on the catalyst surface could be Au3Cu or Au, respectively; these two catalysts mostly favor the HER, which is readily enabled in this potential region. In other words, due to the lack of adsorbed NO3− ions, the adsorbed H atoms mostly combine with each other to generate H2 gas. As a result, in that case, the catalysts become practically inactive in the nitrate electroreduction process.

2.6. Catalyst Durability Tests. In this section, the nitrate reduction durabilities of pure Cu and Cu−Au were tested in a 10 mM HClO4 + 10 mM HNO3 solution for 100 cycles. Figure 8 shows the CV diagram obtained from pure Cu immersed in the testing solution, whereby the reduction activity decreases substantially with the number of cycles. Figure 9 shows the CV diagram obtained from the Cu−Au bimetallic catalyst in the same solution. It is clearly seen that, unlike in the previous case, the catalyst retains a high reduction activity with the number of cycles. A quantitative comparison of the results presented in Figures 8 and 9 indicates that after 92 cycles the activity of the Cu−Au catalyst decreases to 55% of its peak value, versus a decrease to 35% for the pure Cu catalyst, at potentials between −0.9 and −1.1 V. This comparison clearly shows that the Cu−Au catalyst exhibits significantly better durability in the nitrate electroreduction process. Previous research shows that pure Cu catalysts experience spontaneous dissolution and hydrogen poisoning in acidic solution.5,8,18 The Cu0.70Au0.30 bimetallic catalysts show better durability in nitrate electroreduction compared with monometallic Cu. One explanation for the better durability of the Cu0.70Au0.30 catalyst is the use of the noble metal Au. As largely discussed earlier in this paper, Au stabilizes the Cu by forming a single-phase alloy or intermetallic compounds, both being substantially more stable in acidic solution. In addition, some researchers ascribe the passivation of pure Cu catalysts to adsorbed H atoms that block the Cu sites. It is well known that Au is a weak adsorption substrate for H atoms. Thus, it is deemed likely that the addition of Au also makes the Cu active sites less prone to H adsorption.
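A retained-activity metric of the kind used in the comparison above can be computed directly from the cycling data. The sketch below is illustrative post-processing only, assuming the CV data are already available as arrays; it evaluates the mean cathodic current in the −0.9 to −1.1 V window for each cycle and normalizes it to the best cycle, which is how the "percent of top performance" figures quoted in this section can be obtained.

```python
import numpy as np

def retained_activity(potentials, currents_per_cycle, v_lo=-1.1, v_hi=-0.9):
    """potentials: 1D array of CV potentials (V vs MSE), shared by all cycles.
    currents_per_cycle: 2D array (n_cycles x n_points) of cathodic currents.
    Returns the retained activity per cycle, as a fraction of the best cycle,
    evaluated inside the chosen potential window."""
    window = (potentials >= v_lo) & (potentials <= v_hi)
    # Mean magnitude of the reduction current inside the window, per cycle.
    activity = np.abs(currents_per_cycle[:, window]).mean(axis=1)
    return activity / activity.max()

# Usage sketch, with measured data loaded into `v` and `i_cycles`:
# retained = retained_activity(v, i_cycles)
# print(f"activity after cycle 92: {100 * retained[91]:.0f}% of peak")
```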
2.7. Effect of pH on the Nitrate Electroreduction Activity. In this section, the pH dependence of the nitrate electroreduction was studied on the pure Cu and Cu0.70Au0.30 bimetallic catalysts. Figure 10 displays the CV diagrams of the Cu and Cu0.70Au0.30 behavior at pH 5.50 and 3.00. The comparison of all curves shows that the nitrate electroreduction is very pH sensitive on both catalysts; in other words, H+ plays an important role in promoting the nitrate electroreduction reaction. If one compares the green long-dashed (pH 5.50) and blue dashed (pH 3.00) curves in Figure 10 with the blue dotted line (pH 2.00) in Figure 7, which features a similar curve shape but a much higher current density, the observed trend clearly suggests that the highest nitrate electroreduction activity on the Cu0.70Au0.30 alloy is registered at the highest concentration of H+, i.e., at the lowest pH. It is known that NO3− ions need to be adsorbed on the catalyst surface before their reduction. At higher pH, OH− can occupy the Cu active sites, resulting in the inhibition of nitrate adsorption and the oxidation of the Cu sites.5,63 Earlier studies on related phenomenology clearly suggest the presence of adsorbed oxygen species hundreds of millivolts negative of the Cu/Cu2O reduction potential.64 On the other hand, according to reaction 1 and the role of adsorbed H discussed previously, H+/adsorbed H can facilitate the nitrate reduction process by eliminating the oxidation of the Cu active sites.16,19 The anodic peak of bulk Cu at pH 5.00 could be attributed to the desorption of NO3− or OH−, which means that the respective lack of H+ significantly slows down the nitrate electroreduction reaction. Overall, to increase the nitrate electroreduction activity in the process of nitrate removal, it is essential to lower the pH of the polluted ground water, which brings the additional challenge of returning the pH to a neutral value afterward.5

CONCLUSIONS
Bimetallic Cu−Au ultrathin-film catalysts were synthesized for the first time via the E-ALD method utilizing SLRR of a Pb UPD layer. The XPS and anodic stripping results clearly demonstrated that the SLRR method is advantageous for the deposition of homogeneous and continuous films with controllable composition and thickness. The nitrate electroreduction activities of the Cu−Au ultrathin film, a Cu ultrathin film, bulk Cu, and bulk Au were also comparatively studied. Our results confirmed that pure Au catalysts are inactive in the nitrate electroreduction process. In contrast, the Cu−Au bimetallic ultrathin films were found to be more active than pure Cu or Cu ultrathin films, especially at potentials more negative than −0.85 V versus MSE. The reason for the enhanced activity can be ascribed to Hads atoms accommodated selectively on the Au sites at around −0.95 V versus MSE. These atoms are believed to act as reducing agents, thus facilitating the entire reaction. Further work focused on the identification of the products and intermediates of the reaction on the Cu−Au ultrathin-film catalysts is planned for the future. Also, DFT calculations would be a necessary addition for further explanation of the mechanism of this phenomenon. The durability of the studied bimetallic and monometallic catalysts was also tested for up to 92 potential cycles. The Cu0.70Au0.30 ultrathin film showed better durability than monometallic Cu. Hindered spontaneous Cu dissolution and minimized hydrogen poisoning, both achieved by adding Au as a more-noble metal to Cu, were identified as possible reasons for the better durability of the alloy/bimetallic catalyst. Finally, studies of the impact of H+ ions on nitrate electroreduction suggested the highest activity in the lowest-pH solution for both the Cu0.70Au0.30 and pure Cu film catalysts.

EXPERIMENTAL SECTION
4.1. Electrode Preparation. The working electrodes used for all electrochemical and morphological characterization experiments in this work were polycrystalline gold and polycrystalline copper: Au(poly) disks (0.9999 purity) and Cu(poly) disks, of 6 mm diameter and 2 mm thickness. The Au and Cu surface preparation was initiated by mechanical polishing down to 1 μm using a water-based, deagglomerated alumina slurry (Buehler). After polishing, the Cu electrodes were thoroughly rinsed with Barnstead Nanopure water (18.2 MΩ cm). The Au electrodes were immersed in warm concentrated HNO3 to remove any trace contaminants before thorough rinsing with Barnstead Nanopure water (18.2 MΩ cm).
The Au crystals were then annealed to red hot in a propane torch for 5 min before being cooled rapidly in an ultrapure nitrogen atmosphere and finally covered with a droplet of Barnstead Nanopure water to prevent surface contamination.
4.2. Cell Setup. In the electrochemical cells, the electrodes were immersed and held in a hanging meniscus configuration.65 All electrochemical experiments were performed in a three-electrode setup using solutions made with Barnstead Nanopure water and ultrahigh-purity-grade chemicals as received from the vendors. A saturated mercury−mercurous sulfate electrode with a potential of 0.650 V versus the normal hydrogen electrode was used as the reference electrode in most experiments, unless stated otherwise in the text. A Pt wire served as the counter electrode (CE) in all experiments. All potentials in the manuscript are presented versus MSE, and all current densities are normalized with respect to the geometric area of the electrode. All electrolytes used in the deposition and characterization routines were purged with ultrapure N2 for at least 30 min before the experiments. A solution containing Cu2+ and Pb2+ ions together with 100 mM NaClO4 (GFS Chemicals, 98%) was used for the deposition of the pure Cu ultrathin films, as described elsewhere.55 All deposition experiments were monitored by open-circuit chronopotentiometry carried out with the potentiostat and software mentioned above.
4.5. Cu Anodic Stripping. After the deposition, selective stripping of Cu from the synthesized Cu−Au ultrathin films was conducted by anodic stripping voltammetry in 1 mM CuSO4 and 100 mM HClO4 (GFS Chemical, 70%, redistilled). The experiments were performed by running anodic scans from 0.10 to 1.20 V versus a Cu/Cu2+ PRE at a sweep rate of 1 mV s−1.
4.6. XPS Analysis. The XPS results were obtained on a PHI 500 VersaProbe XPS instrument equipped with an Ar sputtering gun for surface cleaning and depth profiling. A scanned and focused monochromatic Al Kα X-ray beam was utilized to irradiate the sample surface. The emitted photoelectrons were captured and analyzed by a hemispherical analyzer with 16 channels. The atomic composition of the prepared Cu−Au thin films was analyzed from the binding energies and intensities of the peaks.
4.7. Nitrate Electroreduction Testing. After the composition analysis, the activities of the prepared Cu−Au ultrathin film, polycrystalline Cu, polycrystalline Au, and Cu ultrathin film were tested by running a CV in 10 mM HClO4 and 10 mM HNO3 over a potential range from −0.45 to −1.45 V (vs MSE) at a sweep rate of 20 mV s−1. A durability test was carried out by repeating the above-described CV process for 92 cycles.
4.8. pH Measurement and Adjustment. An OAKTON pH 700 meter was employed to measure the pH of the solutions. The pH meter was calibrated with a pH 7.00 buffer (Fisher, certified pH 6.99−7.01 at 25 °C) and a pH 4.00 buffer (Fisher, certified pH 3.99−4.01 at 25 °C). The pH was controlled by tuning the concentration of HClO4 from 10 to 0.01 mM.
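Because HClO4 is a strong, fully dissociated acid, the nominal pH window accessible by the dilution described in Section 4.8 follows directly from the acid concentration, and, through the Nernst equation, eq 1 predicts how the equilibrium potential of the nitrate reduction shifts over that window. The sketch below is our own illustrative calculation, assuming unit activities of NO3− and NH4+ at 25 °C; it is not part of the reported procedure.

```python
import math

KW = 1.0e-14                         # water autoionization constant at 25 C
E0 = 0.80                            # V vs NHE, standard potential of eq 1 quoted above
NERNST_SLOPE = (10 / 8) * 0.05916    # V per pH unit: 10 H+ per 8 e- in eq 1

def ph_strong_acid(c_molar):
    """pH of a fully dissociated monoprotic acid; the charge balance
    [H+] = C + Kw/[H+] only matters near the dilute end of the range."""
    h = (c_molar + math.sqrt(c_molar**2 + 4.0 * KW)) / 2.0
    return -math.log10(h)

for c_mm in (10.0, 1.0, 0.1, 0.01):          # HClO4 concentrations used, in mM
    ph = ph_strong_acid(c_mm * 1e-3)
    e_eq = E0 - NERNST_SLOPE * ph            # equilibrium potential of eq 1
    print(f"{c_mm:5.2f} mM HClO4: nominal pH {ph:.2f}, E_eq {e_eq:.3f} V vs NHE")
```

The resulting shift of roughly 74 mV per pH unit is a thermodynamic complement to the kinetic role of adsorbed H in rationalizing why the measured activity is highest at the lowest pH.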
2019-04-10T13:12:41.556Z
2018-12-18T00:00:00.000
{ "year": 2018, "sha1": "f0f5d1e35975f57bae8ad008f5c1ffd29c988dab", "oa_license": "acs-specific: authorchoice/editors choice usage agreement", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.8b02148", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5b6502a382ad1f2f8886106f4868512411038e92", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
246016119
pes2o/s2orc
v3-fos-license
Skydiving Technique Analysis from a Control Engineering Perspective: Developing a Tool for Aiding Motor Learning

This study offers an interdisciplinary approach to movement technique analysis, designed to deal with intensive interaction between an environment and a trainee. The free-fall stage of skydiving is investigated, when aerial maneuvers are performed by changing the body posture and thus deflecting the surrounding airflow. The natural learning process of body flight is hard and protracted, since the required movements are not similar to our daily movement repertoire and are often counter-intuitive. The proposed method can provide a valuable insight into the subject's learning process and may be used by coaches to identify potentially successful technique changes. The main novelty is that, instead of comparing the trainee's movements directly to a template or to a movement pattern extracted from a top-rated athlete in the field, we offer an independent way of technique analysis. We incorporate design tools of automatic control theory to link the trainee's movement patterns to specific performance characteristics. This makes it possible to suggest technique modifications that provide the desired performance improvement, taking into account the individual body parameters and constraints. Representing the performed maneuvers in terms of the dynamic response of a closed-loop control system offers an unconventional insight into the motor equivalence problem. The method is demonstrated on three case studies of aerial rotation of skilled, less-skilled, and elite skydivers.

Introduction
Skydiving involves skilful body movement to control free-fall. While skydivers fall near terminal vertical velocity, they alter their body posture, thus deflecting the air flow and initiating a great variety of solo and group aerial manoeuvres. Mastering body flight is naturally difficult and protracted. The reason is that the required body movements are not similar to our daily movement repertoire, and are often counter-intuitive. The stressful environment, causing muscle tension and blocking kinaesthetic feedback, exteroceptive sensory overload, and very limited prior information on the desired body postures constitute additional learning challenges. Skydiving body movements depend highly on individual anthropometric, clothing, and equipment factors. Therefore, coaches cannot predict what movements will be required from trainees for performing even the simplest manoeuvres, like falling straight down. Moreover, they cannot describe even their own joint movements, since their bodies perform them automatically before the coaches are aware that posture adjustments are required. These joint rotations are usually small, often unnoticeable on a video recording. The coaches give only general recommendations regarding relaxation, visualization, and focus of attention. For a qualitative description of common skydiving moves, see Newell (2020). In Clarke and Gutman (2017), a simulation of free-fall dynamics was developed. It predicts the skydiver's motion in 3D space given a sequence of body postures and, therefore, can be utilized for studying the skydiving technique. Technique is the way an athlete's body executes a specific sequence of movements (Gløersen et al (2018); Lees (2002)). It can be viewed as an athlete's repertoire of movement patterns: combinations of body Degrees of Freedom (DOFs) that are activated synchronously and proportionally, as a single unit.
From the perspective of dynamical systems theory, motor learning is the process during which these movement patterns emerge (Schmidt and Wrisberg (2008)). At first, the patterns are simple (coarse), providing just the basic functionality. As learning continues, the movement patterns become more complex (fine), providing adaptation to perturbations and uncertainties, and improved performance. We offer a new method to link the acquired behaviours of the dynamical system to specific changes in the movement patterns, and to predict how the movement patterns can be altered to provide the desired performance. Existing quantitative methods of technique analysis attempt to provide similar insight. For example, deterministic modelling is widely used for analysing swimming, athletics, and gymnastics (Chow and Knudson (2011)). It determines the relation between the performance measure and mechanical variables, e.g. the horizontal velocity at the take-off point during a somersault. It thus identifies parameters important for performance and their target values. Some of those parameters may be related to joint positions and rotation speeds, but the full body motion sequence is usually not provided. Additionally, due to human motor equivalence, multiple techniques can sometimes be adopted to generate some of the mechanical variables. Other conventional technique analysis methods, such as variability/temporal analysis and phase portraits, suffer from some pitfalls: the collection of only partial biomechanical data that are insufficient for analysis of the whole skill, and a focus only on some variables or aspects believed to most affect performance or best represent learning progress. See e.g.: Davids et al (2005); Federolf et al (2012); Hamill et al (2000); Lamb and Stöckl (2014); Müller and Sternad (2004); Myklebust (2016); Scholz and Schöner (2014); Stergiou (2016); Sternad (2018). Most often, the methods analysing sports techniques focus on comparing the movement of the trainee to a template/coach/top-rated athlete (e.g. Ghasemzadeh and Jafari (2011); Federolf et al (2014); Gløersen et al (2018)). Moreover, the required movements are usually well known and have a large amplitude, as in kicks, swings, and pole placements. Thus, the analysis focuses mainly on the timing of those movements and their small variations (Holmberg et al (2005)). Our method aims to be wider-ranging. It is designed to analyse mathematically, without templates, the technique of a previously unexplored activity that is highly dependent on individual body properties, taking into account that all body parts are involved and that slight posture variations might have a large influence. Trainees might have different physical conditions, ages, and even disabilities. As an illustration, skydiving attracts very diverse participants. The reason is that free-fall manoeuvres do not require significant muscle power or physical fitness. All manoeuvres are generated by aerodynamic forces and moments when a skydiver deflects the air flow around his body by changing the posture. Consequently, our method is designed to deal with an intensive interaction between environment and trainee. The method has one sports-specific part (modelling) and one general part, in which the predicted dynamic responses are used for adjusting the technique to make it more efficient. The model computing these responses is driven by the recorded biomechanical measurements.
Thus, we adopt an interdisciplinary approach, integrating dominant motor learning concepts with the analytical tools of automatic control theory. We hypothesise that an individual skydiving technique can be assessed if it is interpreted as an actuation strategy of a closed-loop feedback control system comprising the trainee's body and the environment.

Materials and Methods

One professional skydiving instructor with over 30 years of experience and 15,000 jumps, one amateur skydiver with less than 10 years of experience and 1,000 jumps, and one elite skydiver competing in the discipline Relative Work (RW) with 25 years of experience, 2,500 jumps and numerous hours of wind tunnel training participated in the study. The three participants will hereafter be referred to as: Instructor, Student, and Elite Skydiver, respectively. The Student is still pursuing improvement of his personal technique. Since each jump provides about 30 seconds of training time, his total free-fall manoeuvring experience does not exceed 8.5 hours. Therefore, his movement patterns are coarse, and some of the body DOFs are still locked. The flow of the study included collecting the body movement data from the participants, processing it by means of Principal Component Analysis (PCA), and analysing the results by means of a Skydiver Simulator developed for this purpose.

Dynamic simulation of the human body in free-fall

The dynamic simulation of the human body in free-fall is essential for our technique analysis method. Therefore, a simulation that receives a sequence of body postures and computes the position, orientation, and linear and angular velocities of the skydiver model in a 3D world was developed, see Clarke and Gutman (2017). The input postures can be recorded, transmitted in real time, synthetically generated, or even entered via a keyboard using a graphical user interface specifically designed for this purpose. The simulation is implemented in Matlab and has a continuous graphical output, which shows a figure of a skydiver in its current pose moving through the sky. The sky has a grid of equally spaced half-transparent dots, so that the skydiver's manoeuvres can be easily perceived by the viewer. The modules comprising our skydiving simulator are briefly described below, while the exact equations can be found in our previous work (Clarke and Gutman (2017)). The biomechanical model represents the body segments in terms of simple geometrical shapes and calculates the local centre of gravity and principal moments of inertia for each segment. A set of rotation quaternions linking each two segments enables computation of the overall centre of gravity, the inertia tensor, and their time derivatives. The model has to be provided with a set of parameters expressing the body size, shape, and weight of the skydiver under investigation. The dynamic equations of motion, derived following the Newton-Euler method, provide six equations: the 3D forces and moments. The kinematic model computes the body inertial orientation, and the angles of attack, sideslip, and roll of each segment relative to the airflow. These angles are used in the aerodynamic model to compute the drag forces and aerodynamic moments acting on each segment. The total aerodynamic force and moment together with the gravity forces are substituted into the equations of motion.
The aerodynamic model is formulated as a sum of forces and moments acting on each individual segment, modelled similarly to aircraft aerodynamics: proportional to the velocity squared and to the area exposed to the airflow. The model includes six aerodynamic coefficients that were experimentally estimated, and it has to be provided with a set of configuration parameters specific to the skydiver under investigation (type of parachute, helmet, jumpsuit, and weight belt). The skydiving simulator output was experimentally verified: various skydiving manoeuvres in a belly-to-earth pose were performed by different skydivers in a wind tunnel and in free-fall, and the recorded posture sequences were fed into the simulator. The six tuning parameters related to the aerodynamic model (maximum lift, drag, and moment coefficients; roll, pitch, and yaw damping moment coefficients) were selected so that all the manoeuvres were closely reconstructed. The RMS errors in angular and linear (horizontal and vertical) velocities were 0.15 rad/s, 0.45 m/s (horizontal), and 1.5 m/s (vertical), while the velocity amplitudes were 7 rad/s, 15 m/s, and 65 m/s, respectively.

Equipment

The X-Sens body movement tracking system (Roetenberg et al (2009)) was chosen for the purpose of obtaining an accurate measurement of the full body posture, meaning the relative orientation (three DOFs) of all relevant body segments. It provides a suit with 16 miniature inertial sensors that are fixed at strategic locations on the body. Each unit includes a 3D accelerometer, 3D rate gyroscope, 3D magnetometer, and a barometer. The X-Sens motion tracking suit can be worn underneath conventional skydiving gear. It has a battery and a small computer located on the back, which do not restrict the skydiving-specific movements. The X-Sens output is transmitted or recorded at 240 Hz. Each measurement set includes the orientation of 23 body segments (pelvis, four spine segments, neck, head, shoulders, upper arms, forearms, hands, upper legs, lower legs, feet, toes) relative to the inertial frame, expressed by quaternions. The measurement accuracy is less than 5 degrees RMS of the dominant joint angles (Schepers et al (2018)). The experiments took place in a wind tunnel (diameter 4.3 m, height 14 m), which is a widely used skydiving simulator, where the air is blown upwards at around 60 m/s, an average terminal vertical velocity of skydivers in a belly-to-earth pose. The human body floats inside the tunnel, replicating the physics of body flight. The experiments were videoed for future reference, see Online Resources 1-3.

Measurement procedure

The Instructor, Student, and Elite Skydiver were in turn equipped with the X-Sens suit and performed the calibration procedures defined by X-Sens that are necessary for the convergence of the measurement system. The participants' body segments and height were measured and entered into the X-Sens software, as well as into the Skydiving Simulator described in Sect. 2.1, along with the weight, helmet size, and type of jumpsuit worn on top of the X-Sens suit. Next, the participants entered the wind tunnel and performed 360-degree turns to the left and to the right in a belly-to-earth pose for two minutes, a typical wind tunnel session. The participants were instructed to perform as many turns as they could within this session, while preventing any horizontal or vertical displacement relative to the initial body position in the centre of the tunnel. This is an essential skydiving skill.
Extracting and visualising the movement patterns

The movement patterns were extracted from the X-Sens data by means of PCA, a method widely used for analysing human motion in general and sports technique in particular, see Federolf et al (2014) for a review. PCA uses Singular Value Decomposition (SVD) of the recorded motion data in order to find movement patterns. In our case, the X-Sens data consist of the inertial orientations of 23 body segments. For skydiving in a belly-to-earth pose such a detailed posture measurement is superfluous, hence some segments can be merged. Our skydiver body configuration (Clarke and Gutman (2017)) is defined by 16 rigid segments (pelvis, abdomen, thorax, head, upper arms, forearms, hands, upper legs, lower legs, and feet) and 15 joints (lumbar, thorax, neck, shoulders, elbows, wrists, hips, knees, ankles). The relative orientation of connected segments is defined by three Euler angles (the sequence of rotation is shown in Fig. 1), which are computed from the recorded X-Sens quaternions. The data matrix is constructed from 3·15 rows and 120·240 columns, since there are 15 joints, each with 3 DOFs, the sampling frequency is 240 Hz, and each tunnel experiment lasts 120 s. From each data row we subtract its mean, whereas normalization of the rows is not required in our case, since all rows have the same units and the differences in variability are meaningful. Computing the SVD results in a diagonal matrix S and unitary matrices U and PC, such that:

Data = PC · S · U^T (1)

The matrix PC of size 45 by 45 contains the Principal Components: eigenvectors that define the components of the skydiver's turning technique, hereafter referred to as movement components. The 45 diagonal elements of S are the eigenvalues corresponding to each movement component, with more dominant components having larger eigenvalues. Projecting the original data onto the principal components provides the time evolution of each component:

ControlSignals = PC^T · Data (2)

The skydiver's pose at time t is composed as:

pose[j](t) = N_pose[j] + Σ_{i=1}^{45} ControlSignal_i(t) · PC_i[j] (3)

where ControlSignal_i(t) is the value from row i of the matrix ControlSignals corresponding to time t, i.e. from column t · 240; N_pose[j] is entry j of the column vector N_pose containing the mean of the sequence of recorded postures and thus representing the neutral pose; and PC_i is the eigenvector (with norm 1) defining movement component i, i.e. column i of the matrix PC. Thus, the control signal α_i(t) is the angle in radians of the movement component being engaged, and will be referred to as the pattern angle, since each movement component defines a certain pattern of coordination contributing to the total variance of the posture. The Skydiving Simulator can be used to animate a specific movement component multiplied by any synthetically constructed or recorded control signal. We often use a sine wave with amplitude A = 1 rad and frequency f = 1 Hz to animate PCA movement components:

α_i(t) = A · sin(2π f t) (4)

This makes it possible to give an athlete very detailed visual feedback on the coordinative structures comprising his movement repertoire. The Skydiving Simulator, fed by the sum of all identified PCA movement components multiplied by the corresponding control signals, will reconstruct the performed manoeuvres. However, if we choose to input specific PCA movement components, multiplied by recorded or synthetically generated control signals, we can obtain answers to a great variety of interesting questions related to the skydiving skill and its acquisition process. Below we describe the analysis configurations that provided the most useful insight.
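Before turning to those configurations, the PCA step itself can be summarized in a minimal Matlab sketch. The variable names are ours, the joint-angle matrix is assumed to already hold the 45 Euler-angle rows sampled at 240 Hz, and the normalization of the eigenvalues shown is one possible convention; this is an illustrative sketch, not the simulator's actual code.

% Minimal sketch of extracting movement components by PCA/SVD (Eqs. (1)-(3)).
% 'jointAngles' is assumed to be a 45 x N matrix of Euler angles [rad]
% (15 joints x 3 DOFs, N = 120 s * 240 Hz samples).
Npose = mean(jointAngles, 2);             % neutral pose (mean posture)
X = jointAngles - Npose;                  % subtract the mean from each row
[PC, S, U] = svd(X, 'econ');              % X = PC * S * U', Eq. (1)
controlSignals = PC' * X;                 % pattern angles alpha_i(t), Eq. (2)
sigma = diag(S).^2 / sum(diag(S).^2);     % share of total variance (one convention)
% Reconstruct the pose at sample t from the first n dominant components:
n = 5; t = 1000;
pose_t = Npose + PC(:,1:n) * controlSignals(1:n, t);   % cf. Eq. (3)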
Manoeuvre generated by a given PCA movement component

Each movement component reconstructed by PCA has a corresponding normalized eigenvalue in the range [0, 1]. Only a few PCA components are expected to have eigenvalues greater than 0.2; these are hereafter referred to as dominant components. An accepted interpretation of these values is the percentage of the total posture variability a movement component is responsible for, see Gløersen (2014); Hollands et al (2004). We seek a deeper insight: describing the exact role of a given PCA movement component in the overall manoeuvre. This is achieved by feeding the Skydiver Simulator with each dominant movement component driven by:
1. a synthetic periodic control signal (Eq. (4));
2. a step input: pose_i(t) = N_pose + A · PC_i.
Notice that in the latter case the pose is constant in time and its difference from the neutral pose is A · PC_i. For A = 1 rad this quantity will be referred to as a movement pattern with dimensions, as opposed to the dimensionless eigenvector PC_i. The frequency and amplitude of the control signal in Eq. (4) are chosen to obtain significant and yet feasible pose changes in time.

PCA movement components required to reconstruct the original manoeuvre

The following method provides an insight into how many PCA movement components are required to reconstruct the manoeuvre under investigation. It suggests feeding the skydiver simulator with different numbers of dominant PCA movement components driven by the corresponding control signals from the experiment:

pose_n(t) = N_pose + Σ_{i=1}^{n} ControlSignal_i(t) · PC_i (5)

where n is the number of chosen PCA movement components. For n = 45 we obtain pose_45(t), the exact pose the skydiver had during the experiment at every instant of time. Since the manoeuvre in our experiment was turning, its outcome can be described by the yaw rate. In order to compare the outcomes produced by different pose inputs, we define the total discrepancy between the yaw rate profiles Ω_45(t) and Ω_n(t), resulting from pose_45(t) and pose_n(t) respectively, accumulated during the time of the experiment T = 120 s, which included N = 120/δt steps, δt = 1/240 s:

Err_n = Σ_{j=1}^{N} |Ω_45(t_j) − Ω_n(t_j)| · δt (6)

The similarity of the outcome manoeuvres can be defined as:

Sim_n = 1 − Err_n / Err_0 (7)

where Err_0 is the error if the skydiver does not manoeuvre at all, keeping the neutral pose at all times. Thus, if the component PC_1 had no influence on the yaw rate, Sim_1 would be zero, whereas if only PC_1 out of all the 45 PCA movement components had influence on the yaw rate, Sim_1 would be one.

Dominant DOFs in a given PCA movement component

Each PCA movement component that generates a meaningful manoeuvre can be further investigated. Consider feeding the skydiver simulator with a given PCA movement component, i.e. a given eigenvector PC_i that has 45 elements, each representing a joint rotation DOF. Instead of engaging all DOFs, only an increasing sub-set of these DOFs is engaged in each simulation, starting from the elements of PC_i with the highest absolute values. This way, each element k, k ∈ [1, 45], of the body pose is computed at every instant of time as:

pose[k](t) = N_pose[k] + α(t) · PC_i[k], if k ∈ k_eng; pose[k](t) = N_pose[k], otherwise (8)

where k_eng is the set of indices of the PC_i elements being engaged. This makes it possible to determine which DOFs within the studied movement component are most important for performing the manoeuvre associated with this component.
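The similarity measure of Eqs. (6)-(7) is a one-line computation once the yaw rate profiles are available; a minimal Matlab sketch follows, in which the three yaw rate vectors are assumed to have been produced by the simulator beforehand.

% Minimal sketch of the reconstruction-similarity measure of Eqs. (6)-(7).
% 'yawRate45' is the yaw rate simulated from the full 45-component pose,
% 'yawRateN' from the first n components, and 'yawRate0' from the constant
% neutral pose; all are assumed to be vectors sampled at dt = 1/240 s.
dt = 1/240;
ErrN = sum(abs(yawRate45 - yawRateN)) * dt;   % accumulated discrepancy, Eq. (6)
Err0 = sum(abs(yawRate45 - yawRate0)) * dt;   % error of not manoeuvring at all
SimN = 1 - ErrN / Err0;                       % Eq. (7): 1 = perfect, 0 = no influence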
Synergies of PCA movement components

Synergies of PCA movement components can be determined by activating a combination of two or more components with synthetic periodic control signals that are similar to the signals contained in the relevant rows of the ControlSignals matrix in Eq. (2). PCA control signals can be plotted in the same figure and examined for any apparent correlation between them, such as a phase shift. If found, such synergies can be tested in simulation. This allows identifying less dominant components that compensate for undesirable effects that more dominant components might have on the overall manoeuvre.

Analysis of movement patterns from a control engineering perspective

The key to our method is determining the potential effectiveness of the identified movement components: how well they fulfil their roles, revealing the pitfalls and possible ways to eliminate them. Our method is intended to be a tool for coaches that can analyse why certain posture changes generate faster manoeuvres, performed with less effort or greater precision, and compute individual modifications of the trainee's movement patterns, which could be naturally produced by the CNS in due course. One of the central problems in the motor learning field is Bernstein's DOFs problem (Bernstein (1967)): how does the CNS choose which DOFs to use for a certain movement? We offer in this section a novel insight into this problem applied to manoeuvring in free-fall. In the literature it is usually assumed that movement patterns are solutions of optimizing some cost function. Suggested costs include mechanical energy consumption, joint acceleration/jerk, and the amount of torques/forces applied by joints/muscles (Berret et al (2011)). These cost functions are tested, for example, on arm reaching movements under some disturbances. However, such an approach may not always capture the dynamics of realistic activities. For instance, the torques and forces applied by joints and muscles in free-fall are usually very small, since all the manoeuvres are executed by the aerodynamic forces and moments, which are generated by only slight changes in the body posture. For these reasons, we offer a new approach to the exploration of the DOFs problem and, thus, to technique evaluation. Let us consider a skydiver performing free-fall manoeuvres from the perspective of automatic control theory. Manoeuvre execution can be represented as a hierarchical closed-loop feedback control system, as shown in Fig. 2. The inner control loop is closed inside the body actuation block, where the command signal, issued for the specific movement patterns, is compared to the feedback from the body proprioceptors, triggering the necessary adjustments. The outer control loop has the role of an automatic pilot in autonomous systems: an automatic controller that interprets the disparity between the desired and sensed linear and angular velocities in terms of an actuator command. For example, let us consider the turning manoeuvre discussed in our study case. The desired yaw rate is compared with the actual body yaw rate sensed by the eyes and the vestibular system, and according to the disparity the automatic pilot issues a command signal for the pattern PC_1. Next, this command is implemented by the body (at a much higher rate than the outer loop) and the resulting joint rotations and posture become an input to the equations of motion, which propagate the position, orientation, and motion of the skydiver in a 3D world. In automatic control theory, a combination of process and actuator is called the plant. As shown in Fig. 2, in our skydiver model the plant that the automatic pilot has to control includes: the actuator (body actuation using a specific movement pattern), the process (skydiver free-fall dynamics), and sensing.
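To fix ideas, the outer loop of Fig. 2 can be sketched in Matlab with a deliberately simplified stand-in plant. The first-order yaw dynamics and all numbers below are our illustrative assumptions, not the simulator's model; the sketch only shows how a proportional 'automatic pilot' converts a yaw rate error into a pattern angle.

% Toy illustration of the outer control loop of Fig. 2: a proportional
% controller drives the pattern angle alpha to track a desired yaw rate.
% The first-order plant is a crude stand-in for the free-fall dynamics;
% all numbers are illustrative assumptions.
dt = 1/240; t = 0:dt:20;
K = 3.0; tau = 0.8;              % assumed plant gain and time constant
Kp = 0.5;                        % proportional controller gain (assumed)
OmegaRef = 1.0 * (t > 2);        % step in desired yaw rate [rad/s]
Omega = zeros(size(t));
for m = 1:numel(t)-1
    alpha = Kp * (OmegaRef(m) - Omega(m));   % pattern angle command [rad]
    dOmega = (K * alpha - Omega(m)) / tau;   % toy yaw rate dynamics
    Omega(m+1) = Omega(m) + dOmega * dt;     % Euler integration
end
plot(t, OmegaRef, t, Omega);     % compare desired and achieved yaw rate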
The complexity of an automatic pilot and its performance (how fast and accurately it tracks the desired yaw rate profile) depend on the dynamic characteristics of the plant. When control engineers design an automatic pilot for autonomous platforms, they usually have a set of requirements regarding these characteristics and analytical tools for examining the plant and its properties. We believe that the same control considerations are important for the natural automatic pilot built into an athlete's mind/body. The body's natural controller will achieve a better execution of a desired manoeuvre if the plant possesses certain qualities that are convenient for control. The plant, in its turn, strongly depends on the choice of movement patterns that are used for body actuation. Thus, we suggest studying the dynamic characteristics of the plant actuated by the movement pattern under investigation. One of the basic characteristics is the transfer function, which defines the relation between the input and output signals of a system and is mathematically defined using the Laplace transform (Aström and Murray (2010)). For skydiving, the input is a pattern control signal, and the output is a state variable (the skydiver's acceleration/velocity/position/orientation) associated with this pattern. For the turning manoeuvre it is the transfer function from the pattern angle to the yaw rate. In the general case, multiple transfer functions may be constructed: from the input signal of each dominant movement component identified by PCA to each significant state variable. An example of such an analysis is given in Clarke and Gutman (2017), where we compared two movement patterns for turning obtained from observing two types of students: novice and advanced. Both patterns consisted of only four DOFs (associated with the shoulders) but produced plants with very different characteristics. The plant actuated by the 'novice' pattern had a resonance and anti-resonance pair around the frequencies of the desired bandwidth. It was impossible to design a yaw rate controller for this plant that would satisfy a minimal set of reasonable specifications. This may explain the fact that novice skydivers turn very slowly, whereas initiating a faster turn causes them to lose stability (e.g. flip onto their backs). However, the resonance and anti-resonance pair did not exist in the plant actuated by the pattern of the advanced skydiver, who was able to perform fast and stable turns. Transfer functions are constructed according to the correlation method (Ljung (1998)), while the Skydiving Simulator acts as a virtual spectrum analyser. It is fed with a sequence of poses constructed from the pattern under investigation (the PCA movement component PC_i) input in the form of a sine wave (Eq. (4)) with different angular frequencies (in our case 0.01 ≤ ω ≤ 100 rad/s). For each frequency, the gain G(ω) and the phase Φ(ω) of the desired transfer function are computed as:

I_s = ∫_0^{nT} Ω(t) · sin(ωt) dt,  I_c = ∫_0^{nT} Ω(t) · cos(ωt) dt,
G(ω) = (2 / (A · n · T)) · sqrt(I_s^2 + I_c^2),  Φ(ω) = atan2(I_c, I_s) (9)

where A is the amplitude of the movement pattern angle, T = 2π/ω, and n is the number of periods taken for analysis. We used n = 10 cycles, starting after the transient, i.e. after the yaw rate had reached a periodic steady state. Ω(t) is the yaw rate of the skydiver computed by the simulator, and the integrals are approximated via the trapezoidal method. Next, a Bode plot of the transfer function is created, where the gain in dB is 20 · log10 G(ω). This procedure is repeated for different amplitudes A of the input signal in order to verify the linear behaviour of the plant in the chosen range of amplitudes. In the case of a linear plant, all the transfer function values obtained for a given frequency will be identical.
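A minimal Matlab sketch of this virtual spectrum analyser follows. The function simulateYawRate is an assumed placeholder for a call into the Skydiving Simulator rather than its real interface, the input amplitude is illustrative, and for brevity the sketch omits discarding the initial transient.

% Minimal sketch of the correlation method of Eq. (9): drive the pattern
% angle with a sine and estimate the gain and phase of the
% pattern-angle-to-yaw-rate transfer function at each frequency.
A = 0.2;                      % input amplitude [rad] (illustrative)
w = logspace(-2, 2, 50);      % 0.01 <= omega <= 100 rad/s
G = zeros(size(w)); Phi = zeros(size(w));
for m = 1:numel(w)
    T = 2*pi / w(m); nPer = 10; dt = 1/240;
    t = 0:dt:nPer*T;                       % n periods of the input
    alpha = A * sin(w(m) * t);             % pattern control signal
    Omega = simulateYawRate(alpha, t);     % placeholder simulator call
    Is = trapz(t, Omega .* sin(w(m)*t));   % in-phase correlation
    Ic = trapz(t, Omega .* cos(w(m)*t));   % quadrature correlation
    G(m)   = (2 / (A * nPer * T)) * hypot(Is, Ic);   % gain, Eq. (9)
    Phi(m) = atan2(Ic, Is);                          % phase [rad], Eq. (9)
end
semilogx(w, 20*log10(G));     % Bode magnitude plot [dB]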
Instructor

The PCA of the recorded body postures of the Instructor shows that, similarly to other sports (e.g. skiing, see Federolf et al (2014)), skydiving also requires the body to produce about 4-8 significant movement components, see Fig. 3. The PCA control signal of the most dominant movement component (PC_1) has the same fundamental frequency (0.25 Hz) as the recorded yaw rate. This means that the first PCA movement component is responsible for turning, which was the main objective in this experiment. Other objectives were: keeping a distance from all tunnel walls, keeping a constant altitude relative to the tunnel floor, and transitioning to each turn in the opposite direction in minimal time. It seems that 4-5 PCA movement components are sufficient for performing the manoeuvre under investigation, since the control signals of the other PCs resemble noise. From Fig. 4 we compute Sim_1 = 0.85, meaning that one PCA movement component is sufficient to reconstruct 85% of the yaw rate profile. The six highest values in the eigenvector associated with this component are mostly related to the DOFs of the head and hips, see Tab. 1, which summarizes the values of the Euler angles representing the movement component, and Fig. 1, which explains the coordinate system. It is known from skydiving experience that the hips are very efficient for turning; therefore it is expected that an experienced skydiver engages his hips while turning. The head movement is also expected: we naturally look in the direction of motion. An interesting question, however, is whether the head position is important for reaching the desired yaw rate. As shown in Fig. 4, 84% of the yaw rate profile (Sim = 0.84) is reconstructed by engaging only the head and the hips of PC_1. However, without engaging the head, only 52% (Sim = 0.52) of the motion is reconstructed. Thus, the head movement combines two tasks: looking where the body is going, and acting as a significant aerodynamic control surface. An animation of PC_1 is shown in Online Resource 4. Investigation of the next three dominant PCA movement components shows that:
• PC_2 provides fall rate adjustments by engaging the arms
• PC_3 is responsible for stopping the turns by engaging the knees and, to a smaller extent, the shoulders
• PC_4 prevents orbiting, i.e. a horizontal motion in a large circle, induced by PC_1
Engaging PC_4 in addition to PC_1 as specified in Eq. (10), with periodic control signals of frequency ω = 2·π·0.2 rad/s, compensates for the orbiting effect, see Fig. 5.

Student

The Student exhibited many different PCA movement components incorporated for turning (10-15, as opposed to the one turning component exploited by the Instructor; the control signals for the six most dominant components are shown in Online Resource 5). Most of these components possess serious pitfalls: they are coupled with a horizontal/vertical motion component, provide a turn only in one direction, or require a very large amplitude of the control signal. The yaw rate profile of the experiment is closely reconstructed by the first nine PCA movement components (Sim_9 = 0.87, see Online Resource 6). Thus, the Student's movement repertoire includes a greater number of less efficient coordination patterns. This is verified by the Bode diagram in Fig. 6, comparing the Instructor's turning pattern with the Student's ones, animated in Online Resources 7-10.
The Student's transfer functions from turning patterns to yaw rate lack gain at low frequencies (making it hard to generate a turn and keep a high turning rate) and lack attenuation at high frequencies (making it hard to stabilize the closed loop and deal with disturbances and noise). These are the reasons the Student reported that he had to constantly fight the turbulence of the tunnel airflow and found it hard to generate the movement. The Instructor's transfer function has a high gain at low frequencies, providing the efficiency of rotations, and a gain reduction of about 20 dB per decade starting from around 1 rad/s, providing disturbance attenuation at high frequencies and replicating the dynamics of an integrator in the innermost (proprioception) loop, recall the control block diagram in Fig. 2. The higher-level loop, which tracks the yaw rate, includes the dynamics of the closed proprioception loop, which replicates a low-pass filter, as is also seen in the phase plot in Fig. 6. In this way, the transfer function of the plant possesses the desired characteristics of the open yaw rate tracking loop, which can thus consist of the plant and a proportional controller. In other words, the control signal driving the Instructor's turning pattern can simply be the scaled yaw rate error. In contrast, any of the Student's plants will be either inefficient or unstable under proportional control. It is possible to design a controller with high-order dynamics in order to compensate for the plant pitfalls and achieve a desired closed-loop behaviour, but it is unlikely that such a controller can be implemented by the human motor system. From numerous experiments with various aerial manoeuvres we may hypothesise that humans cannot implement complex dynamic controllers. Instead, the human body develops movement patterns that produce an ideal plant for practised manoeuvres, enabling athletes to operate in a closed loop via a simple control law.

Proposing improvement of Student's technique with simulation

The above analysis of the Student's pattern-angle-to-yaw-rate transfer functions showed that a significant improvement of the Student's technique will follow from increasing the dynamic stiffness property of his plant. Dynamic stiffness, or impedance, is the ability of the actuator to resist an external oscillatory load (Wang et al (2015); Winters et al (1988)). Hence, we seek to increase the Student's ability to reject disturbances at high frequencies, such as the turbulence of the wind tunnel air. The chosen pattern is the PCA movement component PC_6, since its corresponding transfer function has fewer pitfalls relative to the other options. PC_6 is not very dominant (its eigenvalue is 0.11), which means it was used for only a few of the performed turns. The reason, probably, is that this coordination pattern has been formed only recently.

[Table 1: Dominant joint DOFs (Euler angles φ, θ, ψ, in degrees) of the turning patterns, given as the quantity 1 rad · PC, where PC is the dimensionless movement pattern eigenvector. Row I-PC_1 (the first principal component extracted from the turning experiment with the Instructor): -5, 20, -10, -7, 2.5, 3. Row S-PC_1 (the Student's first principal component): -4, 6, 5, 3, 4, 6, 4, -1, 6, -2, 7.5. Row S-PC_6 (the Student's sixth principal component): -7, -5, -7, 17, -6, -20, 1, 1, 1, 1. The DOFs with absolute values less than 0.5 deg are not shown (angles of wrists and ankles, φ of thorax, abdomen and head, and θ of elbows, knees and hips).]

Such a dominant property as dynamic stiffness is most likely related to the major aerodynamic surface of the body, the torso.
From Tab. 1 it can be seen that the torso is activated differently by the Instructor and the Student during turning. The Instructor exhibits a lateral tilt, whereas the Student exhibits an axial rotation; this is especially pronounced in his most dominant PCA movement component, S-PC_1. Notice from Tab. 1 that the axial rotation in S-PC_6 becomes much smaller, and even some tilt appears. Modifying PC_6 so that the torso tilts instead of rotating (abdomen θ = 2.5 degrees, ψ = 0; thorax θ = 0.5 degrees, ψ = 0) significantly changes the pattern-angle-to-yaw-rate transfer function, bringing it very close to the Instructor's one, see the Bode plot in Fig. 7 labelled 'PC_6a modified torso'. Just a couple of degrees of torso tilt make a great difference. We called this effect the key DOF: the DOF that has the most impact on shaping the dynamic response. However, in the introduced modification the thorax joint tilt is smaller than the abdomen joint tilt, which is the opposite of the torso actuation observed in the Instructor. This difference might be caused by differences in body parameters; however, prior to experimental verification the reason remains uncertain and might be ergonomic. Therefore, in case a more comfortable torso movement implies tilting the thorax more than the abdomen, the Student is offered a second option. One change in the Student's neutral pose provides an option to actuate his torso similarly to the Instructor (abdomen θ = 1.5 degrees, thorax θ = 2 degrees), while the transfer function under investigation remains close to that of the Instructor (see Fig. 7, Bode plot labelled 'PC_6b ergonomically modified torso and neutral pose'). This change is associated with the position of the forearms: in the Student's recorded neutral pose the forearms are too far down (relative to the Instructor's neutral pose and also to the standardly accepted neutral pose). This is fixed by decreasing the shoulder's internal rotation (by 35 degrees), as shown in Fig. 8. Notice that PC_6 must be normalized after modification for the computation of the transfer function according to Eqs. 4 and 9. It is possible to further improve the transfer function under investigation by increasing the phase around the current bandwidth frequency. This will allow using a proportional controller with a larger gain and extending the bandwidth, thus performing faster turns with faster transitions between different turning directions. Such an improvement can be achieved by modifying, in addition to the DOFs mentioned above, the flexion of the elbows: both elbow flexion values φ are multiplied by 1.7. The resulting transfer function is labelled in Fig. 7 as 'PC_6c, ergonomically modified torso, neutral pose, and forearms input'. The Student's plant modified as proposed above can easily be controlled using a proportional controller, as shown in Fig. 9. Moreover, the latter modification allows achieving a better gain margin (23.5 dB) than that of the Instructor's open control loop (5 dB). In this way, rather than mimicking the Instructor's movement pattern, our technique improvement method is aimed at shaping the Student's plant. This process, however, requires one first to decide on design specifications: the desired dynamic properties of the plant under investigation.
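In practice, such a pattern modification amounts to editing individual entries of the eigenvector and restoring its unit norm, e.g. as in the Matlab sketch below. The index variables are illustrative placeholders, since the actual positions of the abdomen and thorax DOFs in the 45-element vector depend on the joint ordering used in the recording.

% Minimal sketch of modifying selected DOFs of a movement pattern and
% renormalizing it before recomputing the transfer function (see text).
% The indices below are assumed placeholders for the relevant DOFs.
PC6mod = PC(:,6);                      % start from the original pattern
PC6mod(iAbdomenTheta) = deg2rad(2.5);  % torso tilt instead of rotation
PC6mod(iAbdomenPsi)   = 0;
PC6mod(iThoraxTheta)  = deg2rad(0.5);
PC6mod(iThoraxPsi)    = 0;
PC6mod = PC6mod / norm(PC6mod);        % restore unit norm before Eqs. 4 and 9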
In the next sub-section we address manoeuvres that involve contradicting dynamic characteristics. Rotations performed at a competitive level require shifting the stability-agility trade-off towards the latter, in order to produce the fastest and most accurate rotations a human body is capable of.

Elite Skydiver

By means of PCA, three movement components producing turning were extracted from the experiment with the Elite Skydiver. See Online Resources 11-13 for the animation of these patterns, and Online Resource 14 for the PCA details and the reconstruction of the measured yaw rate profile in simulation. The first Principal Component produces a plant that can be stabilized with a proportional controller, as in the case of the Instructor's rotations. Moreover, it has better phase characteristics, allowing for very large phase and gain margins, since this system is Minimum Phase, see Fig. 10. It is hence possible to control this plant with a high-gain proportional controller. However, we must bear in mind that all human joints have limitations; therefore, a high gain will improve performance only until the actuation limits are reached. Nevertheless, it is advantageous to be able to utilize the whole range of movement granted by the body's flexibility. The second and third Principal Components produce turns only in one direction: right and left, respectively. This stems from the body mechanics: these patterns, as opposed to PC_1, involve an asymmetrical engagement of the hip DOFs, see Tab. 2. In PC_1 these DOFs are used symmetrically with a small range of motion: the right hip's flexion and abduction are used along with the same magnitude of the left hip's extension and adduction in order to turn left, and vice versa. In PC_2 and PC_3 only one hip is used for turning in each direction: PC_2 generates a right turn using right hip extension and abduction, while PC_3 generates a left turn using left hip extension and abduction. Thus, for control purposes these two movement components can be combined, such that either PC_2 or PC_3 is activated depending on the sign of the control signal α(t), as shown in Eq. (11):

pose(t) = N_pose + |α(t)| · PC_2, if α(t) < 0; pose(t) = N_pose + α(t) · PC_3, otherwise (11)

By simulating the skydiver motion caused by a sine input driving Eq. (11) we can see that the movement components PC_2, PC_3 invoke a different mechanism of turning than that initiated by PC_1 of both skydivers. When PC_1 is applied (see Online Resource 15 for a simulation recording), the aerodynamic forces acting on the arms and legs acquire components that generate a yaw moment depending on the lever arms: the distances of those limbs from the centre of gravity. In contrast (see Online Resource 16), the application of PC_2, PC_3 causes a roll movement of the upper body, thus creating an angle between the torso and the airflow. This generates a large aerodynamic force responsible for the yaw moment, since the torso has the largest surface area of all body segments. During the initial roll motion the body slightly rotates in the opposite direction, which indicates that this plant is Non-Minimum Phase (NMP), see Fig. 11. Notice from the phase diagram in Fig. 10 that the Instructor's plant is also NMP; however, this behaviour is less pronounced, as is also seen from the step response in Fig. 12.
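The sign-switched combination of Eq. (11) can be written as a small Matlab helper, sketched below; the function name is ours, and the sign convention follows Eq. (11) as printed.

% Minimal sketch of the sign-switched pattern combination of Eq. (11):
% one of the two one-directional patterns is engaged depending on the
% sign of the control signal alpha.
function pose = composeTurnPose(Npose, PC2, PC3, alpha)
  if alpha < 0
      pose = Npose + abs(alpha) * PC2;   % turn driven by PC2
  else
      pose = Npose + alpha * PC3;        % turn driven by PC3
  end
end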
[Table 2: Dominant joint DOFs (Euler angles φ, θ, ψ, in degrees) of the Elite Skydiver's turning patterns, given as the quantity 1 rad · PC, where PC is the dimensionless movement pattern eigenvector. PC_1, PC_2, PC_3 are the first three Principal Components extracted from the turning experiment with the Elite Skydiver. The DOFs with absolute values less than 0.5 deg are not shown.]

Additionally, the transfer function associated with this plant has high gains for frequencies up to 10 rad/s, whereas the phase at low frequencies is similar to that of the plant of the Instructor (see Fig. 10). This means that if the movement components PC_2, PC_3 are engaged with large control inputs, assuming a proportional controller whose gain is larger than about 0.3, the closed loop will be unstable. As mentioned in Sect. 3.2, humans are unlikely to be able to implement a dynamic controller. In the experiment, however, amplitudes reaching 1 rad were observed. It seems, therefore, that the Elite Skydiver engages the movement components PC_2, PC_3 in open loop: when a right turn is desired, PC_2 is engaged proportionally to the desired angular acceleration, in order to 'throw' the body into a turn, and as it turns, PC_1 is used in a closed loop to adjust the turning rate to a desired profile. In order to examine this observation, such a control strategy was implemented in simulation. A sine yaw rate profile was tracked, and the tracking performance achieved by a controller based only on PC_1 was compared to the results obtained utilizing a combination of the three movement components. The yaw rate reference signal was a sine wave:

Ω_ref(t) = A_Ω · sin(ω_Ω · t) (12)

where t is the simulation time. The controller utilizing PC_1 for body actuation had proportional and feedforward parts:

α_1(t) = K_p · (Ω_ref(t) − Ω(t)) + K_f · Ω_ref(t) (13)

where Ω(t) [rad/sec] is the yaw rate of the virtual skydiver in simulation, and α_1(t) [rad] is the control signal, i.e. the angle of the movement component PC_1, such that the skydiver's pose at each instant of time is defined as:

pose(t) = N_pose + α_1(t) · PC_1 (14)

It can be seen from Fig. 13a that the desired yaw rate profile cannot be tracked accurately due to the actuation limitations (Eq. (15)): in other words, the movement component PC_1 does not allow changing the turning rate so fast. The delay in tracking the desired yaw rate profile is 0.5 s.

[Fig. 11: Open-loop response to the step input signal driving the movement components PC_1, PC_2, and PC_3 extracted from the rotations experiment with the Elite Skydiver. The pose in the case of the PC_2, PC_3 combination is defined according to Eq. (11).]

Utilizing the movement components PC_2 and PC_3 significantly improves the performance, as shown in Fig. 13b. The feedforward control signal driving the combination of PC_2, PC_3 is:

α_ff(t) = K_d · dΩ_ref(t)/dt (16)

and the pose combines its contribution, according to Eq. (11), with the closed-loop contribution of PC_1:

pose(t) = N_pose + α_1(t) · PC_1 + |α_ff(t)| · PC_2, if α_ff(t) < 0; pose(t) = N_pose + α_1(t) · PC_1 + α_ff(t) · PC_3, otherwise (17)

where α_1(t) is according to Eq. (13). This control strategy, which we termed the Superposition, allows reducing the tracking delay to 0.05 s and reconstructing in simulation the fast turns observed in the experiment, see Fig. 13b. We thus hypothesize that for each specific manoeuvre performed at a competition level, athletes develop a 'high-performance' pattern (such as PC_2, PC_3), which may be analogous to the flight dynamics of high-performance aircraft. Athletes have to learn from practice how to engage this pattern during the manoeuvre execution, along with the closed-loop control of the manoeuvre. Notice that the derivative of the reference signal in Eq. (16) is known a priori, as it is related to the intended manoeuvre.
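A minimal Matlab sketch of the Superposition strategy follows, reusing the composeTurnPose helper sketched above. The gains, the reference amplitude and frequency, and the plant call are illustrative assumptions; simulateStep is a placeholder for one step of the Skydiving Simulator, and Npose, PC1, PC2, PC3 are assumed to come from the PCA step.

% Minimal sketch of the Superposition strategy: PC1 is driven in closed
% loop by the proportional-plus-feedforward law of Eq. (13), while the
% PC2/PC3 combination is fed forward with the known derivative of the
% reference, Eq. (16). All numbers are illustrative assumptions.
dt = 1/240; t = 0:dt:20;
Aref = 1.0; wref = 2*pi*0.25;             % assumed reference parameters
OmegaRef  = Aref * sin(wref * t);         % desired yaw rate [rad/s], Eq. (12)
dOmegaRef = Aref * wref * cos(wref * t);  % a-priori known derivative
Kp = 0.3; Kf = 0.1; Kd = 0.05;            % assumed controller gains
Omega = 0;                                % current yaw rate [rad/s]
for m = 1:numel(t)
    alpha1  = Kp*(OmegaRef(m) - Omega) + Kf*OmegaRef(m);   % Eq. (13)
    alphaFF = Kd * dOmegaRef(m);                           % Eq. (16)
    pose = composeTurnPose(Npose, PC2, PC3, alphaFF) ...
           + alpha1 * PC1;                                 % Eq. (17)
    Omega = simulateStep(pose, dt);       % placeholder simulator call
end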
The closed loop on the derivative of the yaw rate error, however, may not be possible for human implementation. It would require sensing/computing the derivative of the actual yaw rate and filtering out the noise in real time, which is most likely beyond human abilities. For this reason, we used a feed-forward of the reference signal derivative combined with a proportional controller, instead of conventional Proportional-Derivative (PD) control. Interestingly, there is another way to combine different movement components.

[Fig. 13: Simulation of tracking a sine yaw rate profile in a closed loop: (a) pose given in Eq. (14); (b) pose given in Eq. (17). The movement components PC_1, PC_2, PC_3, extracted from the rotations experiment with the Elite Skydiver, are used for body actuation.]

After exploring other Principal Components, we noticed that PC_2 in combination with PC_4 for right turns, and PC_3 in combination with PC_6 for left turns, allow constructing a set of movement patterns defined by the parameter k:

pose(t) = N_pose + |α(t)| · (PC_2 + k · PC_4), if α(t) < 0; pose(t) = N_pose + α(t) · (PC_3 + k · PC_6), otherwise (18)

where α(t) is the input signal and k is the factor reflecting to what extent the additional patterns PC_4 and PC_6 are involved. Movement patterns in this set produce plants that can acquire any behaviour in between the two extremes: the agility of PC_2, PC_3, and the stability of PC_1.

[Fig. 14: Comparison of the pattern angle to yaw rate transfer functions generated by different combinations of Principal Components extracted from the rotations experiment with the Elite Skydiver. A combination of PC_2 and PC_3 is defined in Eq. (11). A combination of PC_2, PC_3, PC_4, PC_6 is defined according to Eq. (18).]

The frequency response shown in Fig. 14 is for the value k = 0.8, whereas it is possible to obtain any response in between the most agile and the most stable behaviour, thus tuning the agility-stability trade-off in a continuous way! We termed this strategy of combining movement components the Synergy. Recall the synergy of the Instructor's Principal Components PC_1, PC_4 given in Eq. (10) for the prevention of orbiting (see Fig. 5). A similar synergy can be observed between the movement components of the Elite Skydiver, see Online Resource 14. Moreover, synergies of movement components were identified from the analysis of other manoeuvres, for example, the side slides performed by the Elite Skydiver. The details are given in Online Resource 17, along with the PCA of additional manoeuvres performed by the Instructor, Student, and Elite Skydiver.

Practical perspective

The key idea of our technique analysis method is testing the extracted PCA movement components of the trainees in a Skydiving Simulator and generating transfer functions from a pattern angle to a physical variable associated with a given pattern. Simulation tests show what manoeuvre is generated by each PCA movement component, which joint rotation DOFs have a dominant influence, and how many PCA movement components were required to perform the given task or exercise. This information reflects the skill level of the trainee and the major pitfalls in his technique, as was demonstrated in our study case. We suggest interpreting the analysis of the Student's turning patterns as the evolution of his movement repertoire: the old habits (PC_1, PC_3), exploration (PC_8), and on-going progress (PC_6).
Movement components that represent the trainee's exploration of the interaction between his body and the airflow usually include a particularly strong movement (a large value in the vector PC_i relative to the others) of a certain limb, and in the simulation they usually produce a turn in only one direction. Progress is represented by movement components that include more unlocked DOFs relative to the most dominant PCA movement component, and in the simulation produce faster turns with less undesirable horizontal motion. We believe that one of the most important tasks of a skydiving coach is identifying the emergence of such components and triggering the student's body to utilize them more during subsequent training sessions. The trigger can be found by analysing the PCA control signals of the other movement components and the focus of attention at the moment when this component was first activated. For example, in our case, PC_6 was triggered when, prior to starting the turn, the skydiver came to a full stop, increased the fall-rate, and kept a visual reference to an object outside of the tunnel at his initial heading. Thus, in order to potentially accelerate the learning of aerial rotation, the coach could give this student cues to stabilize his initial heading in front of the coach, arch his back, and keep eye contact with the coach. The coach can give a sign to start turning once these conditions are fulfilled. Next, the coach can expect the analysis of the following training sessions to show that the preferred movement component has become more dominant. The second potentially useful tool for coaches is constructing transfer functions reflecting the dynamic properties of the body in free-fall actuated by the PCA movement component under investigation. As we have seen in the above analysis, the Bode plot of such a transfer function shows which dynamic properties need improvement in order to make the implementation of a given manoeuvre easier for the trainee. Moreover, it is possible to compare the instructor's and the student's transfer functions, and to check what modifications of the student's movement pattern can reduce the gap between them. Notice that in skydiving, comparing the movement patterns directly may be less useful, as may mimicking the movement patterns of a skilled model like the instructor. The reason is that the aerodynamic forces and moments generated by moving a certain limb greatly depend on the individual body shape, height, weight, the type of jumpsuit, and the neutral pose, which in turn depends on the body's flexibility and centre-of-gravity location. Thus, the same movement of a certain student's and instructor's limb can cause a different type of motion in 3D space, or the same type of motion but in the opposite direction. In this way, from the student's PCA alone the coach cannot know what tips to give him, as there is no 'template' or 'correct way' to perform a move. The tool proposed above, in contrast, allows the coach to know what performance change will likely be caused by a change in a movement pattern. In our study case, the coach can recommend that the Student pay attention that his torso tilts rather than rotates, allow a larger range of movement in the elbow joint, and, finally, pay attention that in the neutral pose the forearms are in the same plane as the torso. These changes will enable the Student to be less sensitive to the turbulence of the airflow, and to generate faster turns while not losing stability.
The simulated improvements need to be experimentally verified and compared to other practice and instruction conditions. Introducing the desired changes into practice should be gradual (one change at a time), and, while performing the manoeuvre, the focus of attention should still be external, e.g. looking at the instructor as mentioned above. When initiating a movement (e.g. a rotation) the trainee should consciously apply the desired change (e.g. the torso tilt) and pay attention to how it feels and how the airflow around the body is redistributed. This new initial response, according to fundamental somatic practices (Brodie and Lobel (2012)), will help to break previously acquired 'bad habits' and trigger the emergence of new movement patterns.

Theoretical perspective

The main hypothesis of this research, demonstrated in the study case, explains the learning process of a new motor skill from the point of view of control theory. The learning begins with a primitive body actuation that leads to a problematic plant and forces novices to act in open loop, since the required controller would have a dynamic complexity that is hard for the human motor mechanisms to realize. Over time, the learning converges to operating in a closed loop via a simple control law, since an improved body actuation provides the plant with the desirable dynamic properties. When the skill level is sufficiently high, the stability-agility trade-off becomes shifted towards the latter in order to enable performing advanced manoeuvres. The plant, inherently unstable in this case, is controlled in open loop, while the feed-forward gain is learnt by trial and error. This process is very time-consuming: athletes repeat each competition move thousands of times in the wind tunnel. Probably for this reason, the specialization in skydiving is so narrow. There are many disciplines, and in each discipline skydivers compete in only a very limited number of specific manoeuvres. For example, the oldest skydiving discipline, called 'Style and Accuracy', included only two manoeuvres: turns and back loops. We have discovered that for most manoeuvres a significant improvement of the plant dynamics can be achieved by modifying the engagement of only a few DOFs (or even just one). In Sect. 3.4 it was the torso tilt, which we termed the Key DOF. We observed that the key DOF is usually proximal: one of the torso DOFs, the shoulders, or the hips. This can be expected, since a small change in these joints creates a significant displacement of the distal limbs (due to a large lever arm), allowing them to be placed in a more efficient position relative to the airflow. This assumes, however, that the distal DOFs are unlocked, i.e. engaged in the movement. Thus, the insight into Bernstein's DOF problem that emerged from our research is the following: movement patterns required for a skilful task execution incorporate a large number of body DOFs in order to account for many aspects of the environmental dynamics. Human kinematic redundancy is utilized for achieving a dynamic system with good handling characteristics: sufficient bandwidth and stability margins, high-frequency disturbance attenuation, no cross-coupling, zero steady-state error for step inputs, fast rise time, small overshoot, and other qualities depending on the desired manoeuvre or task.
Simulations show that it is impossible to produce a plant with good handling qualities if the movement pattern that actuates the body engages only one or two joint rotation DOFs, even if these DOFs are sufficient for initiating a desired manoeuvre, e.g. a turn. This is the reason why current skydiving training is so hard and protracted. The movements that are taught to students are simple, but it is very hard to use them in reality, while the efficient movements that are convenient for manoeuvring are not taught, as it is difficult to explain them. In order to develop efficient movement patterns the body needs to learn the interaction with the environment: the aerodynamics of free-fall. This is achieved through performing free-fall manoeuvres, which is extremely difficult for novices, since they are trying, according to our model, to control a plant with very poor dynamic characteristics. This results in stability loss and spending most of the training session (free-fall time) in attempts to regain it. This vicious circle can, theoretically, be broken in two ways, which are the primary directions for our future work. The first option is showing the trainee manoeuvres that can be performed using the simple movements. Firstly, determine (by means of simulations) the performance envelope of a trainee given his current movement repertoire. Secondly, design individual exercises that are inside the trainee's performance envelope. The trainee is thus given tasks that we know are achievable, i.e. we prevent him from losing stability and wasting training time. The trainee's body will acquire a feel for the flight dynamics, thus triggering the CNS to produce new movement patterns, utilizing additional DOFs, and extending the performance envelope. The second option is finding a way to teach the more efficient movements. For example, extract

Declarations

Conflict of interest. The authors have no relevant financial or non-financial interests to disclose.

Funding. No funds, grants, or other support was received.

Author contributions. The manuscript draft preparation, and the data collection and analysis, were performed by Anna Clarke. The study supervision and the manuscript editing were performed by Per-Olof Gutman. Both authors read and approved the final manuscript.

Ethics approval. The experiments were approved by the Technion Institutional Review Board and Human Subjects Protection.

Consent to participate. The participants were informed of the aims and procedures of the experiments, and signed an informed consent form.

Consent for publication. The participants have agreed to the publication of the motion data collected during the experiments, the data processing results, and the videos recorded during the experiments.
The Evaluation of GPS Techniques for UAV-based Photogrammetry in Urban Areas

The efficiency and high mobility of Unmanned Aerial Vehicles (UAVs) have made them essential to aerial-photography-assisted survey and mapping. This is especially true for urban land use and land cover, which change frequently, so UAVs are needed to obtain new terrain data and to capture the new changes in land use. This study collects image data and three-dimensional ground control points in the Taichung City area with an Unmanned Aerial Vehicle (UAV), a general-purpose camera, and Real-Time Kinematic (RTK) positioning with accuracy down to the centimeter level. The study area is an ecological park with low topography that serves the city as a detention basin. A digital surface model was built with Agisoft PhotoScan, along with high-resolution orthophotos. Two conditions were studied, with and without ground control points, and the accuracy levels of the corresponding digital surface models were discussed and compared. According to the check-point deviation estimates, the model without ground control points has an average two-dimensional error of up to 40 centimeters and an altitude error within one meter. The GCP-free RTK-airborne approach produces centimeter-level accuracy with excellent to low risk to the UAS operators. In the case of the model with ground control points, the accuracy of the x, y, and z coordinates improved by 54.62%, 49.07%, and 87.74%, respectively, with the altitude accuracy improving the most.

Background

Landslides are gravitational mass movements of rock, debris or earth [Glade, T., et al., 2012]. They constitute a major natural hazard in all hilly or mountainous regions throughout the world [Hölbling, D., 2012]. It is not uncommon for landslides to occur in conjunction with major natural disasters such as floods, earthquakes and volcanic eruptions. Rainfall is a primary trigger of landslides [Raia, S., et al., 2014]; an example is the 2014 Hiroshima landslides triggered by torrential rain [Kurtenbach, E., 2014]. Expanding urbanization and changing land-use practices have increased the incidence of landslide disasters. Although landslide movements are mostly a very local phenomenon, they cause damage to man-made structures and affect infrastructure from local to regional scales, or even on a national scale. The floods and landslides in China from May to August 2010 ranked second highest in terms of economic damage caused by natural disasters, with US$18 billion worth of damage [Hölbling, D., 2012; Guha-Sapir, D., et al., 2011]. Given the severity of landslides, they are also addressed by the Copernicus emergency management service implemented by the European Commission (EC) with support from the European Space Agency (ESA) and the European Environment Agency (EEA) [European Commission, 2014].
A key parameter in engineering measures and disaster management of landslides is the earthwork volume [Chen, Z., et al., 2014]. This parameter can also be used to predict secondary hazards, such as debris flows and dammed lakes. In addition, the earthwork volume is an important index for stability analysis and risk assessment, as well as for evaluating the investment needed for dealing with landslides [Chen, Z., et al., 2014; Guzzetti, F., et al., 2005; Chang, C.W., et al., 2011]. Unconventional photogrammetry (UP) has been used to study landslides, including the determination of landslide volume. Setting up ground control points (GCPs) and measuring them has been part of UP, as they are necessary to improve the geometry of airborne or spaceborne images. However, acquiring accurate ground control points via the traditional method of on-ground survey imposes additional time and cost, and can take up more than 50% of the entire project duration. It is particularly challenging and risky to set up ground control points in hazardous and inaccessible locations. The SIRIUS Pro UAS, which uses GNSS-RTK technology, achieved 2 to 5 cm accuracy without physical GCPs. The precise positioning technology allows the image locations to be used as the equivalent of GCPs. This work reports a study of UP measurements for the determination of landslide volume in which the GCPs are measured by means of GNSS and VBS-RTK. They are evaluated against four criteria, namely the earthwork volume accuracy, turnaround time, cost, and personnel safety. UP with a complete absence of on-ground GCPs is evaluated as well.

Study area

The study area was the Maple Garden in Taichung City, Taiwan. It is Asia's first large urban sunken green park, constructed by the Taichung City Government, and it covers an area of 28,000 m².

VBS-RTK

Real-Time Kinematic (RTK) systems use global navigation satellite system (GNSS) signals to deliver almost instantaneous positions with centimeter-level accuracy [Riley, S., et al., 2000]. The techniques have been used for a variety of applications such as topographic surveying, mining, vehicle guidance and automation [Chang, H., et al., 2014]. The traditional RTK method requires a base and a rover, and the maximum range between them is 10 to 15 km. With the establishment of RTK networks, one can work with an RTK rover within these networks without the need to set up one's own base station. Network RTK generally requires a recommended minimum of five reference stations with an inter-station spacing of up to 70 km. Network RTK has been implemented in several ways, such as the Master-Auxiliary Concept (MAC), Virtual Reference Station (VRS) and Flächenkorrekturparameter (FKP). Each of these techniques has its own advantages and shortcomings, but all of them were designed to achieve high-precision positioning via accurate correction information. This work used the VBS approach.
Unconventional photogrammetry

Photogrammetry is the science of making measurements from photographs, and it allows reconstruction of the position, orientation, shape and size of objects [Kraus, K., 2007]. Remote sensing data have been widely used to study landslides. In particular, the use of stereo photogrammetry is gaining momentum as it can produce high-resolution digital terrain models (DTMs) or digital elevation models (DEMs). It has been used to detect and analyze the spatial distribution of landslides through surface features associated with sliding, such as cracks, scarps, and folds [Herrera, G., et al., 2009; Zhang, W., et al., 2013]. Due to the time-critical urgency of natural disaster or major accident relief, much focus has been placed on the "unconventional photogrammetry" measurement methods and processes proposed by Tomasi and Kanade [Tomasi, C. and Kanade, T., 1991]. While unconventional photogrammetry is less precise than traditional photogrammetry, it is suitable for emergency response because it does not require camera calibration and can use images from consumer-grade cameras. It can also be applied to historical photos or photos taken by anyone after a disaster. These flexible attributes allow rapid reconstruction of 3D terrain data. Hsiao et al. assessed the earthwork volume of a large-scale slope failure in Taiwan using the unconventional photogrammetry method and the cut-and-fill operation. The discrepancy between their computed result and that provided by the Taiwanese government was only 2.5% [Hsiao, C., et al., 2011], and such excellent agreement provided further evidence that the cut-and-fill operation and the unconventional photogrammetry technique can provide accurate estimates of landslide volumes.

Landslide volume calculations

The general method for calculating the earthwork volume is to evaluate the differences in DTM elevation before and after a landslide has occurred. Following the work of Chen et al., the landslide's accumulated volume $V$ (m³) can be obtained using the Height Difference Model:

$$V = \int_{z_l}^{z_u} \Delta h \, dA \qquad (1)$$

where $A$ (m²) is the horizontal area between $z_l$ and $z_u$, and $\Delta h$ (m) is the elevation difference between the pre- and post-landslide DTMs, with $z_l$ and $z_u$ representing the lowest and the highest elevation, respectively. Based on Eq. 1, negative values of the elevation difference correspond to subsidence or ablation of rock and soil, which can be used to derive the removed volume. Likewise, positive values reflect movement where subsidence is combined with the advance of the landslide, and these values can be used to derive the accumulated volume of the landslide [Kasperski, J., et al., 2010]. The Height Difference Model is simple yet relatively accurate; it has been used by numerous researchers and adopted by commercial software packages [Du, J. and Teng, H., 2007]. The accuracy of results produced by the model depends very much on the quality of the available data: uncertainties in the plane position (x- and y-coordinates), height (z-coordinate), and the height baseline difference all reduce the accuracy. Closely related to the Height Difference Model is the cut-and-fill operation, which describes the volume change between two grid datasets. It can be thought of as the discrete version of the Height Difference Model, and it is implemented in the ArcGIS software.
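Since the cut-and-fill operation is simply the discrete form of Eq. 1, it can be illustrated in a few lines of code. The following is a minimal sketch, not the ArcGIS implementation; the grids, cell size and values are invented for illustration.

```python
import numpy as np

def landslide_volumes(dem_pre, dem_post, cell_size):
    """Discrete cut-and-fill version of the Height Difference Model (Eq. 1).

    dem_pre, dem_post : 2-D arrays of elevations (m) on the same grid
    cell_size         : grid spacing (m); each cell covers cell_size**2 m^2
    Returns (removed_volume, accumulated_volume) in m^3.
    """
    dh = dem_post - dem_pre                      # elevation difference, dh
    cell_area = cell_size ** 2
    removed = -dh[dh < 0].sum() * cell_area      # subsidence/ablation (cut)
    accumulated = dh[dh > 0].sum() * cell_area   # deposition (fill)
    return removed, accumulated

# Toy 3x3 grids with 1 m spacing: one cell loses 2 m, one gains 1 m.
pre = np.zeros((3, 3))
post = np.zeros((3, 3))
post[0, 0] = -2.0
post[2, 2] = 1.0
print(landslide_volumes(pre, post, cell_size=1.0))  # -> (2.0, 1.0)
```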
Unmanned aerial systems (UAS)

Unmanned aerial systems (UAS) refer to systems comprising an unmanned aerial vehicle (UAV), a ground control station (GCS) and the communication data link between the UAV and the GCS [Colomina, I. and Molina, P., 2014]. In fact, a UAS is a system of systems. The UAV itself consists of critical components for flight control, navigation, sensing and orientation; examples of such components are mechanical servos, the autopilot system, navigation sensors (gyros) and imaging sensors. Common types of UAV employed in geographic information system (GIS) and landslide research are multi-rotor, rotary-wing and fixed-wing aircraft. UAVs are well suited for surveillance missions around hilly and inaccessible regions, though many current UAVs are susceptible to poor weather conditions such as strong wind and rain. UAVs have several advantages for acquiring high-resolution images: a less expensive remote sensing platform, reduced operational costs, improved safety for operators, and a more rapid deployment capability than piloted aircraft [Rango, A., 2009].

The UAV used for this study was a modified Hirobo Freya EX III with a custom-designed 360° camera gimbal system. The gimbal system was equipped with gyro-based auto-stabilization and passive vibration dampers. The platform was powered by an O.S. MAX-91HZ glow engine with a displacement of 0.912 cu. in. The onboard GPS sensor and inertial measurement unit (IMU) provided ground speed, orientation and gravitational force information. The Eagle Tree FPV (first-person view) system allowed the transmission of live video images as well as relevant telemetry data such as GPS ground speed and altitude. Maximum flight time was about 30 minutes with a total flying weight of 8.5 kg and a 5.5 kg payload. The consumer-grade digital camera used in this study was the Canon EOS 5D Mark II with a 21.1-megapixel full-frame CMOS sensor. The radio control and live video links used the 72 MHz and YY GHz bands, respectively. A more complete list of specifications of the system is tabulated in Table 1.

Unconventional photogrammetric workflow

Once the UAV flight mission was completed, the images from the digital camera were transferred to a desktop computer. Agisoft software was used for the photogrammetric processing and the creation of the three-dimensional model (3D reconstruction). Camera calibration was performed in the Agisoft PhotoScan software based on the image correlation algorithm proposed by Ackermann [Ackermann, F., 1984]. The absolute orientation process was applied using the affine transformation method, which includes translations, rotations and scaling. The next step was the automatic generation of the 3D point clouds; at this stage, a manual editing process was performed to remove any obvious outliers. These point clouds were then merged by triangulation to create a surface. The resulting outputs from the Agisoft software were the DSM, the corresponding orthophoto, and a 3D viewer animation. The landslide volume was computed using the ArcGIS software. The accuracy of the photogrammetric results was evaluated using the root mean square error (RMSE) estimator, by comparing the coordinates of 10 ground control points and a further 10 check points in the photogrammetric model with the corresponding coordinates measured in the terrain.
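The RMSE computation used here is standard; the sketch below shows it for one coordinate axis. The point coordinates are invented for illustration and are not the paper's data.

```python
import numpy as np

def rmse(model, surveyed):
    """Root mean square error (m) between photogrammetric model
    coordinates and the corresponding surveyed coordinates."""
    model = np.asarray(model, dtype=float)
    surveyed = np.asarray(surveyed, dtype=float)
    return float(np.sqrt(np.mean((model - surveyed) ** 2)))

# Hypothetical x-coordinates (m) of three check points.
model_x = [100.41, 250.12, 310.55]
survey_x = [100.00, 249.80, 310.20]
print(f"RMSEx = {rmse(model_x, survey_x):.4f} m")
```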
RESULTS AND DISCUSSION

3.1 Three-dimensional (3D) reconstruction of UP

Fig. 3(a) shows the distribution and overlaps of the aerial photographs used in the 3D reconstruction of the unconventional photogrammetry, together with the 3D surface constructed from dense point clouds of more than 226,000 points. After triangulation of the model, a digital surface model (DSM) was obtained [Fig. 3(b)]. A texture model was generated by projecting the aerial images onto the DSM, as shown in Fig. 3(c). The 3D texture model was then projected onto the x-y plane to obtain the orthophoto [Fig. 3(d)], which has a resolution of 1.3 cm per pixel, contributing to a crisp image quality.

Analysis and Results

Ten check points in the study area (Points 1 to 10), as indicated in Fig. 4(a and b), were selected based on geographical features for accuracy analysis. The largest deviations observed in the x-direction and y-direction were 27.09 m (at Point 4) and 28.54 m (at Point 3), respectively. These translated into RMSEx and RMSEy values of 19.09 m and 14.37 m, respectively. The deviation and RMSE values are summarized in Table 2. Perez et al. achieved an RMSE of less than 0.1 m in all three dimensions by using a Trimble R6 GPS receiver to measure the coordinates of the check points and ground control points [iv]. Wu et al. achieved similar accuracy by deploying Virtual Base Station Real-Time Kinematic (VBS-RTK) processing, which allowed real-time positioning and hence provided the actual coordinates of the reference control points [v].

Results

A comparison was performed between the case with the current GCPs and the GCP-free case using the check points. As shown in Table 3, the x-coordinate error at Point 16, 0.5104 m, was the maximum error value in x; the maximum y-coordinate error, at Point 18, was 0.4451 m; and Point 11 had the maximum z error at 1.7936 m. The RMSE values calculated for x, y and z were 0.3958 m, 0.2739 m and 0.8704 m, respectively.

Comparison study

Ground control points were then introduced to reduce the error of the DSM, with the ten check points used to quantify the improvement. The RMSE values in X, Y and Z all decreased: by 0.2162 m (54.62%) in X, 0.1344 m (49.07%) in Y and 0.7637 m (87.74%) in Z.

Table 1. The ground control points
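The reported percentages are consistent with reading the quoted figures as absolute RMSE decreases expressed as a fraction of the GCP-free RMSE. The short check below reproduces them; the with-GCP values are back-calculated from the reported decreases rather than taken from Table 3.

```python
# GCP-free RMSE (m) and with-GCP RMSE (m), the latter back-calculated
# from the reported decreases (e.g. 0.3958 - 0.2162 = 0.1796 in X).
gcp_free = {"x": 0.3958, "y": 0.2739, "z": 0.8704}
with_gcp = {"x": 0.1796, "y": 0.1395, "z": 0.1067}

for axis in "xyz":
    decrease = gcp_free[axis] - with_gcp[axis]
    pct = 100 * decrease / gcp_free[axis]
    print(f"{axis}: RMSE decreased by {decrease:.4f} m ({pct:.2f}%)")
# -> x: 0.2162 m (54.62%), y: 0.1344 m (49.07%), z: 0.7637 m (87.74%)
```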
Cytogenetic and mutational analysis and outcome assessment of a cohort of 284 children with de novo acute myeloid leukemia reveal complex karyotype as an adverse risk factor for inferior survival

Acute myeloid leukemia (AML) is rare in children. Although complex karyotype (CK), defined as ≥ 3 cytogenetic abnormalities, is an adverse risk factor in adult AML, its prognostic impact on childhood AML remains to be determined. We studied the prevalence, cytogenetic and mutational features, and outcome impact of CK in a cohort of 284 Chinese children with de novo AML. Thirty-four (12.0%) children met the criteria for CK-AML, with atypical CK being more frequent than typical CK, the latter featuring -5/5q-, -7/7q-, and/or 17p aberrations. Mutational prevalence was low and co-occurring mutations were uncommon. Children with CK-AML showed shorter overall survival (OS) (5-year OS: 26.7 ± 10.6% vs. 37.5 ± 8.6%, p = 0.053) and event-free survival (EFS) (5-year EFS: 26.7 ± 10.6% vs. 38.8 ± 8.6%, p = 0.039) compared with those with intermediate-risk genetics. Typical CK tended to correlate with a decreased OS compared with atypical CK (5-year OS: 0 vs. 33.0 ± 12.7%; p = 0.084), and CK with ≥ 5 cytogenetic aberrations was associated with inferior survival compared with CK with ≤ 4 aberrations (5-year OS: 13.6 ± 11.7% vs. 50.0 ± 18.6%; p = 0.040; 5-year EFS: 13.6 ± 11.7% vs. 50.0 ± 18.6%; p = 0.048). Our results demonstrate CK as an adverse risk factor for reduced survival in childhood AML. These findings shed light on the cytogenetic and mutational profile of childhood CK-AML and will inform refinement of risk stratification in childhood AML to improve outcomes.

Background

Acute myeloid leukemia (AML) is a group of clonal hematopoietic neoplasms characterized by aberrations in maturation, proliferation, and survival in the stem and progenitor cell compartments. Childhood AML is a relatively rare disease that accounts for 15%-20% of acute leukemias in children. Despite considerable progress in treating children with AML, about 30% of patients still experience relapse and do not survive beyond five years [1,2]. Identification of additional genetic biomarkers predicting prognosis in childhood AML is needed to improve outcomes. Here, we report a study that investigated the prevalence, features, and clinical correlations of cytogenetic and mutational characteristics of CK-AML in a cohort of 284 children with AML. Our study showed that CK was associated with decreased survival in childhood AML and that its impact on outcome correlated with the number of chromosomal aberrations. These results will aid in informing risk stratification of childhood AML to guide risk-adapted therapy.

Patients and samples

A total of 284 patients (≤ 18 years old) with de novo AML were enrolled in the study between 2007 and 2018 at the Children's Hospital of Chongqing Medical University in China. The diagnoses were based on histological, cytogenetic, and immunophenotyping analyses of bone marrow. The patients were treated with a daunorubicin/cytarabine/etoposide (DAE)-based regimen following the protocols of the Pediatric Hematology Group of the Chinese Medical Association [21]. The study was reviewed and approved by the Ethics Committee of the Children's Hospital of Chongqing Medical University in accordance with the Declaration of Helsinki.

Cytogenetic analysis

G-banded karyotyping and fluorescence in situ hybridization (FISH) studies were performed according to standard procedures [22]. A complete study required analysis of at least 15 metaphases [19].
An unbalanced aberration involving two or more chromosomes was counted as two abnormalities [23,24]. Down syndrome AML was excluded from the study. CK patients with -5/5q-, -7/7q-, and/or 17p aberrations were classified as typical CK, while the others were deemed atypical CK. Karyotype designation was in accordance with the International System for Human Cytogenomic Nomenclature 2016 [25].

Gene mutation analysis

Total RNA was extracted from bone marrow samples using the Tiangen RNAprep Pure Blood Kit (Tiangen Biotech, Beijing, China) and used as the template for cDNA synthesis with the Reverse Transcription System (Promega, Fitchburg, WI). DNA fragments covering the mutational hotspots were amplified from cDNA by polymerase chain reaction (PCR) under the conditions previously described [27-32]. The PCR products were analyzed by Sanger sequencing, and PCR products containing mutations were re-analyzed at least once to confirm the presence of the identified mutations. Some mutant PCR products were subcloned into the pBackZero-T Vector (TaKaRa Biotechnology Co., Dalian, China) for further sequencing. The PolyPhen and SIFT programs as well as the COSMIC database (release v89, 15th May 2019) were employed to predict the pathogenicity of variants [33,34]. Gene mutation hotspots analyzed in this study included those in the WT1, CEBPA, FLT3/ITD, IDH1, NPM1, KIT, CCND1, IDH2, ASXL2, DHX15, DNMT3A, and NRAS genes.

Statistical analysis

Patient characteristics were compared using the chi-square (χ²), Fisher's exact, or Mann-Whitney U test, as appropriate. Complete remission (CR) was defined as bone marrow with less than 5% blasts and evidence of regeneration of normal hematopoietic cells. Overall survival (OS) was calculated from the date of diagnosis to death or last contact. Event-free survival (EFS) was the time between diagnosis and occurrence of the first event (i.e., failure to achieve complete remission, relapse, secondary tumor, or death from any cause). OS and EFS were estimated using Kaplan-Meier analysis, and differences were compared using the log-rank test. A p value of ≤ 0.05 (two-sided) was considered statistically significant. The analyses were performed with the SPSS software package v17.0 (SPSS, Inc., Chicago, Illinois).

Clinical and gene mutation characteristics of childhood CK-AML

There were no differences in clinical features between the CK and intermediate-risk groups except that patients with CK-AML tended to be younger (2.5 yrs. vs. 5.0 yrs., p = 0.031) (Table 1). Compared with children with CK-AML, patients with intermediate-risk features had a higher incidence of NRAS mutation, although the difference was not statistically significant (3% vs. 18%; p = 0.079) (Table 1). Among the patients with CK-AML, the WT1 gene had the highest mutational incidence (13%), followed by the CEBPA, FLT3/ITD, and IDH1 genes (6.0% each); no mutations were observed in the NPM1, KIT, CCND1, IDH2, ASXL2, DHX15, and DNMT3A genes (Table 1). Patients with atypical CK-AML tended to have a higher percentage of blasts in bone marrow than those with typical CK-AML (73% vs. 68%; p = 0.08), while there were no differences in other clinical and molecular features between the two groups (Additional file 2: Table 2). Similar clinical and mutational features were observed between CK with ≤ 4 and ≥ 5 aberrations (Additional file 3: Table 3). CK-AML patients with a single clone were younger (2.0 vs. 3.0 yrs., p < 0.001) and had a higher percentage of blasts in marrow (70% vs. 68%; p < 0.001) compared with those with two or more clones, but showed no difference in gene mutational frequencies (Table 2).
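As a rough illustration of the Kaplan-Meier and log-rank analysis described in the Statistical analysis section above, the sketch below reproduces the workflow in Python with the lifelines package; the original analyses were run in SPSS, and the follow-up times, events, and group labels here are invented, not patient data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy cohort: follow-up time in months, event flag (1 = death observed).
df = pd.DataFrame({
    "months": [6, 14, 20, 35, 60, 60, 8, 26, 60, 48],
    "event":  [1, 1, 1, 0, 0, 0, 1, 1, 0, 1],
    "group":  ["CK"] * 5 + ["intermediate"] * 5,
})

km = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    km.fit(sub["months"], sub["event"], label=name)
    print(name, "5-year OS estimate:", km.predict(60))

ck = df[df["group"] == "CK"]
im = df[df["group"] == "intermediate"]
res = logrank_test(ck["months"], im["months"], ck["event"], im["event"])
print("log-rank p =", res.p_value)
```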
Of the thirteen common AML genes examined in the CK-AML cohort, concomitant mutations were observed in only one patient (CEBPA and NRAS) (Fig. 1). Children with CK-AML showed shorter OS and EFS than those with intermediate-risk genetics (Fig. 2). Children with typical CK-AML showed a trend toward decreased OS (5-year OS: 0 vs. 33.0 ± 12.7%; p = 0.084) but no difference in EFS (5-year EFS: 0 vs. 33.0 ± 12.7%; p = 0.14) compared with those with atypical CK-AML. Patients with CK carrying ≥ 5 cytogenetic aberrations had inferior survival compared with those carrying ≤ 4 aberrations (Fig. 2). Outcome data were available on three CK-AML cases with a mutant WT1 gene: two relapsed, one dying at 8 months and the other at 26 months. Another child was still alive at the last contact, six and a half years after diagnosis.

Discussion

In the past decades, significant progress has been made in treating childhood AML, but one-third of children with AML relapse and do not survive beyond 5 years [1,2]. Cytogenetics is a major factor in AML risk classification, which is important in guiding risk-adapted treatment [3,4]. So far, our knowledge of AML cytogenetics has been derived primarily from studies of adult AML, and little is known about the features and clinical correlations of CK in childhood AML, mainly owing to the rarity of the disease [18-20]. The prevalence of CK-AML in our study cohort, 12.0%, is comparable to the 9.5% reported in another AML study in Chinese children [35]. In a study of children with AML in the United Kingdom, Harrison and colleagues observed a high CK incidence of 17.7%, but their study included CK with the WHO recurrent AML genetic aberrations such as t(8;21) and inv(16) [13]. A relatively high CK prevalence, 18.5%, was also documented in a small study of Korean children [36]. In another study of 642 European children with AML, Rasche and colleagues reported a CK frequency of 9% [20]. In the present study, we also observed a distinct age-associated CK distribution, with a higher incidence in toddlers than in young children and adolescents (20.0% in < 2 yrs. vs. 10.8% in ≥ 2 yrs). This is in agreement with the observation from a study of German children which showed a similar distribution between the two age groups [37].

CK is considered an adverse risk factor in adult AML, but its role in childhood AML remains inconclusive [3]. In our cohort, most children with CK-AML reached CR, in line with findings from other studies [20,38,39], but had shorter survival compared with those with intermediate-risk features, demonstrating that CK is an adverse risk factor in childhood AML. Bager et al. reported reduced OS and EFS in children with CK-AML compared with those with non-CK-AML [18]. In their study, the comparator group of children with non-CK also included cases with t(8;21) and inv(16), which are associated with favorable outcomes. Therefore, it cannot be definitively distinguished whether the improved survival observed in the non-CK group was due to the presence of favorable cytogenetics in the comparator group, or whether the reduced survival in the CK group was due to a worse effect of CK on AML than the intermediate-risk cytogenetics in the comparator group. In another study of 59 children with CK-AML, Rasche and colleagues found no difference in survival between children with CK-AML and those with either intermediate- or low-risk features [20]. In adult CK-AML, shorter survival and a higher relapse rate have been associated with typical CK compared with atypical CK [19,40]. We found no difference in EFS between the typical and atypical CK groups among our patients, but OS tended to be reduced in the typical CK-AML group.
In a previous study of British children with CK-AML, Grimwade and colleagues described comparable outcomes between typical and atypical CK-AML [13]. In that study, the CK cohort also included the known favorable cytogenetics of t(8;21) and inv(16), which could influence the outcomes in either or both subgroups and thus might be a confounding factor in assessing the impact of typical and atypical CK on AML outcomes [13]. Additional studies are necessary to further assess whether typical and atypical CK-AML are two distinct disease entities with different outcomes in childhood AML. Furthermore, our study reveals that CK with ≥ 5 aberrations is associated with shorter survival than CK with ≤ 4 aberrations, suggesting a correlation between a higher number of chromosomal abnormalities and a worse prognosis. A similar relationship was also observed in childhood CK-AML in the study of Rasche and colleagues, who reported significantly reduced OS among children with CK having > 5 aberrations compared with those with ≤ 5 abnormalities, but no difference in EFS between the two groups [20]. More recently, Bager et al. observed a longer 5-year OS in CK with five or more aberrations and comparable EFS compared with CK with 3-4 aberrations [18]. Future studies are warranted to determine whether a complex karyotype with five or more chromosomal aberrations is associated with worse outcomes in childhood CK-AML. In the present study, there was a considerable number of CK cases harboring more than one cytogenetic clone, but no differences in outcomes between those with a single clone and those with multiple clones.

Recent mutational studies using next-generation sequencing demonstrate that mutations at diagnosis play a critical role in leukemogenesis, but that mutational evolution during the disease course is also important in influencing outcomes. These observations underscore the importance of continuous genetic profiling throughout the disease course in guiding optimal therapy to improve outcomes [41,42]. Thus far, there is limited information on the mutational profile of CK-AML, and no mutational profiling of childhood CK-AML has been reported in the literature [19]. The results from our study show that atypical CK is more common than typical CK in childhood AML, in contrast to the high frequency of typical CK relative to atypical CK reported in adults [19,40,43]. Analysis of more than a dozen common AML genes in our pediatric AML cohort showed that mutation incidences were low and concomitant mutations were rare. In a study of 81 genes in adult CK-AML, Mrozek and colleagues reported an average of two mutations per case [19]. Considering the fact that mutational frequencies in AML increase with aging, our results, along with others, demonstrate that molecular aberrations are uncommon in childhood CK-AML [37]. Although the mutated FLT3/ITD and IDH1 gene incidences in our childhood CK-AML cohort were comparable to those observed in adult counterparts, mutant WT1 and CEBPA gene incidences were higher in our cohort than in adult patients (WT1: 13.0% vs. 2.9%; CEBPA: 6.0% vs. 1.5%) [19]. Of the three CK-AML patients carrying WT1 gene mutations with outcome information available, two relapsed and died at 8 and 26 months, respectively, after diagnosis. This is consistent with observations reported by others that WT1 mutation is associated with decreased survival and high relapse rates [44,45]. Finally, TP53 gene aberrations have been reported in 40%-50% of adult patients with CK-AML [19,46].
TP53 mutational analysis was not performed in the present study, and we observed only one CK-AML case (2.9%) with a 17p13.1 deletion by cytogenetic analysis. Taken together, our results show a difference in cytogenetic and mutational profiles between childhood and adult CK-AML, which is in accordance with findings in other AML subtypes [37]. Differences in results between our study and others are likely attributable to variation in the composition of the study cohorts, including the number of patients, age, treatment modalities, criteria for complex karyotype (≥ 3 vs. ≥ 5 aberrations), geographic locations, ethnic groups, methods used in mutation analysis, and the number of genes examined. Our results need to be validated by future studies of large cohorts of children with CK-AML.

Conclusions

To the best of our knowledge, no such studies have been reported in the literature, and ours is the first in the Chinese population. Our results demonstrate for the first time that, among Chinese children with CK-AML, atypical CK was more frequent than typical CK, mutational incidences were low, and concomitant mutations were uncommon. CK-AML had reduced EFS and OS compared with intermediate-risk AML, indicating CK as an adverse risk marker for childhood AML. Typical CK-AML tended to correlate with decreased OS compared with atypical CK-AML. Moreover, CK-AML with five or more cytogenetic aberrations was associated with inferior survival compared with CK with four or fewer abnormalities, suggesting that the number of cytogenetic abnormalities in CK may influence outcome. The results of our study will inform refinement of risk stratification for childhood AML to improve outcomes.
A review of New World Eurytenes s. str. (Hymenoptera, Braconidae, Opiinae)

The New World species of Eurytenes Foerster sensu stricto (Hymenoptera: Braconidae, Opiinae) are revised, and a key to these species is presented. Four new species are described: Eurytenes (Eurytenes) dichromus sp. n. from Texas, E. (E.) microsomus sp. n. from Texas, E. (E.) pachycephalus sp. n. from Mexico, and E. (E.) ormenus sp. n. from Mexico. Eurytenes abnormis (Wesmael) is redescribed for comparison, and its host records are reviewed.

In addition to the type species, three other species are currently included in Eurytenes s. str.: E. orientalis Fischer, 1966, E. cratospilum Chen & Weng, 2005, and E. basinervis Wu & Chen, 2006. Eurytenes abnormis is Holarctic (Fischer 1972, 1977), E. orientalis is from the Philippines (Fischer 1966), and the other two species are from China (Chen and Weng 2005; Wu and Chen 2006). The purpose of this paper is to expand the known distribution of Eurytenes s. str. by describing new species from central Texas and central Mexico. We also discuss morphological features useful for discriminating both New and Old World species.

Materials and methods

Nearly all of the material used in this revision is from the Texas A&M University Insect Collection (TAMU). Material for comparison was obtained from or examined at the following institutions: American Entomological Institute (AEI), Gainesville, FL, USA; Canadian National Collection, Ottawa, Canada; Institut Royal des Sciences Naturelles de Belgique, Brussels, Belgium; Naturhistorisches Museum, Vienna, Austria (NHMW); U. S. National Museum of Natural History, Washington, D. C. (USNM). The newly described species were compared with several Old World specimens, including E. abnormis from eight European localities, the holotype of E. orientalis, two undetermined specimens from Taiwan, and one undetermined specimen from the Kurile Islands.

Descriptive terminology largely follows Sharkey and Wharton (1997), with modifications as in Wharton (2006). For clarity, the first metasomal tergite (T1) is referred to as the petiole, with the following median tergites referred to as T2, T3, etc. The mesoscutum consists of an anterior declivity and the more readily visible, relatively flat portion posteriorly, referred to in the descriptions as the mesoscutal disc. Measurements were taken using a reticle on a Zeiss Stemi DRC microscope and converted to ratios or millimeters. Flagellomere width was measured at the narrowest point. The clypeus was measured as in Fig. 1 and the width compared to the distance from the inner margins of the anterior tentorial pits to the eye. Face width was measured as the shortest distance between the eyes and compared to face height from the epistomal sulcus at the top of the clypeus to the lower margin of the antennal socket. The angle of the precoxal sulcus was estimated with the head-mesosoma and mesosoma-petiole attachments aligned horizontally. The hind tibia was measured from the attachment point of the femur to the attachment point of the tarsus vs. maximum width at the apex. Petiole (= T1) length was measured laterally from the base of the dorsope to the attachment point of T2 (Fig. 2). Total ovipositor length, measured in lateral view, was estimated using specimens whose hypopygium was extended, thus exposing the majority of the ovipositor. Mesosomal length was measured as in Fig. 3. Measurements are presented as ranges followed by the mean (m). The term butterscotch is used to describe the tawny, yellowish-brown color of many of the body parts.
In the material examined sections, label data are presented in a uniform format for specimens other than the holotypes of new species. For holotypes of newly described species, data are recorded exactly as given on specimen labels, with square brackets for additional data not on the labels.

Images were acquired digitally using Syncroscopy's Auto-Montage Pro 5.01.0005 (Copyright Synoptics Ltd.) and PictureFrame (TM) Application 2.3 in combination with a ProgRes 3008 digital camera mounted on a Leica MZ APO dissecting microscope. All images were further processed using Adobe Photoshop® CS5. Images are stored in mx, a web-based content management system that facilitates data management and dissemination for taxonomic and phylogenetic works (e.g. Yoder et al. 2006). The mx project is open source, with code and further documentation available at http://sourceforge.net/projects/mx-database/.

Mesosoma. Pronotum dorsally with narrow, transverse, crenulate sulcus extending continuously along lateral pronotum to ventral corner; weak to deep median pit interrupting sulcus dorsally. Mesoscutum in profile with anterior declivity very slightly concave, nearly vertical; notaulus present on anterior portion of mesoscutal disc, angled laterally at anterior end, laterally-directed portion carinate along anterior margin; notaulus continuous with weakly to distinctly crenulate impression bordering lateral mesoscutal margin, the impression extending posteriorly at least to level of tegula; midpit of mesoscutum well-developed, discrete. Scutellar sulcus (Fig. 31) narrow, though not exceptionally so, 3-4 times wider than mid-length, crenulate, with numerous closely-spaced ridges. Mesopleuron smooth, shiny, posterior margin not crenulate; precoxal sulcus distinctly impressed, crenulate. Propodeum with large median areola, variously obscured by sculpture.

Wings. Slightly more than twice as long as wide. Stigma long, very narrow, nearly parallel-sided basally, widening distally; thickest part of stigma twice maximum width of proximal half. Radial cross-vein (r) thickened, weakly to strongly bowed anteriorly, arising from extreme base of stigma, nearly in line with 3RSa; RS+M weakly to distinctly sinuate; second submarginal cell nearly parallel-sided, not or only very weakly converging distally; m-cu nearly always postfurcal, entering second submarginal cell; 2CUa distinct, shorter than 2cu-a. Hind wing with both RS and M distinct nearly to wing margin, usually nebulous: very weakly pigmented; m-cu varying from indistinct in smaller individuals to present in larger individuals as a weakly pigmented impression extending more than half way to wing margin.

Hosts. Agromyzidae. See comments under E. abnormis, the only species with host records.

Comments. Fischer (1972) provides the most recent detailed description of Eurytenes s. str. (in German); Wharton (1988, 2006), Fischer (1998), and Wu and Chen (2006) provide diagnoses for Eurytenes s. str. and s. l. Wu and Chen (2006) were the first to use morphological features other than color for discriminating between species of Eurytenes s. str.

Diagnosis. Eurytenes abnormis is most readily recognized by the pale coloration of the petiole and metasoma and is further distinguished from the four North American species described below by the narrower, more ventrally concave clypeus (Fig. 10). The petiole is narrower than in E. dichromus, sp. n. and E. microsomus, sp. n. and is thus more similar in shape to the darker Mexican species described below.
Mesosoma. Posterior-ventral margin of lateral pronotum crenulate for most of length. Precoxal sulcus parallel-sided, narrowly crenulate along most of length, usually weakly impressed anteriorly, often extending very close to anterior margin of mesopleuron; precoxal sulcus inclined at a 35 degree angle. Notaulus distinctly impressed over anterior third of mesoscutal disc, crenulate over anterior 0.2-0.3; with cluster of short setae at rugulose base of anterior declivity; with widely spaced line of 3-5 longer setae extending posteriorly towards but not usually reaching cluster of scattered setae around midpit. Propodeum with median carina present anteriorly, bifurcating near middle to form five-sided median areola over posterior 0.6, surface rugose laterally and posterior-medially, partly obscuring areola, but posterior-lateral fields largely smooth, as in Fig. 11.

Wings. Fore wing r-m tubular and pigmented only at extreme anterior end, otherwise unpigmented, with lateral boundaries often only weakly indicated; (RS+M)b absent, m-cu entering extreme base of second submarginal cell; 3M very weakly pigmented basally in available material, spectral over most of length. Hind wing m-cu varying from indistinct in smaller individuals to present as a spectral impression extending more than half way to wing margin in larger individuals.

Color. Head and mesosoma dark reddish-brown to black. Scape, pedicel and first flagellomere yellow, antenna quickly darkening distally to dark brown; palps and tegula pale yellow; mandible and petiole tawny (darker yellow). Metasoma posteriad petiole usually bright yellow, sometimes with faint butterscotch to lighter brown banding. Hind femur and tibia darkening distally, femur transitioning from yellow to dark yellow or yellow-brown, tibia mostly infuscated, tarsi infuscated; legs otherwise yellow. Ovipositor sheath dark brown to black; ovipositor light brown throughout. Wings hyaline.

Host Records. Fischer (1959) listed 12 species of Agromyzidae, one Anthomyiidae, and one microlepidopteran as hosts, but with no records of host plants. In this publication Fischer also noted that the anthomyiid and especially the microlepidopteran host (Coleophora nigricella Rondani) need verification. Fischer (1964) added four more dipterans to the list and later (Fischer 1969a, b) provided additional records, including host plant information for nearly all of the known hosts. Nomenclatural updates for agromyzids and plant hosts from Fischer (1972) and Yu et al. (2005) are incorporated in the list of confirmed hosts given below, with additional updates from Ellis (2007). Host plants for these 23 agromyzid hosts are split unevenly between monocots (8 fly species) and dicots (15 fly species). Four of the host fly species were reared from Asteraceae and four from Poaceae, whereas Lamiaceae, Ranunculaceae, and Cyperaceae each harbored three host fly species.
The agromyzid host records found in Fischer (1964, 1969a, b) have a relatively high degree of confidence because these records pertain to rearings by Buhr, Groschke, and Nowakowski, respectively. Fischer identified the Eurytenes reared from these hosts (specimens in NHMW), and the hosts and host plants correspond well with information in Ellis (2007). Earlier literature, and several compilations based on the earlier primary sources, are problematic, however, because of the potential for misidentification of the wasp and/or host fly, as well as the absence of voucher specimens. We follow Fischer (1959) and treat the published host records for Pegomya bicolor (Wiedemann) (Diptera: Anthomyiidae) and Amauromyza verbasci (Bouché) dating to Bouché (1834), Ratzeburg (1848), and Rondani (1872) as almost certainly erroneous and likely based on misidentification of the other opiines that routinely attack these hosts, or possibly on misidentification of the host. Records of non-dipteran hosts are clearly erroneous since members of the Opiinae are all parasitoids of cyclorrhaphous Diptera.

Host: Plant. Agromyza albitarsis Meigen: the host plant for E. abnormis has not been recorded previously, but since this fly is known to attack several trees in the family Salicaceae, the record may need to be verified; Agromyza woerzi Groschke: Knautia arvensis (L.) Coult., Caprifoliaceae; Amauromyza labiatarum (Hendel): Galeopsis tetrahit L., Lamiaceae; Amauromyza lamii (Kaltenbach): Lamiastrum galeobdolon (L.), Lamiaceae.

Distribution. Previously recorded from throughout most of Europe (specifically Austria, Belgium, Bulgaria, Croatia, England, Finland, Germany, Hungary, Ireland, Italy, Lithuania, Poland, western Russia as far as the Urals, and Ukraine). Also recorded from the eastern Palaearctic (Korea and Sakhalin Island), central to eastern Canada (Ontario, Saskatchewan) and the USA (Minnesota, Missouri, North Dakota, South Carolina). Specific references to individual records can for the most part be found in Yu et al. (2005); the record from the Urals is from Tobias and Jakimavicius (1986). Fischer (1970) recorded E. abnormis from Montana; however, the specimen on which that record is based was collected in Missouri (the label information only indicated the state as Mo.). The records from Sakhalin (Tobias 1998) and Korea (Papp 1985) may need to be verified in light of other species described from that general region. The specimens we have examined from Taiwan and the Kuril Islands differ in wing venation, body coloration, and sculpture from typical E. abnormis. Papp (1985) also noted the darker coloration of the petiole of his Korean specimen.

Comments. Fischer (1972) treated Opius paradoxus Ratzeburg, 1848 as a nomen nudum, while Dalla Torre (1898) listed it with a query as a synonym under E. abnormis, undoubtedly following Marshall (1891). Ratzeburg (1848), in his treatment of Opius, separated abnormis from all other species on the basis of the wing venation features that we now use to define Eurytenes s. str.
Ratzeburg (1848) initially states that only a single species, abnormis, belongs to the section of Opius with the radius arising from the base of the stigma. In the following sentence, however, Ratzeburg introduces the name paradoxus, indicating that it also should be placed here. Though this can be interpreted to mean that Ratzeburg was treating paradoxus as a synonym of abnormis, he nevertheless also mentioned body coloration (dark) and clypeal characters (lack of opening between clypeus and mandibles) that differ from typical abnormis. Ratzeburg referred to Bouché throughout when discussing paradoxus and abnormis, and at the end of his treatment gives information on a more typical pale specimen of abnormis reared by Bouché. Whether intentional or otherwise, it would appear that Ratzeburg (1848) did provide a valid description of paradoxus with two characters that could be used to differentiate it from abnormis. However, his text could just as easily be interpreted to mean that paradoxus is invalid since it was first proposed as a synonym of abnormis. We prefer the latter interpretation.

Wu and Chen (2006) were the first to use morphological features other than color for discriminating between species of Eurytenes s. str. They used propodeal sculpture and the extent of the precoxal sulcus to differentiate their newly described E. basinervis from E. orientalis. Previously, Fischer (1966, 1998) used only color differences to distinguish between E. orientalis and E. abnormis. We have noted differences among species in the shape of the clypeus, but the appearance of the ventral margin of the clypeus changes with the angle of view, and the differences are subtle. All species appear to have a concave ventral margin if the ventral part of the head is strongly rotated anteriorly. When placed in the same plane of view, however, the clypeus of the New World species described here is more truncate ventrally than that of E. abnormis.

In addition to specimens listed in the material examined section above, two specimens from Poland and one from Germany (NHMW) were also briefly examined; data for these specimens were previously recorded by Fischer (1969a, b). In our summary of references above, we have not included several papers that provide only distribution information. These can be found in Yu et al. (2005).

Eurytenes dichromus Wharton

Additional specimens, not paratypes (TAMU): 2 ♀, Florida, Alachua Co., Hague Dairy, 29°47.311'N, 82°24.880'W, 28.iii.2007, J. Sivinski; 1 ♀, same data except 29°47.328'N, 82°24.969'W, 29.iii.2007; 1 ♀, Texas, Anderson Co., 10 mi SW Elkhart, 5-6.vi.1976, H. R. Burke.

Diagnosis. This species is most readily recognized by its broader, bicolored petiole and bicolored, ventrally truncate clypeus. Based on the shape and color pattern of the petiole, as well as the shape of the clypeus, E. dichromus is most similar to E. microsomus, sp. n. The propodeum is nearly always more heavily sculptured in E. dichromus than in E. microsomus and usually slightly more rugulose posterior-laterally than in E. abnormis.

Mesosoma. Posterior-ventral margin of lateral pronotum crenulate for most of length. Precoxal sulcus narrowly crenulate anteriorly, sculptured area broadening posteriorly, usually weakly impressed anteriorly, often extending very close to anterior margin of mesopleuron; precoxal sulcus approximately 45 degrees, inclined more vertically than in E. abnormis.
Notaulus distinctly impressed over anterior third of mesoscutal disc, crenulate over anterior 0.2-0.3; with dense cluster of short setae at rugulose base of anterior declivity extending ventrally to cover most of anterior declivity; with widely spaced line of 3-5 longer setae extending posteriorly towards but not usually reaching cluster of scattered setae around midpit, as in Fig. 30. Propodeum with median carina present anteriorly, bifurcating near basal 0.3 to form five-sided median areola over posterior 0.6-0.7, surface extensively rugose laterally and posterior-medially (Figs 32, 33), partly obscuring areola, posterior-lateral fields often completely rugose.

Wings. Fore wing r-m pigmented basally, less commonly over anterior 0.5, otherwise unpigmented, largely tubular, with lateral boundaries usually distinct for most of length; m-cu usually postfurcal, entering base of second submarginal cell, less commonly interstitial; 3M distinctly pigmented, nearly tubular in basal third, gradually weakening and becoming depigmented distally. Hind wing m-cu usually poorly developed, varying from very weakly to distinctly impressed.

Color. Head and mesosoma black, with small red-brown spot adjacent to eye dorsal-medially near ocelli, face at base of antennae also usually red-brown. Scape and pedicel yellow, flagellomeres dark brown; mandible butterscotch with distal tip infuscated; clypeus infuscated, dark brown dorsally, butterscotch ventrally; palps and tegula yellow. Petiole dark brown dorsally, posterior fifth and ventral-lateral region usually yellow. T2+3 butterscotch medially; T2 and T3 each with a medium brown lateral splotch; T4 and successive tergites each with dark brown transverse banding anteriorly fading to butterscotch posteriorly. Hind tibia pale yellow to whitish over about basal 0.15, remainder infuscated to medium brown, tarsus completely medium brown, legs otherwise yellow to nearly white, with femur and trochantellus often (though not in holotype) darker yellow than coxa and trochanter. Ovipositor sheath dark brown; ovipositor light brown. Wings hyaline.

Male and Host. Unknown.

Distribution. Known only from central Texas.

Etymology. The name dichromus is derived from Greek: di, two; chromus, color. The name refers to the color of the clypeus.

Eurytenes microsomus

Diagnosis. This species is nearly identical to E. dichromus, but E. dichromus is 1.25 × larger. The body is less heavily sculptured than in E. dichromus, there are fewer flagellomeres, and T2+3 tends to be paler in coloration.

Mesosoma. Posterior-ventral margin of lateral pronotum weakly crenulate, nearly smooth for most of length. Precoxal sulcus parallel-sided, narrowly crenulate, short, weakly impressed anteriorly, not extending close to anterior margin of mesopleuron; precoxal sulcus at 45 degree angle, inclined more vertically than in E. abnormis. Notaulus distinctly impressed over anterior third of mesoscutal disc, crenulate over anterior 0.2-0.3; with moderately dense cluster of short setae at rugulose base of anterior declivity extending ventrally to cover much of anterior declivity; with widely spaced line of 3-4 longer setae extending posteriorly towards but not reaching cluster of scattered setae around midpit (Fig. 3).
Propodeum with median carina extending over anterior 0.3 before bifurcating to form five-sided areola over posterior 0.7. Surface smooth to weakly rugose laterally and posteriorly, carinae forming areola not obscured by sculpture, entirely visible, areola varying from smooth to weakly rugose (Figs 34, 35).

Wings. Fore wing r-m at most pigmented at extreme base, largely tubular (with lateral boundaries distinct) over anterior half; m-cu distinctly postfurcal; 3M distinctly pigmented in basal third, gradually weakening and becoming depigmented distally. Hind wing m-cu indistinct.

Color. Head and mesosoma dark reddish-brown as in E. abnormis, but with pale spot adjacent to eye similar to, though weaker than, the spot in E. dichromus. Scape and pedicel butterscotch, flagellomeres medium brown; clypeus butterscotch with slight infuscation dorsally. Palps, mandible, tegula, petiole, and ovipositor as in E. dichromus. Metasoma posteriad petiole patterned as in E. dichromus but T2+3 paler, whitish medially and T2 more lightly infuscate laterally. Legs about as in E. dichromus, with hind legs often a little paler. Ovipositor sheath dark red-brown. Wings hyaline.

Etymology. The name microsomus is derived from Greek: micro, small; somus, body. The name refers to the smaller size of this species compared to other species of Eurytenes.

Comments. Eurytenes microsomus and E. dichromus both occur in Austin, the westernmost locality for either species.

Eurytenes ormenus

Diagnosis. This species is most readily recognized by the dark brown hind femur. All other species from the New World have relatively pale (whitish to dark yellow) hind femora. The petiole is completely dark, as in E. pachycephalus, sp. n., but the latter is a much larger species with a distinctly broader gena.

Mesosoma. Posterior-ventral margin of lateral pronotum distinctly impressed, varying from crenulate to nearly smooth for most of length. Precoxal sulcus weakly impressed, not extending close to anterior margin of mesopleuron; precoxal sulcus approximately 30 degrees, inclined slightly less vertically than in E. abnormis. Notaulus narrow, weakly impressed, crenulate over anterior 0.3 of mesoscutal disc; with relatively sparse cluster of short setae at finely rugulose base of anterior declivity and 1-2 widely spaced longer setae extending posteriorly. Propodeum with median carina present anteriorly, bifurcating near anterior 0.2 to form five-sided areola over posterior 0.8; surface densely punctate-rugose to coarsely granular laterally and posteriorly, obscuring carinae, weakly sculptured to nearly smooth anteriorly on either side of short median carina.

Wings. Fore wing r-m very weakly pigmented at extreme base; somewhat tubular (with lateral boundaries distinct) over anterior 0.3-0.5; m-cu distinctly postfurcal; 3M distinctly pigmented in basal third, gradually weakening and becoming depigmented distally. Hind wing m-cu indistinct.

Color. Head, thorax, and petiole dark red-brown. Scape and pedicel yellow, first four flagellomeres light brown, quickly darkening distally to dark brown; palps, mandible, clypeus, and tegula yellow. Metasoma posteriad petiole medium brown. Legs yellow except hind femur medium to dark brown medially with apical and basal 0.1-0.15 pale, tibia and tarsus almost completely medium brown, tibia variously pale brown dorsally. Ovipositor sheath dark brown, ovipositor light brown. Wings hyaline.

Male and Host. Unknown.

Distribution. South central Mexico.
Etymology. The name ormenus is derived from Greek: ormenus, petiolated. The name refers to the elongate petiole of the species.

Comments. This is a small-bodied species similar in size to E. microsomus but with a more heavily sculptured propodeum and darker hind femur. Eurytenes ormenus is characterized by the long, narrow petiole, similar in form to the petiole of E. pachycephalus sp. n. and E. abnormis and unlike the broader petiole of E. microsomus and E. dichromus. As in E. pachycephalus sp. n., and unlike the other three species, the petiole is uniformly very dark in coloration. The anterior tentorial pits of E. ormenus are slightly larger than in the other species treated here.

Eurytenes pachycephalus

Diagnosis. This species is most readily recognized by its broad clypeus and inflated gena. It is a much larger species than E. ormenus, which was also collected at high-elevation sites in central Mexico. Although both E. pachycephalus and E. ormenus have a uniformly dark petiole, the hind femur is dark in E. ormenus and more lightly colored in E. pachycephalus.

Mesosoma. Posterior-ventral margin of lateral pronotum strigose for most of length, the sculpture extending towards middle of sclerite. Precoxal sulcus extending very close to anterior margin of mesopleuron; deeply crenulate anteriorly, sculptured area broadening posteriorly; precoxal sulcus approximately 45 degrees, inclined slightly more vertically than in E. abnormis. Notaulus distinctly impressed and crenulate over anterior 0.3-0.4 of mesoscutal disc; with cluster of short setae at rugulose base of anterior declivity extending ventrally to some extent onto anterior declivity at each side; longer setae absent posteriorly. Median carina extending over anterior 0.2 before bifurcating to form five-sided areola; surface of areola and lateral margin of propodeum rugose, posterior-lateral fields and region anteriorad areola smooth or nearly so.

Wings. Fore wing r-m very weakly pigmented at extreme base, largely spectral (with lateral boundaries indistinct); m-cu distinctly postfurcal; 3M distinctly pigmented and largely tubular in basal third, gradually weakening distally. Hind wing m-cu extending nearly half way to wing margin as a very weakly pigmented and impressed curved line.

Legs. Hind tibia 8.3 × longer than maximum width.

Metasoma. Petiole 2.1 × longer than apical width. Female ovipositor sheath barely visible due to postmortem changes in position; visible portion densely setose.

Color. Head, mesosoma, and petiole black. Scape and pedicel yellow, flagellomeres dark brown; mandible butterscotch with distal tip infuscated; clypeus dark brown dorsally, ventral half butterscotch; palps and tegula butterscotch. Metasoma with T2 and T3 brown, middle tergites with brown and yellow transverse banding, apical tergites yellow. Legs yellow except hind tibia butterscotch to weakly infuscate, tarsus entirely medium brown. Wings largely hyaline, though appearing very slightly darker than in the other species treated here.

Taxonomy

Genus Eurytenes Foerster s. str.

Eurytenes Foerster 1862: 259. Type species Opius abnormis Wesmael 1835, by original designation and monotypy.

Description. Head. Antenna filiform, longer than body. Frons, vertex, and temple smooth, shiny; frons bare, vertex and upper temple nearly so. Labrum exposed. Clypeus weakly to distinctly protruding in profile. Malar sulcus deeply impressed. Mandible gradually to somewhat more abruptly widening from apex to base, carinate ventrally over most of basal half, never with distinct basal tooth as in Opius s. str.
Maxillary palp longer than head, reaching mid coxa. Occipital carina present laterally, extending dorsal-medially at least to level of inner eye margin, broadly absent mid-dorsally; widely separated from hypostomal carina at base of mandible.
Phenolic Compounds from Allium schoenoprasum, Tragopogon pratensis and Rumex acetosa and Their Antiproliferative Effects

Experimental studies have shown that phenolic compounds have antiproliferative and tumour-arresting effects. The aim of this original study was to investigate the content of phenolic compounds (PhC) in the flowers of Allium schoenoprasum (chive), Tragopogon pratensis (meadow salsify) and Rumex acetosa (common sorrel) and their effect on the proliferation of HaCaT cells. Antiproliferative effects were evaluated in vitro using the following concentrations of phenolic compounds in the cultivation medium: 100, 75, 50 and 25 µg/mL. The phenolic composition was also determined by HPLC. The results indicate that even low concentrations of these flowers' phenolic compounds inhibited cell proliferation significantly, suggesting the possible use of the studied herbs' flowers as sources of active phenolic compounds for human nutrition.

Introduction

Phenolic compounds (PhC) and their anti-tumour effects have been studied for many years [1]. Grape seeds and skins [2], tea [3] and fruits [4,5] are considered to be rich in these phytochemicals. Plants differ not only in their overall concentrations of PhC; the composition and content also differ among the parts of each plant [6]. Researchers' attention, in terms of effects on tumour diseases, has mostly focused on wine PhC [7] or tea PhC [8], but the effect of herb flowers, which are also a good source of phytochemicals [9], has not yet been described. In the present study, the plants Allium schoenoprasum (chive), Rumex acetosa (common sorrel) and Tragopogon pratensis (meadow salsify), which are easily available sources of PhC in Europe, were studied for the first time in the context of their potential anti-tumour effects.

PhC constitute a heterogeneous class of compounds [10] with varied protective effects [3,11]. PhC have been reported to display a variety of biological actions. They can act as antioxidants [12], antiangiogenics [13], selective estrogen receptor modifiers [14], anti-carcinogenic and anti-inflammatory agents [15], among others. The most significant properties of PhC that may affect carcinogenesis are the trapping of ultimate carcinogens [16], inhibitory action against nitrosation reactions [6], inhibition of cell proliferation-related activities [17], induction of cell apoptosis [16], cell cycle arrest [18], blockade of mitotic signal transduction through modulation of growth factor receptor binding [16], nuclear oncogene expression [19], inhibition of DNA synthesis [20] and modulation of signal transduction pathways through altered expression of key enzymes such as cyclooxygenases and protein kinases [21]. The aim of this study is to determine the effect of PhC contained in the flowers of three herb species on cell proliferation and to demonstrate the suitability of these herbs for the prevention of tumour diseases.

Results and Discussion

Several hundred different PhC have been identified in plants [22]. In this study, the following PhC were assayed by HPLC: gallic acid (GA), coumaric acid, ferulic acid (FA), rutin (Ru), resveratrol (Re), vanillic acid (VA), sinapic acid (SA), catechin (C), quercetin, caffeic acid (CA) and cinnamic acid. The herb flowers used in this study (A. schoenoprasum, T. pratensis and R. acetosa) did not contain all of these PhC. Although quercetin is one of the most common flavonoids in plants, it was not detected in any of the studied herbs. No cinnamic acid was found either.
In this study, HaCaT cells were used to determine antiproliferative activity. As can be seen from Table 2, cells incubated in the presence of the extracts showed remarkably lower proliferation compared with the control. These differences are statistically significant (Table 2). Figure 1 shows the antiproliferative activity of the A. schoenoprasum extracts.

The most abundant PhC in A. schoenoprasum was FA (Table 1), which is one of the most common phenolic acids in plants. For comparison, the content of FA in lavender is 5.3 µg/g dry sample [23], in Crete oregano 3.4 µg/g dry sample and in mountain tea 69.5 µg/g dry sample [24]. FA has many biological activities, such as improvement of microcirculation, elimination of oxygen free radicals, anti-inflammatory properties [25] and suppression of carcinogenesis [26]. According to Lin et al. [25], FA has the ability to inhibit cellular proliferation and tumour development, which matches our results. GA, CA and Ru were also detected in A. schoenoprasum, but their content was rather low.

FA is also found in T. pratensis, but at a content nearly four times lower than in A. schoenoprasum. T. pratensis also contained GA, Ru, Re, SA and CA. The PhC of highest concentration in T. pratensis was GA (1,347.85 µg/g). According to Proestos et al. [24], the content of GA is, for example, 15 µg/g dry sample in eucalyptus and 26 µg/g dry sample in mountain tea. GA is a free radical scavenger with significant inhibitory effects on cell proliferation; it induces apoptosis in a series of cancer cell lines and shows selective cytotoxicity against tumour cells, with higher sensitivity than normal cells [27,28].

In contrast to A. schoenoprasum, extracts of T. pratensis and R. acetosa decreased proliferation gradually with concentration. However, the differences between each concentration and the control were statistically significant in all cases (Table 2). R. acetosa shows similar antiproliferative activity at concentrations of 75 and 100 µg/mL (Figure 2). T. pratensis shows similar activity at PhC concentrations of 50, 75 and 100 µg/mL (Figure 3). R. acetosa contained Re, VA, SA and C. The most abundant PhC was SA (5,708.48 µg/g). Extracts from R. acetosa had the lowest antiproliferative activity (Table 2), which may be explained by a phenomenon described by Kampa et al. [29], whereby shortening of the side chain in SA leads to a loss of antiproliferative activity.

The PhC extracted from the herbs used in this study have higher antiproliferative activity in comparison with PhC used in other studies. For example, black tea PhC at a concentration of 100 µg/mL reduced cell viability by 60% [30]. Different camellia flower extracts at the same concentration decreased cell viability by 10 to 60% [31]. The results in this study reached values of about 80% decreased cell viability. These different results could be caused by different incubation times and the use of different cell lines, which may be more toxicity resistant, as Murugan et al. [30] used HepG2 cells and Way et al. [31] used MCF-7 cells.

The observed antiproliferative activity of PhC can be explained by their modulation of different key targets of pathways controlling cell proliferation, differentiation, expression and cell death. The MAPK pathways can serve as an example [32,33]. They include extracellular signal-regulated kinase (ERK), c-Jun N-terminal kinase (JNK) and p38 MAPK [34]. According to Yeh and Yen [34], GA, which is present in T. pratensis and in A. schoenoprasum,
schoenoprasum, increased the levels of phosphorylated JNK and p38, and its effects were almost completely blocked by inhibition of the p38 MAPK pathway. T. pratensis and A. schoenoprasum also contain FA, which inhibits the activation of ERK [35]. JNK and p38 MAPK are also activated by Re, identified in R. acetosa and T. pratensis [33]. SA, present in very high amounts in R. acetosa and also found in T. pratensis, is involved in the MAPK pathways as well [36]. Another signalling molecule affected by PhC is activator protein 1 (AP-1). For example, Re blocks AP-1-mediated gene expression [37]. GA and C inhibit AP-1 binding activity [38]. Other PhC, like FA, SA and CA, also have effects on AP-1 [36,39]. These PhC were present in every one of the three studied herb flowers. Figure 4 shows the differences in morphology between control and treated cells [Figure 4 (c), (d)].

This study has demonstrated the impact of herbal flower PhC on the proliferation of HaCaT cells. The antiproliferative activity depends on the particular herb. In the case of A. schoenoprasum, the activity was independent of the applied concentration of PhC, as similar activity was observed at all concentrations. The antiproliferative activity of R. acetosa and T. pratensis varied with the concentration of PhC. In the case of T. pratensis, concentrations higher than 50 µg/mL did not have a significant additional impact on proliferation. In the case of R. acetosa, the critical concentration was found to be 75 µg/mL. The different antiproliferative activities of the herb extracts can be caused by variable PhC content and composition. Another factor which must be considered is the fact that this study examined only 10 types of polyphenols.

Extraction Conditions

PhC were extracted from the flowers of Allium schoenoprasum, Rumex acetosa and Tragopogon pratensis. All flowers were collected during 2010 in the Czech Republic (central Europe). Immediately after cutting, the flowers were frozen and stored at −40 °C. The extraction was performed according to Hakimuddin et al. [40] with some modifications: frozen herb flowers were homogenized in 90% methanol (2 mL/g) and subsequently extracted at 4 °C for 30 minutes. After extraction, the supernatant was separated by centrifugation at 1,990 rpm for 10 minutes. The sediments were subjected to a new extraction, and this process was repeated three times. The methanol was removed using a Laborota 4011 digital rotary evaporator (Heidolph, Schwabach, Germany). The concentration of the extracts was subsequently adjusted to 1,000 mg/mL.

Antiproliferation Test

The PhC extracts were diluted in culture medium (DMEM) to obtain dilutions with concentrations of 100, 75, 50 and 25 µg of PhC per mL of cultivation medium. All dilutions were used immediately. Cells were pre-cultivated for 24 h, and the culture medium was subsequently replaced by the dilutions. As a control, pure medium without PhC was used. To assess the antiproliferative activity on HaCaT cells, the MTT assay (Invitrogen Corporation, Carlsbad, California, USA) [42] was performed after three days of cultivation in the dilutions. The absorbance was measured at 540 nm using a Sunrise microplate absorbance reader (Tecan, Männedorf, Switzerland). Cell proliferation is expressed as the MTT absorbance measured in the respective dilutions relative to the control. All tests were performed in quadruplicate. Photomicrographs were taken using an inverted Olympus CKX41 phase contrast microscope (Olympus, Hamburg, Germany).
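The normalization of the MTT readings and the statistical comparison described next can be illustrated with a few lines of code. The following is a minimal sketch using assumed absorbance values (the actual readings are not reported in this text); it mirrors the relative-to-control calculation above and the t-test mentioned below.

```python
# Illustrative only: absorbance values are made up; in the study each
# condition was measured in quadruplicate at 540 nm.
import numpy as np
from scipy import stats

control = np.array([0.82, 0.79, 0.85, 0.81])   # pure medium, no PhC
treated = np.array([0.21, 0.19, 0.24, 0.22])   # e.g., 100 ug/mL extract

# Proliferation relative to control, as reported in the study
relative_proliferation = treated.mean() / control.mean() * 100
print(f"Proliferation: {relative_proliferation:.1f}% of control")

# Two-sample t-test between treated and control absorbances
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```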
Differences between the observed absorbances were assessed by t-test using Statistica for Windows.

Determination of PhC

A standard solution of tannin was prepared from tannin (50 mg) dissolved in water (100 mL). The standard tannin solution was pipetted into six 50 mL flasks in volumes of 0.2, 0.3, 0.4 and 0.5 mL. Extract (1 mL) was added to a seventh flask and diluted as needed. Distilled water (20 mL) and the Folin-Ciocalteu reagent (1 mL) were added to every flask. After three minutes, 20% Na2CO3 solution (5 mL) was added. The solutions were mixed, and distilled water was added to a volume of 50 mL. After 30 minutes, the colour intensity was measured at 700 nm against the control (no tannin).

Chromatography

The determination of individual PhC was carried out using a Dionex UltiMate 3000 high performance liquid chromatography (HPLC) system (Dionex, Sunnyvale, California, USA). A Supelcosil LC-18-DB column (25 cm × 4.6 mm I.D., S-5 µm) was used. PhC were detected with DAD UV-Vis detection at 205 nm. The mobile phases used for gradient HPLC elution were: (A) 5% (v/v) acetonitrile, 0.035% (v/v) trifluoroacetic acid and (B) 50% (v/v) acetonitrile, 0.025% (v/v) trifluoroacetic acid. The flow rate was set at 1.0 mL/min. The gradient elution profile started with A-B (90:10); B was then gradually increased to 20% at 10 min, to 40% at 16 min, to 50% at 20 min and back to 40% from 25 to 27 min [43]. The data presented are the average values calculated from three measurements.

Conclusions

This is the first study of the antiproliferative activity of selected phenolic compounds contained in several herb flowers. The results of this study suggest that the tested herbs are a good source of phenolic compounds and that their concentration and composition vary with each species. The work presented demonstrated that the phenolic compounds contained in medicinal herbs significantly decrease cell proliferation. The fact that the natural phenolic compounds contained in herb flowers (A. schoenoprasum, T. pratensis and R. acetosa) inhibit cell proliferation makes these herb flowers potentially useful for the treatment and prevention of tumour diseases. The results suggest that the antiproliferative activity does not depend exclusively on the total phenolic compound content or composition, but can also be influenced by other extracted active substances which were not detected.
2014-10-01T00:00:00.000Z
2011-11-01T00:00:00.000
{ "year": 2011, "sha1": "1a3b4161aa52a96f9da7d8b12749ce1b517db2e1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/16/11/9207/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1a3b4161aa52a96f9da7d8b12749ce1b517db2e1", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
234500699
pes2o/s2orc
v3-fos-license
Using i* and UML for Blockchain Oriented Software Engineering: Strengths, Weaknesses, Lacks and Complementarity

New blockchain-based projects appear every day. The technology has been popularized by cryptocurrencies but is now gaining interest in various domains, and new types of applications are constantly being evaluated. Understanding the impact of blockchain adoption on the organization, as well as the internals of blockchain-related behavior, nevertheless remains a challenge for managers and IT professionals alike. This article studies how two existing organizational and software modeling languages can be fitted to document a blockchain development project in Supply Chain Management (SCM) at its earliest stages. These two frameworks are i* on the one side and the Unified Modeling Language (UML) use case and sequence diagrams on the other. The real-life project used as a case study in this application is 'Farm-to-Fork', where a blockchain solution for the Supply Chain (SC) of farm animals is developed. The application of the frameworks is intended to identify their strengths and weaknesses. An extension of i* is proposed to deal with blockchain privacy issues as well as laws and norms. We finally point to the complementarity of i* and UML use case and sequence diagrams in a Blockchain-Oriented Software Engineering (BOSE) context. The i* framework indeed supports early requirements to understand the impact of the project on stakeholders, while UML use case and sequence diagrams support the late requirements and the design by depicting the use of blockchain and some of its behavioral mechanisms.

Introduction

Blockchain, through its decentralized nature, is seen nowadays as a very promising technology with applications that go far beyond the domain of cryptocurrencies.

The Unified Modeling Language (UML) [12] has for years been a reference in the field of object-oriented development. UML use case diagrams are well known for describing the cases in which a system can be used; we thus apply them to depict the use cases of a blockchain system. Figure 2 provides the core elements of the UML use case diagrams as well as their graphical representations. UML sequence diagrams are popular for describing the interactions between the actors and objects of a centralized system. This is useful for blockchain in SC Requirements Engineering (RE) because the sequence diagram can depict a specific order of system operations, which corresponds very well to the nature of the SC flow. This similarity makes sequence diagrams a well-suited candidate for modeling blockchain initiatives in the SC domain. Figure 3 provides the core elements of the UML sequence diagrams as well as their graphical representations.

The contribution of this article to the existing literature is threefold. Firstly, this article applies a combination of two modeling techniques (i.e., the i* framework, as well as some UML models) to a relevant case study. The case study is based on the Farm-to-Fork initiative. The Farm-to-Fork solution is a blockchain prototype for the end-to-end food SC of farm animals. Interviews were conducted with two consultants, including a validation of the produced representations. Secondly, the ability of i* and some UML models to represent a blockchain-related problem in a SC context is evaluated against an extended set of criteria. The assessment criteria are based on existing literature and on the interviewees' expert opinions.
This appraisal demonstrates that both modeling techniques have their merits and deficiencies, and neither of the two techniques outperforms the other for modeling blockchain in a SC context. Thirdly, a series of graphical extensions for the i* framework, initially proposed by Ben Hamadi et al. [13] but not yet validated, have been applied to make these models better tailored to describing blockchain in SCM. The enhancements include privacy concepts and the ability to model compliance with laws or norms.

The article is structured as follows. Section 2 briefly discusses the benefits of blockchain and the related work. Section 3 explains the research paradigm, question and methodology. Section 4 discusses the case study on which we apply the frameworks, namely Farm-to-Fork. Section 5 describes the application of the i* framework on the case study, while Section 6 describes the application of the UML use case diagram and sequence diagrams on the same case. Finally, Section 7 reports on the strengths and weaknesses of both frameworks and Section 8 concludes the article.

Benefits of Blockchain

The main benefits of blockchain technology lie in its increased transparency and immutability [14], [15], [16]. Because the network is more accessible, transparency, and hence reliability, are increased. Trust is created in a trustless system [15]. Furthermore, blockchain technology provides a highly secure method of dealing with transactions by using asymmetric cryptography and hash functions [15]. While these are all primary benefits of blockchain, [16] describes the decentralized approach as probably the biggest advantage, because intermediaries are made completely redundant through a consensus mechanism in which data is verified by all participants, distributed and stored across different locations. Cutting out the middleman has the advantage of reducing overhead costs. Additionally, storing the database at different places reduces the likelihood of hacking and of loss of data in case the system goes down. This results in a highly available system, where every node always has the same up-to-date version of the truth. Taking away several nodes will not affect the integrity of the system on its own [15], [17]. Speed is also commonly described as a major benefit of blockchain technology [16]. Speed is defined in terms of transactional velocity, since blockchain removes all intermediaries, who would only slow down the transaction process because they often need to undertake lengthy verification and approval procedures [14]. Data is immediately distributed and agreed upon [15].

Related Work

While previous literature has touched upon the adoption of blockchain technology for SCM, it has failed to conceptually model these processes for software engineering. For instance, Niranjanamurthy et al. [17] discuss how blockchain can meet SC objectives and present a few small case studies to demonstrate how this technology is already used in businesses. The paper nevertheless includes only superficial process descriptions. Other research articles, like Saberi et al. [18] and Apte & Petrovsky [14], discuss the use of blockchain in SC and its benefits and challenges, but without providing a case study or conceptual model. Bettín-Díaz et al. [16], Roa [19] and Casado-Vara et al. [20] provide exemplary flowcharts, but these only describe a generic implementation of blockchain in the SC of virtually any company in any industry. Furthermore, Rocha et al.
[1] and Marchesi et al. [21] have tried to model blockchain implementations for a fidelity point program and for the workings of a university group, using different UML techniques. However, both of these cases were mostly fictional and limited. This article extends previous research by Ben Hamadi et al. [13]. The latter paper studied the use of the i* modeling language for blockchain technology in SC for Blockchain-Oriented Software Engineering (BOSE), based on a case study of a Belgian retail giant. The present study further investigates and elaborates on this work, notably by applying the extensions to i* proposed, but not developed, by Ben Hamadi et al. [13]; this has been done on a genuine case study. Moreover, the present research additionally applies UML as a modeling technique. The latter is widely adopted in businesses for the specification stages of software engineering.

Research Paradigm, Question and Methodology

Research Paradigm. The research presented here is rooted in the Design Science paradigm [22]; the latter aims to deliver generic solutions for known (or not yet considered) problems. The result of design science research can be a solution in the form of an artifact, terminology, methodology, engineering tool, and so forth. In the present research, we have enriched the i* framework to better match the specifics of blockchain, and we have applied i* and UML models for BOSE. Strengths and weaknesses of the models are explored, and a comparison between the frameworks is presented, based on a set of criteria.

Research Question. Are extended i* models and UML use case/sequence diagrams appropriate modeling techniques to visualize the organizational structure of blockchain ecosystems, and how can these two frameworks be extended and integrated to better fit this purpose?

Research Methodology. To answer the research question, a case study is required [23]. The chosen case study is the Farm-to-Fork project. Farm-to-Fork is a SC tracking prototype that uses blockchain to digitize the food SC and make it more transparent. The Farm-to-Fork project does not have any technical documentation available, so all information was gathered through interviews. Interviews were conducted with two blockchain experts working at the consultancy company to gather the domain knowledge. A first interview was conducted in February 2020 with Interviewee 1 (I1), a blockchain consultant who has worked on, among others, blockchain projects for the Belgian government. A second interview was conducted in April 2020 with Interviewee 2 (I2), another blockchain expert working at the same company, who provided some additional insights. From the information gathered in the interviews, we elaborated several conceptual models using both i* and UML techniques. A third, final validation session was organized in May 2020 with I2, who then validated and confirmed the case study description as well as the associated representations. After applying both modeling techniques to the case study, they were compared in the light of a set of criteria. As can be seen in Table 1, the criteria are based on three intakes: generic modeling criteria based on existing literature [24,25]; blockchain-specific criteria defined in consultation with a blockchain expert (I2); and other criteria based on findings from applying both modeling techniques. Comparing both techniques is useful to determine the pros and cons, but also the complementarity, of each technique.
Case Study

The case study, called Farm-to-Fork, concerns a blockchain solution made to track farm animals throughout the SC process, from "their birth to your plate". The solution also includes an easy-to-use app that gives an overview of the stages of the SC process, including QR-codes to track animals. Every participant in the network, and therefore every node in the SC, can quickly check the origin of the animal, its quality and the different previous steps that the animal has gone through.

Farm-to-Fork

The Farm-to-Fork prototype was created to meet the increasing expectations for improved transparency in the food industry, and the solution provides an answer to many of the associated struggles. I2 suggests that the most important benefits of this implementation are the traceability and the liability aspects. Traceability enables the SC participants to closely monitor the animals and allows them to know the exact state and quality of the animal (product). Therefore, it becomes much easier to detect contaminated batches, and to identify any such batches before they can reach the final SC node (such as supermarkets), where they may create a health hazard to unknowing consumers. This also helps to reduce waste. Additionally, I2 remarks that, even if a contaminated product manages to get to consumers, it is much easier to trace the specific faulty batches, since all product information is meticulously and individually stored in the blockchain. Therefore, in case a contaminated batch still reaches the end-consumers, the associated health consequences will be much less severe. The liability aspect that I2 mentions refers to knowing all the actions of the SC nodes, including their consequences. For instance, fragile chicken eggs that are transported from node to node throughout the SC can break at any stage. However, disputes can arise between the participants of the SC network about who is responsible for this. With blockchain, these disputes can be settled very quickly, as the database can tell when and where every individual egg broke. Furthermore, the advantages do not only apply to the producers, but also to the consumers, since the idea is also to expose a part of the blockchain to them. Consumers can view information about a specific animal product in the supermarket by scanning a QR-code. This enables consumers to verify the origin and all the process steps that the animal has gone through. As consumers become more and more critical about their food intake, it is also becoming more and more important for them to be able to check the authenticity of their food. In this case, consumers could, for instance, check whether chicken eggs come from free-range chicken farms or which antibiotics (and in what quantities) the chickens have received. However, it is important to mention that consumers should only have access to restricted, but relevant, information. If a consumer were also able to see exactly how many chicken eggs they have bought from a specific farmer, they might want to skip some parts of the SC cycle and go straight to that farmer for eggs, leaving the rest of the SC nodes redundant and unprofitable. I2 underlines the importance of carefully assigning specific access rights to each participant.

Farm-to-Fork Blockchain Type

The most appropriate blockchain type for the Farm-to-Fork solution is the permissioned enterprise (or private) blockchain. Indeed, such a type is only accessible to the responsible participants of the SC.
The roles and rights of every node are decided in consultation with all the participants at the start of the blockchain implementation (I1). I2 nuances this by adding that the roles and rights can still be changed afterwards. This can happen through a voting procedure. The rules for such a vote are usually agreed at the start of the project and may, for instance, specify that there needs to be a two-thirds (2/3) majority in agreement before a rule can pass. Another example is that all rule changes require unanimous votes. In case the voting is not successful, nothing will change. In any case, in the context of Farm-to-Fork, consumers have fewer rights than the SC actors. All the nodes' identities are known, and the network can be scaled up without problems (I1).

Farm-to-Fork Raft Consensus

The network works by using the Raft consensus [26]. This is a consensus mechanism that assigns one of three possible roles to every node in the network: follower, candidate or leader. At the start, every node is a follower. Then, an election is held to vote on which node becomes the leader. A majority vote is conducted, every node is a possible candidate, and the chosen leader is the one responsible for validating the transactions in the blockchain (I1). I2 adds that every other node can (and will) double-check the transactions afterwards for security, in order to avoid the leader validating incorrect information. The election continues, and when another majority vote is found, a new leader is chosen. If the newly voted leader does not respond within a given timeframe (usually around 300 milliseconds), the leader is timed out and a new leader is elected as well. This mechanism offers all parties the opportunity to become leader, and thus to become responsible for the validation of the blockchain transactions (I1; [26]).

Overview of the Farm-to-Fork Blockchain Participants

The Farm-to-Fork solution is used in a context of farm animals that go through the SC. For this article, the case study takes an in-depth look at the logistic flow of chicken meat from farmer to consumer. The possible participants for the blockchain project, their respective roles within the SC and their minimum required input into the blockchain are listed in Table 2. This represents a generic model of how the solution works in this context. More (or fewer) participants could be involved, depending on the needs and context of each specific SC's structure. The final participants listed in Table 2 are: the packager, who packs the meat according to the needs of the supermarket and records the amount of meat and its size and volume; the transporter to the supermarket, who transports the chicken meat from the packager to the supermarket and records the conditions of the transportation, such as the humidity, the temperature and the shipment status; and the supermarket, which sells the chicken meat to consumers and records the amount and volume of chicken meat sold, stock data, waste data and quality control.

The first participant in the network is the farmer. He is the starting point of the blockchain. The farmer is responsible for raising the chickens on his farm, feeding them, and taking care of them. This obliges him to insert data about the chickens into the blockchain. This data includes the kind of chicken, its age, whether it is free range or not, whether it is organically raised, as well as the type of poultry feed that is given to the chicken. Additionally, the data should also cover serious events like chicken diseases (more precisely: the kind of sickness, the illness period, and the type and quantity of antibiotics).
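To make the farmer's minimum required input concrete, the sketch below shows one possible shape for such a record. The field names, identifiers and the hashing step are illustrative assumptions, not the actual Farm-to-Fork data model.

```python
# Hypothetical record a farmer might submit to the blockchain; the schema
# below is an assumption for illustration, not the Farm-to-Fork schema.
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class DiseaseEvent:
    sickness: str
    illness_period_days: int
    antibiotic_type: str
    antibiotic_quantity_mg: float

@dataclass
class ChickenRecord:
    chicken_id: str
    kind: str
    age_weeks: int
    free_range: bool
    organic: bool
    feed_type: str
    disease_events: List[DiseaseEvent] = field(default_factory=list)
    submitted_by: str = "farmer-001"  # every node must identify itself
    timestamp: float = field(default_factory=time.time)

    def transaction_hash(self) -> str:
        # Hash the serialized record so other nodes can verify it was not altered
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

record = ChickenRecord("CHK-0001", "broiler", 6, True, False, "grain mix")
print(record.transaction_hash())
```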
I1 remarks that some chicken characteristics can be derived from a small IoT machine that is already used by many farmers. Such devices allow the inspection of the chickens' blood and can be linked to the blockchain to register this data automatically [27]. The catcher then catches the farmer's chickens. The required data from this node describes which chickens were caught (individual identification) and in which order. The order of the catching is important: I1 mentions that, during the process of catching chickens (to send them to the butcher), the last chickens experience more stress because they realize they will be captured. This increased stress level leads to a lower quality of chicken meat (compared to the chickens that were caught first). So, the order is indeed important to distinguish the higher-quality chickens from the lower-quality ones. The farmer will confirm the number of chickens that were caught. The node responsible for the transportation from the farm to the butcher will need to feed data about the transport conditions into the blockchain. This includes, among other data elements, the humidity and the temperature of the transport vehicle. Furthermore, the blockchain can be helpful to monitor and control the loss or damage of the chickens. Additionally, shipment status information is also required to permit the other nodes to track the chickens. I1 notes that, here too, some information can be sent directly to the blockchain via interfaces with IoT devices. Such IoT devices can accurately capture the humidity and temperature levels in the transportation vehicle [27]. The butcher who prepares the chicken meat will confirm the number of chickens received and the number of chickens slaughtered. He will slaughter the chickens, apply treatments to the meat and store it. All the treatments and all the conditions of treatment and storage must be recorded in the blockchain database as well. The individual pieces or packs of chicken should be recorded in the blockchain by size and volume. Additionally, a thorough quality control is performed. After the butcher, the packager will pack the meat in conformance with the supermarket's specifications and will record data about the different packages, such as the weight and the number of packages. The transporter will then transport the chicken meat from the packager to the final node of the SC: the supermarket. Similar data as for the transport from the farmer to the butcher is required. Finally, in the supermarket, the meat should all be of sufficient quality to sell to consumers. A final quality control is performed. As a general remark, all nodes of the network must identify themselves when entering data. This way, each process step is linked to the organization that is responsible for it. Along every step of the SC, all nodes must perform quality control and report their findings for maximal transparency. Nodes should also enter price data into the blockchain database (I1). I2 adds that some data, like pricing, may be considered sensitive information. Therefore, he recommends carefully restricting access to pricing data to those participants who really need this type of information, without making it publicly available.

5 Using i* to Model the Farm-to-Fork Blockchain

Proposed Extensions for i* for Modeling Blockchains

Two types of extensions of i* are proposed in this article: privacy, and laws and norms. Bashir [15] and Bettin-Diaz et al.
[16] note that, among the various blockchain hurdles, the privacy issue might be one of the most challenging. The privacy of all nodes in the network must be respected by restricting access to certain data. The nodes themselves should be able to determine which information can be accessed by whom and what information should be anonymized. The importance of privacy should not be overlooked. Therefore, it is recommended that these privacy requirements be explicitly modeled when visualizing blockchain in SCM. Ben Hamadi et al. [13] proposed to extend the i* framework by adding the following concepts: access control, privacy accountability, confidentiality and anonymity, but did not implement them; this is done in the present article. Next to the privacy issues, I1 also stresses the importance of regulations. As blockchain is still a relatively young technology, new regulations that limit the working of the blockchain and/or smart contracts might become applicable. The legal standing of smart contracts in a court of law is often a subject of debate [15]. I1 specifically refers to the repercussions of the GDPR regulations on blockchain. Under GDPR, personal data should remain within the EU. This imposes restrictions on public blockchains because there is no control over where the nodes are located. I1 mentions that this is less of a problem with private blockchains. Additionally, the 'right to be forgotten' conflicts with blockchain as an immutable chain of historical transactions, although this rule lacks a real, strict definition. Currently, a workaround exists whereby personal data is stored 'off-chain', outside the blockchain database. A reference to and a hash of this data is then registered in the blockchain. However, this undermines the purpose of the blockchain, since transparency is diminished, data ownership becomes vague, one needs to find a new way to integrate data from other participants, and the data is more vulnerable to cyber-attacks. Siena et al. [28] and Siena, Perini, Susi, & Mylopoulos [29] have introduced an i* extension to model laws and norms. Siena et al. [28] revolves around the application of such extensions specifically to European food traceability systems. This is particularly interesting for blockchain in SCM, as it is important for system developers to understand how the blockchain should be compliant with which regulations. Because blockchain is a technology which is steadily becoming more widespread in the IT landscape, new regulations will emerge to control it legally. Table 3 provides an overview of all suggested extensions, including their proposed graphical notations. The i* extensions for privacy concepts and for laws and norms are taken over from the literature.

Table 3. Proposed extensions of i* for BOSE (concept, description and reference; the graphical notations are figures).
Access control: access to data in the chain is restricted to certain nodes. This notation can be used on data elements; the annotation allows one to specify who has access and who has not. [13]
Privacy accountability: the notation is used on a data element and allows third parties to be made accountable for data manipulation under privacy requirements. [13]
Confidentiality: the data owner can hide certain data from the other nodes. This notation can be used on data elements. [13]
Anonymity: an actor wants to anonymize his data partially or completely. The notation is used on an actor element and allows one to specify what data should be anonymized. [13]
Laws and norms (Norms): an actor should be compliant with a certain norm.
This norm also has a source (e.g., the EU). [28]

Strategic Dependency Diagram - Farm-to-Fork with Blockchain

It should be apparent that, in the case of a blockchain adoption for the Farm-to-Fork process, all actors become connected to each other through the blockchain system. To visualize such a process, the blockchain system itself should also be represented as an actor, alongside the other participants in the network. The relationship between the nodes in the SC and the blockchain is indeed a dependent one. The blockchain network depends on the farmer, the catcher, the transporters, the butcher, the packager and the supermarket for data. The data is validated and saved into the blockchain. Certain nodes, like the transporter, can depend on the use of IoT devices to automatically capture and send data to the blockchain. For the transporter, this data can include the transportation conditions, such as the humidity and the temperature. Based on this data, the blockchain system can also verify whether the contractual terms are fulfilled. The (execution of the) smart contracts therefore depend(s) on the data in the blockchain database. The system can automatically execute the contract through these smart contracts. Because these smart contracts depend on the input data of the SC nodes in the blockchain database, they are also represented as an actor. Additionally, the blockchain depends on the supermarket to specify the attributes that must be collected by the various stakeholders as input for the blockchain database. On the other hand, the supermarket node also depends on the blockchain data itself, to permit an analysis of the optimal quality requirements (via business intelligence techniques on this data). After the optimal conditions are estimated by the supermarket, the smart contracts need this list of quality requirements to adjust the contract specifications. Moreover, consumers can check a product's history and origin by (partially) viewing the blockchain's data. The SD model of such a SC process is shown in Figure 4. The legend for the elements of this figure is given in Figure 1. Figure 4 also contains the extensions for privacy concepts. Consumers are only allowed to see a limited part of the blockchain data and process. Next, the quality requirements imposed by the supermarket are only distributed to the relevant nodes, depending on their respective responsibilities within the overall process. The supermarket can also hide its own price data, since this is classified as sensitive information. All nodes can specify what data they want to hide from other nodes, and third parties should be held accountable when given access to manipulate this data.

Strategic Rationale Diagram - Farm-to-Fork with Blockchain

The SR model focuses more on the internal rationale or reasoning of all the nodes, related to the dependencies between actors. In addition to the interaction between the different SC participants, the supermarket's ability to specify the quality requirements for each stakeholder is also important and is therefore depicted in the SR model in Figure 5. The legend for the elements of this figure is given in Figure 1. The SR model focuses on the interdependencies between the supermarket, the blockchain, the smart contracts and the consumer. The supermarket is an especially important node, as the final product arrives here and is sold to consumers. Hence, the chicken meat must be of the best quality in order to be sold to consumers.
It is likely that most benefits of the blockchain adoption are experienced at this stage of the SC: there is less wastage because of higher quality and the avoidance of contaminated products; contaminated products can no longer get into the hands of consumers, which limits health risks; and consumer awareness is higher because they can scan the QR-code on the packaging of the chicken meat to check the history of the product. Given these four important actors (the supermarket, the blockchain, the smart contracts and the consumer), the SR model helps to understand the 'why' of the interdependencies. The original i* extension to describe regulatory compliance was specifically targeted towards the SR type of models in i*. Figure 5 shows the integration of the EU GDPR law in the SR model. The overall aim of the regulation is to protect personal data. As shown in Figure 5, this can be achieved by guaranteeing the 'right to be forgotten', keeping data processing transparent, only recording data when necessary, keeping the data within the EU and ensuring data integrity, security and confidentiality.

6 Using UML to Model the Farm-to-Fork Blockchain

UML Use Case - Farm-to-Fork

The use case diagram is depicted in Figure 6. The legend for the elements of this figure is given in Figure 2. All network nodes can input, store and verify data. The verification of data can only be fulfilled when a leader is appointed in the Raft consensus mechanism, although every node will double-check the verification of the leader (I2). Additionally, the supermarket can provide quality specifications that must be adhered to by all parties. Here, smart contracts are shown as an actor even though they are an integrated part of the blockchain system. This is done to show the possible actions of the smart contracts (i.e., checking whether contractual terms are fulfilled or not and automatically executing the smart contracts). Moreover, consumers can check the history and origin of products by scanning a QR-code. Figure 6. Farm-to-Fork blockchain use case diagram

UML Sequence Diagram - Farm-to-Fork

The sequence diagram shows the order in which the activities occur. As already mentioned, this is especially useful for SC processes. The Farm-to-Fork sequence diagram is depicted in Figure 7. The legend for the elements of this figure is given in Figure 3, and details on the supported process can be found in Section 4. With every blockchain return message 'Verification of data', an alternative fragment should take place, which defines what happens if the verification is successful and what happens if it is not. However, for simplicity reasons, in Figure 7 this alternative (or alt-) fragment is only explicitly modeled for the first occurrence where the blockchain verifies data (i.e., at the farmer's data entry). Thus, although not explicitly modeled, this alt-fragment takes place every time the blockchain wants to verify data. As mentioned before, the transporter can use an IoT device to automatically save and send transportation data to the blockchain. This proposed IoT device is also included in the sequence diagram to show the effects of the implementation. Finally, at the bottom, a loop is included. This represents the quality requirement updates that the supermarket can repeatedly implement whenever a new (local or global) optimum is found for the process conditions (e.g., by using business intelligence tools).
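The contractual check performed by the smart contracts (verifying whether the recorded transport conditions satisfy the supermarket's quality requirements before executing automatically) can be illustrated with a short sketch. The function, field names and threshold values below are assumptions for illustration, not the actual Farm-to-Fork contract code.

```python
# A minimal sketch of a smart-contract style check on IoT transport data;
# thresholds and field names are illustrative assumptions.
def contract_terms_fulfilled(transport_reading: dict, requirements: dict) -> bool:
    """Return True only if every monitored condition stays within the
    supermarket-specified bounds (e.g., temperature and humidity)."""
    for condition, (low, high) in requirements.items():
        value = transport_reading.get(condition)
        if value is None or not (low <= value <= high):
            return False
    return True

# Quality requirements as the supermarket might specify them
requirements = {"temperature_c": (0.0, 4.0), "humidity_pct": (60.0, 80.0)}
reading = {"temperature_c": 3.2, "humidity_pct": 72.5}  # e.g., from an IoT device

if contract_terms_fulfilled(reading, requirements):
    print("Conditions met: the contract executes automatically")
else:
    print("Breach recorded on-chain: liability can be traced to this node")
```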
Next to the overall sequence diagram in Figure 7, the Raft consensus mechanism (previously discussed in Section 4.3) can also be represented by a sequence diagram. Figure 8 shows the model for this blockchain consensus mechanism. The whole election process loops until a candidate with the highest term or a majority vote becomes leader. A new leader is appointed through a new election when a new majority vote has been found or when the appointed leader times out. Figure 8. Farm-to-Fork blockchain sequence diagram for the Raft consensus mechanism

Finally, Figure 9 depicts that consumers can access a limited part of the product history in the blockchain. This can be done by scanning the label of the product in the supermarket with their phone. Using a mobile app, consumers can view some of the processes of the SC.

Evaluation of i* and UML for Modeling BOSE in SCM

This section compares the pros and cons of the SD and SR models (i* framework) versus the use case and sequence diagrams (UML) as modeling techniques for BOSE. The evaluation criteria have been presented in Section 3. Both frameworks are evaluated using these criteria; then a summary is given.

Evaluation of the i* Framework to Model Blockchain in SCM

This section maps the characteristics of the i* framework against the set of criteria described in Section 3.

Generic criteria:
• Coverage of elements. The internal behavior of the system cannot be represented with i*. Also, neither the Raft consensus mechanism nor the execution of smart contracts can be explicitly represented;
• Reusability. Reuse of i* models can be envisaged for different blockchain situations;
• Guidelines and tool-support. No CASE tool provides the explicit extensions for blockchain yet;
• Widespread in different areas. The framework has been applied in various areas but barely in the field of blockchain.

Expert opinions:
• Restricting access and privacy concepts. The i* representations lack a way to express the blockchain's type (here, private). Consumers clearly should have restricted access and restricted rights when accessing the blockchain. There is no way to represent restricted access in i*. Moreover, not only consumers but also other network SCM nodes may express the need to keep certain information private or even to anonymize specific data;
• Scalability. A major downside of i* for representing a blockchain-based problem is its growing complexity when the network expands. This disadvantage is not specifically tied to the blockchain field but rather to the nature of the SC. In the Farm-to-Fork case, only seven SC nodes (including the consumer) were identified. In many SCs, the number of nodes can be higher, which would increase the complexity of the representation;
• Ability to express workflow patterns. The i* framework lacks the notion of structure or sequence, which can be seen as a drawback in the context of SCs;
• Norms. The i* framework does not allow the modeling of laws or norms that should be respected when adopting blockchain in SCM.

Other criteria:
• Social focus. In the context of SCM, the i* framework is particularly suited because every SC involves a network of depending actors. The social focus is very appropriate here. This is also valid for the blockchain system itself and for the smart contracts: interactions of the blockchain within the SC process can be modeled, making it clearer what role the system plays in the SC;
• Dual granularity.
While the SD model allows for a high-level and more abstract view of the network, the SR model provides a more detailed understanding of the reasoning behind the actions in the process [2]. These two levels permit blockchain and smart contracts to be depicted at a high level as well as at a more detailed one. This allows a better understanding of the tasks of the blockchain and smart contracts, and of the goals to which they contribute;
• Flexibility in modeling. The means-end links in the SR model can depict how a certain goal can be achieved through different tasks or different means [2]. Conceptually, this gives room for different options and scenarios to be modeled. The use of blockchain technology in SC offers many options to make processes more efficient, so it provides new opportunities for actors to achieve their goals. Being able to flexibly model these means is a plus of i*;
• Technical concepts. The i* framework fails to fully capture blockchain as a technology, because it is oriented towards social concepts. The blockchain database is represented as an actor in the diagrams. i* lacks dedicated stereotypes to represent specific system components. The inability to depict the blockchain system technically is a major disadvantage of i*. Within the i* framework, it thus remains unclear that the actors use the blockchain database to store information about their SC processes, using cryptographic means to verify and validate data from other nodes.

Evaluation of the UML Use Case and Sequence Diagrams to Model Blockchain in SCM

As for i* in the previous section, this section gives an overview of the pros and cons of using the UML use case and sequence diagrams for BOSE in the SCM domain. Because the UML use case and sequence diagrams complement each other and are often used together, the evaluation of the modeling criteria applies to the combination of both diagrams. The set of criteria for the evaluation is described in Section 3.

Generic criteria:
• Coverage of elements. UML models provide broad coverage because they can depict the Raft consensus mechanism, the verification of data and the functions of the smart contracts. This is in line with the findings of a study by [1]. Also, the interaction between the blockchain database and other devices, such as IoT devices that gather data automatically, can be represented with UML diagrams. [21] also points to the sequence diagram as one of the best diagrams to accurately visualize the workings of smart contracts;
• Reusability. The UML diagrams are not bound to a specific situation but can be modeled for various purposes and in different contexts;
• Guidelines and tool-support. Many sources of explanation, such as guidelines about the UML models, exist online. Furthermore, many CASE tools support UML representations;
• Widespread in different areas. [1] points out that UML is a widely known and adopted modeling technique, which makes it easy to use for most software engineers, as they are probably already familiar with it.

Expert opinions:
• Restricting access and privacy concepts. UML diagrams lack privacy concepts in their modeling techniques. Use case and sequence diagrams fail to specify restrictions of access, anonymity of data or any privacy matters;
• Scalability.
The combination of the UML use case diagram with sequence diagrams is scalable: with an increased set of actors, the model will not become overly complex, as the models have an organized structure and a specific flow, making them easy to follow even with an increasing number of SC participants;
• Ability to express workflow patterns. In the sequence diagram, a certain sequence of events or order of actions is established. This fits very well with the flows of SCs, as they also follow a strict sequential order. Showing this flow or structure makes it easier to understand the supply chain process from start to finish. This is also particularly relevant for modeling the consensus mechanism behind a blockchain;
• Norms. UML use case and sequence diagrams do not allow the modeling of laws or norms that must be obeyed when adopting blockchain in SCM.

Other criteria:
• Social focus. By their nature, UML use case and sequence diagrams are focused on late analysis and software design rather than on social aspects. These diagrams are functionally or behaviorally defined for the purpose of building a to-be system. Social aspects, such as the intentions behind actions, the reasoning of actors or goals, cannot be represented;
• Dual granularity. Like the i* approach, a dual granularity is possible within UML. The use case diagram depicts a high-level overview of the system, while the sequence diagram provides more details about the internal workings of the system and the different interactions with the actors. Both diagrams complement each other nicely to provide a better understanding of the requirements;
• Flexibility in modeling. The UML use case and sequence diagrams lack some flexibility, as they tend to document a unique solution. They do not provide the flexibility of modeling different means to get to one end;
• Technical concepts. The two UML diagrams represent the interactions of actors with a system. Both the use case and the sequence diagrams are targeted at representing the workings of the blockchain system itself and are able to represent the blockchain as a system.

Summary

As can be seen in Table 4 (which provides a final evaluation), the two frameworks distinguish themselves by their different purposes: while i* is social-focused, the UML use case and sequence diagrams are system-oriented. Both techniques are complementary. Therefore, we recommend using the i* framework during the early phases of RE. This enables system developers to understand the 'why' of the SC process, giving a clear overview of the interdependencies and the goals of all nodes in the blockchain network, as this is the core of the blockchain's decentralized system. During later phases, UML diagrams can be applied to design the system's interactions with the different actors of the SC in more detail.

Conclusion

Blockchain is still a relatively young technology. Nevertheless, the many benefits of its adoption in SCM are apparent. The immutable character of the ledger provides enhanced transparency, improved product traceability, higher transactional speed, increased security and an overall cost-effective approach. Simultaneously with blockchain's rise in popularity, the existing literature about blockchain's adoption in SCM is expanding as well. However, no standard modeling technique exists for BOSE, especially with respect to its adoption within SC processes.
This article therefore researches two different modeling languages, the i* framework and the UML use case and sequence diagrams, which appear to be well suited to model blockchain's adoption in SCM. The article contributes to the existing literature by applying both modeling languages to a blockchain case study and by making a comparative analysis between the two. A number of model extensions are also proposed to enhance the capabilities of i*. After applying both modeling languages to the case, a comparative evaluation between the two approaches was performed. A set of assessment criteria was established, and we conclude on the complementarity of the approaches. The in-depth comparison between the approaches has revealed that they also lack some elements to model BOSE in SCM to its full extent. Hence, new graphical concepts are proposed to enhance the models. First, in line with Ben Hamadi et al. [13], this paper recommends the inclusion of privacy concepts. Next, because of the importance of laws and upcoming regulations that will determine the future direction of blockchain technology, the enhancement of the i* framework for laws and norms is also recommended. From the underlying analysis of this article, it is clear that the i* framework can be used during the early phases of requirements engineering to improve and deepen the understanding of the social aspects, the intentions and the underlying goals of all actors in the blockchain network. The i* focus on the interdependencies between actors in the network reflects the core of trust in the blockchain's decentralized system. The two UML models lead to late analysis and design diagrams and give an overview of how the system should work, including the interactions between the different actors. The UML diagrams can thus be used, complementarily to the i* framework, during the later phases of the requirements gathering process. Combined, the i* framework helps to understand the 'why' of the business processes, while the UML diagrams focus more on the 'what'. The present study, combined with Ben Hamadi et al. [13], leads us to conclude that i* and its refinements are relevant for each BOSE development in SCM. Future work includes (i) the application of the enhanced i* framework in other domains, (ii) the application of other frameworks, notably the business use case model together with BPMN (see [30], [31]) and (iii) the development of transformation (forward engineering) rules, first from the i* SD to the use case model and from the i* SR to sequence diagrams, to obtain an integrated framework. After that, the traceability between elements of the i* SR and sequence diagrams and object and agent messages can also be studied to transform the models into code for runtime execution.
2021-05-15T13:26:28.226Z
2021-04-30T00:00:00.000
{ "year": 2021, "sha1": "d38178eb53ddc785a184e518bedada0be2f005c3", "oa_license": "CCBY", "oa_url": "https://csimq-journals.rtu.lv/article/download/csimq.2021-26.02/2569", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4a614b49f53285c8aa9125eb39883710d3deb55c", "s2fieldsofstudy": [ "Computer Science", "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
253762090
pes2o/s2orc
v3-fos-license
MUTE: A Multimodal Dataset for Detecting Hateful Memes

The exponential surge of social media has enabled information propagation at an unprecedented rate. However, it has also led to the generation of a vast amount of malign content, such as hateful memes. To eradicate the detrimental impact of this content, the hateful meme detection problem has grabbed the attention of researchers over the last few years. However, most past studies were conducted primarily for English memes, while memes in resource-constrained languages (i.e., Bengali) are under-studied. Moreover, current research considers memes with captions written in monolingual (either English or Bengali) form. However, memes may have code-mixed captions (English + Bangla), and the existing models cannot provide accurate inference in such cases. Therefore, to facilitate research in this arena, this paper introduces a multimodal hate speech dataset (named MUTE) consisting of 4158 memes with Bengali and code-mixed captions. A detailed annotation guideline is provided to aid dataset creation in other resource-constrained languages. Additionally, extensive experiments have been carried out on MUTE considering the only-visual, only-textual, and combined modalities. The results demonstrate that joint evaluation of visual and textual features significantly improves (≈3%) hateful meme classification compared to unimodal evaluation.

WARNING: This paper contains meme examples and words that are offensive in nature.

Introduction

With the advent of the Internet, social media platforms (i.e., Facebook, Twitter, Instagram) significantly impact people's day-to-day lives. As a result, many users communicate by posting various content on these platforms. This content includes hate speech, misinformation, and aggressive and offensive views. While some content is beneficial and enriches our knowledge, it can also trigger human emotions that can be considered harmful. Among this content, the propagation of hateful material can directly or indirectly attack social harmony based on race, gender, religion, nationality, political support, immigration status and personal beliefs. In recent years, memes have become a popular form of circulating hate speech (Kiela et al., 2020). These memes on social media have a pernicious impact on societal polarization, as they can instigate hate crimes. Therefore, to restrain the interaction with hateful memes, an automated system is required to quickly flag this content and lessen the harm inflicted on readers. Several works (Davidson et al., 2017; Waseem and Hovy, 2016) have addressed hateful meme detection, most of which target the English language. Unfortunately, no significant studies have been conducted on memes in low-resource languages, especially Bengali. In recent years, an increasing trend has been observed among people to use Bengali memes. As a result, it becomes crucial to identify Bengali hateful memes to mitigate the spread of negativity. However, meme analysis is complicated, as it requires a holistic understanding of both visual and textual content to draw an inference (Zhou et al., 2021). The visual content of a meme alone may not be harmful (Figure 1 (a)); however, it becomes hateful with the incorporation of textual content, as it then directly attacks religious beliefs. A meme's caption can also be written in a mixed language (both English and Bengali, as in Figure 1 (b)), and such memes can evade the surveillance engine.
Developing a hateful meme detection system for such a scenario is complicated, as no standard dataset is available. Moreover, developing an intelligent multimodal meme analysis system for Bengali is challenging due to the unavailability of a benchmark corpus, the lack of reliable NLP tools (such as OCR), and the complex morphological structure of the Bengali language. Therefore, this work aims to develop a multimodal dataset for Bangla hate speech detection and investigate various models for the task. The key contributions of this work are summarized as follows:
• Created a multimodal hate speech dataset (MUTE) in Bengali consisting of 4158 memes annotated with Hate and Not-Hate labels.
• Performed extensive experiments with state-of-the-art visual and textual models and then integrated the features of both modalities using an early fusion approach.

Related Work

This section discusses past studies on hate speech detection based on unimodal (i.e., image or text) and multimodal data. Unimodal based hate speech detection: Hate speech detection is a prominent research issue among researchers of different languages (Ross et al., 2016; Lekea and Karampelas, 2018). More recently, multimodal transformer architectures (Kiela et al., 2020) such as MMBT, ViLBERT, and Visual-BERT have been applied to the problem. Despite the availability of these state-of-the-art multimodal transformer architectures, such models have only been applied to high-resource languages (i.e., English). Differences with existing research: Though a considerable amount of work has been accomplished on multimodal hate speech detection, only a few works have studied low-resource languages (i.e., Bengali). In our exploration, we found one work (Karim et al., 2022) that detects hate speech from multimodal memes for the Bengali language. However, they did not curate social media memes for analysis; instead, they artificially created a meme dataset for Bengali by conjoining hateful texts with various images. Moreover, the current works overlook memes containing captions written cross-lingually. Considering these drawbacks, the proposed research differs from the existing studies in three ways: (i) it develops a multimodal hate speech dataset (MUTE) for Bengali considering Internet memes, (ii) it provides a detailed annotation guideline that can be followed for resource creation in other low-resource languages, and (iii) it considers memes that contain code-mixed (English + Bangla) and code-switched (Bengali dialects written in the English alphabet) captions.

MUTE: A New Benchmark Dataset

This work developed MUTE, a novel multimodal dataset for Bengali hateful meme detection. MUTE includes memes with code-mixed and code-switched captions. For developing the dataset, we followed the guidelines provided by Kiela et al. (2020). This section briefly describes the dataset development process with detailed statistics.

Data Accumulation

For dataset construction, we manually collected memes from various social media platforms such as Facebook, Twitter, and Instagram. We searched for memes using a set of keywords such as Bengali Memes, Bangla Troll Memes, Bangla Celebrity Troll Memes, Bangla Funny Memes, etc. Besides, some popular public meme pages were also considered for the data collection, such as Keu Amare Mairala, Ovodro Memes, etc. We accumulated 4210 memes from January 10, 2022, to April 15, 2022. During the data collection, some inappropriate memes were discarded following the guidelines provided by Pramanick et al. (2021).
The criteria for discarding data were: (i) memes containing only unimodal content, (ii) memes whose textual or visual information was unclear, and (iii) memes containing cartoons. In this filtering process, 52 memes were removed, leaving a dataset of 4158 memes. Afterwards, the captions of the memes were extracted manually, as Bengali has no standard OCR. Finally, the memes and their corresponding captions were given to the annotators for annotation.

Dataset Annotation

The collected memes were manually labelled into two distinct categories: Hate and Not-Hate. To ensure the dataset's quality, it is essential to follow a standard definition for segregating the two categories. After exploring existing works on multimodal hate speech detection (Kiela et al., 2020; Gomez et al., 2020; Perifanos and Goutsos, 2021), we define the classes as follows:

Hate: A meme is considered hateful if it intends to vilify, denigrate, bully, insult, or mock an entity based on characteristics including gender, race, religion, caste, organizational status, etc.

Not-Hate: A meme is considered not hateful if it does not express any inappropriate sentiment and conveys positive emotions (e.g., affection, gratitude, support, motivation) explicitly or implicitly.

Process of Annotation

We instructed the annotators to follow the class definitions when performing the annotation. They were also asked to mention the reasons for assigning a meme to a particular class; these explanations aid the expert in selecting the correct label in case of disagreement. Initially, we trained the annotators with some sample memes. Four annotators (computer science graduate students) performed the manual annotation, and an expert (a professor conducting NLP research for more than 20 years) verified the labels. The annotators were divided equally into two groups, each annotating a subset of the memes. In case of disagreement, the expert decided the final label: the expert overruled the annotators on 113 memes initially labelled non-hateful (ruling them hateful) and 217 memes initially labelled hateful (ruling them non-hateful). Inter-annotator agreement was measured using Cohen's Kappa coefficient (Cohen, 1960) to ensure the quality of the annotation. We achieved a mean Kappa score of 0.714, which indicates a moderate agreement between the annotators. As mentioned earlier, this work is the first attempt at multimodal hate speech detection that considers social media memes in the Bengali language. Therefore, more extensive scrutiny with more diverse data and a higher level of annotator agreement is required before a model trained on this dataset can be deployed. The agreement score illustrates the difficulty humans face in identifying potentially hateful memes and raises questions of bias, thus limiting the broader impact of this work.

Dataset Statistics

For training and evaluation, MUTE is split into train (80%), test (10%), and validation (10%) sets. Table 1 presents the class-wise distribution of the dataset. The dataset is slightly imbalanced, as the 'Not-Hate' class contains ≈60% of the data. The dataset contains a total of 483 memes with code-mixed captions. Moreover, the 'Not-Hate' class has a higher number of words and unique words than the 'Hate' class, although the average caption length is almost identical in both classes. Apart from this, we carried out a quantitative analysis using the Jaccard similarity index to estimate the fraction of overlapping words between the classes; a minimal sketch of this computation is given below.
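The paper does not publish its exact computation, so the following is a minimal sketch of the word-overlap measurement described above, assuming simple whitespace tokenization over the caption vocabularies of the two classes (the caption lists are hypothetical):

```python
def jaccard_similarity(captions_a, captions_b):
    """Jaccard index between the vocabularies of two caption lists."""
    vocab_a = {word for caption in captions_a for word in caption.split()}
    vocab_b = {word for caption in captions_b for word in caption.split()}
    return len(vocab_a & vocab_b) / len(vocab_a | vocab_b)

# toy usage with hypothetical caption lists
hate_captions = ["tumi ekta kharap lok", "worst celebrity ever"]
not_hate_captions = ["ki shundor chobi", "best celebrity ever"]
print(round(jaccard_similarity(hate_captions, not_hate_captions), 3))
```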
We obtained a score of 0.391, indicating that some common words exist between the classes.

Methodology

Several computational models have been explored to identify hateful memes considering a single modality (i.e., image or text) and the combination of both modalities (image and text). This section briefly discusses the methods and parameters used to construct the models.

Baselines for Visual Modality

This work employed convolutional neural networks (CNNs) to classify hateful memes based on visual information. Initially, the images are resized to 150 × 150 × 3 and then fed into pre-trained CNN models. Specifically, we used the VGG19, VGG16 (Simonyan and Zisserman, 2015), and ResNet50 (He et al., 2016) architectures, fine-tuned on the MUTE dataset via transfer learning (Tan et al., 2018). Before fine-tuning, the top layers of the models are replaced with a sigmoid layer for classification.

Baselines for Textual Modality

BiLSTM + CNN: First, word embedding (Mikolov et al., 2013) vectors are fed to a BiLSTM layer consisting of 64 hidden units; the output of the final time step of the BiLSTM network provides the contextual information of the overall text. Following this, a convolution layer with 32 filters of kernel size two is added, followed by a max-pooling layer to extract the significant contextual features. Finally, a sigmoid layer is used for classification.

BiLSTM + Attention: We applied an additive attention (Bahdanau et al., 2015) mechanism to the individual word representations of the BiLSTM cell; the CNN is replaced with an attention layer. The attention layer tries to give higher weight to the words most significant for inferring a particular class.

Transformers: Pre-trained transformer models have recently achieved remarkable performance in almost every NLP task (Naseem et al., 2020; Yang et al., 2020; Cao et al., 2020). As MUTE contains cross-lingual text, this work employed three transformer models, namely Multilingual Bidirectional Encoder Representations from Transformers (M-BERT (Devlin et al., 2019)), Bangla-BERT (Sarker, 2020), and the Cross-Lingual Representation Learner (XLM-R (Conneau et al., 2020)). All models were downloaded from the HuggingFace transformer library, and we follow their preprocessing and encoding techniques for preparing the texts. Each transformer model provides a sentence representation vector of size 768. This vector is passed to a dense layer of 32 neurons, and, starting from the pre-trained weights, the models are retrained on the developed dataset with a sigmoid layer.

Baselines for Multimodal Data

In recent years, joint evaluation of visual and textual data has proven superior in solving many complex NLP problems (Hori et al., 2017; Yang et al., 2019; Alam et al., 2021). This work investigates joint learning from multimodal data for hateful meme classification. For multimodal feature representation, we employed a feature fusion (Nojavanasghari et al., 2016) approach. All the visual models and two textual models (Bangla-BERT and XLM-R) from the earlier experiments are used to construct the multimodal models. For model construction, we added a dense layer of 100 neurons on each modality side and then concatenated their outputs to form a combined visual-textual representation. Finally, this combined feature is passed to a dense layer of 32 neurons, followed by a sigmoid layer for the classification task. A minimal sketch of this fusion head is given below.
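The paper does not release its implementation, so the following Keras sketch only illustrates the early-fusion head described above; the choice of VGG16 as the backbone and the treatment of the 768-dimensional transformer sentence vector as a pre-computed input feature are assumptions based on the text:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model, applications

# Visual branch: pre-trained CNN backbone (VGG16 here), top layers removed.
image_in = layers.Input(shape=(150, 150, 3))
cnn = applications.VGG16(include_top=False, pooling="avg")(image_in)
visual = layers.Dense(100, activation="relu")(cnn)

# Textual branch: 768-d sentence vector from a transformer (e.g., Bangla-BERT),
# assumed here to be pre-computed and fed in as a plain feature vector.
text_in = layers.Input(shape=(768,))
textual = layers.Dense(100, activation="relu")(text_in)

# Early fusion: concatenate both 100-d projections, then classify.
fused = layers.Concatenate()([visual, textual])
fused = layers.Dense(32, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid")(fused)

model = Model(inputs=[image_in, text_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

During training, the class weighting mentioned in the benchmark-evaluation section below could be supplied through the class_weight argument of model.fit.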
MUTE: Benchmark Evaluation

The training set is used to train the models, whereas the validation set is used for tuning the hyperparameters. We empirically tried several hyperparameter settings and report the best-performing one. The final evaluation of the models is done on the test set. This work selects the weighted F1-score (WF) as the primary evaluation metric due to the class-imbalanced nature of the dataset. Apart from this, we used a class-weighting technique (Sun et al., 2009) to give equal priority to the minority class (Hate) during model training.

Table 3 illustrates the outcomes of the visual, textual, and multimodal models for hateful meme classification. Among the visual models, ResNet50 obtained the maximum WF of 0.641. For the text modality, the B-BERT model obtained the highest WF (0.649); the other textual models (BiLSTM + Attention, BiLSTM + CNN, and XLM-R) did not exhibit significant differences from it. With multimodal information, most models did not improve: almost all multimodal WF scores lie around 0.60, except the VGG19 + B-BERT model (0.641). However, the VGG16 + B-BERT model outperformed all the others, achieving the highest WF of 0.672, which is approximately 2% higher than the best unimodal model, B-BERT (0.649).

Error Analysis

We conducted a quantitative error analysis to investigate the models' mistakes across the two classes. To illustrate the errors, the number of misclassified instances is reported in Figure 2 for the best unimodal (ResNet50 and B-BERT) and multimodal (VGG19 + B-BERT) models. It is observed that, moving from the visual to the textual model, the misclassification rate (MR) increased by ≈10% for the 'Hate' class and decreased by ≈9% for the 'Not-Hate' class. However, the joint evaluation of multimodal features significantly reduced the MR to 38% (from 44% and 54%) in the Hate class and thus improved the model's overall performance. Though the multimodal model showed superior performance compared to the unimodal models, there is still room for improvement. We point out several reasons behind the models' mistakes. Among them, identical words in different written formats (code-mixed, code-switched) made it difficult for the models to identify accurate labels. Moreover, the discrepancy between some memes' visual and textual information confuses the multimodal model. These are significant factors that should be tackled to develop a more sophisticated model for Bengali hateful meme classification.

Figure 2: Misclassification rate across the two classes for different models.
Conclusion

This paper presented a multimodal framework for hateful meme classification and investigated its performance on a newly developed multimodal dataset (MUTE) containing Bengali and code-mixed (Bangla + English) captions. To benchmark the framework, this work exploited several computational models for detecting hateful content. The key finding of the experiments is that joint evaluation of multimodal features is more effective than using only the visual or textual information of the memes. Moreover, the cross-lingual embeddings (XLM-R) did not provide the expected performance compared to the monolingual embeddings (Bangla-BERT) when jointly evaluated with the visual features. The error analysis reveals that model performance becomes biased toward a particular class due to the class imbalance. In the future, we aim to alleviate this problem by extending the dataset to a larger scale and framing the task as a multi-class classification problem. Secondly, for robust inference, advanced fusion techniques (e.g., co-attention) and multitask learning approaches will be explored. Finally, future research will explore the impact of dataset sampling and conduct ablation studies (i.e., experimenting with only English, only Bangla, code-mixed, and code-switched text) to convey valuable insights about the models' performance.
DEPTWEET: A Typology for Social Media Texts to Detect Depression Severities

Mental health research through data-driven methods has been hindered by a lack of standard typology and a scarcity of adequate data. In this study, we leverage the clinical articulation of depression to build a typology for social media texts for detecting the severity of depression. It emulates the standard clinical assessment procedures of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) and the Patient Health Questionnaire (PHQ-9) to capture subtle indications of depressive disorders in tweets. Along with the typology, we present a new dataset of 40191 tweets labeled by expert annotators. Each tweet is labeled as 'non-depressed' or 'depressed'. Moreover, three severity levels are considered for 'depressed' tweets: (1) mild, (2) moderate, and (3) severe. An associated confidence score is provided with each label to validate the quality of annotation. We examine the quality of the dataset by presenting summary statistics and set strong baseline results using attention-based models like BERT and DistilBERT. Finally, we extensively address the limitations of the study to provide directions for further research.

Introduction

Analyzing the presence of mood and psychological disorders through behavioral and linguistic cues in social media data remains a critical area of interdisciplinary research. In addition to these disorders, the last decade has seen exponentially increasing attempts to assess related symptomatology such as depressive disorders, self-harm, and severity of mental illness using non-clinical data (Bucci et al., 2019). Social media platforms and other online discussion forums have been particularly appealing to the research community for various research purposes (e.g., population-level mental health monitoring (Conway and O'Connor, 2016), personality trait detection (Marouf et al., 2020), cyberbullying spotting (Bozyiğit et al., 2021)) because of the massive scale of data. This massive data flow has resulted from increasing rates of internet access and people spontaneously sharing their suffering, pain, and struggle anonymously on these platforms (Ofek et al., 2015). Recognizing the early symptoms of depressive disorder through a person's language use can prevent many disastrous outcomes like self-harm and suicide, and can even help deploy effective treatment in time. Moreover, the outbreak of the COVID-19 pandemic is likely to have devastating impacts on the mental health of millions of individuals, as lockdowns in the affected areas have been linked to sharp rises in the incidence rates of mood disorders, including acute stress disorder, post-traumatic stress disorder, generalized anxiety disorder, and overall sub-clinical mental health deterioration (Singh et al., 2020). The scope of mental health deterioration during the COVID-19 pandemic and the comprehensive nature of diagnosing depressive disorders have created an unprecedented need to infer the mental states of individuals from all-inclusive resources. Recent studies have revealed that valuable insights into the impact of the pandemic on population-level mental health can be inferred from posts or comments on social media (Low et al., 2020).
A persistent challenge for researchers in the mental health space is the need to: (a) establish a typology for text content on social media to detect the severity of mental illness with clinical validation and robustness (Ernala et al., 2019), and (b) reliably apply this typology to obtain a sufficient sample size of high-quality data. Prior research has explored opportunities to capture mental health states from social media data using regular expressions to identify self-reported diagnoses or by using vectorization-driven methods to cluster users' activity patterns. However, relying solely on self-labeled data or unsupervised clustering leads to oversimplification and lacks clinical efficacy (Ernala et al., 2019). Practical applications of mental health research include identifying risky behaviors and providing timely interventions, such as the suicide prevention efforts adopted by Facebook (Vincent, 2017). The availability of high-quality, large-scale, annotated datasets addressing the severity of mental illness is one of the key elements for advancement on this front. Unfortunately, there are very few available datasets for depression severity, and they also lack strong ground truths based on clinical validation (Tolentino and Schmidt, 2018).

This study aims to contribute to this domain by (a) establishing a typology for social media content (i.e., tweet text), built upon psychological theory, for detecting the severity of the mental condition of depressed individuals, and (b) constructing a dataset named DEPTWEET (available at https://github.com/mohsinulkabir14/DEPTWEET) containing 40191 tweets with corresponding crowdsourced labels and confidence scores. The labeling typology of the dataset assigns a higher-level classification to each tweet: (1) Non-depressed, (2) Mildly Depressed, (3) Moderately Depressed, or (4) Severely Depressed. There is also an associated confidence score (between 0.5 and 1) for each label. The procedure used to assess the severity of depression in this study was based on a well-established clinical assessment method known as the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) (Arbanas, 2015), and it was carried out under the supervision of two expert clinical psychologists. The DEPTWEET dataset contributes further high-quality data on attributes like no, mild, moderate, or severe depression, adding to existing datasets on these and related attributes (Ahmed et al., 2021b; Mukhiya et al., 2020), and, to the best of our knowledge, provides the first dataset of this scale on depression severities. The approach utilized in this study can be adopted to generate high-quality mental health data from various platforms in future investigations. Moreover, given that the data was collected in the latter half of 2021, topic modeling on this dataset can provide useful insight into the impact of the COVID-19 pandemic on individuals' mental health.

The remaining sections of the paper are structured as follows: Sections 2 and 3 outline the motivation and background of the DEPTWEET dataset. The data collection, quality control mechanisms, and the summary statistics of the data are described in Section 4. The baseline classification models for this dataset and the evaluation metrics are presented in Section 5. Section 6 discusses the classification results, potential sources of bias in the data, and the necessary aspects to consider while conducting additional research in this domain. Finally, Section 7 concludes the current study and discusses future directions.
Related Work

Computational linguistics techniques are unlikely to serve as a complete substitute for in-person mental illness diagnosis, but their successful application in identifying the progress and level of depression of individuals in online therapy may provide clinicians with more insights, allowing them to apply interventions more effectively and efficiently. Studies analyzing web data, especially from social media platforms, have piqued the interest of the research community due to their scope and deep entanglement in contemporary culture (Fuchs, 2015).

Coppersmith et al. (2014) made a prominent contribution in this domain by developing a procedure for extracting mental health data from social media. In their study, tweets were crawled from user profiles that publicly stated on their Twitter feed that they had been diagnosed with various mental illnesses. They mixed control samples from the general population (people who are not depressed) with the tweets of the self-reported diagnosed group. Additionally, they conducted an LIWC (Linguistic Inquiry and Word Count) analysis to measure the deviation of each disorder group from the control group. They focused on four mental illnesses: Post-Traumatic Stress Disorder (PTSD), Depression, Bipolar Disorder, and Seasonal Affective Disorder (SAD), and proposed this novel method to gather data for a range of mental illnesses quickly and cheaply. Numerous studies later followed this approach to collect relevant mental health data for various mental illnesses. For example, the Computational Linguistics and Clinical Psychology (CLPsych) 2015 shared task (Coppersmith et al., 2015) collected self-reported data on depression and PTSD, further annotated by human annotators to remove jokes, quotes, etc. The shared task participants had three binary classification tasks: depression vs. control, PTSD vs. control, and depression vs. PTSD. These datasets were used in a variety of studies to discover patterns in the language use of users suffering from various mental illnesses (Pedersen, 2015; Coppersmith et al., 2016; Amir et al., 2017). In particular, Resnik et al. (2015) conducted several topic modeling analyses (supervised Latent Dirichlet Allocation (LDA), supervised anchor topic modeling, etc.) to differentiate the language usage of depressed and non-depressed individuals using the datasets of Coppersmith et al. (2014) and the CLPsych Shared Task (2015). Following a similar approach, Chen et al. (2018) collected tweets from self-reported depressed users and investigated the potential of non-temporal and temporal measures of emotions over time to identify depression symptoms in their tweets by detecting eight basic emotions (e.g., anger, fear). Additionally, classifiers were built to label Twitter users as depressed or non-depressed (control) by calculating strength scores based on the intensity of each emotion together with a time series analysis of each user. Among other social media platforms, Tian et al. (2016) explored sleep complaints on Sina Weibo (a Chinese microblogging website) to discover users' diurnal activity patterns and gain insight into the mental health of insomniacs. Twitter data on mental health has also been collected by targeting specific Twitter campaigns.
For instance, Jamil et al. (2017) prepared a dataset from users who participated in the #BellLetsTalk 2015 campaign, which was inaugurated to promote awareness of mental health issues. They collected public tweets from 25362 Canadian users and built a user-level classifier to detect at-risk users and a tweet-level classifier to predict symptoms of depression in tweets. From this campaign, they found only 5% of tweets talking about depression and 95% non-depressed tweets. While such methods can extract large volumes of data at a low cost, they do not ensure a sufficient sample of interest and have inevitably resulted in a low number of positive samples (mental-health-related data).

Several previous studies have investigated the use of clinical methodologies along with data mining tools to extract depression symptoms from diverse sources. Yazdavar et al. (2017) created a lexicon of depression symptoms based on the nine disorders described in the clinically established Patient Health Questionnaire (PHQ-9) and utilized it to find symptoms of depression in tweets from users with self-reported depressive symptoms in their Twitter profiles. They also developed a statistical model to categorize and monitor depressive symptoms for continuous temporal analysis of an individual's tweets. In a similar study, Mukhiya et al. (2020) proposed an open set of depression word embeddings that extracts depression symptoms from patient-authored text based on PHQ-9, aiming to deliver personalized intervention to people with symptoms of depression. Yadav et al. (2020) utilized the nine symptom classes of the PHQ-9 questionnaire to manually annotate tweets collected from 205 users with self-reported depression diagnoses. Their proposed framework took into consideration the figurative language (metaphor, sarcasm, etc.) embedded in the communication of depressive users on Twitter. Ahmed et al. (2021b) extracted depression symptoms from patient-authored text in a similar fashion with the PHQ-9 questionnaire but used attention-based in-depth entropy active learning to annotate the unlabeled texts automatically. Their mechanism increased the number of trainable instances of mental health data using a semantic clustering mechanism to reduce the data annotation effort.

Another mental health tool used by psychiatrists, the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), has also been used to categorize mental disorders from social media content. Gaur et al. (2018) developed an approach to map subreddits to DSM-5 categories. They created a lexicon from various subreddit posts by extracting n-grams and topics using LDA and mapped this lexicon to a DSM-5 lexicon created from available medical knowledge bases (ICD-10, SNOMED-CT, DataMed). Their approach attempted to connect a patient on social media platforms such as Reddit to appropriate mental health resources and to provide web-based intervention. Cavazos-Rehg et al. (2016) investigated the most common themes of depression-related chatter on Twitter corresponding to the DSM-5 symptoms for major depressive disorder. While these methods may have clinical validity, most studies that use them lack sufficient ground truth data due to the absence of a thorough annotation procedure.

Very few studies have investigated predicting the severity of depression based on users' language usage on web platforms.
De Choudhury et al. (2013) proposed a metric named the social media depression index (SMDI), using a probabilistic model to help characterize levels of depression at the population level. This probabilistic model is an SVM classifier that can predict whether or not a Twitter post contains symptoms of depression. To construct and train this model, they collected data using a crowdsourcing technique and derived various linguistic and network features (e.g., number of followers) from the tweets of individuals suffering from clinical depression, measured using the CES-D (Center for Epidemiologic Studies Depression Scale) screening test (Radloff, 1977). Schwartz et al. (2014) attempted to predict and characterize the severity of depression based on people's Facebook language use. They gathered survey responses and Facebook posts from 28749 Facebook users and trained a classification model to predict depression symptoms using n-grams, linguistic behavior, and LDA topics. They tried to quantify seasonal changes in depression symptoms based on social media posts and discovered that symptoms increase from summer to winter. These approaches had the potential to generate large datasets of good-quality data if developed in partnership with expert psychologists and domain experts.

While previous research has made significant progress toward automatic depression assessment tools based on social media, some limitations have been identified through critical evaluation. Most previous works have relied on self-reported depressed user profiles for data extraction. While this is an inexpensive way to gather data at a massive scale, it does not guarantee enough samples with depressive symptoms without manual intervention, and it might lack sufficient clinical validation for extracting depression symptoms. Studies that leveraged clinical assessment tools such as the PHQ-9 or DSM-5 to extract data lacked supervision from domain experts and mostly annotated their data in an automated manner, for example using unsupervised topic modeling or clustering techniques. Moreover, only a few studies have investigated how to collect data on different depression severities with sufficient clinical efficacy. The existing datasets concentrate only on binary detection of whether a particular tweet manifests depression, while the severity level is mostly ignored. This might lead models that are otherwise competent in detecting depression to turn a blind eye to its subtle cues. A dataset containing sufficient samples to train large models, with strong ground truth labels depicting the severity of depression, can go a long way toward alleviating these issues.

Measuring Severity of Depression

In the current study, a user posting a tweet on the social networking site Twitter is considered to be depressed if the tweet depicts behaviors portraying symptoms of depression. Such a tweet may not necessarily be complete, contain well-structured sentences, or even be grammatically correct, making the task even more difficult. According to the Diagnostic and Statistical Manual of Mental Disorders (DSM), clinical depression can be diagnosed considering the existence of a set of symptoms over a substantial amount of time (Yazdavar et al., 2017).
Incorporating this idea, the Patient Health Questionnaire (PHQ-9) (Kroenke et al., 2001) provides a set of questionnaires that is widely used to screen, diagnose, and measure the severity of depression. The frequency of these symptoms can help classify the severity of depression into none, mild, moderate, and severe conditions. This approach is called the Clinical Symptom Elicitation Process (CSEP) (World Health Organization, 1993). In the current study, this was further extended using the mood scale provided by BipolarUK to identify the characteristics related to different levels of depression. The following characteristics were then verified by the collaborating psychologists and used to detect the level of depression in user tweets:

Non-depressed Tweets: A tweet can be labelled as non-depressed if it expresses a person's joy or delight, makes a generalized statement about depression that does not reflect the person's own mental state, expresses casual tiredness or sadness (for example, sadness due to the defeat of a favorite sports team), or expresses temporary hopelessness. It can also convey any other emotion except depression.

Mildly Depressed Tweets: A tweet that expresses hopelessness or a feeling of disinterest that persists for a while can be labeled as mildly depressed. A mildly depressed tweet may contain symptoms of hopelessness, feelings of guilt or despair, difficulty concentrating at work, a loss of interest in activities, a sudden disinterest in socializing, a lack of motivation, insomnia, weight changes, daytime sleepiness and fatigue, appetite changes, and reckless behavior (such as alcohol and drug abuse).

Moderately Depressed Tweets: Moderate depression has symptoms similar to mild depression; the differentiating factor is that the severity of the symptoms hampers activities at home and work. Tweets may contain symptoms of increased sensitivity, feelings of worthlessness, reduced productivity, problems with self-esteem, and excessive worrying.

Severely Depressed Tweets: The symptoms of this category are more noticeable and life-threatening. They include delusions, feelings of near-unconsciousness or insensibility, hallucinations, and suicidal thoughts or behaviors.

The DEPTWEET Dataset

In this section, the complete methodology for constructing the DEPTWEET dataset and the summary statistics of the data are discussed extensively. TWINT was used to collect tweets from Twitter for this study. The collected tweets went through a preliminary screening process before being distributed to the annotators. The annotation job was carefully observed and regulated in order to maintain the high quality of the data. An overview of the data collection and annotation procedure is displayed in Figure 1. Below, we first present how we collected the data. Then, the data annotation process is demonstrated in detail. Finally, we discuss the properties of the dataset.

Data Collection

Seed terms were generated from the keywords extracted from each of the symptoms of the PHQ-9 questionnaire in collaboration with two professional psychologists, a procedure commonly employed in many previous studies (Yazdavar et al., 2017; Mukhiya et al., 2020; Ahmed et al., 2021b). The seed terms were then extended using WordNet (Miller, 1995), a well-known lexical database developed by Princeton University that links words through semantic relations, including synonyms, hyponyms, meronyms, and antonyms. Each category of words is maintained according to its part of speech (i.e., nouns, verbs, adjectives, and adverbs), and synonyms are grouped into synsets. Words in the same synset are synonymous and interlinked through conceptual-semantic and lexical relations. A minimal sketch of this synonym-expansion step is given below.
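The paper does not publish its expansion script, so the following NLTK-based sketch only illustrates the kind of WordNet lookup described above; the seed term is a hypothetical example, and the real study additionally filtered the output by hand:

```python
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)  # one-time corpus download

def expand_seed_term(seed):
    """Collect synonym lemmas for a seed term across all of its synsets."""
    expansions = set()
    for synset in wn.synsets(seed):
        for lemma in synset.lemmas():
            expansions.add(lemma.name().replace("_", " "))
    return expansions

# hypothetical PHQ-9-style seed term; the final keyword list was hand-picked
print(sorted(expand_seed_term("hopeless")))
```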
There are several other methods used in different studies (Mukhiya et al., 2020; Yazdavar et al., 2017), such as the Universal Sentence Encoder (USE) (Cer et al., 2018), Global Vectors for word representation (GloVe) (Pennington et al., 2014), and the Big Huge Thesaurus (Watson, 2007). In the evaluation by Mukhiya et al. (2020), WordNet performs significantly better at extracting symptoms from patient-authored text than the other methods. For this study, the seed terms for each questionnaire item of PHQ-9 were extended by WordNet, and the extended terms were hand-picked afterward by the psychologist collaborators. After several rounds of filtration, a final lexicon list containing 88 depression-related keywords, categorized into the nine clinical depression symptoms of PHQ-9, was prepared; these keywords are likely to appear in the tweets of individuals suffering from different severities of depression. Table 1 illustrates samples of anonymized tweets, seed terms, the final keyword list extended by WordNet, and their associated symptoms in PHQ-9. Based on the final keyword list, a total of 344657 tweets were collected.

Data Annotation

Several data annotation techniques can be applied to determine the class labels for the sample tweets. Since the number of classes is known beforehand, one intuitive approach would be to create vector representations of the tweets using Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), fastText (Bojanowski et al., 2017), etc., and then use unsupervised clustering to find the optimal distribution of the samples into different clusters. However, such approaches lack the input of humans, who can understand the subtle nuances of tweets to identify different levels of severity; the result is a poorly annotated dataset (Ernala et al., 2019). To ensure clinical accuracy, annotators trained by expert psychologists were employed to perform the dataset annotation. From the collected samples, only tweets posted in English were preserved for annotation. Tweets with fewer than eight words were discarded, as they might not contain enough context. Any tweets containing mentions (@) or hashtags (#), as well as retweets, were also discarded, since they could violate the privacy of the users mentioned. Finally, 44100 tweets were randomly chosen from the remaining tweets for annotation.

Annotator Recruitment

The annotation job was performed by recruiting participants who were fluent in English and had previous experience in text assessment. The annotator pool consisted of 111 crowdworkers, who were pre-screened for eligibility using two online sessions (Figure 1 gives an overview of the dataset creation process). Initially, 90 annotators were selected randomly for the annotation job after pre-screening. Each annotator received $20 for participating in the study. The task of the annotators was to label the tweets as one of the four classes: non-depressed, mildly depressed, moderately depressed, and severely depressed. The annotators were briefed in two long online sessions, under the supervision of the collaborating psychologists, about the classification, and were also provided with a detailed document on the severity classes.
Each annotator was given a datafile with only two columns: (1) tweet texts and (2) possible label suggestions (0: non-depressed, 1: mild, 2: moderate, 3: severe), and was asked to determine each tweet's class label. The inherent subtlety and ambiguity of the attributes covered in this dataset make the annotation procedure an unavoidably difficult process. Each annotator may have a unique perspective on the nuance of the context presented in tweets, as well as a unique perception of the severity of the depression. Annotators were asked to avoid personal bias while labeling the tweets and to strictly follow the guidelines provided to them. Each tweet was annotated at least three times. The final label of a tweet was determined by majority voting over the labels provided by the three annotators. Tweets that received different labels from all three annotators were discarded because of excessive disagreement. The final labels of the dataset were established with a confidence score to reflect annotator disagreement arising from reasonable differences of opinion.

Annotation Job Refinement

Though it was ensured that annotator disagreement reflected genuine differences of opinion, a means of quality control was required to guard against annotator inattention or misunderstanding of context. The quality control mechanism used by Price et al. (2020) was followed in this study. This mechanism aimed to reduce the number of 'bad' annotators: those who either did not correctly understand the task or annotated the datafiles too recklessly, without giving proper attention. As part of the quality control, a set of 'control samples', for which the correct labels were manually established, was collated with the actual data samples. Annotators encountered one control sample per batch of fifty tweets without knowing which tweet was the control sample. The running accuracy on these control samples was defined as the annotator's 'trustworthiness score (T)'. The threshold trustworthiness score for this study was set to at least 90%. If an annotator dropped below this level, all of their annotations were discarded and the annotator was removed from the annotator pool; another annotator from the pool was then assigned to re-annotate those data samples. A total of 900 control samples were added for quality control to the previously chosen 44100 data samples. To generate datafiles for the annotators, the actual dataset of 44100 samples was divided into 30 parts, each containing (44100 / 30) = 1470 samples. For every 49 tweets in these 1470 samples, one unique control sample was added at a random position. The control samples were from the non-depressed category and were limited to obvious and conclusive instances of the attributes; one would fail on these control samples only through an incorrect comprehension of the class attributes or through reckless annotation. The tweet IDs of the control samples were also tracked. Following this method, 30 datafiles were created, each containing (1470 data samples + 30 control samples) = 1500 tweets. Each datafile consisted of two columns: one with the tweet texts and another, empty, column for the annotator's label; all other data columns were hidden from the annotators. The metadata related to the datafile creation procedure is summarized in Table 2, and a minimal sketch of the control-sample interleaving is given below.
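The exact scripting of this step is not published, so the following is only a minimal sketch of the control-sample interleaving described above (one manually labeled control tweet placed at a random position in every batch of fifty):

```python
import random

def build_datafile(data_samples, control_samples, batch_size=50):
    """Interleave one control sample per batch of `batch_size` tweets.

    data_samples: list of 1470 ordinary tweets
    control_samples: list of 30 manually labeled control tweets
    Returns the 1500-tweet datafile and the positions of the controls.
    """
    datafile, control_positions = [], []
    per_batch = batch_size - 1  # 49 ordinary tweets per batch
    for i, control in enumerate(control_samples):
        batch = list(data_samples[i * per_batch:(i + 1) * per_batch])
        slot = random.randrange(len(batch) + 1)  # random position in the batch
        batch.insert(slot, control)
        control_positions.append(len(datafile) + slot)  # tracked like tweet IDs
        datafile.extend(batch)
    return datafile, control_positions
```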
To annotate these datafiles, the ninety annotators were divided into three groups, each with thirty annotators. Each datafile was given to three different annotators from three different groups. Before partitioning, the data samples were randomized so that no two datafiles contained identical tweets in the same order. Once the annotation process was finished, all the datafiles were merged and the control samples were removed from the dataset.

Dataset Properties & Analysis

Of the 44100 tweets considered for annotation, 1399 data samples were removed from the dataset because they were damaged (i.e., the tweet text or tweet ID was changed) during the annotation process, and 2510 data samples were discarded due to annotator disagreement, having received three different labels from three different annotators. The final dataset comprises a total of 40191 tweets along with their tweet_id, replies_count, retweets_count, likes_count, target, label and confidence_score. The label for each tweet was determined by aggregating the labels provided by the different annotators: if at least two of the three annotators agreed on the label of a tweet, the matched annotation was accepted as the final label. Tweets with three different annotations from three annotators were discarded and saved in a separate datafile (further annotation is required to obtain class labels for these samples; this was left out because of budget and time constraints). The corresponding confidence score for each label was determined by a weighted average of the annotators' trustworthiness scores. The confidence score C for a particular label of a tweet sample can be written as C = (Σ_{i∈M} T_i) / T_total, where T_i denotes the trustworthiness of the i-th annotator whose annotation matched the final label, M is the set of matching annotators, and T_total denotes the sum of the trustworthiness scores of all the annotators who annotated the tweet. To demonstrate this process, consider a tweet sample annotated by three annotators a, b, and c having trustworthiness scores T_a = 0.90, T_b = 0.93, and T_c = 1.00. If the labels of annotators a and b match, then the confidence score of the label will be (T_a + T_b) / T_total, where T_total is the sum of the trustworthiness scores of the three annotators. In this case, the confidence score for the label of this particular tweet would be 1.83 / 2.83 ≈ 0.647. A minimal sketch of this computation follows.
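The following sketch implements the majority-vote label and confidence score defined above; the function name is illustrative, not taken from the released code:

```python
from collections import Counter

def aggregate_label(labels, trust_scores):
    """Majority-vote label plus confidence = matched trust / total trust.

    labels: the three annotator labels for one tweet, e.g. [1, 1, 2]
    trust_scores: the annotators' trustworthiness scores, e.g. [0.90, 0.93, 1.00]
    Returns (label, confidence), or None when all three labels differ.
    """
    majority_label, votes = Counter(labels).most_common(1)[0]
    if votes < 2:
        return None  # total disagreement: the tweet is discarded
    matched_trust = sum(t for l, t in zip(labels, trust_scores) if l == majority_label)
    return majority_label, matched_trust / sum(trust_scores)

# the worked example from the text: annotators a and b agree on a label
print(aggregate_label([1, 1, 3], [0.90, 0.93, 1.00]))  # (1, ~0.647)
```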
Manual analysis was performed at two stages of the study to gain insights into the dataset: (i) while randomly choosing data samples for annotation, and (ii) during the initial iterations of the annotation job. The proportion of classes shown in Table 3 indicates that the non-depressed samples outnumber the other classes by a wide margin. Though all the data samples were scraped based on keywords related to different severity levels of depression, and the control samples were removed prior to the final preparation of the dataset, the number of data samples for the different severities of depression is inevitably low. This class imbalance is an important characteristic of identifying depressive disorders on social media: the final class proportions roughly represent the percentage of similar attributes in similar live contexts. Generally, the overall positive content shared on social media outnumbers the negative content, because people usually show their positive, friendly side on social media and tend to talk less about their struggles (Vermeulen et al., 2018). To mitigate this problem, previous studies depended on self-labeled data for collating large and balanced datasets on different mental disorders (Kim et al., 2020; Low et al., 2020). However, depending only on self-labeled data to understand mental health at a personal level and to measure the severity of the condition is not feasible without intervention from expert psychologists. At the same time, considering the lack of resources in the mental health sector, relying only on psychologists can be time-consuming and expensive. As a result, in this study, crowdsourcing supervised by psychologists was chosen to obtain high-quality data on different depression severities.

Despite the measures undertaken to ensure the quality of the dataset, the method of annotation warrants a certain level of noise, which results in different yet rational interpretations of the same tweet. The kernel density estimation of the confidence scores portrayed in Figure 2 indicates that there was reasonable agreement among the annotators when deciding the class label of the non-depressed and severe classes. While these two classes lie at two different polarities of attributes, the subtle nuances of the mild and moderate classes allowed for rational disagreement among the annotators, which is evident from the high concentration of probability density for the mild and moderate classes between 0.6 and 0.7 in Figure 2. This may be attributed not only to a lack of apprehension or awareness on the annotator's part, but also to the subjectivity of the topic at hand. It highlights the difficulty of using typical reliability metrics such as Inter-Rater Reliability (IRR), which calculates the level of agreement between two or more annotators. More sophisticated metrics like Fleiss' Kappa (Fleiss et al., 2013) can be applied in this scenario, since the sample tweets were distributed randomly among the annotators and each annotator chose one of four mutually exclusive labels to indicate the severity of depression per tweet (Gwet, 2014; Leard Statistics, 2019). However, Fleiss' Kappa assumes that disagreement among annotators on the same sample reduces the reliability of the dataset; considering the subjective nature of the severity of depression perceived by different annotators, that might not be the case (Salminen et al., 2018). In spite of that, Fleiss' Kappa was calculated to obtain an understanding of the annotators' overall agreement in this study. The value of Fleiss' Kappa ranges from -1 (indicating no observed agreement) to +1 (indicating perfect agreement) (Leard Statistics, 2019). Here, a value less than 0.20 indicates poor agreement, 0.21 to 0.40 fair agreement, 0.41 to 0.60 moderate agreement, 0.61 to 0.80 substantial agreement, and 0.81 to 1 near-perfect agreement among the annotators. As reported in Table 4, the Fleiss' Kappa values for the non-depressed and severe classes show a moderate agreement among the annotators. This can be explained by the extreme nature of these two classes, as they tend to be polar opposites of each other. On the other hand, the fair agreement on the mild and moderate classes highlights the intricate relationship between these two classes and the difficulty of identifying the subtle cues that differentiate them, even for humans. However, despite the subjective nature of the severity of depression, an overall fair agreement provides an indication of the quality of the annotation, and of the dataset in general. A minimal sketch of the Fleiss' Kappa computation is given below.
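The paper reports per-class Fleiss' Kappa values without code; below is a small, self-contained sketch of the standard computation on a count matrix, with a toy example (the real study used three annotators per tweet and four labels):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (n_items, n_categories) matrix of label counts.

    counts[i, j] = number of annotators who assigned category j to item i;
    every row must sum to the same number of annotators n.
    """
    n_items = counts.shape[0]
    n = counts.sum(axis=1)[0]                       # annotators per item
    p_j = counts.sum(axis=0) / (n_items * n)        # category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# toy example: 5 tweets, 3 annotators each, 4 severity labels (0..3)
label_counts = np.array([[3, 0, 0, 0],
                         [2, 1, 0, 0],
                         [0, 2, 1, 0],
                         [0, 0, 3, 0],
                         [1, 0, 0, 2]])
print(round(fleiss_kappa(label_counts), 3))
```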
Experimental Design

The choice of baseline models and evaluation metrics for this study is discussed in this section.

Baseline Models Selection

One of the main challenges in language-related tasks comes from the use of homonyms and synonyms, as well as different kinds of ambiguity in sentences, such as lexical, semantic, and syntactic ambiguity. Another challenging task for a model is to extract context from domain-specific language. Empirical studies have shown that rule-based methods and traditional machine learning methods fail to overcome these complexities by understanding the inherent meaning of sentences (Kansara and Sawant, 2020; González-Carvajal and Garrido-Merchán, 2020). Multilingualism is another challenge for classic machine learning techniques (González-Carvajal and Garrido-Merchán, 2020): rules can be formed for a specific language, but the alphabet and even the sentence structure can differ from one language to another, requiring the development of new rules. Most of these shortcomings are alleviated by transformer-based architectures (Vaswani et al., 2017), which use an attention mechanism to capture bidirectional context and are also capable of handling larger datasets than traditional machine learning architectures. Considering these issues, a series of baseline models was chosen to evaluate the proposed dataset, namely Support Vector Machine (SVM) (Cortes and Vapnik, 1995), Bidirectional LSTM (BiLSTM) (Schuster and Paliwal, 1997), BERT (Devlin et al., 2019), and DistilBERT (Sanh et al., 2020). BiLSTM was selected as a widely used recurrent deep learning architecture, SVM as a classical machine learning model, and BERT and DistilBERT as two transformer-based models. While the word embeddings for the SVM and BiLSTM models depend on the practitioner's choice, both BERT and DistilBERT are pre-trained on a large amount of data from English Wikipedia (www.wikipedia.org) and the Toronto Book Corpus (Zhu et al., 2015). The pre-training is generic enough to be fine-tuned for downstream tasks such as sequence classification, named entity recognition, natural language inference, etc. The reasons for choosing these models can be summarized as follows:

• A diverse set of classifiers was chosen as baseline models to evaluate the validity of the dataset. SVM has already been used by De Choudhury et al. (2013) to create a probabilistic model predicting the severity of depression from tweets. BiLSTM is a sequence processing model that computes both a forward hidden sequence and a backward hidden sequence over the input. Due to its effective contextual understanding, BiLSTM has frequently been used as a baseline classifier (Moon et al., 2020).

• Implementing a system that can detect the severity of depression from social media texts on devices with limited computational power may be difficult due to the high parameter count of BERT (Base: 110 million). According to research on pre-trained models such as MegatronLM (Shoeybi et al., 2019), bigger models with several billions, if not trillions, of parameters usually yield superior performance on downstream tasks. However, the overall performance boost comes at the price of higher computational power and memory needs for both training and inference, rendering such models unsuitable for edge devices such as smartphones. To address this issue, Sanh et al. (2020) proposed DistilBERT, which has the same general architecture as BERT and is pre-trained on the same corpus.
By removing the token-type embeddings and the pooler from the BERT implementation, DistilBERT reduces the number of layers by a factor of two, because the hidden size dimensions have less influence on computational efficiency than the number of layers. DistilBERT is pre-trained through knowledge distillation, under the supervision of the larger model, using a triple loss function (a distillation loss, a masked language modelling loss, and a cosine embedding loss). DistilBERT maintains 97% of BERT's performance on downstream tasks with 40% fewer parameters, and it reduces BERT's inference time on downstream tasks by around 60%. The fundamental reason for this is the compression method of knowledge distillation, which enables a compact model to replicate the behavior of larger models, together with the components of the triple loss. Both BERT and DistilBERT rely on Auto-Encoding (AE) language modeling during pre-training, since the aim is to learn natural language representations. Although the general transformer architecture proposed by Vaswani et al. (2017) utilizes an encoder and a decoder network, BERT and DistilBERT, as pre-training models, use only the encoder to interpret the content of input sequences.

It is to be noted that all of the chosen baseline models are data-driven approaches. As a result, these models are unable to extract semantic information from a context that is not explicitly in the data, unlike humans, who can use their pre-existing knowledge to judge new contexts they have never encountered before (d'Avila Garcez et al., 2019; Cocarascu and Toni, 2018). One solution to this problem could be the use of symbolic approaches; unfortunately, these fall short due to scalability. Recent approaches combine symbolic and data-driven methods to solve this problem (Cocarascu and Toni, 2018; Faghihi et al., 2021; Schockaert and Gutiérrez-Basulto, 2022). However, we limit ourselves to data-driven approaches to keep the baseline models simple.

Classifier Configuration

The training procedure of the baseline classifiers is demonstrated below, followed by the training parameters of the experiment.

Support Vector Machine (SVM)

SVM tries to draw a hyperplane that best separates multi-dimensional data points into their potential classes and is ideal for binary classification (Cortes and Vapnik, 1995). For multi-class classification, a 'one-versus-one' approach with a Radial Basis Function (RBF) kernel was implemented. The values of the two crucial RBF-kernel parameters, C = 0.5 and gamma = 0.5, were chosen based on several iterations of experiments. The entire dataset was split into 80%/20% partitions for training and testing the model. Several text pre-processing techniques, such as stopword removal, bad-symbol removal, and text lower-casing, were applied to both the training and testing data. A minimal sketch of this configuration is given after the next paragraph.

Bidirectional LSTM (BiLSTM)

BiLSTM can preserve sequence information in both directions, backwards (right to left) and forwards (left to right). To train the model, a bidirectional layer of 64 units was added after the word-embedding layer generated from the training data. The overall architecture of the BiLSTM network is illustrated in Figure 3. The same text pre-processing techniques used for SVM were applied for BiLSTM as well. During the training phase, the hyperparameters for this experiment were fine-tuned using cross-validation, adopting 10% of the training samples as the validation set.
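As a concrete illustration of the SVM configuration described above, the following scikit-learn sketch uses the stated kernel and parameters; the TF-IDF text representation and the toy data are assumptions, since the paper leaves the choice of word representation open:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# toy stand-ins for the cleaned tweet texts and 4-class severity labels
texts = ["i feel hopeless every day", "what a lovely morning",
         "cannot focus on anything lately", "thinking about ending it all"]
labels = [1, 0, 2, 3]  # 0: non-depressed, 1: mild, 2: moderate, 3: severe

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42)

# RBF-kernel SVM with the paper's C and gamma, one-versus-one multi-class scheme
clf = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),
    SVC(kernel="rbf", C=0.5, gamma=0.5, decision_function_shape="ovo"),
)
clf.fit(X_train, y_train)
print(clf.predict(X_test))
```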
Fine-tuning BERT & DistilBERT

Fine-tuning the pre-trained model weights in a task-specific manner, with respect to the tweet texts and their annotated labels, is necessary to improve classification performance, considering that the models are pre-trained on data from various sources.

Input Representation

Before being fed into the pre-trained models for embedding, each tweet text was converted into an acceptable format. A single vector representing the entire input sentence must be passed to a classifier in order to complete the classification operation. BERT-based models use the WordPiece tokenizer (Wu et al., 2016), which splits the input sequence into full forms or word pieces. In the case of a full form, a word is represented by one token string, whereas for word pieces a word is represented by multiple token strings. Using word pieces helps the models identify related words, as they share similar token strings, which is crucial for context understanding. Some special token strings are generated during tokenization to indicate the task type, the beginning of the input sequence, masking, etc., e.g.:

• '[CLS]' is added at the beginning of each input sequence; its final hidden state serves as the aggregate sentence representation for classification.

• '[SEP]' refers to the end of one input sequence and the beginning of another.

• '[PAD]' is used to indicate the necessary padding.

The classifiers used in this study require the input sequences to be of the same length, i.e., each tweet text should have an equal number of tokens after conversion to token strings. Since a maximum token length of 128 is used, if a tweet contains fewer than 128 tokens, extra '[PAD]' tokens are added at the end of the token sequence. Both BERT and DistilBERT are pre-trained with 30K-token vocabularies, so some new input may appear during fine-tuning that is not present in the pre-trained vocabulary; in that case, the new input substring is replaced by the '[UNK]' token. Subsequently, the final input vector for the models is prepared by converting the token strings to integer token IDs.

Hyper-parameters Selection

Fine-tuning and evaluating the classifiers required the proposed dataset to be split into three sets: train, validation, and test. A randomly selected 60% of the tweets from each class was placed in the train set, and the remaining tweets were equally distributed between the validation and test sets. Base-uncased versions of the pre-trained models, with 768 hidden output states, were used for fine-tuning. A categorical cross-entropy loss function was used with the AdamW optimizer (Loshchilov and Hutter, 2019), which utilizes a fixed weight decay, unlike common implementations of the Adam optimizer (Kingma and Ba, 2015). The learning rate was set to 3 × 10^-5 with 20% of the steps designated as warm-up steps, so the training phase uses the first 20% of the steps to raise the learning rate from 0 to 3 × 10^-5; here, steps denote the total number of weight updates during the fine-tuning phase. Both models were fine-tuned in a supervised manner for 10 epochs with a training batch size of 16 on the proposed dataset to predict the severity of depression from tweets, and achieved good performance on all four classes. Figure 4 depicts the process of predicting the severity of depression from a sample tweet using the fine-tuned classifiers. A minimal sketch of the tokenization and optimizer setup follows.
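The released training code is not reproduced in the paper, so the following HuggingFace/PyTorch sketch only illustrates the input representation and hyper-parameter choices described above (max length 128, AdamW at 3e-5, 20% linear warm-up, batch size 16, 10 epochs); the step count is approximate arithmetic, not a value from the paper:

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          get_linear_schedule_with_warmup)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=4)  # four severity classes

# WordPiece tokenization with [PAD] up to the fixed length of 128 tokens
batch = tokenizer(["i feel hopeless every day"], padding="max_length",
                  truncation=True, max_length=128, return_tensors="pt")

# AdamW with fixed weight decay; 20% of all update steps are warm-up
epochs, batches_per_epoch = 10, 1508  # roughly 60% of 40191 tweets / batch size 16
total_steps = epochs * batches_per_epoch
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=int(0.2 * total_steps),
    num_training_steps=total_steps)

# one illustrative update step; cross-entropy loss is computed internally
outputs = model(**batch, labels=torch.tensor([3]))
outputs.loss.backward()
optimizer.step()
scheduler.step()
```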
To tackle the class-imbalance issue, the Receiver Operating Characteristic (ROC) curve and the area under the ROC curve (AUC-ROC) were used as evaluation measures in this work, so that models are evaluated based on how well they separate classes. The ROC curve is a diagnostic plot that calculates the False Positive Rate (FPR) and True Positive Rate (TPR) for a series of predictions made by the model at different thresholds, summarizing the model's ability to discriminate between classes. In the ROC graph, each probability threshold is represented by a point, and the points are linked to form a curve. A model with no discriminatory power between the classes is represented by the diagonal line from (FPR, TPR) = (0, 0) to (1, 1); points below this line reflect models worse than a no-skill model, while a flawless model is represented by a point in the plot's upper-left corner.

Results and Discussions

The performance of the baseline models on our dataset is discussed in this section, followed by the potential unintended biases of this study.

Classification Performance & Analysis

According to the results shown in Table 5, SVM and BiLSTM were outperformed by the two transformer-based models by a large margin. The transformer-based models used in this study can learn each word's context from the words that appear both before and after it, and they are also pre-trained on a large corpus. Since effective context understanding from the input representations is crucial to the task of severity detection from tweets, these models are likely to outperform traditional deep learning models such as LSTM and BiLSTM, or unidirectional transformer-based models such as OpenAI GPT (Radford et al., 2018), in which each token attends only to the preceding tokens in the transformer's self-attention layers. As BiLSTM can also learn word contexts in both directions, it achieved decent performance in some classes as well. It can also be observed that DistilBERT outperformed BERT in all classes. Since DistilBERT is pre-trained under the supervision of its parent model BERT through knowledge distillation, it is reported to preserve 95% of the performance of base-uncased BERT (Sanh et al., 2020), which diverges from the experimental results shown in this study. The experiments were conducted in a computationally limited environment with a comparatively small batch size and fine-tuning for only 10 epochs; it is likely that BERT would outperform DistilBERT if the models were fine-tuned for a higher number of iterations with further hyper-parameter tuning. As seen in Table 3, the proposed dataset mostly comprises samples from the 'non-depressed' class, yet both models also showed commendable performance in detecting the classes with relatively few samples. From the confusion matrices in Figure 5, it can also be noticed that both models performed better on the two terminal classes, 'non-depressed' and 'severe', than on the two closely related classes, 'mild' and 'moderate'. Upon careful observation, it was found that wrong predictions were mostly due to the models failing to comprehend the contextual meaning of the texts properly and instead generalizing based on specific keywords to predict the final label. For example, as shown in Table 6, in the few cases where the ground truth is 'non-depressed' but the predicted label is 'severe', and vice versa, most of the samples contain words related to suicide, depression, self-destruction, self-harm, etc. This leaves room for further improvement through error analysis. For the proposed dataset, ROC curves based on the test predictions of the baseline classifiers are presented in Figure 6; these plots are summarized by the AUC-ROC values in Table 5. The better performance of DistilBERT and BERT is also distinguishable from the class-wise AUC-ROC curves in Figure 6. A minimal sketch of the AUC-ROC computation is given below.
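The paper does not include its evaluation script; the following scikit-learn sketch shows one standard way to compute class-wise ROC points and AUC-ROC for a four-class problem, using a one-vs-rest treatment of each severity class (the toy probabilities are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.preprocessing import label_binarize

classes = [0, 1, 2, 3]  # non-depressed, mild, moderate, severe
y_true = np.array([0, 0, 1, 2, 3, 3])
y_prob = np.array([[0.7, 0.1, 0.1, 0.1],   # model's per-class probabilities
                   [0.5, 0.3, 0.1, 0.1],
                   [0.2, 0.5, 0.2, 0.1],
                   [0.1, 0.3, 0.4, 0.2],
                   [0.1, 0.1, 0.2, 0.6],
                   [0.2, 0.1, 0.1, 0.6]])

y_bin = label_binarize(y_true, classes=classes)
for c in classes:  # one ROC curve per severity class, as in Figure 6
    fpr, tpr, _ = roc_curve(y_bin[:, c], y_prob[:, c])
    print(f"class {c}: AUC = {roc_auc_score(y_bin[:, c], y_prob[:, c]):.3f}")

# macro-averaged one-vs-rest AUC over all four classes
print("macro OVR AUC:", roc_auc_score(y_true, y_prob, multi_class="ovr"))
```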
For the proposed dataset, ROC curves using the test predictions from the baseline classifiers are presented in Figure 6. These plots are summarized by calculating the area under the ROC curve (AUC-ROC) in Table 5. The better performance of DistilBERT and BERT is also distinguishable from the class-wise AUC-ROC curves in Figure 6. Figure 2 shows that the non-depressed and severe classes are more condensed towards complete agreement of the annotators. As these two classes lie at the two polarities and have distinguishable attributes, the annotators were likely to agree more on these two class labels while annotating. The main challenge was to differentiate between the other two classes, i.e., mild and moderate, owing to their inherent subtleties and congruent attributes. With the tweet corpus being in English, and considering the subtle attributes of the different severities of depression, the dataset would likely have achieved higher annotation quality if the annotation had been done by annotators with first-language proficiency in English. As the study requires a large pool of annotators and demands consistent supervision and interaction of the annotators with the collaborating psychologists, it limited the choice of recruiting only English-speaking annotators. This limitation was mitigated by recruiting annotators with excellent abilities in English, and pre-screening was done before the final pool of annotators was selected. Potential Unintended Bias Another challenge that appeared in a similar context for the annotators was avoiding individual bias while deciding the class labels. The source of the tweets and the nuances of their attributes complicated the annotation task and potentially introduced bias into the dataset. From manual inspection of the scraped tweet samples, it was observed that the majority of the samples were from the North American region, while all the annotators were from South Asia. This can introduce a clear cultural and geographic bias into the annotation procedure. Though the tweets were presented to the annotators in isolation, without the related information (i.e., tweet ID, retweets, location, etc.) and without the surrounding context of scraping the tweets, the collaborating psychologists suspected a bias in the annotation, as there is a clear cultural and expressional difference between the users and the annotators of the tweets. The annotators were reminded several times throughout the annotation process to avoid their personal bias and strictly follow the guidelines laid out by the psychologists, which included a document containing high-level descriptions of the attributes of the classes. This issue of systematic bias is common for large datasets, as addressed by Vidgen et al. (2019), especially for complex multi-class tasks of this kind. The data extension tool used for this study is WordNet, which was initially released in the mid-1980s. Though it has been updated over time, due to the continuous evolution of language, people today often use a vocabulary on social media that can differ significantly from the one that WordNet represents. Moreover, some of the semantic relations enlisted in WordNet are more suited to concrete notions than to abstract ones (Rudnicka et al., 2018).
For example, it is easy to create hyponym/hypernym relationships to illustrate that a "Pinaceae" is a type of "tree", a "tree" is a type of "plant", and a "plant" is a type of "organism", but it is difficult to classify emotions like "anxiety" or "delight" into equally deep and well-defined associations. Finding appropriate seed terms that best capture the depressive emotion of people on social media might be substantially hindered by these limitations. Conclusions and Future Work This work introduced a new typology for diagnosing depression severities from social media texts, as well as a unique dataset of labeled tweets with a confidence score for each label. The dataset was constructed based on strong ground truths and clinical validation, and it is expected to help alleviate the scarcity of mental health data to some extent. The description of the process and challenges in creating such a dataset may motivate researchers to collect similar corpora of this scale from other social media and discussion forums. The experimental results indicated that existing state-of-the-art models often fail to understand the contextual undertone of the data samples. Developing a model that is capable of comprehending the subdued relationships and differences among the depression severities can result in an even better understanding of human cognition. Moreover, analysis of the classification performance indicates that there is no distinct division of keywords among the different depression severities. The same keyword might be used to express different emotions; rather, it is more important to understand the context of the tweet to diagnose the severity of depression. Broader implications of this research may include personalizing and directing preventative and awareness messages by health professionals to the users in need. The seed terms for each symptom of PHQ-9 in this study were extended with WordNet (Miller, 1995); a minimal example of such an extension is sketched below. Considering the fast-evolving nature of language on social media, future studies can utilize more recent lexical databases with larger semantic networks to extend the seed terms. For example, the CMU Pronouncing Dictionary, the MRC Psycholinguistic Database (Coltheart, 1981), and The Verb Semantics Ontology Project (Fukushima, 1984) are other available lexical databases that can be used in seed-term extension. Additionally, authors can also develop their own domain-specific lexical database by vector proximity, using a domain-specific corpus as a starting point. These approaches can build a keyword list that better extracts depression-related symptoms posted on social media nowadays. The baseline classification results for the dataset were provided by fine-tuning two modern pre-trained models, namely BERT and DistilBERT. It is worth noting that several features in the dataset, such as replies_count and retweets_count, were not used during training, and no pre-processing was performed on the data. Therefore, more accurate classification might be achieved on this dataset by: (1) including a pre-processing technique to clean the data before training, (2) increasing trainable instances by augmentation to eliminate the class imbalance of the dataset, (3) utilizing other features of the dataset during training, (4) fine-tuning more robust pre-trained models, etc. Because the data was collected during the post-COVID-19 pandemic phase, careful examination of the dataset can provide valuable insight into the impact of the pandemic on people's mental health.
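Returning to the seed-term extension discussed above, a minimal sketch using NLTK's WordNet interface follows. NLTK as the access library is an assumption, and the seed word is illustrative, not one of the study's actual PHQ-9 seed terms.

# Minimal sketch of seed-term extension with WordNet via NLTK, in the
# spirit of the PHQ-9 keyword expansion discussed above.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

def expand_seed(term):
    related = set()
    for synset in wn.synsets(term):
        # synonyms from the same synset
        related.update(l.name().replace("_", " ") for l in synset.lemmas())
        # hyponyms: more specific related concepts
        for hypo in synset.hyponyms():
            related.update(l.name().replace("_", " ") for l in hypo.lemmas())
    return sorted(related)

print(expand_seed("hopelessness"))  # hypothetical seed word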
Moreover, the DEPTWEET dataset can be expanded by annotating the remaining 2510 data samples for which a class label could not be determined due to annotators' disagreement. Further work may also include refining the annotation task by including annotators from similar cultural and geographic contexts and exploring the unintended biases in the data and model.
Dynamic Predictive Models with Visualized Machine Learning for Assessing the Risk of Lung Metastasis in Kidney Cancer Patients Objective To establish and verify a clinical prediction model of lung metastasis in renal cancer patients. Method Kidney cancer patients from January 1, 2010, to December 31, 2017, in the SEER database were enrolled in this study. In the first section, the LASSO method was adopted to select variables. Independent influencing factors were identified after multivariate logistic regression analysis. In the second section, machine learning (ML) algorithms were implemented to establish models, and 10-fold cross-validation was used to train the models. Finally, receiver operating characteristic curves, probability density functions, and a clinical utility curve were applied to estimate model performance. The final model was presented as a website calculator. Result Lung metastasis was confirmed in 7.43% (3171 out of 42650) of the study population. In multivariate logistic regression, bone metastasis, brain metastasis, grade, liver metastasis, N stage, T stage, and tumor size were independent risk factors of lung metastasis in renal cancer patients. Primary site and sequence number were independent protective factors of LM in renal cancer patients. The above 9 impact factors were used to develop the prediction models, which included random forest (RF), naive Bayes classifier (NBC), decision tree (DT), XGBoost (XGB), gradient boosting machine (GBM), and logistic regression (LR). In 10-fold cross-validation, the average area under the curve (AUC) ranged from 0.907 to 0.934. In ROC curve analysis, AUC ranged from 0.879 to 0.922. We found that the XGB model performed best, and a Web-based calculator was built from the XGB model. Conclusion This study provided preliminary evidence that ML algorithms can be used to predict lung metastases in patients with kidney cancer. This low-cost, noninvasive, and easy-to-implement diagnostic method is useful for clinical work. Of course, this model still needs to undergo more real-world validation. Introduction Kidney cancer, accounting for 5% of all cancers, originates from the renal tubular and collecting tubular epithelial system [1]. The incidence has been gradually increasing in recent years, resulting in a huge medical burden. The prevalence rate in men is approximately twice that in women [1]. Additionally, obesity, diabetes, hypertension, smoking, kidney injury, and drugs are major risk factors for kidney cancer. The principal manifestations of kidney cancer are hematuria, renal pain, and mass [2,3]. In the early stage of the disease, the symptoms are not noticeable. As a result, by the time patients seek care, they may already be in a metastatic state of kidney cancer and suffering from the corresponding complications. The five-year survival rates of stage I and II were about 88% to 95%, and cancer-specific survival (CSS) rates were 84% to 95% [4]. Renal cell carcinoma (RCC), making up 90% of kidney cancers, was the sixth and eighth most common cancer among American men and women, respectively, in 2021 [5]. RCC is mainly composed of clear-cell RCC, papillary RCC, and chromophobe RCC [6,7]. Clear-cell RCC, accounting for about 70% of RCC, is invasive and has a poor prognosis. The survival time of clear-cell RCC ranges from 3 months to 5 years, and 60% of these patients die within 1 to 2 years after diagnosis [5, 8-11]. Metastasis from kidney cancer is not rare.
High vascularization can lead to local progression and increase the chance of distant spread [6]. There have been relevant studies on the occurrence, development, and metastasis of kidney cancer. Hypoxia-inducible factor (HIF), epithelial-mesenchymal transition (EMT), and so on are important molecular events [6,11]. Nishida et al. indicated that amplification of cancer-cell-intrinsic inflammation can trigger neutrophil-dependent lung metastasis during RCC progression [8]. Lung and bone are common metastatic sites of kidney cancer [12]. At the time of initial diagnosis, 18%-40% of patients have already developed systemic metastases. In addition, metastasis is widespread in long-term follow-up after nephrectomy [4,7,9,13,14]. The study of Jianxin Xue and colleagues reported that 2931 of 33449 RCC patients had distant metastasis, and that the lung (6.19%) was the most common site of metastasis [7]. Pulmonary metastases are multiple nodules with bilateral distribution or solitary masses. The lower lobes of the lung are common sites. Immune checkpoint inhibitors (ICI), anti-programmed death-1 (PD-1) antibody, and anti-cytotoxic T lymphocyte-associated antigen 4 (CTLA-4) antibody are accepted treatments for metastatic RCC [15]. However, the survival rate of metastatic kidney cancer is only about 20% [6]. Clinical models of kidney cancer have been established, but the main focus has been on predicting prognosis. The UCLA (University of California, Los Angeles) integrated staging system (UISS) and the risk model of the International Metastatic RCC Database Consortium (IMDC) are examples [16]. Machine learning is a subfield of artificial intelligence. It has many applications in kidney cancer, such as identifying pathological variants, grading judgments, and differentiating benign from malignant renal tumors [17]. At present, there are few reports of machine learning models to predict lung metastasis of kidney cancer. In this study, we collected data from the SEER database to establish models. After checking model performance, a Web calculator was constructed to assist clinicians in predicting lung metastasis from kidney cancer. Patients' Populations. Patients with kidney cancer from January 1, 2010, to December 31, 2017, in the SEER database were enrolled in this study. The inclusion criteria were as follows: (1) patients definitely diagnosed with primary kidney cancer while alive, with ICD-O (International Classification of Diseases for Oncology) codes 8120/3, 8130/3, 8260/3, 8310/3, 8312/3, and 8317/3; (2) histological subtypes of kidney cancer were clear cell RCC, papillary, chromophobe, and any others. The exclusion criteria were as follows: (1) patients younger than 18; (2) patients with other primary tumors at diagnosis; and (3) incomplete clinicopathological results. Data Collection. Marital status, age, race, sequence number, survival time, status, sex, primary site, grade, laterality, pathology, T stage, N stage, tumor size, bone metastasis, brain metastasis, liver metastasis, and lung metastasis were collected retrospectively. Data were extracted from the SEER database with the help of SEER*Stat software (version 8.3.5). The extraction was carried out by two independent data collectors. If there was any disagreement, a third collector was brought in to assist with the final decision. Statistical Methods. Means were used to describe continuous variables following a normal distribution. Counts and proportions were used to describe categorical variables.
We conducted comparisons between groups using chi-squared tests, t-tests, and logistic regression analysis. Variables with nonzero coefficients in the least absolute shrinkage and selection operator (LASSO) analysis were chosen for further analysis. Variables with p < 0.05 in univariate logistic regression analysis were put into multivariate logistic regression analysis. Independent risk factors were determined after multivariate logistic regression analysis. ML algorithms, such as RF, NBC, DT, XGB, GBM, and LR, were implemented to establish models. We ranked the importance of the variables for each model. XGB is an ensemble algorithm based on boosting. It is a typical ensemble of CART trees and an improvement of gradient tree boosting. Its regularized objective can be written as L(φ) = Σᵢ l(ŷᵢ, yᵢ) + Σₖ Ω(fₖ). Here, l is a differentiable convex loss function that measures the difference between the prediction ŷᵢ and the target yᵢ. The second term Ω penalizes the complexity of the model. The probabilistic outputs were evaluated using the receiver operating characteristic (ROC) curve. 10-fold cross-validation and ROC curve analysis were conducted to evaluate the performance of the models. The maximum AUC was the basis for determining the best model. A heatmap showed the correlation between the variables in the models. The number in each grid of the heatmap represented the correlation coefficient, and the color depth reflected the strength of the correlation between variables. According to the results of the best model, a Web calculator was established. Basic Characteristics. A total of 42650 kidney cancer patients from the SEER database were enrolled in this study. In total, 25058 patients (58.75%) were married, and the median age was 64.0 [55.0, 73.0]. Marital status, race, primary site, grade, laterality, pathology, T stage, N stage, bone metastasis, and liver metastasis were variables with statistically significant differences (p < 0.05). White males were the main population. As shown in Table 1, there were 3171 kidney cancer patients with lung metastasis and 39479 kidney cancer patients without lung metastasis. Comparing the data of these two groups, we found that the differences in all variables were statistically significant (p < 0.05). As shown in Figure 1, nine variables with nonzero coefficients in the LASSO analysis were selected for logistic regression. As shown in Table 2, bone metastasis, brain metastasis, grade, liver metastasis, N stage, primary site, sequence number, T stage, and tumor size were factors with p < 0.05 in univariate logistic regression analysis. After multivariate regression analysis, we identified bone metastasis (yes, OR = 4.83, 95% CI = 4.27-5.46, p < 0.001), brain metastasis (yes, OR = 8.41, 95% CI = 6.72-10.51, p < 0.001; unknown, OR = 6.13, p < 0.001), grade, liver metastasis, N stage, T stage, and tumor size (OR = 1.01, 95% CI = 1.00-1.01, p < 0.001) as independent risk factors of LM in renal cancer patients. Furthermore, we found that primary site (C65.9-Renal pelvis, OR = 0.38, 95% CI = 0.3-0.49, p < 0.001) and sequence number (more, OR = 0.62, 95% CI = 0.56-0.69, p < 0.001) were independent protective factors. As shown in Figure 2, each grid in the heatmap visually shows the correlation coefficient between each pair of variables with color depth. Development and Validation of Predictive Models. For developing the ML models, nine independent predictors with p < 0.05 in the multivariate regression analysis were used for model establishment, and lung metastasis status was included as the outcome index in the models.
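A minimal sketch of the model-building and 10-fold cross-validated AUC evaluation described above follows, using the scikit-learn API of XGBoost. The feature matrix and labels are random placeholders standing in for the nine SEER-derived predictors and the lung-metastasis outcome; this is not the authors' actual pipeline.

# Minimal sketch: 10-fold cross-validated AUC for an XGBoost classifier.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

X = np.random.rand(1000, 9)                  # stand-in for the nine predictors
y = np.random.binomial(1, 0.074, size=1000)  # ~7.4% positive rate, as reported

aucs = []
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    model = XGBClassifier(eval_metric="logloss")
    model.fit(X[train_idx], y[train_idx])
    prob = model.predict_proba(X[test_idx])[:, 1]  # probability of LM
    aucs.append(roc_auc_score(y[test_idx], prob))

print(f"mean AUC = {np.mean(aucs):.3f} (std = {np.std(aucs):.3f})")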
Figure 3 shows the relative importance ranking of each input variable in the models. The ranking of variables differed considerably between models. Bone metastasis and T stage were variables with relatively high importance rankings in all models, whereas primary site and sequence number ranked relatively low in all models. For the XGB model, the relative importance of all variables from high to low was bone metastasis, tumor size, T stage, N stage, grade, liver metastasis, brain metastasis, primary site, and sequence number. We applied ML algorithms such as RF, NBC, DT, XGB, GBM, and LR to establish models. The results of 10-fold cross-validation (Figure 4) show that the average AUC of all models was above 0.9, and all six ML models fitted well over the course of the ten iterations. The XGB model's average AUC was 0.934 (SD = 0.001). As a result, the XGB model was selected as the final prediction model. Web-Based Calculator. To facilitate clinical application, a Web-based calculator was established on the basis of the XGB model (https://share.streamlit.io/liuwencai4/renal_lung/main/renal_lung.py). As shown in Figure 5, users can input values for each variable by clicking and selecting; the risk group for LM and the probability of LM in renal cancer are then shown. Discussion The lung is the most common metastatic site of kidney cancer [7]. Early diagnosis of metastasis can improve the feasibility of surgery and increase the chance of survival. The profile of kidney cancer patients is complex and involves multidisciplinary treatment issues. Artificial intelligence can be well applied in this field because of its powerful information extraction and processing ability [16]. Therefore, this study aimed to develop a highly accurate model capable of predicting lung metastasis from kidney cancer. We identified nine influencing factors, including bone metastasis, brain metastasis, grade, liver metastasis, T stage, N stage, primary site, sequence number, and tumor size. In addition, 10-fold cross-validation was adopted to check the performance of the models. Finally, the model with the highest accuracy is presented as a Web calculator for application. Our study found that organ metastases were important influencing factors. Many patients develop multiple organ metastases. In the study of Wei Xi, metastases at two or more sites accounted for 33% [18]. Jianxin Xue's study also found that 8.76% of patients with clear-cell RCC had distant metastases at the time of diagnosis, and 35.01% (1026/2931) of metastatic patients had multiple metastases [7]. This finding is consistent with the results of the present study. Furthermore, organ metastases as predictors have also been reported in previous studies. Shengtao Dong et al. constructed a bone metastasis risk prediction model based on brain metastasis, liver metastasis, and lung metastasis as predictors [19]. Bone metastasis, liver metastasis, and brain metastasis were strong predictors in the models of our study. As shown in Figure 3, important factors in constructing the XGB, RF, and NBC models to predict lung metastasis from kidney cancer were prioritized. Variables including T stage, N stage, and pathological grade were associated with the development of LM in renal cell carcinoma [20]. These risk factors were also important in other distant metastases of kidney cancer [1,7]. This highlights the significance of stage and grade in predicting renal cell carcinoma organ metastasis.
In addition, N stage and T stage have been used not only to predict kidney cancer metastasis but also as important parameters in prognostic models. For example, the University of California School of Medicine used stage to predict five-year survival in metastatic and nonmetastatic patients [21]. Tumor size was an independent predictor of overall survival [4]. The pseudocapsule (PS) in kidney cancer is the fibrous interface between the tumor and the renal parenchyma [22]. There is a richer blood supply system around the PS. As the PS goes from being infiltrated to being penetrated, the incidence of venous tumor thrombus (VTT) and microvascular invasion (MVI) increases [23]. Thus, further distant metastasis occurs. The probability of distant metastasis may increase as the primary lesion expands, owing to an increase in PS surface area. Our study revealed that primary site was a protective factor, and that renal pelvic cancer was less likely to metastasize to the lung than renal cancer. Whereas renal cancer originates from the epithelium of the proximal tubules, renal pelvic cancer originates from the urothelium. It is more likely to be diagnosed and treated at an early stage because of the high incidence of hematuria. Vascularization is an important condition for tumor growth, invasion, and metastasis [24]. The blood supply of renal pelvic cancer may be less than that of renal cancer [25]. Sequence number was another independent protective factor for LM in kidney cancer. We found that patients with >1 primary tumor were less likely to develop lung spread. One of our guesses was that patients with multiple tumors may have insufficient time to form LM because of poor prognosis. Another explanation was that more symptoms could promote early diagnosis and medical treatment. The exact mechanism needs to be further explored. Few studies have been performed to predict LM in patients with renal cell carcinoma. Although some studies have reported biomarkers other than the above predictors for LM prediction [26], few of these markers have been applied. Previously, Xinyu Sheng's team at Zhejiang University predicted LM in kidney cancer patients based on patient data from the SEER database, developing a nomogram and a model constructed from TNM stage with AUCs of 0.780 and 0.618, respectively; that study was not externally validated [20,27]. Although the AUCs of those models in the training set were greater than 0.50, there is still room for improvement. In contrast, the AUCs of the six models constructed in this study based on machine learning are all above 0.9, which reflects the good robustness of the models. We expect that the network calculator constructed with the XGB model in this study can be applied or tested in the future. This study also had some limitations. First of all, indicators including metastasis sites and some serological data in the SEER database are not comprehensive [7,12]. Secondly, further multicenter verification is needed in the future. Conclusion This study provided preliminary evidence that ML algorithms can be used to predict lung metastases in patients with kidney cancer. The prediction model cannot, however, specify the genetic characteristics of these patients. Nevertheless, this low-cost, noninvasive, and easy-to-implement diagnostic method is useful for clinical work. Of course, this model still needs to undergo more real-world validation.
Data Availability The data used in this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest. Authors' Contributions Wenle Li, Qian Zhou, Wencai Liu, and Shengtao Dong contributed equally to this work. CLY, JFC, and YLJ designed the study. WLL and ZQ collected and evaluated the data and wrote the first draft of the manuscript. All authors contributed to the interpretation of the results and the final draft of the manuscript.
APPLICATION OF TEMPORAL CONVOLUTIONAL NEURAL NETWORK FOR THE CLASSIFICATION OF CROPS ON SENTINEL-2 TIME SERIES The recent development of Earth observation systems like the Copernicus Sentinels has provided access to satellite data with high spatial and temporal resolution. This is a key component for the accurate monitoring of the state of, and changes in, land use and land cover. In this research, crop classification was performed by implementing two deep neural networks on structured data. Despite the wide availability of optical satellite imagery, such as Landsat and Sentinel-2, the limited amount of high-quality tagged data makes the training of machine learning methods very difficult. For this purpose, we created and labeled a dataset of the crops in Slovenia for the year 2017. With the selected methods we are able to correctly classify 87% of all crops. Similar studies have been carried out in the past, but are limited to smaller regions or a smaller number of crop types. INTRODUCTION In the presented work we focus on the classification of crops, a common task with satellite data. This has previously been done with methods of varying complexity, such as traditional supervised classification methods, random forests (Breiman, 2001), support vector machines (Raj, SivaSathya, 2014) and recurrent neural networks (Rußwurm, Körner, 2018). But when dealing with temporal data, traditional approaches cannot take full advantage of such structured data, because the order of the data has no effect on the model and thus time is not considered as a separate feature. Deep learning offers a variety of approaches to resolve such tasks. In our work we investigate two deep neural network architectures for the classification of crops. Progress has already been made by several authors in the past, who have used the segmentation of satellite images with recurrent neural networks (Rußwurm, Körner, 2018), which are capable of processing temporal data. With such an approach it is not necessary to pre-process the data; the model, for example, learns to mask the clouds by training and optimizing the weights. However, such approaches are not without shortcomings. They have many parameters, and each state depends on the previous one, which increases the learning time and requires very large amounts of training data. There have also been advances in architectures (Bai et al., 2018) that are able to deal with temporal information more efficiently. In this case, one of the main problems with deep learning remains: the need for very large amounts of well-annotated data. Given the scale of these problems, we limited ourselves to preparing the data, analysing and implementing selected architectures, and comparing the results. The reference data used in this study covers Slovenian crops in the year 2017, shown in Figure 1. The dominant class in the area is meadows, followed by maize. The region marked in red is dominated by vineyards, with meadows only further away. SATELLITE DATA This research focuses on the use of Sentinel-2 data, which is openly accessible within the Copernicus program. Sentinel-2A and B together cover every area on Earth at least every 5 days in 13 bands. This high temporal resolution makes it possible to track seasonal trends, such as crop development, well. The most commonly used bands for vegetation mapping are the visual bands (2, 3, 4) and the near-infrared band (8).
These bands are also the only ones available at 10 m resolution; the others are acquired at 20 and 60 m and were re-sampled to 10 m resolution. With all the raw bands at the same resolution we reduce the complexity of the further processing steps. We divided the area of Slovenia into squares of 1000 x 1000 pixels (i.e. 10 x 10 km), so they can also be processed on a PC or laptop for simple analysis. In total approximately 300 patches were generated. The patches are visualised in Figure 2: yellow patches were used in training, and the data coloured in green was used for testing. The remaining patches were discarded as they contained little to no crops. We separated the data spatially to ensure that the results were spatially generalised. Data was downloaded from Sentinel Hub using the sentinelhub-py (Sinergise EO Research team et al., 2017) Python library, and the study period was limited to the months from January to September of 2017, as these are the months when the changes in agricultural land are most visible. In subsequent months, in some areas winter crops for the next year are already being prepared. All data was pre-processed using the eo-learn (Sinergise EO Research team et al., 2018) Python library to remove cloudy observations and construct indices that have been used to classify crops in related work (Pelletier et al., 2019). All values are normalised using min-max normalisation as suggested in (Pelletier et al., 2019). This normalisation subtracts the minimum value of each band and then divides by the range between the minimum and the maximum. As this normalisation is highly sensitive to extreme values, they further propose to use the 2% and 98% percentiles rather than the minimum and maximum values. This retains the temporal profile of the observed classes, as shown in Figure 3, and keeps all values within [-1,1]. After removing the clouds we are left with missing values in the time series; the gaps are most frequently weeks long but can in some cases extend to a few months. Using linear interpolation we fill the gaps and provide a common time interval for the satellite data. This interpolation is very fast in comparison to alternatives, is not computationally expensive, and still retains enough information (Valero et al., 2016). But in the case of larger gaps caused by clouds we then only have an average value between the measurements. This poses an issue when analysing seasonal trends of crops in cloudier regions. It could be mitigated by smoothing, but that comes with its own challenges. The entire processing pipeline consists of four steps. First we erode the polygons to remove the effect of edge values; we used a buffer of 7 m, which excludes pixels that could potentially include other bordering classes or neighbouring fields. Second, we transform the polygons into a matrix that corresponds to the size of the observed area. Third, we randomly sample the pixels of each patch. Alternatively, weighted sampling could be used to attain an equal distribution of all classes; we chose random sampling to better capture the data distribution and tackle the class imbalance at the training phase. Finally, the selected pixels were interpolated to a 5-day interval, matching the Sentinel-2 revisit interval. A higher interpolation frequency could provide a more detailed trend and lose less information, but the downside would be increased complexity both for data storage and computational power.
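A minimal sketch of the gap-filling step described above follows, assuming pandas for the time-indexed interpolation; the dates and band values are illustrative placeholders for a single pixel and band.

# Minimal sketch: cloud-masked observations of one pixel/band are linearly
# interpolated onto a regular 5-day grid.
import pandas as pd

# Irregular, cloud-free acquisition dates and (normalised) band values
dates = pd.to_datetime(["2017-01-05", "2017-01-20", "2017-03-01", "2017-03-16"])
values = pd.Series([0.12, 0.18, 0.45, 0.52], index=dates)

# Regular 5-day grid matching the Sentinel-2 revisit interval
grid = pd.date_range("2017-01-05", "2017-03-16", freq="5D")

# Union the grid with the observations, interpolate in time, then sample
filled = values.reindex(values.index.union(grid)).interpolate(method="time")
print(filled.reindex(grid))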
With more time between observations, we risk missing sudden changes, such as sowing, that would indicate the ripeness of crops and their harvest. REFERENCE DATA The reference data was extracted from the database used for agricultural subsidies, collected and managed by the Slovenian Agency for Agricultural Markets and Rural Development. Access to the data was granted within the project Perceptive Sentinel funded by the EU. The provided data consisted of 200 crop types; the classification is very detailed, and for most classes several sub-species of crops are listed. Separating crops into such detailed groups is not always possible based on satellite imagery alone. Most groups also contained very few samples, so joining them provided larger classes that are better represented. For the purposes of this study, crops were aggregated into 25 taxonomically similar groups. In Figure 1 we can see the coverage of the final crop classes in Slovenia. Some classes, such as hops and vineyards, are present only in certain regions, which affects both training and results. When a class is not present in training, the model will predict that class at random and with low probability. The case of a class missing in the testing set has to be handled separately: whenever that class is predicted, the prediction is wrong, which can negatively affect the measured performance of the model. Figure 4 shows the distribution of crop classes in the data, including the corresponding colours and names. Some classes were discarded as they represented less than 0.4% of crops in Slovenia; the remaining groups are those shown in Figure 4. Related studies, including the (Pelletier et al., 2019) and (Rußwurm, Körner, 2018) approaches, have been limited to smaller regions and/or polygon counts, as presented in Table 1, where we compare the area and number of polygons of each study. Further comparison to related work was not possible, as in both the RNN (Rußwurm, Körner, 2018) and TempCNN (Pelletier et al., 2019) studies the reference data was provided by local agencies, which made the data available only for those specific studies and not for sharing. Some differences are expected, as Slovenia has smaller fields and consequently most pixels are on the edge, so we expect the data to contain more noise. METHODS In the first step, we used an algorithm similar to a random forest (Breiman, 2001), since it has achieved good results in various classification tasks. The input of the training algorithm is a vector that includes the spectral bands and indices for each observed point. In the case of temporal information the vector size increases to indices × temporalSteps, and the temporal structure of the point is lost. We used a gradient boosting framework that uses tree-based learning algorithms. It differs from random forest algorithms in the construction of the trees: in every iteration we construct a new tree that minimises the error of the previous ones. Specifically, we chose LightGBM (Ke et al., 2017), as it is faster, more efficient, and simpler to use than most similar implementations; a minimal sketch of this flattened-vector baseline is given below. Its major advantages are that it needs less RAM, can be sped up using a GPU, and offers many parameters that can be fine-tuned to achieve the desired performance. We compared it with two convolutional neural networks that are capable of processing temporal data. TempCNN was recently proposed and tested for the classification of crops in South West France (Pelletier et al., 2019).
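The sketch below illustrates the flattened-vector baseline mentioned above: each pixel's time series is reshaped into a single vector of length indices × temporalSteps and passed to LightGBM. All data are random placeholders, and the hyper-parameters are defaults, not the study's settings.

# Minimal sketch of the gradient-boosting baseline on flattened time series.
import numpy as np
import lightgbm as lgb

n_pixels, n_steps, n_features = 5000, 55, 6   # e.g. 4 bands + 2 indices
X = np.random.rand(n_pixels, n_steps, n_features)
y = np.random.randint(0, 25, size=n_pixels)   # 25 crop classes

# Flattening discards the temporal structure of each pixel
X_flat = X.reshape(n_pixels, n_steps * n_features)

clf = lgb.LGBMClassifier(objective="multiclass")
clf.fit(X_flat[:4000], y[:4000])
print(clf.predict(X_flat[4000:4005]))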
As TempCNN had outperformed random forests, we expected it also to outperform gradient boosting methods, since these do not retain the temporal structure. The TempCNN architecture shown in Figure 5 consists of three convolutional layers, which are used to join the temporal information, followed by a fully connected layer that, based on the condensed information provided by the previous layers, predicts the probability of the input belonging to the specified classes. The TCN (Bai et al., 2018), in comparison, was proposed as an alternative to RNNs when working with temporal data, and this approach had not yet been tested on satellite imagery. Its main advantage over RNNs is the computational power and memory needed: states do not depend on the previous ones, as is the case with RNNs, which makes backpropagation faster and learning more memory efficient. The method in some cases outperforms RNNs, especially when a longer history is needed. The architecture is made entirely of convolutional layers, which are well optimised to run on GPUs. An example of such an architecture is shown in Figure 6, with the blue lines showing the captured information of each filter and layer. The architecture used in this task has two more convolutional layers, which can be interpreted similarly; the additional layers are required so that the entire input vector is covered. One of the key differences from the previous approach is the dilation in each layer. In each layer the filter uses a bigger dilation, which grows exponentially with the depth of the network and effectively expands the receptive field of the network. In Figure 6 we can also see that by using dilation we only overlap on neighbouring values. With these changes the networks are more efficient and we can have a large effective history without requiring a lot of memory or computational power during training, as the same filters are applied throughout the entire layer and can be run in parallel. EVALUATION To evaluate the performance of each approach we first divide the data into smaller parts which represent the dataset. In the case of multi-class classification we have to make sure all classes are present in both the training and testing datasets. With this we have a supervised learning problem, as we have classes corresponding to all input sequences. Throughout the training process the method adapts the network weights to map the input values to the desired classes on the output. Many different metrics are available to assess the performance of the methods. Results can be displayed in a confusion matrix. In the case of binary classification the table has two rows and two columns, which can be expanded to include more classes with additional columns and rows, one for each class. In all cases the columns contain the classes predicted by the models and the rows represent the reference class to which each example belongs. Most commonly, accuracy is used, which represents the percentage of correctly classified samples (true positives) against all samples. Recall measures how many samples of a class were correctly classified as belonging to that class (TP), divided by all samples of the class in the data (TP + false negatives). A metric that combines both is F1, which offers a single value representing the two. As we have multiple classes, we measure all the metrics per class. Usually during training we monitor overall accuracy, which is accuracy weighted by the number of samples. It is most informative when all classes are equally represented, which is not always true in real-life examples.
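To illustrate the difference between sample-weighted and class-balanced (macro) evaluation discussed above, a small scikit-learn sketch with synthetic labels:

# Overall accuracy is flattered by the dominant class; the macro view is not.
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score

y_true = np.array([0]*60 + [1]*20 + [2]*20)   # imbalanced: class 0 dominates
y_pred = np.array([0]*60 + [0]*20 + [2]*20)   # class 1 is always misclassified

print(accuracy_score(y_true, y_pred))            # 0.80, sample-weighted
print(balanced_accuracy_score(y_true, y_pred))   # ~0.67, per-class (macro) view
print(f1_score(y_true, y_pred, average=None, zero_division=0))  # per-class F1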
In our case, the models quickly learned to classify meadows and achieved over 70% accuracy but performed poorly on the other classes. We therefore weighted all classes equally during the training of the models and monitored the macro accuracy. RESULTS The class distribution in Slovenia is shown in Figure 4. The landscape is dominated by meadows, which account for 60% of the data. In some regions there are very specific groups of crops, such as hops and vineyards. Based on the class distribution, we could achieve an overall accuracy of 60% by predicting the class meadows for all pixels, so in Table 2 we focus on per-class accuracy. In general, the results are comparable for most classes. The average F1 score is between 51% and 53%. All methods have high success in classifying meadows, maize and winter cereals. Difficulties occur in classifying grassland, vegetables, summer cereals, potatoes and orchards. This is probably due to the overlapping of the temporal patterns of these classes: meadows are similar to grassland and to leafy legumes and/or grass mixture, and the vegetables class contains many different vegetable types, which seems to result in lower performance. Even with some classes having a low F1 score, we still achieve a high weighted average of 87%, as the data distribution is in favour of meadows. With the difficulty lying mainly in classes with fewer samples, the overall performance is promising. As can be seen in Table 2, the neural networks outperform LightGBM, but only by one to two percent in F1 score. LightGBM surpasses both other methods in the classification of summer cereals. The two neural networks achieve similar results; differences are visible in hops classification, where TempCNN achieves a lower accuracy but higher recall, which is more important because we want our predictions to be correct more often. The reason for the lower F1 result could be that the TCN has four times fewer parameters. The weighted average at the bottom of Table 2 represents the accuracy for all crops based on the number of samples. Both neural networks correctly classify 87% of all pixels. Since the test and training data were spatially separated, we assume that the score represents the model's ability to generalise. The models could be fine-tuned with appropriate data for each region or year, and we expect that they could achieve similar scores in countries with similar geography and for the same crop types. With high-quality reference data, the networks can achieve good performance on well-represented crops. Problems occur when we have similar classes, or when a mixed reference is provided, as was the case for vegetables.

Table 2. Per-class accuracy, recall and F1 (%) for the three methods:

Class                                LightGBM        TempCNN         TCN
                                     Acc Rec F1      Acc Rec F1      Acc Rec F1
Meadows                              95  71  81      98  87  93      97  89  93
Hop                                  30  87  44      82  92  87      87  58  70
Grassland                             5  28   8       2  54   5       0   0   0
Winter rape                          82  87  84      75  98  85      84  93  88
Maize                                95  87  91      95  90  92      93  90  92
Winter cereals                       92  85  89      93  91  92      93  88  90
Leafy legumes and/or grass mixture   23  41  30      27  63  38      22  57  32
Pumpkins                             64  73  68      54  89  68      73  65  69
Summer cereals                       18  54  27      15  52  23       7  56  12
Vegetables                            3  54  27       5   7   6       8   6   7
Potatoes                              8  55  14      39  40  39      37  17  24
Vineyards                            47  67  55      21  94  34      51  70

In Figure 7 a visualisation of a reference area and the corresponding prediction of TempCNN is shown. TempCNN achieves higher recall, which means it is more frequently correct. The method correctly predicts the majority of the classes. Issues are most common on the edges of the fields, but not exclusively. Meadows are in some cases predicted as pumpkins or orchards.
This could stem from the definition of the classes, as orchards commonly have some space in between filled by meadows, while pumpkins grow more in width and can be overshadowed by tall grass, which in turn causes confusion between the classes. As we know where the polygons are, we could achieve a better classification visualisation by taking the most frequently predicted class per polygon. More intriguing would be to expand the method to include the additional spatial information that is available in satellite imagery. This would most certainly reduce the confusion within fields, as it is uncommon for a single observation to belong to a different class than its neighbours, which is often the case in the observed area. CONCLUSIONS Machine learning and remote sensing data are becoming more and more widely accessible and are thus gaining importance in many applications. Several machine learning algorithms have been used in the remote sensing community for decades, but only recently has the availability of dense high-resolution satellite image time series enabled the application of more advanced methods. In this paper we used Sentinel-2 data for the classification of crops in Slovenia for the growth year 2017. We compared three approaches: the baseline LightGBM and two deep learning approaches to handling temporal data. Both TempCNN and TCN achieved comparable classification results. TempCNN has been proven to work well by us and by (Pelletier et al., 2019), while the evaluated TCN architecture offers an alternative when less data, computing power or time is available. Both methods achieve a 52%-53% F1 score for the selected crop types and would perform equally well when presented with well-annotated data. For future work, both methods could be extended to the use of spatial information (context). These models would potentially be more robust and would remove noise in individual polygons, i.e. fields. As both models achieve similar performance, TCN would be more suited due to its lower computational cost: it has fewer parameters, which increase drastically with the inclusion of another dimension in the data. Clouds still pose a major challenge in the classification of land use and land cover, and radar images could provide additional information for periods and areas with high cloud cover. Deep learning offers various ways of multi-sensor merging, each having its advantages and drawbacks.
Reduced Tillage Impacts on Pumpkin Yield, Weed Pressure, Soil Moisture, and Soil Erosion Conservation tillage has the potential to decrease the environmental footprint of pumpkin production, but possible trade-offs with yield are not well understood. This study experimentally tested the effects of three cultivation techniques (conventional-till, strip-till, and no-till) on pumpkin production, weed pressure, soil moisture, and soil erosion. Randomized complete block field experiments were conducted on Cucurbita pepo L. 'Gladiator' pumpkins in 2014 and 2015. Overall yields were higher in 2015, averaging 45.2 t·ha⁻¹, compared with 37.4 t·ha⁻¹ in 2014. In 2014, pumpkin yields were similar across tillage treatments. In 2015, the average fruit weight of no-till pumpkins was significantly greater than that of strip-till and conventional-till pumpkins, which corresponded to a marginally significant 13% and 22% yield increase, respectively (P = 0.11). Weed control was variable between years, especially in the strip-till treatment. Soil moisture was consistently highest in the no-till treatment in both years of study. Conventional-till pumpkin plots lost approximately 9 times more soil than the two conservation-tilled treatments during simulated storm events. The 2015 yield advantage of no-till pumpkins seems related to both high soil moisture retention and weed control. The research results suggest that no-till and strip-till pumpkin production systems yield at least as well as conventional-till systems, with the advantage of reducing soil erosion during extreme rains. Over 35,000 ha of pumpkins (Cucurbita pepo L.) are grown each year in the United States, resulting in farm receipts of about $170 million (Lucier and Dettmann, 2007; USDA-NASS, 2012). Jack-o-lantern pumpkins represent a major portion of the industry and are typically grown regionally due to the high cost of shipping (Lucier and Dettmann, 2007). As such, they are grown under a wide variety of production scenarios, with states as different as California, Texas, and New York among the top 10 producers of fresh-market pumpkins (USDA-NASS, 2012). Given the diversity of locations where pumpkins are produced in the United States, it is useful to develop production technologies that can be integrated into different systems and that appeal to general consumer demand. Increasingly, consumers are demanding products with reduced environmental impact. Consumers prefer to purchase produce that has low environmental impact if it does not increase price (Dimitri and Greene, 2002). Meanwhile, climate scientists predict increased average temperatures and frequency of extreme weather events like drought and flood, which can significantly disrupt production systems if growers do not adapt (IPCC, 2013; Walsh et al., 2014). Already across the United States, average temperatures increased significantly during the 20th century, and the number of extreme precipitation events during the period 1957-2010 also increased, especially in the Midwest, southeast, and northeast (Kunkel et al., 2013; Walthall et al., 2012). Adopting conservation tillage systems may be a means of meeting consumer demand for more sustainable products while adapting to climate change (Lal, 2007). No-till is one form of conservation tillage that has been widely adopted in agronomic cropping systems to reduce soil erosion and improve sustainability (Derpsch et al., 2010).
In contrast, no-till vegetable production is relatively rare due to challenges with delayed soil warming, reduced germination and root growth, and weed control (Hoyt et al., 1994). However, pumpkin production may be particularly amenable to a no-till system (Harrelson et al., 2008). Pumpkins are planted well after frost when soils have had time to warm, seeds are large and germinate reliably, and plants can create a canopy that shades out weeds (Bratsch, 2009; Hoyt et al., 1994). Strip-till pumpkin production is an alternative conservation tillage strategy that may combine the benefits of conventional-till and no-till systems. Strip tillage is a form of reduced tillage where the planting zone is tilled and the rest of the soil remains undisturbed. This provides a friable seedbed similar to conventional-till systems while leaving the rest of the soil undisturbed. However, there are some indications that weeds are more problematic in strip-till compared with no-till and conventional-till systems (Hoyt et al., 1994; Morse, 1999). It is also unclear whether erosion control and soil moisture retention will be as great in strip-till as in no-till systems (Fernández et al., 2015). Furthermore, there has been little research on strip tillage as part of a pumpkin production system or comparing strip-till to no-till pumpkin production. Cover cropping can be an essential part of a conservation tillage system by providing a mulch layer to retain soil moisture, reduce erosion, suppress weeds, and keep fruits dry. Cover crop residue can reduce water evaporation and water runoff, which reduces irrigation needs and soil erosion (Lal, 2004). Cover crop mulch can suppress weeds by physical interference and/or allelopathy throughout the production season (Barnes and Putnam, 1983; Creamer et al., 1996), which can potentially reduce the need for herbicides (Hartwig and Ammon, 2002). A thick cover crop residue layer can also keep fruits off the soil surface and relatively dry, which can help manage disease pressure and reduce the need for fungicides (Everts, 2002; Ogutu, 2004). This research compares the effects of no-till, strip-till, and conventional tillage in a rain-fed pumpkin production system that included cover cropping during the winter. Previous studies do not show clear impacts of tillage on pumpkin production. Rapp et al. (2004) reported that pumpkin yields in no-till were greater than in both strip-till and disked plots. Similarly, Walters et al. (2008) found that no-till pumpkin yields were nearly double conventional-till yields. On the other hand, Ogutu (2004) found that conventional-till resulted in greater pumpkin yields than strip-till systems. We hypothesized that pumpkin yields in reduced tillage systems, including no-till and strip-till, would be equivalent to conventional-till. Furthermore, we hypothesized that reduced tillage would have the added benefit of protecting soils under extreme rainfall events. We also tracked weeds and soil moisture in the different tillage systems to understand the mechanisms by which tillage could impact yields. We hypothesized that weed control would be higher in no-till than in strip-till and conventional-till pumpkin production, and that soil moisture would be highest in no-till, lowest in conventional-till, and intermediate in strip-till.
Materials and Methods Site description and experimental design. Field experiments were conducted in 2014 and 2015 at Virginia Tech's Kentland Farm in Blacksburg, VA. The soil at the experimental site is a Hayter loam in hydrologic soil group A, and the land has a 2% to 7% slope (NRCS, 2016). Before the experiment, the site had been in alfalfa production since 2008. The experiment was laid out as a randomized complete block design with four replicates. The main treatment in both years was tillage: conventional-till, strip-till, and no-till. In 2014, there was an additional split-plot treatment: simulated flooding and ambient rainfall. Each experimental unit measured 7 × 14.5 m in both years, so that the total experimental area was twice as large in 2014 (with tillage and flooding treatments) as in 2015 (only tillage treatments). Plots in 2015 were in the same physical location as one half of the plots in 2014 but with a new randomization of treatments. Pumpkin rows were spaced 1.8 m apart, and each plot consisted of four rows with 13 plants spaced equidistantly (1.2 m) within each row. The field was planted with 18 jack-o-lantern type 'Gladiator' pumpkins (Harris Seeds, Rochester, NY) planted in the center of the plot (9 plants/row × 2 rows). 'Field Trip' pumpkins (Stokes Seeds, Buffalo, NY), which have smaller fruits than 'Gladiator', were planted in the two outside rows of all plots and at the first and last two plant sites at the ends of plots. The 'Field Trip' pumpkins served to eliminate edge effects in the test pumpkins and visually differentiated 'Gladiator' and 'Field Trip' fruits. In 2014, the plots that received simulated flooding treatments were only used to assess soil erosion and were not used for any other measurements. Pumpkin production. Across the entire experimental area, a cover crop was seeded on 16 Oct. 2013 (50% rye and 50% winter pea) and 27 Oct. 2014 (50% rye, 25% vetch, and 25% winter pea) at a rate of 112 kg·ha⁻¹. Vetch was added to the cover crop mix in 2014 with the goal of increasing the proportion of legume biomass. Cover crops were treated with glyphosate (4.7 L·ha⁻¹) on 14 May 2014 and 20 May 2015. One day before the glyphosate applications, cover crop biomass was sampled using one 0.5 × 0.5 m quadrat per plot in 2014 and two of the same-sized quadrats per plot in 2015; samples were dried at 65 °C before weighing. The desiccated cover crop in no-till plots was rolled on 2 June 2014 and 1 June 2015 using a flail mower (Alamo-Mott Brand, Seguin, TX) with the blades turned off. The conventional-till and strip-till plots were mowed using a Woods BW 183 Series 3 mower. Conventional-till plots were chisel plowed on 4 June 2014 and 31 May 2015; they received an additional tillage treatment in 2014 only, using a KUHN EL62 rotary tiller (Brodhead, WI) on 10 June. In 2014, strip-tilling was conducted using a multivator (Ford Distributing, Inc., Marysville, OH) to mimic the strip tiller's disc on 16 June. In 2015, strip tilling was conducted using a Troy-Bilt walk-behind rototiller on 10 June. The width of the strips was 40 cm in both years of study. All plots were broadcast fertilized before planting on 2 June 2014 (10N-10P-10K at 448 kg·ha⁻¹) and 11 May 2015 (10N-20P-20K at 560 kg·ha⁻¹).
Given the field's history in alfalfa, we chose fertilizer rates on the conservative end of the fertilizer recommendations for Virginia, with an increase in the second year to try to achieve higher yields (VCE, 2014). Plots were hand seeded on 18 June 2014 and 11 June 2015 with two 'Gladiator' seeds per germination site, which were then thinned to one plant after germination. 'Gladiator' seed was treated with mefenoxam (Apron XL), fludioxonil (Maxim), and azoxystrobin (Dynasty) fungicides, and thiamethoxam (Cruiser 5FS) insecticide. A pre-emergent herbicide mix composed of ethalfluralin (Curbit at 2.34 L·ha⁻¹) + paraquat (Gramoxone at 2.92 L·ha⁻¹) + nonionic surfactant (Induce at 0.25% v/v) was applied to all plots on 19 June 2014 and 12 June 2015, both one day after planting (DAP). Postemergent herbicides were applied only in 2015, on 7 July (26 DAP): halosulfuron-methyl (Sandea at 0.05 L·ha⁻¹) + sethoxydim (Poast at 2.3 L·ha⁻¹) + nonionic surfactant (Induce at 0.25% v/v). Fungicides and insecticides were used on developing plants (VCE, 2014). There was only one insecticide application during the study, on 23 July 2014 (35 DAP), using esfenvalerate (Asana XL at 0.7 L·ha⁻¹). In 2014, fungicide applications consisted of chlorothalonil (Bravo Weatherstik at 3.5 L·ha⁻¹) on 23 July (35 DAP), chlorothalonil (3.5 L·ha⁻¹) + cyazofamid (Ranman at 0.2 L·ha⁻¹) on 15 Aug. (58 DAP), and chlorothalonil (3.5 L·ha⁻¹) + cymoxanil (Curzate at 0.4 L·ha⁻¹) on 22 Aug. (65 DAP). In 2015, fungicide applications consisted of chlorothalonil (3.5 L·ha⁻¹) + Fontelis (0.4 L·ha⁻¹) on 27 July (46 DAP), chlorothalonil (3.5 L·ha⁻¹) + azoxystrobin (Abound at 0.4 L·ha⁻¹) on 5 Aug. (55 DAP), and chlorothalonil (3.5 L·ha⁻¹) + cyazofamid (0.2 L·ha⁻¹) on 19 Aug. (69 DAP). All fruits from the 18 'Gladiator' plants in each plot were counted and harvested on 8 Oct. 2014 (112 DAP) and 7 Oct. 2015 (118 DAP). Fruits were categorized by color (orange or green) and quality (damaged or undamaged) at the time of harvest. Damage included rot or extensive scarring of the skin. Only orange, undamaged fruits were considered marketable and used to determine yield; data on green fruit are not reported. In 2014, all the marketable fruits in the entire plot were weighed to determine yields. In 2015, marketable fruit yields were estimated by multiplying the number of fruits in different size categories by the average weights of those sized pumpkins. Fruit size was categorized as small (<70 cm circumference), medium (70-85 cm circumference), or large (>85 cm circumference). The average weight of each size class was determined by weighing up to three fruits per size class from each plot and averaging across all the plots (approximately 36 pumpkins per size class). Estimated yields using this method differed from weighing all the pumpkins in two plots by 1%.

Fig. 1. Daily rainfall (black line) and maximum air temperature (gray line) during the 2 years of study in (A) 2014 and (B) 2015. Solid arrows point to soil moisture sampling dates (2 Aug. 2014, 45 DAP; 23 Sept. 2014, 97 DAP; 1 Oct. 2014, 105 DAP; 20 July 2015, 39 DAP; 29 July 2015, 48 DAP; 6 Aug. 2015, 56 DAP; and 11 Aug. 2015, 61 DAP). Dotted arrows point to rainfall simulation dates (5 Aug. 2014, 48 DAP and 17 Sept. 2014, 91 DAP).

Soil erosion, soil moisture, and weeds. A flooding simulation experiment was conducted in 2014 to mimic the extreme flooding events predicted under climate change.
Flooding simulations were conducted using an oscillating nozzle rainfall simulator (Tlaloc 3000 Rainfall Simulator, Joern’s Inc., West Lafayette, IN) placed 1.8 m above the erosion plot and turned on for 30 min. The amount of water added to the erosion plots was 4.4 ± 0.09 cm. This amount represents a heavy storm and is near to the 5.0 cm daily rainfall estimated to have historically occurred about once per year in Virginia (Kunkel et al., 1999). Runoff was collected from erosion plots by sinking strips of metal sheets 10 cm into the ground to form a border around the ‘Gladiator’ pumpkins (2.1 · 2 m) with a funnel-shaped collection pan at one end. Runoff was analyzed in the laboratory for total suspended solids (TSS) using a standard filtration process (APHA, 1995). Two rainfall simulation experiments were conducted in 2014 on 5 Aug. (48 DAP) and 17 Sept. (91 DAP) with such strong and consistent effects on TSS that the rainfall simulations were not continued in 2015. Soil moisture was measured using a Model 6050X1 Trase System Time-Domain Reflectometer (Santa Barbara, CA). Measurements were taken at 22 cm depth using 0.25-cmdiameter stainless steel brazing rods that were pounded into the soil between plants within the ‘Gladiator’ rows of the experimental plots. Soil moisture content was averaged across two subsamples in each plot. Samples were taken three times in 2014 (2 Aug., 45 DAP; 23 Sept., 97 DAP; and 1 Oct., 105 DAP), and four times in 2015 (20 July, 39 DAP; 29 July, 48 DAP; 6 Aug., 56 DAP; and 11 Aug., 61 DAP). Measurements were taken during a variety of growing conditions including hotter and cooler periods in 2014, and wetter and drier periods during rapid vine growth in 2015 (Fig. 1). Weeds were sampled from plots on 5 Sept. 2014 (79DAP) and 24Aug. 2015 (74DAP). In 2014, weeds were sampled using a 0.5 · 0.5 m quadrat that was randomly placed between ‘Gladiator’ rows at two locations in each plot. In 2015, all the weeds surrounding the ‘Gladiator’ pumpkins up to the border rows of ‘Field Trip’ pumpkins were harvested. Weeds were cut at ground level and dried at 65 C. Dried weed weights were standardized to g·m. Statistical analysis. Data were analyzed separately each year using JMP software. Data were examined using standard least squares models that included tillage as a main treatment effect and block as a random factor. In 2014, simulated flooding plots were analyzed for TSSwhile all other response variables were analyzed from ambient rainfall plots. Where there were statistically significant treatment effects, all pairwise contrasts were analyzed using Tukey honestly significant difference at a = 0.05 level. Dried weed weight was log transformed to meet assumptions of normality. Results and Discussion Growing conditions and management were somewhat different in the 2 years of study and may have contributed to higher overall yields in 2015 that averaged 52.5 t·ha compared with 37.4 t·ha in 2014 (Table 1). Cover crop residue at termination averaged 8.0 ± 0.7 tons·ha in 2014, and 9.1 ± 0.9 tons·ha in 2015. Total rainfall and average high temperature were 37.5 cm and 26.0 C, respectively, during the pumpkin growing season in 2014, while there was 40.0 cm of rain and an average high of 27.4 C in 2015. Temporal differences in rainfall between the 2 years included more early summer and fall rain in 2015 than in 2014 (Fig. 1). In addition, there was 25% more N and over twice the amount of phosphorous and potassium applied in 2015 compared with 2014. 
Nevertheless, yields in both years were well within the range of normal for the growing region (Bratsch, 2009; Harrelson et al., 2008). No-till and strip-till ‘Gladiator’ pumpkin production systems yielded at least as well as conventional-till over the 2 years of this study. Average pumpkin fruit weight was not significantly different among tillage treatments in 2014 (F = 1.36, P = 0.32; Table 1). However, in 2015, average pumpkin fruit weight differed among treatments with significantly larger pumpkins in the no-till treatment compared with the strip-till and conventional-till treatments (F = 21.18, P = 0.002) (Table 1). With a similar number of pumpkins produced in all three tillage treatments in 2015, this translated into a 22% higher average yield in no-till than conventional-till pumpkins, though this result was not statistically significant at a = 0.05 level (F = 3.13, P = 0.11). Low numbers of fruit in 2014 translated to fruit weight per pumpkin being higher in 2014 than 2015 (Reiners and Riggs, 1997), but this was not sufficient to produce total yields per hectare that were as high in 2014 as in 2015 (Table 1). These results are consistent with other published studies demonstrating that conservation tillage can produce high pumpkin yields (Hoyt, 1999; Rapp et al., 2004; Walters et al., 2008). Fruit damage was low among all treatments in both years of study (Table 1). Weed management is an important facet of a pumpkin production system that can be affected by tillage and can ultimately affect yield (Walters et al., 2008; Walters and Young, 2010). Tillage treatment was a significant factor predicting average dried weed weight in 2014 (F = 29.70, P = 0.0008) and there were significantly more weeds in the strip-till treatment compared with no-till and conventional-till treatments (Table 2). The five most common weeds in the study were hairy galinsoga (Galinsoga ciliata), redroot pigweed (Amaranthus retroflexus), common lambsquarters (Chenopodium album), yellow nutsedge (Cyperus esculentus), and common purslane (Portulaca oleracea). Along the edges of the tilled strips, weeds were vigorous, and perhaps competed with pumpkin plants, potentially reducing yields in strip-till plots in 2014. In 2015, weeds were better controlled compared with 2014, possibly due to the postemergence herbicide application Table 1. Mean and standard error statistics associated with pumpkin production under three different tillage regimes: no-till (NT), strip-till (ST), and conventional-till (CT), across two study years. Study yr Tillage treatment Avg wt pumpkin (kg) Yield (kg·ha) No. of marketable pumpkins/ha No. of damaged pumpkins/ha 2014 NT 7.82 ± 0.04 39,698 ± 1,067 5,077 ± 118 125 ± 0 ST 7.46 ± 0.22 36,025 ± 1,922 4,828 ± 179 125 ± 0 CT 7.80 ± 0.29 36,590 ± 2,489 4,703 ± 319 187 ± 8 Over 35,000 ha of pumpkins (Cucurbito pepo L.) are grown each year in the United States resulting in farm receipts of about $170 million (Lucier and Dettmann, 2007;USDA-NASS, 2012). Jack-o-lantern pumpkins represent a major portion of the industry and are typically grown regionally due to the high cost of shipping (Lucier and Dettmann, 2007). As such, they are grown under a wide variety of production scenarios with states as different as California, Texas, and New York among the top 10 producers of fresh market pumpkins (USDA-NASS, 2012). 
Given the diversity of locations where pumpkins are produced in the United States, it is useful to develop production technologies that can be integrated into different systems and that appeal to general consumer demand. Increasingly, consumers are demanding products with reduced environmental impact, and they prefer to purchase produce with low environmental impact if it does not increase price (Dimitri and Greene, 2002). Meanwhile, climate scientists predict increased average temperatures and greater frequency of extreme weather events like drought and flood, which can significantly disrupt production systems if growers do not adapt (IPCC, 2013; Walsh et al., 2014). Already across the United States, average temperatures increased significantly during the 20th century, and the number of extreme precipitation events during the period 1957–2010 has also increased, especially in the Midwest, Southeast, and Northeast (Kunkel et al., 2013; Walthall et al., 2012). Adopting conservation tillage systems may be a means of meeting consumer demand for more sustainable products while adapting to climate change (Lal, 2007). No-till is one form of conservation tillage that has been widely adopted in agronomic cropping systems to reduce soil erosion and improve sustainability (Derpsch et al., 2010). In contrast, no-till vegetable production is relatively rare due to challenges with delayed soil warming, reduced germination and root growth, and weed control (Hoyt et al., 1994). However, pumpkin production may be particularly amenable to a no-till system (Harrelson et al., 2008). Pumpkins are planted well after frost when soils have had time to warm, seeds are large and germinate reliably, and plants can create a canopy that shades out weeds (Bratsch, 2009; Hoyt et al., 1994). Strip-till pumpkin production is an alternative conservation tillage strategy that may combine the benefits of conventional-till and no-till systems. Strip tillage is a form of reduced tillage in which only the planting zone is tilled, providing a friable seedbed similar to conventional-till systems while leaving the rest of the soil undisturbed. However, there are some indications that weeds are more problematic in strip-till than in no-till and conventional-till systems (Hoyt et al., 1994; Morse, 1999). It is also unclear whether erosion control and soil moisture retention will be as great in strip-till as in no-till systems (Fernández et al., 2015). Furthermore, there has been little research on strip tillage as part of a pumpkin production system or comparing strip-till to no-till pumpkin production. Cover cropping can be an essential part of a conservation tillage system by providing a mulch layer to retain soil moisture, reduce erosion, suppress weeds, and keep fruits dry. Cover crop residue can reduce water evaporation and water runoff, which reduces irrigation needs and soil erosion (Lal, 2004). Cover crop mulch can suppress weeds by physical interference and/or allelopathy throughout the production season (Barnes and Putnam, 1983; Creamer et al., 1996), which can potentially reduce the need for herbicides (Hartwig and Ammon, 2002). A thick cover crop residue layer can also keep fruits off the soil surface and relatively dry, which can help manage disease pressure and reduce the need for fungicides (Everts, 2002; Ogutu, 2004).
This research compares the effects of no-till, strip-till, and conventional tillage in a rain-fed pumpkin production system that included cover cropping during the winter. Previous studies do not show clear impacts of tillage on pumpkin production. Rapp et al. (2004) reported that pumpkin yields in no-till were greater than in both strip-till and disked plots. Similarly, Walters et al. (2008) found that no-till pumpkin yields were nearly double conventional-till yields. On the other hand, Ogutu (2004) found that conventional-till resulted in greater pumpkin yields than strip-till systems. We hypothesized that pumpkin yields in reduced tillage systems, including no-till and strip-till, would be equivalent to conventional-till. Furthermore, we hypothesized that reduced tillage would have the added benefit of protecting soils under extreme rainfall events. We also tracked weeds and soil moisture in the different tillage systems to understand the mechanisms by which tillage could impact yields. We hypothesized that weed control would be higher in no-till than in strip-till and conventional-till pumpkin production, and that soil moisture would be highest in no-till, lowest in conventional-till, and intermediate in strip-till.

Materials and Methods

Site description and experimental design. Field experiments were conducted in 2014 and 2015 at Virginia Tech's Kentland Farm in Blacksburg, VA. The soil at the experimental site is a Hayter loam in hydrologic soil group A, and the land has 2% to 7% slope (NRCS, 2016). Before the experiment, the site had been in alfalfa production since 2008. The experiment was laid out as a randomized complete block design with four replicates. The main treatment in both years was tillage: conventional-till, strip-till, and no-till. In 2014, there was an additional split-plot treatment: simulated flooding and ambient rainfall. Each experimental unit measured 7 × 14.5 m in both years, so the total experimental area was twice as large in 2014 (with tillage and flooding treatments) as in 2015 (tillage treatments only). Plots in 2015 were in the same physical location as one half of the plots in 2014 but with a new randomization of treatments. Pumpkin rows were spaced 1.8 m apart, and each plot consisted of four rows with 13 plants spaced equidistantly (1.2 m) within each row. The field was planted with 18 jack-o-lantern type 'Gladiator' pumpkins (Harris Seeds, Rochester, NY) planted in the center of the plot (9 plants/row × 2 rows). 'Field Trip' pumpkins (Stokes Seeds, Buffalo, NY), which have smaller fruits than 'Gladiator', were planted in the two outside rows of all plots and at the first and last two plant sites at the ends of plots. The 'Field Trip' pumpkins served to eliminate edge effects in the test pumpkins and visually differentiated 'Gladiator' and 'Field Trip' fruits. In 2014, the plots that received simulated flooding treatments were used only to assess soil erosion and were not used for any other measurements.

Pumpkin production. Across the entire experimental area, a cover crop was seeded on 16 Oct. 2013 (50% rye and 50% winter pea) and 27 Oct. 2014 (50% rye, 25% vetch, and 25% winter pea) at a rate of 112 kg·ha-1. Vetch was added to the cover crop mix in 2014 with the goal of increasing the proportion of legume biomass. Cover crops were treated with glyphosate (4.7 L·ha-1) on 14 May 2014 and 20 May 2015.
One day before the glyphosate applications, cover crop biomass was sampled using one 0.5 × 0.5 m quadrat per plot in 2014 and two of the same-sized quadrats per plot in 2015; samples were dried at 65 °C before weighing. The desiccated cover crop in no-till plots was rolled on 2 June 2014 and 1 June 2015 using a flail mower (Alamo-Mott Brand, Seguin, TX) with the blades turned off. The conventional-till and strip-till plots were mowed using a Woods BW 183-Series 3 mower. Conventional-till plots were chisel plowed on 4 June 2014 and 31 May 2015. They received an additional tillage treatment in 2014 only, using a KUHN EL62 rotary tiller (Brodhead, WI) on 10 June. In 2014, strip-tilling was conducted on 16 June using a multivator (Ford Distributing, Inc., Marysville, OH) to mimic a strip tiller's disc. In 2015, strip tilling was conducted using a Troy-Bilt walk-behind rototiller on 10 June. The width of the strips was 40 cm in both years of study.

All plots were broadcast fertilized before planting on 2 June 2014 (10N–10P–10K at 448 kg·ha-1) and 11 May 2015 (10N–20P–20K at 560 kg·ha-1). Given the field's history in alfalfa, we chose fertilizer rates on the conservative end of the fertilizer recommendations for Virginia, with an increase in the second year to try to achieve higher yields (VCE, 2014). Plots were hand seeded on 18 June 2014 and 11 June 2015 with two 'Gladiator' seeds per germination site, which were then thinned to one plant after germination. 'Gladiator' seed was treated with mefenoxam (Apron XL), fludioxonil (Maxim), and azoxystrobin (Dynasty) fungicides, and thiamethoxam (Cruiser 5FS) insecticide. A pre-emergent herbicide mix composed of ethalfluralin (Curbit at 2.34 L·ha-1) + paraquat (Gramoxone at 2.92 L·ha-1) + nonionic surfactant (Induce at 0.25% v/v) was applied to all plots on 19 June 2014 and 12 June 2015, both one day after planting (DAP). Postemergent herbicides were applied only in 2015, on 7 July (26 DAP): halosulfuron-methyl (Sandea at 0.05 L·ha-1) + sethoxydim (Poast at 2.3 L·ha-1) + nonionic surfactant (Induce at 0.25% v/v). Fungicides and insecticides were used on developing plants (VCE, 2014). There was only one insecticide application during the study, on 23 July 2014 (35 DAP), using esfenvalerate (Asana XL at 0.7 L·ha-1). In 2014, fungicide applications consisted of chlorothalonil (Bravo Weatherstik at 3.5 L·ha-1) on 23 July (35 DAP), chlorothalonil (3.5 L·ha-1) + cyazofamid (Ranman at 0.2 L·ha-1) on 15 Aug. (58 DAP), and chlorothalonil (3.5 L·ha-1) + cymoxanil (Curzate at 0.4 L·ha-1) on 22 Aug. (65 DAP). In 2015, fungicide applications consisted of chlorothalonil (3.5 L·ha-1) + Fontelis (0.4 L·ha-1) on 27 July (46 DAP), chlorothalonil (3.5 L·ha-1) + azoxystrobin (Abound at 0.4 L·ha-1) on 5 Aug. (55 DAP), and chlorothalonil (3.5 L·ha-1) + cyazofamid (0.2 L·ha-1) on 19 Aug. (69 DAP).

All fruits from the 18 'Gladiator' plants in each plot were counted and harvested on 8 Oct. 2014 (112 DAP) and 7 Oct. 2015 (118 DAP). Fruits were categorized by color (orange or green) and quality (damaged or undamaged) at the time of harvest. Damage included rot or extensive scarring of the skin. Only orange, undamaged fruits were considered marketable and used to determine yield; data on green fruit are not reported. In 2014, all the marketable fruits in the entire plot were weighed to determine yields. In 2015, marketable fruit yields were estimated by multiplying the number of fruits in different size categories by the average weights of those sized pumpkins. Fruit size was categorized as small (<70 cm circumference), medium (70–85 cm circumference), or large (>80 cm circumference). The average weight of each size class was determined by weighing up to three fruits per size class from each plot and averaging across all the plots (36 pumpkins per size class). Estimated yields using this method differed from weighing all the pumpkins in two plots by 1%.
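For readers who want to reproduce the 2015 yield-estimation arithmetic just described, the sketch below shows the count-times-class-weight calculation in Python. All counts and class weights are hypothetical placeholders, not values from this study; the paper reports only that estimates from this method were within 1% of whole-plot weighing in two check plots.

```python
# Hypothetical illustration of the 2015 size-class yield estimation.
class_mean_kg = {"small": 5.9, "medium": 7.8, "large": 9.6}  # assumed class means (kg)
plot_counts = {"small": 12, "medium": 28, "large": 9}        # assumed marketable counts

plot_area_ha = (7 * 14.5) / 10_000  # each experimental unit was 7 x 14.5 m

est_plot_kg = sum(plot_counts[c] * class_mean_kg[c] for c in plot_counts)
est_yield_kg_ha = est_plot_kg / plot_area_ha

print(f"Estimated plot production: {est_plot_kg:.1f} kg")
print(f"Estimated yield: {est_yield_kg_ha:,.0f} kg/ha")
```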
Soil erosion, soil moisture, and weeds. A flooding simulation experiment was conducted in 2014 to mimic the extreme flooding events predicted under climate change. Flooding simulations were conducted using an oscillating-nozzle rainfall simulator (Tlaloc 3000 Rainfall Simulator, Joern's Inc., West Lafayette, IN) placed 1.8 m above the erosion plot and turned on for 30 min. The amount of water added to the erosion plots was 4.4 ± 0.09 cm. This amount represents a heavy storm and is near the 5.0 cm daily rainfall estimated to have historically occurred about once per year in Virginia (Kunkel et al., 1999). Runoff was collected from erosion plots by sinking strips of metal sheet 10 cm into the ground to form a border around the 'Gladiator' pumpkins (2.1 × 2 m) with a funnel-shaped collection pan at one end. Runoff was analyzed in the laboratory for total suspended solids (TSS) using a standard filtration process (APHA, 1995). Two rainfall simulation experiments were conducted in 2014, on 5 Aug. (48 DAP) and 17 Sept. (91 DAP), with such strong and consistent effects on TSS that the rainfall simulations were not continued in 2015.

Soil moisture was measured using a Model 6050X1 Trase System time-domain reflectometer (Santa Barbara, CA). Measurements were taken at 22 cm depth using 0.25-cm-diameter stainless steel brazing rods that were pounded into the soil between plants within the 'Gladiator' rows of the experimental plots. Soil moisture content was averaged across two subsamples in each plot. Samples were taken three times in 2014 (2 Aug., 45 DAP; 23 Sept., 97 DAP; and 1 Oct., 105 DAP) and four times in 2015 (20 July, 39 DAP; 29 July, 48 DAP; 6 Aug., 56 DAP; and 11 Aug., 61 DAP). Measurements were taken during a variety of growing conditions, including hotter and cooler periods in 2014, and wetter and drier periods during rapid vine growth in 2015 (Fig. 1).

Weeds were sampled from plots on 5 Sept. 2014 (79 DAP) and 24 Aug. 2015 (74 DAP). In 2014, weeds were sampled using a 0.5 × 0.5 m quadrat that was randomly placed between 'Gladiator' rows at two locations in each plot. In 2015, all the weeds surrounding the 'Gladiator' pumpkins up to the border rows of 'Field Trip' pumpkins were harvested. Weeds were cut at ground level and dried at 65 °C. Dried weed weights were standardized to g·m-2.

Statistical analysis. Data were analyzed separately each year using JMP software. Data were examined using standard least squares models that included tillage as a main treatment effect and block as a random factor. In 2014, simulated flooding plots were analyzed for TSS, while all other response variables were analyzed from ambient rainfall plots. Where there were statistically significant treatment effects, all pairwise contrasts were analyzed using Tukey honestly significant difference at the α = 0.05 level. Dried weed weight was log transformed to meet assumptions of normality.
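As a rough Python analogue of the JMP analysis just described (tillage as a fixed effect, block as a random factor, Tukey HSD contrasts, log-transformed weed weights), one could write something like the following. The data frame is randomly generated for illustration, and the Tukey step here ignores the block term, so it only approximates the blocked analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "tillage": np.repeat(["NT", "ST", "CT"], 4),  # three treatments, four blocks
    "block": [str(b) for b in range(1, 5)] * 3,
    "weed_g_m2": rng.lognormal(mean=3.0, sigma=0.5, size=12),  # hypothetical biomass
})
df["log_weed"] = np.log(df["weed_g_m2"])  # log transform to meet normality assumptions

# Least squares model: tillage fixed, block as a random intercept.
fit = smf.mixedlm("log_weed ~ tillage", df, groups=df["block"]).fit()
print(fit.summary())

# All pairwise tillage contrasts at alpha = 0.05 (approximate: ignores blocks).
print(pairwise_tukeyhsd(df["log_weed"], df["tillage"], alpha=0.05))
```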
Results and Discussion

Growing conditions and management were somewhat different in the 2 years of study and may have contributed to higher overall yields in 2015, which averaged 52.5 t·ha-1 compared with 37.4 t·ha-1 in 2014 (Table 1). Cover crop residue at termination averaged 8.0 ± 0.7 t·ha-1 in 2014 and 9.1 ± 0.9 t·ha-1 in 2015. Total rainfall and average high temperature were 37.5 cm and 26.0 °C, respectively, during the pumpkin growing season in 2014, while there was 40.0 cm of rain and an average high of 27.4 °C in 2015. Temporal differences in rainfall between the 2 years included more early summer and fall rain in 2015 than in 2014 (Fig. 1). In addition, 25% more N and over twice the amount of phosphorus and potassium were applied in 2015 compared with 2014. Nevertheless, yields in both years were well within the normal range for the growing region (Bratsch, 2009; Harrelson et al., 2008).

Fig. 1. Daily rainfall (black line) and maximum air temperature (gray line) during the 2 years of study in (A) 2014 and (B) 2015. Solid arrows point to soil moisture sampling dates (2 Aug. 2014, 45 DAP; 23 Sept. 2014, 97 DAP; 1 Oct. 2014, 105 DAP; 20 July 2015, 39 DAP; 29 July 2015, 48 DAP; 6 Aug. 2015, 56 DAP; and 11 Aug. 2015, 61 DAP). Dotted arrows point to rainfall simulation dates (5 Aug. 2014, 48 DAP and 17 Sept. 2014, 91 DAP).

No-till and strip-till 'Gladiator' pumpkin production systems yielded at least as well as conventional-till over the 2 years of this study. Average pumpkin fruit weight was not significantly different among tillage treatments in 2014 (F = 1.36, P = 0.32; Table 1). However, in 2015, average pumpkin fruit weight differed among treatments, with significantly larger pumpkins in the no-till treatment than in the strip-till and conventional-till treatments (F = 21.18, P = 0.002) (Table 1). With a similar number of pumpkins produced in all three tillage treatments in 2015, this translated into a 22% higher average yield in no-till than conventional-till pumpkins, though this result was not statistically significant at the α = 0.05 level (F = 3.13, P = 0.11). Low numbers of fruit in 2014 meant that fruit weight per pumpkin was higher in 2014 than in 2015 (Reiners and Riggs, 1997), but this was not sufficient to produce total yields per hectare in 2014 as high as in 2015 (Table 1). These results are consistent with other published studies demonstrating that conservation tillage can produce high pumpkin yields (Hoyt, 1999; Rapp et al., 2004; Walters et al., 2008). Fruit damage was low among all treatments in both years of study (Table 1).

Table 1. Mean and standard error statistics associated with pumpkin production under three different tillage regimes: no-till (NT), strip-till (ST), and conventional-till (CT), across two study years.

Study yr   Tillage treatment   Avg wt pumpkin (kg)z   Yield (kg·ha-1)z   No. of marketable pumpkins/ha   No. of damaged pumpkins/ha
2014       NT                  7.82 ± 0.04            39,698 ± 1,067     5,077 ± 118                     125 ± 0
           ST                  7.46 ± 0.22            36,025 ± 1,922     4,828 ± 179                     125 ± 0
           CT                  7.80 ± 0.29            36,590 ± 2,489     4,703 ± 319                     187 ± 8

z Pumpkin weight and yield are based on only those fruit that were marketable (orange and undamaged) at the time of harvest.
y The only significant differences between tillage treatment effects were for average weights per pumpkin in 2015, at the α ≤ 0.05 level using Tukey honestly significant difference tests.

Weed management is an important facet of a pumpkin production system that can be affected by tillage and can ultimately affect yield (Walters et al., 2008; Walters and Young, 2010). Tillage treatment was a significant factor predicting average dried weed weight in 2014 (F = 29.70, P = 0.0008), and there were significantly more weeds in the strip-till treatment than in the no-till and conventional-till treatments (Table 2). The five most common weeds in the study were hairy galinsoga (Galinsoga ciliata), redroot pigweed (Amaranthus retroflexus), common lambsquarters (Chenopodium album), yellow nutsedge (Cyperus esculentus), and common purslane (Portulaca oleracea). Along the edges of the tilled strips, weeds were vigorous and perhaps competed with pumpkin plants, potentially reducing yields in strip-till plots in 2014. In 2015, weeds were better controlled than in 2014, possibly due to the postemergence herbicide application that year. Better weed control in 2015 may have also contributed to overall higher yields that year compared with 2014 (Table 1). Variable weed control between years in no-till and strip-till treatments indicates that careful weed management may be key to successful conservation-tillage pumpkin production (Chauhan et al., 2012).

Soil moisture retention can be another important benefit of a reduced tillage system (Blevins et al., 1983). There was a significant tillage treatment effect on soil moisture content in both years of study (2014: F = 6.49, P = 0.03; 2015: F = 7.79, P = 0.02). Soil moisture content averaged over three sample dates in 2014 and four sample dates in 2015 was higher in the strip-till and no-till treatments than in the conventional-till treatment in both years (Table 2). In 2014, soil moisture measurements taken on 23 Sept. (97 DAP) and 1 Oct. (105 DAP), when the weather had started to cool, did not differ significantly among tillage treatments, whereas the measurement on 2 Aug. (45 DAP), during a period of rapid vine growth, showed significant tillage treatment effects (F = 20.85, P < 0.0001), with strip-till and no-till treatments similarly retaining about 25% more moisture than the conventional-till treatment. In 2015, soil moisture measurements were all taken during the hot months of July and August, during the period of rapid vine growth, and showed consistent trends on all dates and significant treatment effects for all but the last sampling date (11 Aug., 61 DAP). Soil moisture in no-till was significantly higher on 20 July (39 DAP), 29 July (48 DAP), and 6 Aug. (56 DAP) than in conventional-till plots, with intermediate soil moisture in strip-till plots. These results indicate that one potential mechanism of greater pumpkin yield and fruit weight in reduced tillage treatments may be greater soil moisture retention.
Consistent with other research on the benefits of strip-till and no-till production systems, soil erosion was greatly reduced in these treatments compared with the conventional-till treatment (Blevins et al., 1983). TSS in runoff following simulated flooding differed significantly between tillage treatments (F = 28.85, P = 0.0002). On both rainfall simulation dates, there were similar levels of runoff in strip-till and no-till treatments, which were significantly less than in the conventional-till treatment (Table 2). In fact, there was 10.8 and 6.9 times as much soil in the runoff from conventional-till plots as in the average across no-till and strip-till treatments on the first (5 Aug. 2014, 48 DAP) and second (17 Sept. 2014, 91 DAP) rainfall simulation dates, respectively. These differences were readily apparent, with runoff from the conservation tillage treatments being considerably clearer than the obviously muddy runoff from conventional-till plots, and with TSS levels in the conservation tillage samples almost as low as the 100 mg/L level allowed in rivers near the experiment site (VDEQ, 2003). Since these simulated rainfall events represent storm levels that have historically occurred only about once per year (Kunkel et al., 1999), and TSS measured in waterways results from all the different surrounding land uses, many of which cause less erosion than agriculture (VDEQ, 2003), conservation tillage in pumpkin production would likely help maintain TSS in waterways below total maximum daily load targets. In addition to conserving water quality, preventing soil erosion through conservation tillage can be important for maintaining the long-term fertility of production fields, especially on sloped land (Pimentel et al., 1995).

Strip-till and no-till pumpkin production were compatible with the relatively low-input system studied here, which also incorporated cover crops, low fertilizer rates, scouting to reduce pesticide sprays, and no irrigation (Harrelson et al., 2007, 2008; Heckman et al., 2003; Rapp et al., 2004; Reiners and Riggs, 1997; Walters et al., 2008). Strip-till and no-till may be particularly useful in maintaining high pumpkin yields during dry years due to enhanced soil moisture retention in these systems. They also provide the added benefit of conserving soil during extreme rainfall events. Developing low-input, conservation tillage vegetable production systems may become increasingly important as a means of adapting to the erratic rainfall patterns predicted by climate change while reducing fuel consumed for tillage (Clements et al., 1995) and meeting customer demands for high-quality products.
2019-04-02T13:02:39.773Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "ed4582ab53be7721f6868640756c0abe6e251574", "oa_license": null, "oa_url": "https://journals.ashs.org/downloadpdf/journals/hortsci/51/12/article-p1524.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d2bf4c5d889e4a62869aba500cd5d33ed7ed3247", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
268190677
pes2o/s2orc
v3-fos-license
Nanoparticle-Based Secretory Granules Induce a Specific and Long-Lasting Immune Response through Prolonged Antigen Release

Developing prolonged antigen delivery systems that mimic long-term exposure to pathogens appears to be a promising but still poorly explored approach to reaching durable immunity. In this study, we have used a simple technology by which His-tagged proteins can be assembled, assisted by divalent cations, as supramolecular complexes with progressive complexity, namely protein-only nanoparticles and microparticles. Microparticles produced out of nanoparticles are biomimetics of secretory granules from the mammalian hormonal system. Upon subcutaneous administration, they slowly disintegrate, acting as an endocrine-like secretory system and rendering the building block nanoparticles progressively bioavailable. The performance of such materials, previously validated for drug delivery in oncology, has been tested here regarding the potential for time-prolonged antigen release. This has been accomplished by taking, as a building block, a nanostructured version of p30, a main structural immunogen from the African swine fever virus (ASFV). By challenging the system in both mice and pigs, we have observed unusually potent pro-inflammatory activity in porcine macrophages, and long-lasting humoral and cellular responses in vivo, which might overcome the need for an adjuvant. The robustness of both innate and adaptive responses tags, for the first time, these dynamic depot materials as a novel and valuable instrument with transversal applicability in immune stimulation and vaccinology.

Introduction

Protein materials, that is, supramolecular protein complexes with defined physicochemical and biological properties, are gaining interest in biomedicine because of their potential applications in drug delivery and in regenerative medicine [1–5]. Several approaches allow the controlled oligomerization of selected polypeptides into fibrils, layers, matrices, nanoparticles, or microparticles [6–9]. One of the most versatile, promising, and technologically simple strategies for protein assembly is the exploitation of the coordination capabilities between divalent cations (such as Zn2+, Ca2+, Mg2+ and Mn2+) and histidine (His) residues. This category of interactivity allows the cross-molecular binding of poly-His-tagged proteins into nanoparticles. When increasing the amounts of crosslinking ions, these materials are progressively clustered as granular microscale particles [10,11]. Both processes can be reversed by exposure to chelating agents or, more slowly, by the mere physiological equilibrium-based dilution of the gluing cation (Figure 1A). This results in the progressive disintegration of the material into its forming building blocks, either nanoparticles or their protomers, which are released to the media [12]. In addition, it is known that the formation of the nanoparticle building blocks is favored if the N-terminal segment of the polypeptide is cationic [13]. Taking this approach and by regulating the cation/His ratio in the coordination mixture, both nanoparticles and microparticles have been generated in a controlled way through robust and highly efficient protocols that keep the folding status and functionality of the forming polypeptides. While the intermediate nanoparticles have been mostly adapted as vehicles for the cell-targeted delivery of small molecular weight drugs and cytotoxic proteins in oncology [14,15], microparticles
show appealing properties as slow drug delivery systems [16,17]. Once these micron-scale materials are administered subcutaneously, they leak the forming polypeptides, which reach the bloodstream [17]. Notably, if fused to specific ligands of cell surface receptors, the released protein accumulates in receptor-overexpressing target organs [17]. This principle, which mimics the secretion process of peptide hormones in the mammalian endocrine system [18–23], has been explored for the delivery of diverse functional proteins with therapeutic applications [5]. A main advantage of this system over other slow drug delivery platforms is that the protein drug is self-contained as a mechanically stable dynamic depot in the absence of any chemically heterogeneous scaffold. Thus, inert holding materials or matrices, which might pose compatibility issues, are not needed here to support the endocrine-like character of these protein materials.

The fact that these secretory microparticles undergo a disintegration process in vivo results in a time-prolonged release of functional, properly folded proteins, in contrast to a conventional single shot. The leakage profile depends on protein properties and on the gluing cation used [12]. This concept might be highly appealing in vaccinology as a way to expose a given antigen to the immune system during a prolonged period upon a single administration, mimicking the immune stimulation during a natural infection. Therefore, secretory microparticles might represent a novel approach to clinical immunization based on the subcutaneous implantation of antigen-delivering protein materials. This possibility has been explored here through the preparation and immunogenicity evaluation of protein-only secretory materials, utilizing two distinct antigens in two different experimental models. Firstly, the in vivo safety and immunogenicity of protein-only microparticles based on p30 (also named p32) [24], a main antigen of the African swine fever virus (ASFV) [25,26], were examined in pigs. This structurally complex virus [25] causes severe hemorrhagic disease in domestic pigs [27,28]. Being a significant global veterinary concern [26,29–32], primarily due to the lack of effective vaccines and vaccine prototypes [33], it is particularly worrying in large countries such as China, in which the pig industry is a strong economic driver [34,35]. Secondly, a mouse model was employed to assess the safety and immunogenicity of secretory granules based on the green fluorescent protein (GFP). Additionally, to obtain a deeper understanding of the immunological mechanisms triggered by these materials, the specific cytokine profile induced in vitro in primary cells was examined. The obtained data have been evaluated in the context of simple, cost-effective, and efficient new-generation protein-based vaccination platforms that, based on sustained antigen release, might offer appealing properties over current immunization methods.

Materials and Methods

Ethics Statement

Animal care and procedures were performed in accordance with the guidelines of Good Experimental Practice and with the approval of the Ethics Committee on Animal Experimentation of the Generalitat de Catalunya (pig experiment project codes: CEA-OH/11580/1 and CEA-OH/10298/2; mice experiment project code: CEA-OH/11691/2).
Protein Design, Production and Purification

The in-house designed synthetic gene for the modular protein RK4-P30-H6 was provided by GeneArt (Thermo Fisher, Waltham, MA, USA) and subcloned in a pET22b plasmid (Novagen, Madison, WI, USA). The construct contains four repetitions of RK at the N-terminus, which serve as a cationic peptide to stimulate nanoparticle formation together with the poly-His tag at the C-terminal end. Production of the protein was tested at 20 °C (overnight) and 37 °C (3 h) with two different concentrations of IPTG (0.1 mM, 1 mM) in an Escherichia coli BL21 strain. Protein production was optimized at 20 °C overnight with 0.1 mM IPTG. After production, cells were harvested at 5000× g for 15 min, and the pellets were washed with PBS and stored at −80 °C until further use. Prior to purification, cells were resuspended in Wash Buffer (20 mM Tris-HCl pH 8, 500 mM NaCl, and 10 mM imidazole) with a cOmplete Protease Inhibitor Cocktail EDTA-free tablet (Roche Diagnostics, Rotkreuz, Switzerland). Cells were then disrupted in an EmulsiFlex-C5 system (Avestin, Ottawa, ON, Canada) for 3 rounds at approximately 7500 psi. After disruption, the mixture was centrifuged at 15,000× g for 45 min, and the soluble fraction was retained and filtered through sterile 0.22 µm filters (Millipore, Burlington, MA, USA). Next, the filtered soluble fraction was loaded onto a HisTrap HP 5 mL column (GE Healthcare, Chicago, IL, USA) using an ÄKTA Pure chromatography system (GE Healthcare). Elution was performed through a linear gradient of Elution Buffer (20 mM Tris-HCl pH 8, 500 mM NaCl, 500 mM imidazole). The fractions were analyzed by SDS-PAGE and Western blot (anti-His tag, GenScript, Piscataway, NJ, USA), and those containing RK4-P30-H6 were dialyzed thrice against a saline buffer (166 mM NaHCO3, 333 mM NaCl).

Dynamic Light Scattering

The volume size distribution of nanoparticles was determined by dynamic light scattering (DLS) at 633 nm (Zetasizer Pro, Malvern, Malvern, UK). Samples were diluted in their respective buffer to a concentration of 1 mg/mL and were measured in triplicate.
Microparticle Formation and Protein Release

RK4-p30-H6 and GFP-H6 (at 1 mg/mL) were mixed individually with a zinc chloride solution at a 200:1 divalent cation-to-protein molar ratio. The mixture was incubated at room temperature for 10 min, and the microparticles were then recovered by centrifugation (10,000× g, 5 min). The supernatant was discarded. To assess protein release from these microparticles, the materials were resuspended in PBS (diluted to a final concentration of 1 mg/mL) and further incubated at 37 °C for seven days. Samples were taken on days 0, 1, 3, and 7, loaded onto a polyacrylamide gel, and transferred to a PVDF membrane. The percentage of protein released during the experiment was assessed by Western blot using an anti-polyhistidine tag antibody. Images were processed with Image Lab software v6.1 (Bio-Rad, Hercules, CA, USA).

Scanning Electron Microscopy

High-resolution images of cation-induced microparticles were obtained by field emission scanning electron microscopy (FESEM). A volume of 10 µL of each microparticle sample (0.5 mg/mL) was deposited on silicon wafers (Ted Pella Inc., Redding, CA, USA) overnight and then observed, without coating, in a FESEM Zeiss Merlin (Zeiss, Jena, Germany) operating at 1 kV and equipped with a high-resolution secondary electron detector.

Study Design of the Pig Experiments

Landrace × Large White male piglets, 8 weeks of age at arrival, were used. Pigs were housed at the experimental farm of IRTA Monells (Girona, Spain). Animals were fed ad libitum, and an acclimation period of one week was allowed before initiation of the study. In the first experiment, animals were subcutaneously inoculated once or twice, three weeks apart, with 50 µg (final volume of 0.5 mL) of the soluble or protein-only microparticle (POM) form of the ASFV protein p30 in sodium carbonate buffer (166 mM NaHCO3, 333 mM NaCl, pH 8), in the presence or absence of CAF01 (half of the volume, following the manufacturer's instructions). A group of three pigs received PBS following the same procedure as controls. Blood samples were collected weekly. In the second experiment, pigs received either a high dose of 150 µg of p30 POMs or PBS (control group), following the same immunization regimen as experiment 1.

Study Design of the Mice Experiment

Sixteen BALB/c mice, 7 weeks of age at arrival, were used (half females and half males). Mice were allocated to cages according to sex and fed ad libitum during the experiment. Ear cuts were used to differentiate the animals. After one week of acclimation, six mice (3 females and 3 males) were subcutaneously inoculated twice, 3 weeks apart, with 50 µg of GFP POMs, while six others (3 females and 3 males) received 5 µg of the same antigen following the same regimen. GFP POMs were suspended in sodium carbonate buffer (166 mM NaHCO3, 333 mM NaCl, pH 8), and the inoculated volume was 0.3 mL. Four mice (2 females and 2 males) were used as controls and received 0.3 mL of PBS in each administration. Blood for the generation of sera was taken from the facial vein before the first inoculation (SD0), and 2 and 9 weeks after the second inoculation (SD35 and SD85, respectively). Two weeks after the second immunization, three mice receiving the 50 µg dose of GFP POMs, three receiving the 5 µg dose, and two controls were euthanized to assess the cellular response induced by GFP POMs using splenocytes. The six remaining animals were kept for seven more weeks to analyze long-term immunogenicity.
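As a back-of-the-envelope check on the 200:1 cation-to-protein mixing step described in the microparticle formation subsection above, the snippet below converts the 1 mg/mL protein concentration into molarity and derives the Zn2+ concentration and stock volume required. The monomer molecular weight is an assumed placeholder; the actual value for RK4-p30-H6 is not given in the text.

```python
protein_mg_ml = 1.0   # working protein concentration (from the protocol)
protein_mw = 25_000   # assumed monomer MW in Da (hypothetical placeholder)
ratio = 200           # mol Zn2+ per mol protein monomer (from the protocol)

protein_molar = protein_mg_ml / protein_mw  # g/L divided by g/mol -> mol/L
zn_molar = ratio * protein_molar

# Volume of a 100 mM ZnCl2 stock to spike into 1 mL of protein solution,
# ignoring the small dilution the spike itself introduces.
stock_mM = 100
spike_ul = (zn_molar * 1e3) / stock_mM * 1000

print(f"Protein: {protein_molar * 1e6:.0f} uM -> Zn2+ needed: {zn_molar * 1e3:.1f} mM")
print(f"Add ~{spike_ul:.0f} uL of {stock_mM} mM ZnCl2 per mL of protein")
```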
Mouse Splenocyte Collection and Flow Cytometry

Mouse splenocytes were obtained by mechanical dissociation of spleens and filtration through a 40 µm cell strainer. Red blood cells were then lysed using NH4Cl for 5 min at room temperature, and splenocytes were finally suspended in RPMI 1640 medium (Gibco, Waltham, MA, USA) supplemented with 10% FBS (Cultek), 2 mM L-glutamine (Invitrogen), 100 IU/mL penicillin/streptomycin (Invitrogen), 0.05 mM β-mercaptoethanol, and 1 mM sodium pyruvate. Fresh mouse splenocytes were used for flow cytometry analysis. One million cells were used per condition in U-bottom 96-well plates (100 µL/well). Splenocytes were stimulated for 5 days with 5 µg/mL of GFP POMs. Complete RPMI was used as the negative control, while stimulation with phorbol myristate acetate (PMA) plus ionomycin (at 5 ng/mL and 500 ng/mL, respectively) was used as the positive control. After stimulation, cells were stained with the Zombie NIR fixable viability kit (BioLegend, 423106, San Diego, CA, USA) following the manufacturer's instructions, and Fc receptors were then blocked with PBS 5% FBS for 15 min on ice. Extracellular staining was performed for 20 min on ice in PBS 2% FBS using 50 µL of a mix of the following antibodies: APC hamster anti-mouse CD3e at a 1/20 dilution (BD Biosciences, #553066, Franklin Lakes, NJ, USA), PerCP-Cy5.5 rat anti-mouse CD4 at a 1/300 dilution (BD Biosciences, #550954), and PE-Cy7 rat anti-mouse CD8a at a 1/150 dilution (BD Biosciences, #552877). Afterwards, the BD Cytofix/Cytoperm Kit (BD Biosciences) was used according to the manufacturer's protocol to fix and permeabilize the cells. Intracellular staining using BV421 mouse anti-Ki67 at a 1/150 dilution (BD Biosciences, #652411) was then performed for 30 min on ice in Perm/Wash buffer (BD Biosciences). Samples were acquired on a BD FACSAria IIu flow cytometer (BD Biosciences), and data were analyzed using FlowJo v10.7.1 software (Tree Star Inc., San Carlos, CA, USA).

ELISpot Assay with Porcine Peripheral Blood Mononuclear Cells (PBMCs)

PBMCs were separated from whole blood by density-gradient centrifugation with Histopaque 1077 (Sigma). Red blood cells from PBMCs were lysed for 5 min with ammonium chloride. Final cell cultures were suspended in RPMI 1640 medium (Gibco) supplemented with 10% FCS, 100 IU/mL penicillin/streptomycin (Invitrogen), 2 mM L-glutamine (Invitrogen), and 0.05 mM 2-mercaptoethanol. Trypan blue was used to assess cell viability. IFNγ-secreting cells were assessed by ELISpot assay using purified mouse anti-pig IFNγ (clone P2G10, BD Pharmingen) as the capture antibody and biotinylated mouse anti-porcine IFNγ antibody (clone P2C11, BD Pharmingen) as the detection antibody, following a previously reported method [36]. Cells were stimulated with p30 or p30 POMs at 5 µg/mL and incubated for 16 h at 37 °C in 5% CO2.
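ELISpot readouts of this kind are usually summarized as background-subtracted spot counts normalized per million PBMCs; a minimal sketch with hypothetical counts and an assumed plating density:

```python
cells_per_well = 5e5                             # assumed plating density
spots = {"mock": 7, "p30": 21, "p30_POMs": 64}   # hypothetical raw spot counts

for stim in ("p30", "p30_POMs"):
    specific = max(spots[stim] - spots["mock"], 0)  # subtract unstimulated background
    per_million = specific * (1e6 / cells_per_well)
    print(f"{stim}: {per_million:.0f} IFN-gamma spot-forming cells per 10^6 PBMCs")
```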
Isolation of Porcine Alveolar Macrophages (PAMs)

PAMs were isolated from pig lungs following an adapted protocol [37]. Briefly, pigs were euthanized by exsanguination. The trachea was ligated to prevent total pulmonary collapse, followed by removal of the heart and lungs from the thorax. Alveolar macrophages were collected from the lungs in PBS by bronchoalveolar lavage. PAMs were cultured in complete RPMI 1640 medium (10% fetal bovine serum (FBS), 2 mM L-glutamine, 1 µg/mL fungizone, 100 U/mL penicillin, and 100 µg/mL streptomycin) in Petri dishes for 2 h at 37 °C in a humidified 5% CO2 atmosphere. Some PAMs were cultured for 5 days at 37 °C, and qPCR was performed to ensure the cells were negative for Porcine circovirus 2, Porcine reproductive and respiratory syndrome virus, and Mycoplasma. The remaining cells were stored in liquid nitrogen until use.

Multiplex Luminex Assay

PAMs were seeded in 24-well plates (10^6 cells/well) in RPMI 1640 medium supplemented with 10% FBS, 100 IU/mL penicillin/streptomycin, 2 mM L-glutamine, and 0.5% nystatin (Sigma-Merck) and incubated overnight at 37 °C in 5% CO2. Afterwards, the medium was removed and replenished with fresh RPMI either with or without 5 µg/mL of GFP POMs. After a 24 h incubation at 37 °C, 5% CO2, cell supernatants were collected, and cytokines were determined using Luminex xMAP technology and the ProcartaPlex Porcine Cytokine & Chemokine Panel 1 (Thermo Fisher Scientific). Cytokine concentrations were calculated using the xPONENT 4.3 software (Luminex, Austin, TX, USA) and expressed as pg/mL (except for TNF, for which a technical problem with the standard curve occurred and only mean fluorescence intensity (MFI) values could be assessed).

Statistical Analyses

Prism version 8.3.0 software (GraphPad, La Jolla, CA, USA) was used to plot the results and perform statistical analyses. The tests used are specified in the figure legends. Statistical significance was set at p < 0.05 and is displayed in GraphPad style (p > 0.05 ns, * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001).
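The xPONENT step in the Luminex assay above converts bead fluorescence into pg/mL through a standard curve; a four-parameter logistic (4PL) fit is the conventional choice for such immunoassays. The sketch below, with made-up standards, is a generic illustration rather than the vendor's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic: low asymptote, high asymptote, midpoint, slope."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** (-hill))

std_conc = np.array([2.4, 9.8, 39.0, 156.0, 625.0, 2500.0, 10000.0])   # pg/mL (hypothetical)
std_mfi = np.array([55.0, 140.0, 480.0, 1500.0, 4200.0, 9100.0, 13800.0])  # hypothetical

params, _ = curve_fit(four_pl, std_conc, std_mfi, p0=[50, 15000, 500, 1.0], maxfev=10000)

def mfi_to_pg_ml(mfi):
    """Invert the fitted 4PL to interpolate a sample concentration."""
    bottom, top, ec50, hill = params
    return ec50 * ((top - bottom) / (mfi - bottom) - 1.0) ** (-1.0 / hill)

print(f"Sample at MFI 2500 ~= {mfi_to_pg_ml(2500):.0f} pg/mL")
```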
Results

Design and Construction of an ASFV-Antigen Nanoparticle

A full-length, proteolytically stable version of the antigen p30 [38] (also named p32) [24] was engineered and produced in Escherichia coli as a multidomain protein, placed between a cationic N-terminal peptide (RK4) and a C-terminal hexahistidine (H6) tail (RK4-P30-H6, Figure 1B). While this antigen by itself is not protective (as is true of any other single antigen from the virus [33]), it elicits potent specific antibody and cellular responses [39], representing a fully valid model for testing our concepts. In the recombinant construct, the combination of these terminal peptides was expected to promote self-assembly of the construct as oligomeric nanoparticles, assisted by divalent cations from the media. A simpler GFP-H6 construct was used as a control for some experiments (Figure 1B). Upon bioproduction, RK4-P30-H6 was recovered in a single step by immobilized metal affinity chromatography (IMAC). The size and integrity of the construct were confirmed by Western blot (Figure 1C, left). Notably, we observed that, as predicted, the protein spontaneously organized as monodisperse nanoparticle populations of around 52 nm in size, which could be disassembled by submitting the materials to denaturing plus chelating conditions (Figure 1C, right). In contrast, the control GFP-H6, also well produced (Figure 1D, left), remained unassembled because of the non-cationic character of its N-terminal end (Figure 1D, right).

RK4-p30-H6 Microparticles Promote Consistent and Prolonged Antigen Release

Using pure solutions of RK4-p30-H6 and GFP-H6, we generated secretory microscale granules by mixing the stored pure protein with a zinc chloride solution at a 200:1 divalent cation-to-protein ratio. The formation of these granules occurs through Zn-mediated cross-molecular clustering via the overhanging H6 tails [10], and it results in mechanically stable particles that leak the forming protein, in vitro and in vivo, over prolonged time periods [12,17]. The size of the resulting materials, named p30 or GFP POMs (from protein-only microparticles), was determined by both SEM and DLS (Figure 2A,B), resulting in values ranging from 0.5 µm to 4 µm. We also evaluated the protein released from these POMs, linked to their slow disintegration, in vitro under physiological conditions for seven days. Under this experimental setting, a highly regular and sustained protein leakage was observed during the entire experiment in the case of p30 POMs (Figure 2C), but release was faster for a large fraction of the GFP POM content (Figure 2D). A small fraction of GFP-H6 was, however, released progressively for a few days in a time-sustained way. The differential leakage pattern of GFP-H6 was probably related to the missing N-terminal cationic peptide, which in this type of modular protein construction plays architectonic roles through electrostatic interactions between protein monomers [13]. The sizes of the released RK4-p30-H6 and GFP-H6 proteins were compatible with their dimeric forms (Figure 2C,D), which appeared highly stable. This fact indicated that the building block nanoparticles (around 50 nm in size) out of which the p30 POMs were formed (Figure 1A) were unstable upon their transit through POMs, disassembling into lower-order structures in parallel with or immediately after leakage in vitro. In other parallel platforms based on different proteins of oncological interest, the leaked nanoparticles have been shown to be more stable under in vitro testing [17].
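One simple way to parameterize leakage profiles like those in Figure 2C,D is to fit the cumulative released fraction to a first-order model, M(t) = M_inf (1 − e^(−kt)). The percentages below are hypothetical placeholders for densitometry readings, not the published data.

```python
import numpy as np
from scipy.optimize import curve_fit

days = np.array([0.0, 1.0, 3.0, 7.0])             # sampling days from the release assay
released_pct = np.array([0.0, 18.0, 45.0, 78.0])  # hypothetical cumulative release (%)

def first_order(t, m_inf, k):
    """Cumulative first-order release toward a plateau m_inf."""
    return m_inf * (1.0 - np.exp(-k * t))

(m_inf, k), _ = curve_fit(first_order, days, released_pct, p0=[100.0, 0.2])
print(f"Plateau ~{m_inf:.0f}% released; k = {k:.2f} per day; "
      f"release half-time ~{np.log(2) / k:.1f} days")
```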
However, we assumed that the material produced here might be more robust in vivo, where a more crowded ionic environment would support the intermolecular protein–protein interactions within the nanoparticles [40]. In this context, the nanoparticle version of the antigen might be at least temporarily available for immune stimulation.

Subcutaneous Administration of p30 POMs Is Safe and Immunogenic in Pigs

In previous studies, subcutaneous inoculation of protein-only secretory microparticles formed by therapeutic proteins did not show any side toxicities, while the released protein was fully functional [17]. Here we aimed to assess the safety and immunogenicity of protein-only secretory microparticles for their potential use as slow-release systems for antigens. To do so, the soluble and POM forms of the ASFV p30 were subcutaneously administered to pigs, and the induced antibody response was analyzed. In each group, pigs received one dose of 50 µg at SD0, and a booster was administered at SD14 in only half of them. For comparative purposes, two additional groups of pigs received the same protein versions but formulated with the commercial adjuvant CAF01. No adverse effects were noticed in the animals after administration of any of the formulations. Neither control pigs injected with PBS nor pigs receiving one single dose of any of the tested protein versions and formulations showed detectable ASFV-specific IgGs at any of the tested time points (Figure 3A–E). In contrast, administration of two doses of p30 resulted in seroconversion in all cases, detectable from day 7 after the second inoculation, peaking after 14 days, and slowly declining by day 21 (Figure 3A–D). Importantly, the antibody levels induced after two doses of soluble p30 or p30 POMs, both in the presence of the adjuvant CAF01, demonstrated the higher capability of the microparticulated protein to enhance the induction of a protein-specific antibody response (Figure 3G,H).
In preceding testing of secretory granules, doses of 0.5–1 mg were routinely administered to mouse models, with good accumulation of the tumor-targeted released protein in target tissues [17]. Considering the small amount of protein used here (50 µg/dose), we wondered whether increasing the injected POM amount could optimize the antigen formulation and even trigger a potent immune response that might prevent the need for a booster or an adjuvant. To test this hypothesis, six pigs were inoculated twice with 150 µg of p30 POMs, while another group of six control pigs received PBS. Again, no seroconversion was observed after one single shot, but detectable ASFV-specific IgGs were observed after a second administration (Figure 3F). To allow for better comparison between groups, the ASFV-specific IgG titers two weeks after the second administration (SD35) were determined. Remarkably, immunization with soluble p30 alone did not render significant levels compared to the control group, while 50 µg of soluble p30 plus CAF01 showed a slight but significant increase in specific antibodies compared to the control group at a dilution of 1/40 (Figure 3G,H). Notably, the use of p30 POMs without CAF01 also resulted in a significant increase in antibody levels at this dilution (Figure 3G,H), suggesting that the particulate nature of the antigen was capable of enhancing vaccine immunogenicity without the need for an adjuvant. Indeed, while the inoculation of CAF01 with 50 µg of p30 POMs resulted in significantly higher levels of ASFV-specific IgGs at both tested dilutions, 150 µg of p30 POMs without CAF01 rendered even higher ASFV-specific antibody levels (Figure 3G,H).

To further characterize the antibody response triggered by p30 POMs, the induced IgG isotypes as well as the presence of specific IgA in serum were assessed by ELISA. Immunization with p30, both as the soluble protein version and in the POM form, induced IgG2 anti-ASFV antibodies but no detectable IgG1 (Figure 4A), indicating a Th1-like bias [41]. This total bias towards a Th1-like response contrasts with the balanced IgG1/IgG2 profile typically observed in sera from ASF-immune pigs after vaccination with the live attenuated ASFV BA71ΔCD2, a protective vaccine prototype developed by us [41]. Again, only the groups receiving 50 µg of p30 POMs plus CAF01 or 150 µg of p30 POMs showed statistically significantly higher levels of IgG2 compared to the controls, and more uniform levels were found in animals inoculated with the high dose of p30 POMs (Figure 4A). Notably, significant levels of ASFV-specific IgA were found in serum only after administration of the higher 150 µg p30 POM amount (Figure 4B), stressing the importance of an adequate dose.
POM Proteins Stimulate the Immune System in a Non-Specific Manner

As shown above, the molecular architecture of POMs might have an immunostimulatory capability that overcomes the lack of adjuvant (Figure 4B). This is likely due to the capacity of particulate antigens to enhance vaccine immunogenicity compared to soluble antigens [42]. Thus, to gain further insight on the immunomodulatory properties of POMs, we performed two in vitro experiments to evaluate the capability of POMs to trigger a non-specific stimulation of immune cells. First, PBMCs from three naïve pigs were stimulated with p30 or p30 POMs for 16 h, and IFNγ-producing cells were quantified by ELISpot assay. The results showed that p30 POMs had a higher capability to non-specifically activate PBMCs compared to the non-microparticulated p30 (Figure 5A). Second, pulmonary alveolar macrophages (PAMs) were stimulated in vitro for 24 h with GFP POMs or left untreated (RPMI), and a multiplex Luminex assay was used to assess the levels of cytokines in supernatants. Our results demonstrated that GFP POMs were able to significantly stimulate the secretion of several pro-inflammatory cytokines, including TNF, IFNγ, IL-1β, and IL-4 (Figure 5B). No deleterious effect was observed on PAMs after in vitro stimulation with POMs, supporting their safety profile for future applications.

Subcutaneous Administration of GFP POMs Is Safe and Immunogenic in Mice

With the intention of compiling more data regarding the immunogenicity and the immune response triggered by POM formulations, our next step was to test them in another animal model, specifically in mice. To do so, BALB/c mice were
inoculated twice, 2 weeks apart, with either 5 or 50 µg of GFP POMs. Administration of GFP POMs in mice induced long-term GFP-specific antibodies in a dose-dependent manner. Thus, only the 50 µg high dose induced significant levels of GFP-specific antibodies that were detectable two weeks after the second shot (SD35, first time point tested) and lasted at least nine weeks (SD85, last time point tested) (Figure 6A,B). The POM formulation of GFP induced significantly higher levels of GFP-specific IgG2 antibodies (Figure 6C), in accordance with what was observed in pigs receiving p30 POMs. Moreover, in this case, levels of IgG2 were also augmented, although not significantly, in mice receiving the high dose of 50 µg of GFP POMs (Figure 6C). Also, in line with what was observed in pigs, anti-GFP IgA was detected in mouse serum only when prime-boosting with 50 µg of GFP POMs, but not with 5 µg doses (Figure 6D). These results suggested the potential of POM formulations to induce mucosal immunity, as well as the dose dependence of the immune response induced. Two weeks after the second administration, half of the mice were sacrificed to analyze the cellular response induced by GFP POMs. Thus, splenocytes were isolated and stimulated in vitro for 5 days with 5 µg/mL of GFP POMs to be assayed for lymphoproliferative activity by flow cytometry, using the Ki67 proliferation marker. In vitro stimulation of splenocytes obtained from mice immunized with GFP POMs, either with 5 or 50 µg/dose, resulted in the specific proliferation of CD8+ T cells (Figure 6E). Despite the proliferation extent not being significantly higher, these data provide the first set of evidence that GFP POMs can induce memory CD8+ T cells in mice, even at low concentrations of antigen.

Discussion

In the context of global health and facing potential pandemic threats, developing new and more efficient immunization systems is an inexcusable need. Much has been learned about vaccination strategies from the still ongoing COVID-19 pandemic, but a consensus has been reached about the
fact that both the classical and the recently developed vaccination strategies are still far from optimal [42][43][44][45]. A main and generic problem of current vaccination boosts is the limited time of exposure to the antigen. In natural infections, the sensing elements of the immune system perceive the immunogens over days or weeks, while pathogen multiplication is active. Among other studies, a recent analysis of the response against the human immunodeficiency virus (HIV) Env protein has determined the relevance of prolonged exposure to relevant antigens in the development of vaccination strategies, versus the conventional single antigen shot [46]. In fact, a slow-delivery immunization strategy based on Env over 12 days dramatically improved the immunological outcomes, as such prolonged dosage mimics the features of the immune response to natural infections and expands the durability and effectiveness of such a response [46]. Therefore, time-sustained dosage of immunogens is now viewed as a ground-breaking and highly promising alternative to the current boost-based approaches [47].

Linked to this scenario, divalent cations such as Zn2+ are excellent protein-clustering agents that exploit their interactivity with His residues to crosslink His-tagged polypeptides as nanoparticles or, at higher metal concentrations, nanoparticle-secreting microparticles [10] (Figure 1A). The microparticle version of the resulting materials shows an amyloidal architecture, inner organization, and protein-leaking properties [12]. This is similar to the properties observed in the secretory granules from the mammalian endocrine system that support the secretion of peptide hormones from such granular depots [21,23,48,49]. This category of protein-based dynamic repositories, in the synthetic version described here, has so far been adapted to the release of protein drugs [17]; of course, it might be very convenient for a slow-release approach to antigen delivery in vaccinology. In addition, compared to other sustained drug delivery systems that require holding through non-functional scaffolding materials [5], bioactive polypeptides formulated as artificial secretory granules are self-contained and self-released in the absence of potentially toxic assisting containers. Apart from the time-sustained vaccination concept [47], the removal of any drug vehicle is also a main goal in the emerging conceptual setting around nanomedical drug delivery [50]. This, of course, would be of perfect applicability to immunization approaches based on selected antigens, under the umbrella of subunit vaccines in contrast to the use of the whole pathogen.
By testing the above principles here with an innovative POM formulation, based on metal-assisted, self-assembling nanoparticles (Figures 1 and 2), we demonstrate the induction of a balanced immune response, including the activation of innate immunity, as well as antigen-specific antibody and cellular responses (Figures 3-6). The activation of the innate immune system by POMs most probably provides an optimal environment for the induction of antigen-specific adaptive immune responses at the moment when the POMs slowly disintegrate in the form of soluble protein components, namely unstable nanoparticles that progressively disassemble, at least in vitro, into their stable dimeric protein forms. Furthermore, the induction of specific IgAs after parenteral immunization in both pigs and mice (Figures 4 and 6) opens new expectations for using the secretory granules as a transversal platform to generate broad, systemic, and probably mucosal immunity in any animal species. Of course, further studies are required to test the potential efficacy of POMs in the induction of mucosal immunity upon parenteral or intranasal administration and regarding the potential involvement of traces of bacterial molecules that might be present in the preparations. In this context, the immunogenicity of POMs has to be validated using other antigens, ideally using infection models which allow testing of their protective capability. The p30 protein from ASFV has been demonstrated to be a good tool to investigate the potential of POMs as a vaccine platform. However, for this particular pathology, this protein should be combined in the future with other ASFV antigens to test its protective capacity against this complex disease [51,52].

In vaccinology, the activation of specific CD8 T cell responses is of high importance to achieve sterilizing immunity against intracellular pathogens. Thus, while antibodies can block the pathogens in fluids, CD8 T cells are the only ones capable of specifically killing the infected cells, thereby also destroying the intracellular pathogens replicating within them. In fact, subunit and inactivated vaccines are traditionally not efficient at inducing CD8+ T cells. The ability of POMs to induce a CD8+ T cell response observed here might be linked to their original, aggregated microscale nature, which can facilitate cross-presentation and, therefore, the priming of CD8 T cell responses [51,52].

Different antibody isotypes play distinct roles in antiviral immunity. Among them, IgG2 antibodies are specifically triggered during Th1-type immune responses [53] and exhibit superior capabilities in activating Fc receptor-mediated effector responses, which are crucial for resolving infections caused by intracellular pathogens. On the other hand, elevated levels of IgG1 are associated with Th2 immune responses and do not stimulate Fc receptor-mediated immune responses as effectively [54]. Both POM formulations tested in this study induced IgG2a or IgG2 antibodies in mice and pigs, respectively. This observation aligns with the activation of CD8+ T cells by cross-priming and the induction of a Th1 adaptive response because of the aggregated nature of the microparticles.
Conclusions

As summarized from the present data, the combination of several intrinsic features of nanoparticle-based POMs, as described here, makes them a promising platform for the further development of vaccine candidates. Among them, it is important to stress that (i) the supramolecular architecture of the antigens might be protective from proteolytic degradation in vivo; (ii) the time-extended release of the protein mimics the constant antigen stimulation during a natural infection; (iii) the supramolecular structure of the antigen might act as an adjuvant and promote antigen presentation; and (iv) the delivered antigens might transit from nanoparticles to dimeric forms through intermediate oligomers during the disintegration of the depot. Such a disintegration process might induce specific antibodies and T-helper responses, and also cytotoxic T cells by cross-priming, as already shown for conventional nanoparticle-based vaccines [55]. Together with these features, the easy and cost-effective manufacturing, as well as the safety profile due to their intrinsic molecular purity, make POMs a very promising immunization platform with high versatility for further adaptation to new generation vaccine formulations.

Figure 1. Construction and characterization of RK4-P30-H6 and GFP-H6. (A) Architectonic principles governing POM formation, namely microparticle generation out of nanoparticles and further nanoparticle release. His-tagged proteins tend to self-assemble, upon recombinant production and Ni2+-based purification, into oligomeric nanoparticles, assisted by divalent cations. A cationic amino acid N-terminal stretch favors this process. The addition of a molar excess of cationic Zn produces the immediate formation of microscale particles. Upon in vivo administration and upon Zn dilution, these materials release stable nanoparticles differently. (B) Schematic representation of RK4-P30-H6 and GFP-H6 constructs. In RK4-P30-H6, a flexible peptide linker (GGSSRSS) was incorporated. (C) RK4-P30-H6 characterization by H6 immunodetection in Western blot with anti-His monoclonal antibody ((C), left). Size of the purified protein determined by DLS. The protein size was also measured under chelating conditions (2 mM EDTA + 1% SDS) ((C), right). (D) Immunodetection of GFP-H6 by Western blot ((D), left) and size of the construct analyzed by DLS ((D), right).
Figure 2. Formation and characterization of p30 POMs and GFP POMs. Representative micrographs of p30 POMs (A) and GFP POMs (B) obtained by SEM (scale bar represents 1 µm). (C) Size of p30 POMs and of the soluble protein released in vitro after seven days as determined by DLS (left). The relative amount of soluble protein released from p30 POMs for seven days is also shown (right). (D) Size of GFP POMs and of the soluble protein released from these microparticles as determined by DLS at day seven (left). The relative amount of soluble protein released from GFP POMs for seven days is also depicted (right).
Figure 3. Subcutaneous inoculation of p30 secretory granules in pigs induces ASFV-specific antibodies. ASFV-specific IgGs in sera from pigs inoculated with 50 µg of soluble p30 (A) without or (B) with CAF01, or with 50 µg of p30 POMs (C) without or (D) with CAF01, and (E) the control group. (F) ASFV-specific IgGs in sera from pigs receiving 150 µg of p30 POMs or PBS as control from the second experiment. Arrows indicate the two vaccination days. (G) ASFV-specific IgG titers in pig sera two weeks after the second administration (SD35). (H) Statistical analyses of ASFV-specific IgG titers in pig sera two weeks after the second administration (SD35). Statistical significance was determined by one-way ANOVA followed by Tukey's multiple comparisons test and is displayed in GraphPad style (ns p > 0.05, * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001, **** p ≤ 0.0001).

Figure 4. Subcutaneous inoculation of p30 POMs in pigs induces an IgG2 bias and detectable IgA in serum. (A) ASFV-specific IgG1 and IgG2 in sera (1/100 dilution) from pigs receiving p30 or p30 POMs two weeks after the second inoculation (SD35) assessed by ELISA. (B) ASFV-specific IgA in sera (1/100 dilution) from pigs receiving p30 POMs two weeks after the second inoculation (SD35) determined by ELISA. Statistical significance was determined by one-way ANOVA followed by Tukey's multiple comparisons test and is displayed in GraphPad style (* p ≤ 0.05, ** p ≤ 0.01).

Figure 5. In vitro stimulation with POMs induces non-specific cell activation. (A) IFNγ-producing PBMCs from three naive pigs measured by ELISpot assay. Cells were stimulated in vitro with 5 µg/mL of p30 or p30 POMs. (B) Cytokine levels in culture supernatants of PAMs stimulated in vitro for 24 h with 5 µg/mL of GFP POMs or left unstimulated (RPMI) quantified by Luminex-based multiplex assay. Statistical significance was determined by unpaired two-tailed t-test for normally distributed data, or two-tailed Mann-Whitney U test for not normally distributed data, and is displayed in GraphPad style (p > 0.05 ns, * p ≤ 0.05, ** p ≤ 0.01).
Figure 6. Subcutaneous inoculation of microparticulated GFP induces long-lasting dose-dependent production of GFP-specific antibodies and CD8 T cells in mice. (A,B) GFP-specific antibody levels in sera from mice receiving GFP POMs assessed by ELISA two (SD35, (A)) and nine (SD85, (B)) weeks after the second inoculation. (C) GFP-specific IgG1 and IgG2a in sera from mice receiving GFP POMs nine weeks after the second inoculation. (D) GFP-specific IgA in sera from mice receiving GFP POMs nine weeks after the second inoculation. (E) Percentage of proliferating (Ki67+) CD4+ and CD8+ T cells in splenocytes after in vitro stimulation for five days with 5 µg/mL of GFP POMs. Percentages obtained from untreated cells were subtracted. Statistical significance was assessed by one-way ANOVA followed by Tukey's multiple comparisons test and is displayed in GraphPad style (* p ≤ 0.05, *** p ≤ 0.001).
Hypertension and increased endothelial mechanical stretch promote monocyte differentiation and activation: roles of STAT3, interleukin 6 and hydrogen peroxide

Abstract

Aims: Monocytes play an important role in hypertension. Circulating monocytes in humans exist as classical, intermediate, and non-classical forms. Monocyte differentiation can be influenced by the endothelium, which in turn is activated in hypertension by mechanical stretch. We sought to examine the role of increased endothelial stretch and hypertension on monocyte phenotype and function.

Methods and results: Human monocytes were cultured with confluent human aortic endothelial cells undergoing either 5% or 10% cyclical stretch. We also characterized circulating monocytes in normotensive and hypertensive humans. In addition, we quantified accumulation of activated monocytes and monocyte-derived cells in aortas and kidneys of mice with Angiotensin II-induced hypertension. Increased endothelial stretch enhanced monocyte conversion to CD14++CD16+ intermediate monocytes and monocytes bearing the CD209 marker and markedly stimulated monocyte mRNA expression of interleukin (IL)-6, IL-1β, IL-23, chemokine (C-C motif) ligand 4, and tumour necrosis factor α. STAT3 in monocytes was activated by increased endothelial stretch. Inhibition of STAT3, neutralization of IL-6 and scavenging of hydrogen peroxide prevented formation of intermediate monocytes in response to increased endothelial stretch. We also found evidence that nitric oxide (NO) inhibits formation of intermediate monocytes and STAT3 activation. In vivo studies demonstrated that humans with hypertension have increased intermediate and non-classical monocytes and that intermediate monocytes demonstrate evidence of STAT3 activation. Mice with experimental hypertension exhibit increased aortic and renal infiltration of monocytes, dendritic cells, and macrophages with activated STAT3.

Conclusions: These findings provide insight into how monocytes are activated by the vascular endothelium during hypertension. This is likely in part due to a loss of NO signalling and increased release of IL-6 and hydrogen peroxide by the dysfunctional endothelium, and a parallel increase in STAT activation in adjacent monocytes. Interventions to enhance bioavailable NO, reduce IL-6 or hydrogen peroxide production, or inhibit STAT3 may have anti-inflammatory roles in hypertension and related conditions.

Introduction

In 2016, hypertension was ranked as the leading risk factor for global burden of disease in both developed and underdeveloped countries. 1 In the past 10 years, it has become evident that activated immune cells infiltrate the kidney and other organs and that these cells contribute to the end-organ damage in this disease. 2 Monocytes seem to play a particularly important role in hypertension. Wenzel et al. showed that selective ablation of lysozyme M-positive (LyzM+) myelomonocytic cells in mice completely prevented Angiotensin II (Ang II)-induced hypertension and prevented the endothelial dysfunction and vascular oxidative stress generally observed in this model. 3 The mechanism by which monocytes promote hypertension remains undefined but likely involves transformation into activated states or into other cell types, including macrophages and monocyte-derived dendritic cells (DCs). Indeed, De Ciuceis et al.
found that mice lacking macrophage colony-stimulating factor, which is required for the stimulation of macrophage formation from monocytes, are protected against blood pressure (BP) elevation. 4 Furthermore, these mice are protected from the vascular remodelling, vascular superoxide production and alteration of endothelium-dependent vasodilation that normally accompany hypertension. 4 Likewise, monocyte-derived DCs seem to play a critical role in hypertension. DCs potently activate T cells, which are essential for full development of hypertension. 5 We have shown that in hypertension DCs accumulate isolevuglandin (IsoLG)-adducted proteins that are immunogenic, and that adoptive transfer of DCs from hypertensive mice primes hypertension in recipient mice. DCs of hypertensive mice produce large quantities of cytokines including IL-6, IL-23, and TNFα and exhibit enhanced ability to drive proliferation of T cells from other hypertensive mice. 6 These cytokines are activated in response to the phosphorylation of signal transducer and activator of transcription 3 (STAT3), 7 and their production can skew T cells towards T helper 17 (TH17) differentiation. The production of IL-17 by T cells is critical for maintenance of Ang II-induced hypertension and vascular dysfunction. 8 Indeed, we have observed increased IL-17A-producing T cells in the circulation of hypertensive humans. 9

Circulating monocytes in humans have been classified into three subpopulations depending on their surface expression of the toll-like receptor 4 (TLR4) co-receptor CD14 and the FcγIII receptor CD16. 10 Most circulating monocytes are classified as 'classical' and exhibit surface expression of CD14 and little or no CD16 (CD14++CD16−). These are thought to represent cells newly released from the bone marrow, and they circulate for approximately 1 day before either dying, transmigrating or transforming into another phenotype. 11 Non-classical monocytes, characterized by their expression of CD16 and low levels of CD14 (CD14lowCD16++), are known to expand in inflammatory states. Upon stimulation, these CD14lowCD16++ cells exhibit increased production of TNFα. 12 In 1988, a small population of monocytes expressing both CD14 and CD16 was identified, 13 subsequently termed intermediate monocytes or CD14++CD16+. 14 These cells are also expanded in inflammatory states such as rheumatoid arthritis, psoriasis, and peripheral artery disease. [15][16][17][18] Recent deuterium labelling studies indicate that intermediate and non-classical monocytes arise sequentially from the classical CD14++ population. 11 When placed in culture, classical monocytes acquire increasing levels of CD16 with time. 13 The population of monocytes that gives rise to human monocyte-derived DCs remains poorly defined, but includes CD16+ cells. 19 Another population comprises monocytes that express the DC-specific intercellular adhesion molecule (ICAM)-3-grabbing non-integrin (DC-SIGN), or CD209, DC receptor, which interacts with the leukocyte cell-derived chemotaxin 2 (LECT2). LECT2 is crucial for the process of adhesion and rolling of DCs on endothelial cells and can mediate macrophage activation and protection against bacterial sepsis. 20 The expression of CD209 on DCs is considered a marker of maturation. 21
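For orientation, this CD14/CD16 nomenclature amounts to a pair of gates on two surface markers. The short Python sketch below is purely illustrative: it assumes pre-gated, compensated, log- or arcsinh-scaled intensities and uses made-up thresholds, whereas the gates in this study were set empirically from fluorescence-minus-one (FMO) controls (see Methods).

```python
import numpy as np

def classify_monocytes(cd14, cd16, cd14_hi=2.5, cd14_lo=1.0, cd16_pos=1.5):
    """Assign monocyte subsets from scaled CD14/CD16 intensities.

    The thresholds are hypothetical placeholders; in practice they are
    set per experiment from FMO controls, not fixed numbers.
    """
    cd14 = np.asarray(cd14)
    cd16 = np.asarray(cd16)
    labels = np.full(cd14.shape, "unclassified", dtype=object)
    labels[(cd14 >= cd14_hi) & (cd16 < cd16_pos)] = "classical (CD14++CD16-)"
    labels[(cd14 >= cd14_hi) & (cd16 >= cd16_pos)] = "intermediate (CD14++CD16+)"
    labels[(cd14 >= cd14_lo) & (cd14 < cd14_hi) & (cd16 >= cd16_pos)] = \
        "non-classical (CD14lowCD16++)"
    return labels

# Example: three synthetic events, one per subset
print(classify_monocytes([3.1, 2.9, 1.2], [0.4, 2.0, 2.8]))
```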
A potential source of monocyte activation in hypertension is interaction of these cells with the vascular endothelium. In this regard, Randolph et al. showed that monocytes cultured with endothelial cells that had been stimulated with IL-1β, LPS, or zymosan particles differentiate into either DCs or macrophages depending on reverse transmigration through the endothelium. 22 These investigators further showed that CD16+ cells are more likely to reverse transmigrate and that reverse transmigration of monocytes promotes the formation of a CD16+ population. 23 A feature of hypertension that can activate the endothelium is increased mechanical stretch. Indeed, increased cyclic stretch activates transcription factors including AP1, the cAMP-responsive element-binding protein, and NF-κB in human endothelial cells, 24 as well as a variety of signalling molecules including ERK1/2, the focal adhesion kinase pp125FAK, PI3 kinase, and p21Ras. [25][26][27] Gene array studies have indicated that endothelial stretch increases expression of inflammatory mediators including IL-6, IL-8, monocyte chemoattractant protein 1 (MCP-1) and vascular cell adhesion molecule-1 (VCAM-1). 28 These events promote monocyte adhesion, rolling, and transmigration through the endothelium, 29 and some are redox sensitive. 30 Hishikawa and Luscher showed that 10% stretch of human aortic endothelial cells (HAECs) enhances superoxide production, mediated initially by the NADPH oxidase and subsequently by the nitric oxide (NO) synthase, depending on the presence of tetrahydrobiopterin. 31

In the present study, we examined the role of endothelial mechanical stretch and STAT3 in promoting transformation of co-cultured human monocytes into the intermediate phenotype and into cells bearing DC properties. We also examined monocyte subsets and STAT phosphorylation status in humans with hypertension. Our findings provide new insight into how altered mechanical forces in the vessel can promote immune activation.

Human subjects

We performed three studies: the first involved obtaining monocytes from normotensive subjects to analyse their response to endothelial stretch. For this analysis, we included male and female normotensive participants who had BP <135/80 mmHg. In a second study, we examined the phenotype of circulating monocytes from 20 normotensive subjects, 52 subjects with mild hypertension (systolic BP from 130 to 140 mmHg), and 60 subjects with more severe hypertension (systolic BP >140 mmHg). In a third study, we recruited 15 normotensive subjects and 12 hypertensive subjects for analysis of phospho-STAT levels in circulating monocytes. For this third study, participants were considered hypertensive if they had a systolic BP higher than 140 mmHg, a diastolic BP higher than 90 mmHg, or a diagnosis of hypertension with current treatment with anti-hypertensive agents. Normal and hypertensive volunteers were included between ages 18 and 55 years. Exclusion criteria included the following: (i) autoimmune disease or history of inflammatory diseases; (ii) recent vaccinations within the last 3 months; (iii) confirmed or suspected causes of secondary hypertension; (iv) severe psychiatric disorder; (v) HIV/AIDS; (vi) current treatment with steroids or antihistamines; (vii) liver or renal disease; and (viii) history of cancer. Protocols 1 and 3 were approved by the Vanderbilt Institutional Review Board and conformed to standards of the US Federal Policy for the Protection of Human Subjects. The Ethics Committee of Jagiellonian University approved Protocol 2. Written informed consent was obtained from all patients.
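The cohort definitions in the second study reduce to simple systolic BP cutoffs. As a minimal sketch (cutoffs taken from the text; handling of values exactly at a boundary is not specified there and is assumed here):

```python
def classify_subject(systolic_bp: float) -> str:
    """Cohort assignment for study 2, per the cutoffs in the text.

    Boundary handling (e.g., exactly 130 or 140 mmHg) is an assumption.
    """
    if systolic_bp < 130:
        return "normotensive"
    if systolic_bp <= 140:
        return "mild hypertension"
    return "severe hypertension"

print([classify_subject(bp) for bp in (118, 135, 152)])
```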
Monocyte isolation and monocyte-HAEC cultures

HAECs (Lonza, Walkersville, MD, USA) were grown to confluency on flexible six-well culture plates that permit uniaxial stretch (Flexcell® International Corporation, Burlington, NC, USA). These were coated with Collagen I and 1% gelatin crosslinked with 0.05% glutaraldehyde. Cells were fed every other day with EBM-2 medium (Lonza) containing EGM-2 growth factors and supplements and 2% foetal bovine serum and kept in a 37°C incubator with 5% CO2. Peripheral blood mononuclear cells (PBMCs) from each volunteer were isolated initially by Ficoll density gradient centrifugation, and CD14+ monocytes were subsequently isolated from PBMCs by negative selection with a monocyte isolation kit (130-096-537; Miltenyi Biotec, Auburn, CA, USA) as previously described. 32 Monocytes from each volunteer were added to the endothelial cells previously grown on Uniflex® six-well culture plates so that we could simultaneously examine the response of monocytes to endothelial cells undergoing 5% and 10% uniaxial stretch. In indicated experiments, 6% and 8% stretch was applied. In some experiments, one million human monocytes were added to Uniflex® six-well culture plates coated with Collagen I and Pronectin® (RGD) (Flexcell® International Corporation) in the absence of HAECs. Uniaxial cyclical mechanical stretch was applied to the endothelial cell monolayers at 5% or 10% elongation strain, 1 Hz, and a ½ sine curve.
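To make the stretch protocol concrete: a 1 Hz half-sine command takes the membrane from 0% to peak elongation and back once per second. The sketch below models that idealized waveform; it is our simplified rendering of the stated parameters, not the actual Flexcell controller output.

```python
import numpy as np

def strain_waveform(t, peak_strain=0.10, freq_hz=1.0):
    """Half-sine cyclic strain: 0 -> peak -> 0 once per cycle.

    peak_strain of 0.10 corresponds to the 10% 'hypertensive'
    elongation used here; 0.05 models the 5% control condition.
    """
    return peak_strain * np.abs(np.sin(np.pi * freq_hz * t))

t = np.linspace(0.0, 2.0, 201)   # two 1-s cycles, 10 ms resolution
eps = strain_waveform(t)         # 10% peak elongation
print(f"peak strain: {eps.max():.2%} at t = {t[eps.argmax()]:.2f} s")
```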
T-cell proliferation assay

One million CD14+ monocytes were exposed to endothelial cells undergoing either 5% or 10% stretch as described above. Forty-eight hours later, the endothelial cells were removed by fluorescence-activated cell sorting (FACS) using CD31 staining. The same subjects returned on this day, additional blood was sampled, and T cells were harvested from PBMCs using negative selection (130-096-535; Miltenyi Biotec). These T cells were loaded with carboxyfluorescein succinimidyl ester (CFSE) (Invitrogen) and cultured with the sorted monocytes at a 1:10 ratio for 7 days. Proliferation was measured by CFSE dilution using flow cytometry.

Animals

Wildtype (WT) C57Bl/6 male mice, obtained from The Jackson Laboratory, were studied at 3 months of age. Ang II (490 ng/kg/min) or vehicle (sham) was infused for 6 days via osmotic minipumps (Alzet, Model 2002; DURECT Corporation, Cupertino, CA, USA) as previously described. 5,6 All animal procedures were approved by Vanderbilt University's Institutional Animal Care and Use Committee (IACUC), where the mice were housed and cared for in accordance with the Guide for the Care and Use of Laboratory Animals, US Department of Health and Human Services.

Flow cytometry

Monocyte populations were collected from the endothelial/monocyte co-cultures by digesting the Collagen I and 1% gelatin coating with Collagenase A and B (1 mg/mL) and DNAse I (100 µg/mL) in RPMI 1640 medium with 10% FBS for 30 min at 37°C. This procedure harvested not only cells in suspension, but also cells adhering to the endothelium and those that potentially transmigrated to the subendothelial collagen/gelatin layer. In mice, organs were transcardially perfused at physiological pressure with saline solution prior to organ collection. Single cell suspensions were prepared from aortas and kidneys as previously described. 33 Flow cytometry was performed as previously described. 32 Gates for each antibody stain were determined by fluorescence-minus-one (FMO) controls and confirmed using isotype controls. We employed live/dead stains to eliminate non-viable cells and selected only single cells for analysis (Figure 1A). For the detection of murine monocytes, DCs and macrophages in the aorta and kidney, we employed a gating strategy that effectively discriminates between these subsets as previously described. 34 For freshly isolated human monocytes we used a gating strategy as described by Urbanski et al. 35

Co-immunoprecipitation, quantitative RT-PCR and ELISA

For co-immunoprecipitation (co-IP) studies, monocytes were isolated from the monocyte-HAEC cultures using negative selection for human CD31 (Miltenyi Biotec). co-IP was performed as previously described. 32 The bands of interest were normalized to total STAT3. For RT-PCR, monocytes were again isolated from the endothelium and lysed in RLT buffer; RNA was extracted using the RNeasy Mini Kit (Qiagen, Germantown, MD, USA) and RT-PCR was performed according to the manufacturer's instructions using either the iScript cDNA synthesis kit (BioRad, Hercules, CA, USA) or the High Capacity cDNA reverse transcriptase kit (Applied Biosystems, Foster City, CA, USA). Samples were evaluated for human IL-6, IL-1β, IL-23, TGF-β, CD168, p22phox, CCL4, IL-18, CCL2, MMP8, and TNFα using Taqman primers. Relative quantification was determined using the comparative CT method, where samples were normalized to GAPDH and calibrated to the average of the control group (5% stretch). LECT2, IL-6, and High Mobility Group Box-1 (HMGB-1) protein were quantified using ELISA kits from LS-Bio, Affymetrix and IBL International, respectively.
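The comparative CT calculation referenced above is the standard 2^−ΔΔCt fold change: each sample is normalized to GAPDH and then calibrated to the mean of the 5% stretch group. A minimal sketch with invented Ct values for illustration:

```python
import numpy as np

def fold_change(ct_gene, ct_ref, ct_gene_ctrl, ct_ref_ctrl):
    """Comparative CT (2^-ddCt) relative quantification.

    ct_gene / ct_ref: Ct values for the target and GAPDH in test samples.
    ct_gene_ctrl / ct_ref_ctrl: Ct values in the calibrator group
    (here, monocytes from 5% stretch co-cultures).
    """
    d_ct = np.asarray(ct_gene) - np.asarray(ct_ref)              # normalize to GAPDH
    d_ct_ctrl = np.mean(np.asarray(ct_gene_ctrl) - np.asarray(ct_ref_ctrl))
    return 2.0 ** -(d_ct - d_ct_ctrl)                            # calibrate to control mean

# Hypothetical IL-6 Ct values: 10% stretch samples vs. 5% stretch calibrators
print(fold_change([24.1, 23.8], [18.0, 17.9],
                  [27.5, 27.2], [18.1, 17.8]))  # fold increase vs. 5% stretch
```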
Visualization of monocytes with endothelial cells in co-cultures

One to two million human monocytes isolated from PBMCs of normotensive people were incubated with 12 mM of CellTracker™ Green CMFDA Dye (C2925, ThermoFisher Scientific, Waltham, MA, USA) in EBM2 medium (Lonza) for 30 min at 37°C according to the manufacturer's instructions. After incubation, cells were washed with medium and one million fluorescently tagged monocytes were added to wells containing either confluent HAECs on Collagen I/gelatin-coated six-well stretch plates or six-well stretch plates coated with Collagen I or Pronectin® (RGD, Flexcell® International Corporation) without endothelial cells. For imaging, endothelial-containing cultures or monocyte-alone cultures were exposed to either 5% or 10% levels of continuous uniaxial stretch for 24 h. Cells were subsequently washed with 1× PBS and the remaining cells were fixed with 4% paraformaldehyde (PFA) solution for 15 min at room temperature. The cells were then permeabilized with 0.5% Triton X-100 (Sigma) in 1× PBS for 30 min at room temperature. Cells underwent several washes in PBS and were blocked with goat serum solution (5% goat serum and 2.3% glycine in PBS) for 1 h at room temperature. Cultures with endothelial cells were then stained with the primary purified anti-human CD54 (ICAM-1) antibody (HA58, Biolegend, San Diego, CA, USA) at a 1:100 dilution in goat serum solution for 1 h at room temperature. Cells were then stained with the ReadyProbes secondary goat anti-mouse antibody conjugated with Alexa Fluor™ 594 dye (R37121, ThermoFisher Scientific) using one drop per millilitre of PBS and incubation for 30 min at room temperature. Finally, the flexible membranes of all plates were mounted onto glass slides and ProLong® Gold antifade reagent with DAPI (ThermoFisher Scientific) was used to stain the nucleus before adding the coverslip. Imaging of the slides was performed using an EVOS™ FL Imaging System (ThermoFisher Scientific). Adhered monocytes were counted using ImageJ software in three random fields, and the average was calculated and used for graphing. For imaging with confocal microscopy, membranes were fixed in 4% PFA for 15 min at room temperature, washed with PBS and then blocked for 1 h in Dako Serum Free Protein Block (DSFPB) (Agilent Technologies, Santa Clara, CA). Membranes were incubated with purified anti-CD83 antibody (BioLegend; catalogue number 305301) or isotype-matched control (BioLegend; catalogue number 400102) in DSFPB overnight at 4°C, washed in PBS, and then probed with Alexa Fluor 647-labelled goat anti-mouse IgG (ThermoFisher Scientific) for 30 min. Following washing in PBS, membranes were stained with anti-CD31-FITC (BioLegend; catalogue number 303103) and anti-CD14-Alexa Fluor 594 (BioLegend; catalogue number 325630) for 2 h at room temperature. Samples were washed in PBS and counterstained with DAPI for 10 min. Membranes were washed in PBS and then mounted on glass-bottomed dishes. Immunofluorescent images were then acquired using a Zeiss Cell Observer SD confocal fluorescent microscope (Zeiss, Oberkochen, Germany).

Figure 1. Hypertensive mechanical stretch in human endothelial cells promotes monocyte activation and differentiation. Human CD14+ monocytes were isolated by magnetic sorting from PBMCs of normal human volunteers and cultured with HAECs exposed to cyclical stretch. (A) Schematic of the experimental design and (B) gating strategy for phenotyping human monocytes including classical (CD14++CD16−), intermediate (CD14++CD16+) and non-classical monocytes (CD14lowCD16++). (C) Changes in numbers of cells for each subject are depicted by connected lines for CD14++CD16+ (n = 9) and (D) CD14++CD209+, (E) CD14lowCD16++, (F) the macrophage population (CD14+CD163+), (G) CD14++CD16−, (H) CD14−CD83+ (n = 10). (I) Relative monocyte mRNA expression of IL-6, IL-1β, IL-23, TNFα, and CCL4 in adhered monocytes and in monocytes in suspension (5%, n = 15; 10%, n = 16). (J) Monocyte-HAEC cultures were stretched to either 5% or 10% for 48 h, followed by sorting monocytes from HAECs using a CD31+ isolation kit and FACS. Monocyte populations were cultured with CFSE-labelled T cells isolated from PBMCs of the same participants. Seven days later, we measured proliferation in the CD4+ and CD8+ T-cell populations by flow cytometry. Changes in number of proliferated CD4+ and CD8+ T cells after 7 days in culture for each subject are depicted by connected lines (n = 7). Comparisons were made using one-tailed paired t-tests (*P < 0.05, **P < 0.01).

Visualization of monocytes/macrophages in aortic sections of WT C57Bl/6 mice

C57Bl/6 mice were infused with either sham or Ang II for 2 weeks, and aortas were harvested and embedded in paraffin for sectioning. Paraffin sections were dehydrated with ethanol and fixed with 10% formalin for 20 min at room temperature. The cells were then permeabilized with 0.5% Triton X-100 (Sigma) in 1× PBS for 30 min at room temperature. Cells underwent several washes in PBS and were blocked with serum solution (5% goat serum and 2.3% glycine in PBS) for 1 h at room temperature.
Sections were stained with primary purified anti-mouse F4/80 and pSTAT3 (Y705) antibodies at 1:100 and 1:50 dilutions, respectively, in serum at 4°C overnight. Cells were then stained with secondary Alexa Fluor™-conjugated donkey anti-rat and anti-rabbit antibodies (Invitrogen) using one drop per millilitre of PBS and incubation for 1 h 30 min at room temperature. Finally, we added ProLong® Gold antifade reagent with DAPI (ThermoFisher Scientific) to stain the nucleus before adding the coverslip. Imaging of the slides was performed using an EVOS™ FL Imaging System (ThermoFisher Scientific).

Statistics

All data are expressed as mean ± SEM. One-tailed paired and unpaired Student's t-tests were used to compare two groups. In the case of non-normality, the non-parametric one-tailed Mann-Whitney U test was used. When examining the effect of varying endothelial percent stretch on monocyte transformation to the intermediate phenotype, we employed one-way ANOVA with Student-Newman-Keuls post hoc test. When examining the effect of DETA-NONOate, we employed Friedman's multiple comparison test followed by Dunn's post hoc test. To compare the distribution of monocyte subtypes in normotensive humans vs. humans with mild or severe hypertension, we employed one-way ANOVA. To compare pSTAT levels in monocytes from normotensive and hypertensive subjects, we employed two-way ANOVA with Student-Newman-Keuls post hoc test. Categorical data were analysed using χ² analysis. P values are reported in the figures and were considered significant when <0.05. Power analyses for the various experiments are provided in Supplementary material online, Table S1.
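As an illustration of how the two-group comparisons in this plan map onto standard software, the sketch below runs a one-tailed paired t-test and the non-parametric alternative on simulated (not actual) donor data using SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical paired counts: intermediate monocytes per donor
# at 5% vs. 10% endothelial stretch (n = 9 donors)
five = rng.normal(100, 15, 9)
ten = five + rng.normal(25, 10, 9)       # simulated stretch effect

# One-tailed paired t-test (testing 10% > 5%), as used for paired donor data
t, p_two = stats.ttest_rel(ten, five)
p_one = p_two / 2 if t > 0 else 1 - p_two / 2
print(f"paired t = {t:.2f}, one-tailed P = {p_one:.4f}")

# Non-parametric two-group alternative specified for non-normal data
u, p_mw = stats.mannwhitneyu(ten, five, alternative="greater")
print(f"Mann-Whitney U = {u:.0f}, one-tailed P = {p_mw:.4f}")
```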
Hypertensive mechanical stretch in human endothelial cells promotes monocyte activation and differentiation

It has been previously reported that endothelial cells activated by zymosan, LPS or IL-1β can promote conversion of monocytes to DCs. 22 A feature of hypertension that activates endothelial cells is increased mechanical stretch. We therefore isolated CD14+ monocytes from PBMCs of normotensive humans and co-cultured these with HAECs undergoing 5% or 10% mechanical stretch for 48 h, as illustrated in Figure 1A. The intermediate monocyte CD14++CD16+ population (Figure 1B and C) and the CD14++CD209+ population (Figure 1B and D) were significantly increased in the co-cultures with HAECs undergoing 10% compared with 5% stretch. The non-classical monocyte population, defined as CD14lowCD16++, displayed a trend to increase in response to 10% endothelial stretch (Figure 1E). The macrophage population defined as CD14+CD163+ (Figure 1F) and the classical monocyte population defined as CD14++CD16− (Figure 1G) were not different between experimental groups. Another marker of monocyte activation and DC development is CD83. 36 However, only a few monocytes expressed CD83 after co-culture (Figure 1H), and these were not changed by endothelial stretch. Intermediate levels of endothelial stretch ranging from 6% to 8% failed to alter the phenotype of human monocytes (see Supplementary material online, Figure S1A). There was also no difference in the percentage of live cells between the 5% and 10% stretch conditions (see Supplementary material online, Figure S1B). In keeping with the state of monocyte activation, 10% endothelial stretch promoted a striking upregulation of monocyte mRNA for the cytokines IL-6, IL-1β, IL-23, TNFα, and CCL4 compared with 5% stretch (Figure 1I). In contrast, stretch did not affect monocyte expression of TGFβ1, MMP8, CCL2, IL-18, or CD168 (see Supplementary material online, Figure S2).

It has been reported that monocytes can enter tissues and re-emerge into the circulation without differentiation into macrophages or DCs. 34 These cells can transport antigen to lymph nodes and have enhanced ability to drive T-cell proliferation. To determine if endothelial stretch conveys this property to monocytes, we obtained T cells from the same monocyte donors after their monocytes had been exposed to endothelial cell stretch for 48 h. These autologous T cells were labelled with CFSE and co-cultured with monocytes for 7 days, and their proliferation was examined by CFSE dilution. As shown in Figure 1J, monocytes previously exposed to 10% stretched endothelial cells for 48 h exhibited an enhanced ability to drive CD4+ and CD8+ T-cell proliferation when compared with cells from the 5% stretched endothelial cell cultures.

Hypertensive mechanical stretch on endothelial cells promotes STAT3 activation in co-cultured monocytes

Increased expression of IL-6, IL-1β, and IL-23 resembles a cytokine response typical of STAT3 signalling. 37 In addition, STAT3 activation has been identified as a checkpoint for FLT-3-regulated DC development. 38 We therefore sought to determine whether STAT3 plays a role in monocyte activation and differentiation when exposed to HAECs undergoing stretch. STAT3 activation occurs upon phosphorylation of tyrosine (Y) 705 and/or serine (S) 727. When activated, STAT3 can also form a heterodimer with STAT1. Using intracellular staining, we found that the CD14++CD16+ intermediate (Figure 2A, C-E) and the CD14++CD209+ populations (Figure 2B, F-H) had a significant increase in pSTAT3 (Y705), pSTAT3 (S727), and pSTAT1 when cultured with endothelial cells undergoing 10% stretch. We also observed an increase in STAT3 and STAT1 phosphorylation in cells that remained CD14++CD16−, but not in the non-classical monocytes (see Supplementary material online, Figure S3A and B). Given that both STAT3 and STAT1 are activated in monocytes that underwent transformation, we considered the possibility that this involved heterodimerization of the two STAT isoforms; however, we were unable to detect association of the two using co-immunoprecipitation (see Supplementary material online, Figure S3C).

Figure 2. Effect of endothelial stretch on STAT3 activation in co-cultured monocytes. Human CD14+ monocytes were isolated from buffy coats of normal human volunteers and cultured with HAECs exposed to 5% or 10% cyclical stretch. (A) Representative flow cytometry plots are shown for intracellular staining of STAT3 phosphorylation at tyrosine (Y) 705 and serine (S) 727 and STAT1 phosphorylation at Y701 in the CD14++CD16+ intermediate monocytes and (B) the CD14++CD209+ cells under 5% stretch (blue) and 10% stretch (red); the dashed line represents the FMO control. (C-E) Changes in numbers of intermediate monocytes between 5% and 10% endothelial cell stretch expressing pSTAT3 (Y), pSTAT3 (S), and pSTAT1 are depicted by connected lines. (F-H) Changes in numbers of CD14++CD209+ cells expressing pSTAT3 (Y), pSTAT3 (S), and pSTAT1 between 5% and 10% endothelial cell stretch. Comparisons were made using one-tailed paired t-tests (n = 9, *P < 0.05, **P < 0.01).
Moreover, we were unable to detect fluorescence resonance energy transfer between STAT1 and STAT3 using a flow cytometry-based method (data not shown).

STAT3 plays a role in monocyte differentiation and activation during hypertensive mechanical stretch of endothelial cells

To determine a specific role of STAT3 in differentiation of monocytes during stretch, we employed Stattic, a non-peptidic small molecule that selectively inhibits the function of the STAT3 SH2 domain. 39 Addition of Stattic to the cell culture reduced formation of the CD14++CD16+ intermediate monocyte population (Figure 3A and C) and the CD14++CD209+ DC population in response to 10% stretch (Figure 3B and D). Likewise, Stattic reduced pSTAT3 (Y), pSTAT3 (S), and pSTAT1 in the CD14++CD16+ intermediate (Figure 3E-G) and the CD14++CD209+ monocyte populations (Figure 3H-J). Furthermore, we found that addition of Stattic to monocyte-HAEC cultures undergoing 10% stretch reduced upregulation of mRNA for the cytokines IL-6, IL-1β, and IL-23 (Figure 3K).

Next, we sought to determine mechanisms by which endothelial cells undergoing stretch could activate monocytes and promote STAT3 phosphorylation in adjacent monocytes. Others have reported that stretch stimulates expression of IL-6 by endothelial cells, and we confirmed a two-fold increase in IL-6 protein production by HAECs undergoing cyclical stretch (see Supplementary material online, Figure S4). IL-6 has been shown both to stimulate STAT3 activation and to be produced in response to STAT3 in a feed-forward fashion. 40 Addition of an IL-6 neutralizing antibody to the endothelial/monocyte co-cultures markedly reduced formation of intermediate monocytes (Figure 4A and B), while having no effect on the CD14++CD209+ population (Figure 4C). STAT3 can also be activated by reactive oxygen species (ROS), including hydrogen peroxide. 37 ROS, in turn, can stimulate IL-6 production by the endothelium. 41 Because increased endothelial stretch can stimulate ROS formation, we performed additional experiments using Tempol, a superoxide dismutase mimetic, or polyethylene glycol (PEG)-Catalase, to scavenge hydrogen peroxide. While Tempol had no effect (data not shown), we found that PEG-Catalase markedly reduced formation of intermediate monocytes in response to 10% endothelial stretch (Figure 4D). Like anti-IL-6, PEG-Catalase did not inhibit formation of the CD209 population (Figure 4E). In keeping with these results with anti-IL-6 and PEG-Catalase, we found that these interventions also inhibited pSTAT3 (S727) and pSTAT1 (Y701) and exhibited a trend to inhibit pSTAT3 (Y705) (Figure 4F-G) within the intermediate monocytes.

Increased endothelial stretch has also been shown to uncouple the endothelial NO synthase and to reduce stimulatory phosphorylation of endothelial NO synthase in endothelial cells. 31,42 Likewise, NO has been shown to suppress IL-6-induced STAT3 activation in ovarian cancer cells. 43 We therefore hypothesized that a loss of bioavailable NO might promote STAT3 activity. In keeping with this hypothesis, we found the NO donor DETA NONOate (DETA NONO) dramatically reduced formation of intermediate monocytes (Figure 5A and C) and the activation of STAT3 (Y), STAT3 (S), and STAT1 when added to monocyte cultures in the absence of endothelial cells (Figure 5D-F). Furthermore, addition of this NO donor also reduced formation of CD14++CD209+ cells (Figure 5B and G) and activation of STAT3 (Y705), STAT3 (S727), and STAT1 (Figure 5H-J).
We further found that addition of the NO synthase inhibitor L-NAME to monocytes undergoing 5% stretch significantly increased pSTAT3 (Y705), pSTAT3 (S727), and pSTAT1 levels in intermediate monocytes (Figure 5K). We considered the possibility that LECT2, a ligand for CD209, might be released by endothelial cells; however, we were unable to detect LECT2 released from HAECs undergoing either 5% or 10% stretch by ELISA (data not shown). We were also unable to detect release of the HMGB-1 chromatin binding protein, which has been shown to activate STAT3, 44 from stretched endothelial cells (data not shown).

We also examined the hypothesis that monocytes might adhere to the endothelium and themselves undergo cyclical stretch. In keeping with this, we found that 10% stretch increased ICAM-1 expression on endothelial cells and increased adhesion of monocytes to the HAECs compared to the 5% stretch controls (see Supplementary material online, Figure S5A). We also employed confocal microscopy with Z-stacking to interrogate the endothelial layer and the subendothelial collagen to visualize CD31+, CD14+, and CD83+ cells. Using this approach, we observed CD14+ monocytes on the surface of endothelial cells in the co-culture (see Supplementary material online, Figure S5B). Moreover, CD83+ cells were observed on the surface of the CD31+ endothelial cells exposed to 10% stretch, while none were detected in cultures exposed to 5% stretch. We observed no monocyte or CD83-expressing cells in the collagen/gelatin sub-endothelial space in either the 5% or 10% stretch experiments. To determine if stretch could directly activate monocytes in the absence of endothelial cells, we cultured monocytes on arginylglycylaspartic acid (RGD)-covered Pronectin® or Collagen I-coated stretch plates without the presence of HAECs, and exposed these to either 5% or 10% levels of stretch. By visualizing cells at 24 h, we found that both Pronectin® and Collagen I-coated stretch plates promoted monocyte adhesion (see Supplementary material online, Figure S5C). We examined monocyte phenotypes following 48 h of stretch on these plates, and found that neither 5% nor 10% stretch in the absence of endothelial cells supported monocyte transformation (see Supplementary material online, Figure S5D).

Hypertension affects the distribution of circulating mononuclear cells in humans

In additional experiments, we sought to determine if hypertension is associated with an alteration of the phenotype of circulating monocytes. The demographics of the 132 subjects included for this analysis are shown in Table 1. Using a gating strategy published previously, 35 we found that there is a progressive decline in the classical monocytes and a concomitant increase in the percentage of intermediate and non-classical monocytes with increasing levels of hypertension (Figure 6A-C). In additional studies, we sought to determine if intermediate monocytes or non-classical monocytes exhibit evidence of STAT activation. We recruited an additional 15 normotensive and 12 hypertensive subjects for this analysis. The demographics of these subjects are shown in Table 2, and a representative dot plot for the different monocyte populations is shown in Figure 6D.

Figure 5 (caption fragment). A total of n = 9 participants per group were used for these experiments. (K) Human CD14+ monocytes were cultured with HAECs exposed to 5% stretch or 5% stretch plus the NO synthase inhibitor L-NAME (1000 µM) for 48 h.
The numbers of CD14++CD16+ intermediate monocytes expressing pSTAT3 (Y), pSTAT3 (S), and pSTAT1 for each subject are connected by lines. A total of n = 7 participants per group were used. For (C–J) the nonparametric Friedman's test followed by Dunn's multiple comparison tests was employed. For (K) a one-tailed paired t-test was used (*P < 0.05, **P < 0.01, ***P < 0.001).]
Example histograms for STAT phosphorylation in these populations are shown in Figure 6E. Intermediate monocytes exhibited increased phosphorylation of STAT3 Y705, STAT3 S727, and STAT1 Y701 compared with other monocyte populations (Figure 6F–H). No differences between normotensive and hypertensive groups were observed.

Angiotensin II-induced hypertension in WT C57Bl/6 mice promotes accumulation of myeloid cells containing activated STAT3 in the kidney and aorta

In subsequent experiments, we sought to determine the role of hypertension on monocyte transformation in vivo. We induced hypertension in C57Bl/6 mice by infusion of Ang II (490 ng/kg/min) for 6 days and analysed single-cell suspensions of the kidney, aorta, spleen, and periaortic lymph nodes for the presence of monocytes, macrophages (MΦ) and DCs using a gating strategy that effectively allows discrimination of these cells (Figure 7A). We found a significant increase in the total numbers of macrophages (Figure 7B) and DCs (Figure 7C) and a trend toward an increase in monocytes (Figure 7D) within the aorta of Ang II-treated mice. This increase in cell number was accompanied by an increase in STAT3 (Y705) activation (Figure 7E and F) and a trend toward an increase in the monocyte population (Figure 7G). We also found an increase in macrophages, DCs and monocytes within the kidneys of the Ang II-treated mice (Figure 7H–J) and an increase in STAT3 activation in each of these populations (Figure 7K–M). In the lymph nodes, macrophages were increased in response to Ang II infusion, while there was no change in monocytes or DCs (see Supplementary material online, Figure S6A). We also observed an increase in STAT3 activation of monocytes in lymph nodes. There were no major changes in these populations or in STAT3 activation in the spleen of Ang II-treated mice (see Supplementary material online, Figure S6B). To determine the location of monocytes infiltrating the aorta after 2 weeks of Ang II infusion in C57Bl/6 mice, we performed immunofluorescence of aortic sections for F4/80 (red), pSTAT3 (Y705, green), and DAPI. We found that after 2 weeks of Ang II infusion, monocytes and macrophages localize in the perivascular fat and adventitia and co-localize with pSTAT3 (Y705) (see Supplementary material online, Figure S7) (Figure 8).

Discussion

In this study, we demonstrate that exposure of human monocytes to endothelial cells undergoing 10% mechanical stretch increases differentiation into CD14++CD16+ intermediate monocytes and CD14++CD209+ cells. We further show that monocytes cultured with endothelial cells exposed to hypertensive mechanical stretch markedly increase expression of IL-6, IL-1β, IL-23, CCL4, and TNF-α and have an enhanced ability to drive proliferation of T cells from the same human donor. In addition, we found that endothelial cells undergoing hypertensive mechanical stretch stimulate an increase in pSTAT3 (Y), pSTAT3 (S), and pSTAT1 within these monocyte populations. Inhibition of STAT3 by Stattic prevented conversion of monocytes into the intermediate and the DC phenotype and normalized the cytokine production of these monocytes cultured with endothelial cells undergoing 10% stretch.
Our data implicate a role of hydrogen peroxide and IL-6 as mediators of monocyte differentiation. We also show that hypertension is associated with an increase in the percentage of circulating intermediate and non-classical monocytes and that circulating intermediate monocytes exhibit higher levels of pSTAT3 (Y), pSTAT3 (S), and pSTAT1. Finally, tissue-infiltrating monocytes, macrophages, and DCs exhibited an increase in phosphorylated STAT3 in hypertensive mice. Thus, altered mechanical forces affecting the endothelium can modify monocyte differentiation and activation and likely contribute to immune activation in hypertension. Intermediate monocytes are the least characterized of the monocyte subtypes in humans; however, these cells have been implicated in inflammatory diseases such as Kawasaki disease, 45 rheumatoid arthritis, 15 sepsis, 46 HIV, 47 acute heart failure, and coronary artery disease. 48 Using deuterium labelling in humans, Patel et al. recently showed a sequential transition of classical monocytes that emerge from the bone marrow to the intermediate and subsequently the non-classical phenotype. 11
[Figure 6 legend, fragment: … and hypertensive (n = 12) subjects. Two-way ANOVA with Student–Newman–Keuls post hoc test was used (*P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001).]
Compared with classical monocytes, intermediate monocytes exhibit enhanced phagocytosis and produce higher levels of ROS and inflammatory mediators such as TNF-α and IL-1β. 12,49,50 They also have the highest expression of the major histocompatibility complex class II antigens, including histocompatibility leukocyte antigen (HLA)-DR, -DP, and -DQ, indicating that they also possess antigen presentation functions. 51 Thus, the presence of high numbers of intermediate monocytes in humans with hypertension likely has pathophysiological implications. Our data are compatible with findings by Randolph et al. showing that activated endothelial cells can modify monocyte phenotype. They demonstrated that monocytes that transmigrate the endothelium convert to macrophages that remain in the subendothelial space or to DCs expressing CD83 that reverse-transmigrate the endothelium. 22 In a subsequent study, this group also showed that a subpopulation of CD16+ monocytes have a propensity to reverse-transmigrate the endothelium and, upon doing so, increase their expression of CD86 and HLA-DR. In keeping with this, the authors showed these monocytes potently induced allogeneic T-cell proliferation. 23 The investigators assumed cells had transmigrated the endothelial monolayer if they were not removed by washing the monolayer 2 h after addition of the cells. In our study, we found that 10% stretch markedly enhanced monocyte binding to the endothelial surface. Our analysis did, however, include cells potentially within the subendothelial space, and it also demonstrated the presence of macrophage-like cells expressing the CD163 marker; these were not altered by the degree of endothelial stretch. In these experiments, we not only examined intermediate and non-classical monocytes, but also CD14+ cells bearing the markers CD209 and CD83. These have been employed as DC markers, but can be expressed on a variety of activated myeloid cells. CD209 is a C-type lectin receptor that promotes the production of IL-1β and IL-23, which can ultimately skew T cells to a TH17 phenotype. Levels of CD209 on whole leukocytes correlate with disease severity in Behçet's disease. 52
LECT2 acts on CD209 to induce c-Jun N-terminal kinase (JNK) signalling in monocytes and endothelial cells. 20 CD209 forms a complex with TLR4 and promotes NF-κB activation in response to oxidized low-density lipoprotein (LDL), confirming the role of CD209 in innate immunity and antigen presentation. 53 While we were not able to detect LECT2 levels in our co-culture system, the increase in CD209 on monocytes could arm these cells to respond to signals such as LECT2 or oxidized LDL in vivo. CD83 is a type-I lectin transmembrane protein expressed on monocyte-derived DCs and small subsets of other immune cells. 54 The presence of CD83 on DCs enhances their ability to evoke T-cell calcium transients and proliferation. 36 Thus, surface expression of CD209 and CD83 may have functional consequences in enhancing the immunogenicity of monocyte-derived cells in hypertension. There are a number of factors in the hypertensive milieu that could activate both endothelial cells and monocytes, including cytokines and oxidative stress. 55,56 In this investigation, we focused on the role of increased endothelial stretch, which is known to stimulate endothelial cell cytokine production, ROS formation, and expression of adhesion molecules. 31 The vessels affected by stretch in hypertension likely vary depending on the duration of the disease. Early in hypertension, there is increased stretch of large arteries; however, as hypertension is sustained, these proximal vessels become stiff, leading to propagation of the forward pulse wave into smaller resistance arteries. 57 Our present data therefore might explain how increased cyclic stretch in small vessels could promote immune activation. Consistent with this, we have previously shown that aortic stiffening precedes renal accumulation of T cells, monocytes, and macrophages, ultimately leading to renal dysfunction. 58 In these experiments, we found evidence for STAT activation in intermediate monocytes of humans and in monocytes exposed to endothelial cells undergoing 10% cyclical stretch, as evidenced by phosphorylation of STAT1 at Y701 and STAT3 at S727 and Y705. Of interest, STAT3 is required for Flt3-dependent formation of DCs from bone marrow-derived cells, 38 and plays an important role in production of IL-1β, IL-6, and TNF-α in macrophages of humans with coronary artery disease. 37 We also found that exposure of monocytes to endothelial cells undergoing 10% stretch markedly enhanced mRNA expression of these cytokines and that STAT3 is activated in these cells. Inhibition of STAT3 with Stattic not only inhibited cytokine production by human monocytes, but also reduced their conversion to intermediate monocytes and CD14++CD209+ cells. Thus, STAT signalling seems to play an important role in monocyte differentiation and activation in hypertension. In keeping with this, we also observed an increase in STAT3 activation in the monocyte-derived cells in the aorta and kidney of mice with Ang II-induced hypertension. These results are compatible with findings of Johnson et al., who showed that STAT3 inhibition with S3I-201 prevented Ang II-induced hypertension and vascular dysfunction in vivo. 59 We made substantial efforts to identify factors released by the endothelium that would mediate STAT3 activation. STAT3 can be activated by myriad factors, including growth factors, numerous cytokines, Janus tyrosine kinase (JAK), ROS, heat shock proteins, and xenobiotics. 60
IL-6 is both upstream and downstream of STAT3 activation, and we found that endothelial cells undergoing 10% stretch produced increased amounts of this cytokine. This is compatible with prior gene-profiling studies showing that cyclical stretch increases IL-6 expression in endothelial cells. 28 We found that immune-clearing of IL-6 inhibited STAT3 activation and monocyte transformation to the intermediate phenotype, suggesting a scenario in which IL-6 released by the endothelium stimulates STAT3 activation in adjacent monocytes, ultimately leading to greater amounts of IL-6 production by these latter cells in a feed-forward fashion. Of note, IL-6 quartiles were found to be strongly associated with the risk of developing hypertension in the Nurses' Health Study, 61 and mice lacking this cytokine are protected against Ang II-induced hypertension. 62
[Figure 7 legend, fragment: A total of n = 5 (sham) and n = 7 (Ang II) treated mice per group were used. One-tailed unpaired t-test was employed (*P < 0.05, **P < 0.01, ***P < 0.001).]
We also found that PEG-catalase, which scavenges hydrogen peroxide, prevents monocyte transformation and STAT3 activation. This finding is compatible with prior studies showing that cyclical stretch activates production of ROS by endothelial cells, initially via activation of the NADPH oxidase and subsequently from uncoupled NO synthase. 31 In preliminary studies, we confirmed that superoxide production, as measured by detection of 2-hydroxyethidium formation from dihydroethidium, was increased in both monocytes and endothelial cells when the latter cells were exposed to 10% vs. 5% stretch (data not shown). Our data suggest that superoxide is unlikely to be the mediator of monocyte transformation, as the superoxide dismutase mimetic Tempol failed to prevent formation of intermediate cells or STAT3 activation. It is therefore likely that hydrogen peroxide, formed by dismutation of superoxide, mediates these effects. Hydrogen peroxide is relatively stable and thus likely to serve a paracrine-signalling role in mediating cross-talk between the endothelium and adjacent monocytes. Of interest, the expression of haem oxygenase, which has anti-oxidant properties, inversely correlates with monocyte expression of CD14 in humans. 63 In our experimental setup, we cannot exclude the possibility that IL-6 and hydrogen peroxide also have effects on the endothelium. Activation of the IL-6 receptor has been shown to stimulate the JAK-STAT pathway, which can lead to further IL-6 production. 64 Likewise, hydrogen peroxide can activate the NADPH oxidase, which could ultimately lead to additional hydrogen peroxide production in a feed-forward fashion. 65 We have also shown that ROS released from the mitochondria can stimulate the NADPH oxidase in endothelial cells. 66 Thus, the production of IL-6 and hydrogen peroxide in a milieu of endothelial cells and monocytes could have actions on both cell types, but ultimately lead to monocyte differentiation. Another potential mediator of STAT activation is loss of NO signalling. NO has been shown to inhibit STAT3 activation in ovarian cancer cells, and endothelial NO bioavailability is commonly lost in hypertension and related diseases. 67,68 Monocytes in the circulation are constantly exposed to endothelium-derived NO; when placed in culture, however, they spontaneously acquire CD16. We confirmed this in our studies and found that this was associated with STAT3 phosphorylation and that the NO donor DETA-NONOate markedly inhibited monocyte transformation and STAT activation.
We also found that the addition of L-NAME to monocyte–HAEC cultures undergoing 5% stretch promoted STAT activation, in a fashion similar to 10% stretch. L-NAME exposure did not cause monocyte transformation during this 48-h exposure, suggesting that other signals like IL-6 and hydrogen peroxide might also be needed for this response. In keeping with our findings in the co-culture experiments, we found that hypertensive humans have an increase in circulating intermediate and non-classical monocytes, and that this seems dependent on the severity of hypertension. It is likely that this shift in monocyte population predisposes to further inflammatory responses, including production of cytokines and T-cell activation. Of note, we also confirmed that intermediate monocytes of both hypertensive and normotensive subjects exhibit STAT1 and STAT3 activation. This is compatible with a scenario in which intermediate monocytes consistently exhibit, and likely require, STAT activation for their transformation from classical precursors, but in which this transformation is greater in hypertension, perhaps due to encounter with activated endothelium. In summary, our current study provides a previously unrecognized link between mechanical forces affecting the endothelium and activation of monocytes. This provides new insight into how the immune system can be activated in hypertension and for the first time implicates intermediate monocytes as potentially important in this disease. Intermediate monocytes are not only a biomarker of inflammation in hypertension; their acquisition of CD16 arms them to possess cytotoxic function and to produce TNF-α. The production of cytokines like IL-6, IL-23, and IL-1β can also skew T cells to produce IL-17A, which we have previously shown to be involved in hypertension. 8 It is therefore likely that altered endothelial mechanical forces have important effects on immune cell function, leading to end-organ dysfunction and worsening hypertension.
Development and Reproduction of Podisus nigrispinus (Hemiptera: Pentatomidae) Fed with Thyrinteina arnobia (Lepidoptera: Geometridae) Reared on Guava Leaves

The aim of this study was to evaluate the development and reproduction of P. nigrispinus in the laboratory when fed with T. arnobia reared on guava leaves. This predator showed a nymphal stage of 21.11 days and survival of 60%, with a pre-oviposition period of 6.10 days, 26.24 eggs per egg mass, 314.90 eggs per female, and egg viability of 82.65%. These results demonstrated that T. arnobia fed with guava leaves was an adequate food supply for P. nigrispinus.

INTRODUCTION

Predators of the genus Podisus are natural enemies used for biological control. They occur throughout the American continent (Thomas, 1992) and have been found in agricultural and forest areas in Brazil (Zanuncio et al., 1993). Podisus nigrispinus (Dallas, 1851) (Heteroptera: Pentatomidae), one of the most common species of this genus in the Neotropical areas, has been reported in Brazil in the states of Espírito Santo, Maranhão, Minas Gerais, Mato Grosso do Sul, Pará and São Paulo. This species has been found in the following crops: cotton, coffee, eucalyptus, pinus, corn, soybean and wheat (Guedes et al., 2000; Pereira et al., 2001). The brown eucalyptus caterpillar Thyrinteina arnobia (Stoll, 1782) (Lepidoptera: Geometridae) is one of the most important defoliators of Eucalyptus spp., Psidium guajava, Campomanesia spp. and Eugenia spp. (Oliveira et al., 2005). Outbreaks of this species in forest plantations have been controlled, in some situations, with insecticides. However, insecticides may kill natural enemies (Guedes et al., 1992) and contaminate the environment. Therefore, there is a need to develop alternative control methods for this pest (Zanuncio et al., 1991). Because predatory stinkbugs are easy to rear in the laboratory, these insects have been produced and released in biological control programs against defoliating caterpillars (Zanuncio et al., 2002). However, this approach may decrease their efficiency in the field, because the stimuli they receive in mass rearing are different from those provided by natural prey. Therefore, biological aspects of these predators can be adversely affected (Sznajder and Harvey, 2003). Knowledge about the performance of P. nigrispinus reared on natural prey is important not only to obtain information about this predator on another host, but also to obtain data on its establishment in areas with the crop. Holtz et al. (2007) evaluated the performance of P. nigrispinus fed with T. arnobia caterpillars and demonstrated that this predator was not adapted to that prey when the prey was reared on eucalyptus leaves. The adaptation of herbivores to different hosts has been the subject of ecological studies (Agrawal, 2000). However, little attention has been given to how predators adapt to prey coming from different hosts. The aim of this study was to evaluate the development and reproduction of P. nigrispinus in the laboratory when fed with T. arnobia reared on guava leaves.

MATERIALS AND METHODS

Rearing the prey Thyrinteina arnobia

Caterpillars of T. arnobia were obtained from a mass rearing at the Entomology Laboratory of the Federal University of Espírito Santo. The pupae were conditioned in PVC tubes of 200 mm diameter and 25 cm height. The adults of T.
arnobia were transferred just after emergence to a 38 × 41 × 45 cm wooden box with a sheet of paper on its internal wall as an oviposition site. A glass door was placed on one side of this box to facilitate handling of the insects. These adults received a 10% honey solution in an anesthetic-type tube, and their egg masses were collected daily and maintained in an acclimatized chamber at 25 ± 3 °C, relative humidity of 70 ± 10% and photophase of 12 h. The eggs of T. arnobia were maintained in Petri dishes (9.0 × 1.2 cm) until emergence of the caterpillars, which were transferred to plastic tubes (15 × 12 cm) with guava (Psidium guajava) leaves until the pupal stage.

Maintenance and multiplication of the predator Podisus nigrispinus

P. nigrispinus was reared in an acclimatized room at 25 ± 3 °C, relative humidity of 70 ± 10% and photophase of 14 h. Ten adults of P. nigrispinus were maintained in plastic cups of 500 ml with absorbent paper as an oviposition site. Water was supplied in an anesthetic-type tube fixed to the cover of these cups, with its extremity plugged with cotton. P. nigrispinus was fed with Tenebrio molitor (Linnaeus, 1758) (Coleoptera: Tenebrionidae) larvae.

Laboratory experiment

Stage I

A total of 80 replicates were used, each consisting of an individualized second-instar P. nigrispinus nymph in a Petri dish (9.0 × 1.5 cm). A cotton pad (3 × 3 cm) was placed on the inside top of each dish and moistened daily with distilled water to maintain the humidity and water supply for the nymphs. The bottom of the dishes was covered with filter paper to absorb excess humidity. P. nigrispinus nymphs received fourth-instar larvae of T. arnobia, which were replaced when necessary.

Stage II

Twenty pairs of recently emerged P. nigrispinus adults were obtained from stage I. Each of these pairs was conditioned in a plastic cup (9.5 cm height and 10 cm diameter). An anesthetic-type tube was fixed to the inside of the cover of these cups, with its extremity plugged with a cotton pad, to supply water to the insects. The internal part of these cups was lined with paper as an oviposition site. These adults received caterpillars of T. arnobia, which were replaced when necessary.

Statistical analysis

Weight and weight gain were recorded for each instar and up to the third day of the adult stage (males and females). The duration and viability of each instar of P. nigrispinus were also evaluated. The pre-oviposition period and the numbers of egg masses, eggs per egg mass, eggs per female and nymphs per egg mass, as well as the viability of eggs per egg mass per female of P. nigrispinus, were evaluated. Analyses of variance at the 5% probability level were performed for all the characteristics investigated.

RESULTS AND DISCUSSION

The weight of P. nigrispinus nymphs, independent of sex, was similar in the second, third and fourth instars (Table 1). However, females were heavier than males from the fifth instar. Weight gain during the second, third and fifth instars was similar for nymphs that originated females or males of P. nigrispinus (Fig. 1A). However, weight gain was higher for females than for males in the fourth instar. Males and females of this predator lost weight on the first day of the adult stage, and this loss continued for males on their second day of life. On the other hand, P. nigrispinus females gained approximately 8% in weight during this period (Fig. 1B). The heavier weight of P.
nigrispinus nymphs that originated females, at the end of the nymphal period and in the adult stage, was similar to that found for Podisus distinctus (Stal, 1860) (Heteroptera: Pentatomidae) fed with T. molitor larvae (Zanuncio et al., 1998; Matos Neto et al., 2004) and for P. nigrispinus fed with Zophobas confusa Gebien, 1906 (Coleoptera: Tenebrionidae) larvae (Zanuncio et al., 1996). The higher weight of females than of males was due to biomass accumulation by the females, necessary for their reproduction, from the beginning of the fifth instar. Weight gain by females of predatory Pentatomidae has also been related to the development of their ovaries and egg formation (Oliveira et al., 1999; Oliveira et al., 2002). The duration from the second to the fifth instar (5.18 to 6.89 days) of P. nigrispinus was similar to the 4.1 to 6.7 days reported for this predator fed with Musca domestica (Linnaeus, 1758) (Diptera: Muscidae) larvae and the 3.4 to 6.0 days with Bombyx mori (Linnaeus, 1758) (Lepidoptera: Bombycidae) larvae (Zanuncio et al., 1990). These values were also similar for this predator with different prey, since it presented instar durations of 3.3 to 6.9 days with Rachiplusia nu (Guenée, 1852) (Lepidoptera: Noctuidae) larvae (Saini, 1994) and 3.4 to 6.2 days with Alabama argillacea (Huebner, 1818) (Lepidoptera: Noctuidae) caterpillars (Oliveira et al., 2002). The duration of 21.11 days for the nymphal stage of P. nigrispinus (Table 2) was similar to that of this predator fed Spodoptera frugiperda (J. E. Smith, 1797) (21.6 days) or T. molitor pupae (20.4 days) (Oliveira et al., 2004). These results also showed that T. arnobia larvae were an adequate prey to meet the development and reproduction requirements of P. nigrispinus. The nymphal viability of each instar of P. nigrispinus (Table 2) was similar to that found for this predator with caterpillars of B. mori (85 to 96%) from the second to the fifth nymphal stage (Fernandes et al., 1996). The total viability (60%) was also similar to that obtained when this predator was fed with S. frugiperda (64%) or with T. molitor (68%) (Oliveira et al., 2004). The longevity of P. nigrispinus was 35.54 days for females and 43.08 days for males (Table 3). The shorter longevity of P. nigrispinus females than males when fed with T. arnobia caterpillars agreed with that reported for this predator fed with S. frugiperda larvae (39.6 days) (Oliveira et al., 2004). It could be explained by the high energy use of the females for egg formation and oviposition. This occurs due to differential energy allocation between physiological processes, with an increase in the energy demand for egg production and a decrease for maintenance (i.e., longevity) (Sibly and Calow, 1986). The pre-oviposition period per female of P. nigrispinus was 6.10 days (Table 3), similar to the 6.3 days found for this predator with T. arnobia reared on eucalyptus leaves (Holtz et al., 2006) and with A. argillacea caterpillars (Oliveira et al., 2002). The number of eggs per female of P. nigrispinus fed with T. arnobia (314.90) was similar to that found for this predator when fed with A. argillacea caterpillars on the cotton crop (348.10) (Oliveira et al., 2002) and with T. molitor (296.66; 325.00) (Espindula et al., 1996; Oliveira et al., 2004), and higher than that when fed with Diatraea saccharalis (Fabricius, 1794) (Lepidoptera: Crambidae) (97.12) (Vacari et al., 2007) and with T. arnobia (57.0) reared on eucalyptus (Holtz et al., 2006; 2007). The higher number of eggs found for P.
nigrispinus when fed with A. argillacea on the cotton crop, with T. molitor larvae, and with caterpillars of T. arnobia reared on guava leaves suggests that these prey may better meet the nutritional requirements of P. nigrispinus. Egg viability of this predator was 82.65% (Table 3), which was higher than the 56.4% reported by Holtz et al. (2006) for P. nigrispinus fed with T. arnobia reared on eucalyptus leaves. With T. molitor (79.2%) and S. frugiperda (85.2%) (Oliveira et al., 2004), results were similar, showing that T. arnobia from guava leaves provided P. nigrispinus with food of adequate nutritional quality.
Results showed that T. arnobia reared on guava leaves can be an appropriate prey for P. nigrispinus. Holtz et al. (2007) reported that this caterpillar reared on eucalyptus leaves was not a good prey, because these leaves have a specific essence producing secondary substances that might affect stinkbug development. However, Holtz et al. (2003) evaluated the effect of these substances on T. arnobia and found that these compounds did not affect the development of this caterpillar on eucalyptus, but other substances, coming from guava, could affect its development. Santos et al. (2000) and Oliveira et al. (2005) found that T. arnobia had better development on guava leaves, which could be considered the best host for it, while eucalyptus leaves had poor digestibility and nutritional quality, which reduced the development of this caterpillar. For this reason, when T. arnobia was reared on eucalyptus leaves, the larvae probably had lower quality, which in turn affected the development of the predator.
(Table note) Means followed by the same letter in the same line are not different at 5% probability.
Figure 1 - Weight gain (%) of females and males of Podisus nigrispinus in the second, third, fourth and fifth instars (A) and on the first and second days of adulthood (B), fed with Thyrinteina arnobia larvae. 25 ± 3 °C, RH 70 ± 10% and photophase of 12 hours.
Table 1 - Nymph weights (mean ± standard error) in the second, third, fourth and fifth instars and of adults until the third day (mg) of Podisus nigrispinus fed with Thyrinteina arnobia (25 ± 3 °C, RH 70 ± 10% and photophase of 12 hours).
On the spectrum of $\alpha$-rigid maps

It is shown that there exists an $\alpha$-rigid transformation with $\alpha$ less than or equal to $\frac12$ whose spectrum has a Lebesgue component. This answers the question raised by Klemes and Reinhold in \cite{Klemes-Reinhold}. We exhibit also a large class of $\alpha$-rigid transformations with singular spectrum.

Introduction

In his 1980 paper [27], A. Katok showed that interval exchange maps are not mixing. As a consequence of Katok's proof, interval exchange maps are α-rigid. In later work, Veech [43] investigated the spectral properties of interval exchange maps and showed that almost every minimal interval exchange map has simple singular spectrum. Earlier, Oseledets [21] proved that for any interval exchange transformation the maximal spectral multiplicity is bounded above by r − 1, where r is the number of intervals exchanged. Moreover, he constructed the first example of a transformation with continuous spectrum and finite multiplicity greater than 1. Since the example is an exchange of 30 intervals, the maximal spectral multiplicity m satisfies 2 ≤ m ≤ 29. Robinson [35] constructed ergodic interval exchange transformations with arbitrary finite maximal spectral multiplicity. It follows from [28] that all the examples constructed by Oseledets and Robinson have singular spectrum. However, Katok's theorem [27] remains at present the only universal result about the spectrum of interval exchange maps. Following Veech, one may ask if there is some other "universal" spectral property satisfied by interval exchange maps. More precisely, as in [1], one may ask the following question.

Question 1. Does every interval exchange map have singular spectral type?

The answer to the above question is affirmative in the case of three-interval exchange maps. Note that using the result in [17] one may show that every ergodic interval exchange transformation on three intervals has singular spectrum. On the other hand, Klemes and Reinhold [30] proved that for any α ∈ ]0, 1[ there exists an α-rigid rank one transformation with singular spectrum, and asked the following question.

Question 2. Does every α-rigid transformation have singular spectral type?

It is well known that the answer is affirmative if α > 1/2 (Baxter's theorem). Note that if this were true for all α ≤ 1, then the answer to Question 1 would be affirmative too. In this paper we shall prove that this is not the case. More precisely, we prove in section 3 that the Mathew–Nadkarni transformation (which has a Lebesgue component in its spectrum) is 1/2-rigid. We recall that Mathew and Nadkarni introduced this transformation in 1984 in [19] to answer the question, raised by Helson and Parry [18], of whether there exists an ergodic measure preserving transformation which has a Lebesgue component of finite non-zero multiplicity in its spectrum. Helson and Parry also mentioned the problem, attributed to Banach, of whether there exists an ergodic measure preserving transformation on a finite measure space whose spectrum is simple Lebesgue. In [36], Rokhlin mentioned the problem of finding an ergodic measure preserving transformation on a finite measure space whose spectrum is of Lebesgue type with finite multiplicity. Another contribution to the Banach–Rokhlin question is due to M. Queffélec [32], who proved that the spectrum of the Rudin–Shapiro substitution has a Lebesgue component of multiplicity two. It turns out that the Rudin–Shapiro substitution is 1/2-rigid.
In section 4 we will present other examples of dynamical systems arising from substitutions with a Lebesgue component of multiplicity two whose rigidity constant is less than 1/2. It is an easy exercise to prove that α-rigid transformations are not mixing. Since α-rigidity does not imply that the spectrum is singular (as we shall see in section 3), the absence of mixing does not imply that the spectrum is singular. Let us mention that any aperiodic measure-preserving transformation can be realized as an exchange of an infinite number of intervals [3]. Our work on the question of Klemes and Reinhold (Question 2) includes a survey of the results of Dekking–Keane [8] and Queffélec [33]. F. M. Dekking and M. Keane showed that the dynamical systems arising from substitutions are not mixing. The ingredients of their proof can be used to establish that the dynamical systems arising from substitutions are α-rigid. The procedure for computing the constant α will be presented in the last section without proof. Queffélec showed that the substitution which gives rise to the Rudin–Shapiro sequence has a Lebesgue component of multiplicity two together with a discrete component. (Kamae [25] had earlier shown that the correlation measure of the Rudin–Shapiro sequence is Lebesgue.) In section 2 we shall exhibit a large class of α-rigid transformations with singular spectrum. More precisely, we shall prove that any transformation satisfying the restricted Beurling condition has singular spectrum. We say that the transformation T satisfies the restricted Beurling condition if the weak closure $\overline{\{U_T^n : n \in \mathbb{Z}\}}^{W}$ of the powers of $U_T$, where $U_T$ is the operator defined by $U_T f(x) = f(T^{-1}x)$, contains a suitable operator of the form $\sum_{j \in \mathbb{Z}} a_j U_T^j$ (the precise condition is given in section 2). We mention that the α-rigid rank one transformations constructed by Klemes and Reinhold satisfy the condition above. Thus, our result can be considered as a generalization of the Klemes–Reinhold result. There is another motivation for our work. In 1979 D. Rudolph [37] introduced the notion of minimal self-joinings as the foundation for a machinery that yields a wide variety of counterexamples in ergodic theory. But the property of minimal self-joinings is meager with respect to the weak topology on the group of all automorphisms. Later, in 1983, A. del Junco and M. Lemańczyk in [9] showed that these constructions, as well as many others in [37], can be based on a much weaker property that is in fact generic (residual in the weak topology). A special case of this property is the property of κ-mixing (see Katok [26], Stepin [42]), which implies the mutual singularity of the convolution powers of the maximal spectral type of an automorphism T. We remark also that any κ-mixing transformation is (1 − κ)-rigid and, as we show in Lemma 1 below, any non-mixing transformation with minimal self-joinings is α-rigid for some α ∈ ]0, 1[. A positive answer to Question 2 would thus imply that the spectrum of any non-mixing transformation with minimal self-joinings is singular.

Question 3. Does every non-mixing transformation with minimal self-joinings have singular spectral type?

In this paper we are not able to answer Question 3. Nevertheless, we shall exhibit in section 2 a large class of non-mixing transformations with minimal self-joinings with singular spectrum. Recently, some progress was obtained by Prikhod'ko and Ryzhikov in [24]. The authors show that the well-known Chacon transformation [7] possesses this property. It is well known that the Chacon transformation has the minimal self-joining property [11]. This result can be extended easily to the case of staircase transformations with bounded cutting parameter. The latter examples include as a special case the Klemes–Reinhold examples of α-rigid rank one transformations. We recall now some basic facts on spectral theory.
A nice account can be found in the appendix of [22].

1.1. Spectral measures. Given a measure preserving invertible transformation T : X → X and denoting, as above, by $U_T$ the operator $U_T f(x) = f(T^{-1}x)$, for any $f \in L^2(X)$ there exists a positive measure $\sigma_f$ on the unit circle $S^1$ defined by
$$\widehat{\sigma_f}(n) = \int_{S^1} z^{-n} \, d\sigma_f(z) = \langle U_T^n f, f \rangle, \qquad n \in \mathbb{Z}.$$

Definition. The maximal spectral type of T is the equivalence class of Borel measures σ on $S^1$ (under the equivalence relation $\mu_1 \sim \mu_2$ if and only if $\mu_1 \ll \mu_2$ and $\mu_2 \ll \mu_1$) such that $\sigma_f \ll \sigma$ for all $f \in L^2(X)$, and if ν is another measure for which $\sigma_f \ll \nu$ for all $f \in L^2(X)$ then $\sigma \ll \nu$.

By the canonical decomposition of $L^2(X)$ into decreasing cycles with respect to the operator $U_T$, there exists a Borel measure $\sigma = \sigma_f$, for some $f \in L^2(X)$, such that σ is in the equivalence class defining the maximal spectral type of T. By abuse of notation, we will call this measure the maximal spectral type measure. It can be replaced by any other measure in its equivalence class. The reduced maximal spectral type, denoted $\sigma^{(0)}_T$, is the maximal spectral type of $U_T$ restricted to $L^2_0(X)$, the orthocomplement of the constant functions. The spectrum of T is said to be discrete (resp. continuous, resp. singular, resp. absolutely continuous, resp. Lebesgue) if $\sigma^{(0)}_T$ is discrete (resp. continuous, resp. singular, resp. absolutely continuous with respect to the Lebesgue measure, resp. equivalent to the Lebesgue measure). The transformation T is said to be α-rigid, $0 < \alpha \leq 1$, if there exists a sequence of integers $(n_k)$ going to infinity such that
$$\liminf_{k \to \infty} \mu(T^{n_k}A \cap A) \geq \alpha\, \mu(A) \quad \text{for every measurable set } A.$$
The notion of α-rigidity was formulated in 1987 by N. Friedman [14]. Besides this, in 1969 J. Baxter proved [5]

Theorem 4 (Baxter). The spectrum of any α-rigid transformation with α > 1/2 is singular.

Note that this theorem is a consequence of the following result.

Theorem 5 (Ryzhikov [41]). Any α-rigid transformation T with α > 1/2 is spectrally disjoint from any mixing transformation S.

Proof. By assumption, there exists a sequence of integers $(n_i)$ and a Markov operator P such that $(U_T^{n_i})$ converges weakly to $\alpha I + (1-\alpha)P$. Let J be any Markov operator satisfying $U_T J = J U_S$; since S is mixing, $J U_S^{n_i} f$ converges weakly to 0 for $f \in L^2_0(X)$. Then, for any $f \in L^2_0(X)$, we have $\alpha Jf + (1-\alpha)PJf = 0$. Since the norm of P is ≤ 1 and α > 1/2, we have $\|(1-\alpha)P\| \leq 1-\alpha < \alpha$, so the operator $\alpha I + (1-\alpha)P$ is invertible (by the Neumann series). Hence Jf must be 0, and thus J ≡ 0 on $L^2_0(X)$. It follows that T is spectrally disjoint from S, and the proof of the theorem is complete.

Remark 1. The proof above gives more, namely: if the weak closure of the powers of the operator $U_T$ contains any invertible operator, then T is spectrally disjoint from any mixing map. In fact, assume that there exists an invertible operator V in the weak closure of the powers of T, which means that there exists a sequence of integers $(n_i)$ such that
$$\lim_{i \to \infty} \langle U_T^{n_i} f, g \rangle = \langle Vf, g \rangle$$
for any $f, g \in L^2_0(X)$. Let S be any mixing map. By the Lebesgue decomposition of $\sigma^{(0)}_T$ with respect to the maximal spectral type of S we have $\sigma^{(0)}_T = \sigma_a + \sigma_s$, with $\sigma_a$ absolutely continuous and $\sigma_s$ singular with respect to the spectral type of S. Since the spectral type of a mixing map is a Rajchman measure, applying the Riemann–Lebesgue theorem on the spectral subspace corresponding to $\sigma_a$ we get $\langle Vf, g \rangle = \lim_i \langle U_T^{n_i} f, g \rangle = 0$ for all f, g in that subspace; hence Vf = 0 and, since V is invertible, therefore f = 0 and the proof is complete.

A standard example of ergodic 2-joinings comes from the centralizer of T. More precisely, if $S \in C(T)$ then the measure given by
$$\mu_S(A \times B) = \mu(A \cap S^{-1}B)$$
is a 2-joining of T supported on the graph of S. Following [10] and [44], T is called 2-fold simple if each ergodic 2-joining of T is either on the graph of some $S \in C(T)$ or is the product measure µ × µ. It is easy to see that 2-fold simplicity of T implies that C(T) is a group (consider the graph joinings $\mu_S$). Now, for n > 1 and any $S_i \in C(T)$, i = 1, ..., n, the n-joining measure $\mu_{S_1, \dots, S_n}$ is given by
$$\mu_{S_1, \dots, S_n}(A_1 \times \cdots \times A_n) = \mu(S_1^{-1}A_1 \cap \cdots \cap S_n^{-1}A_n).$$
The transformation T is said to be simple if C(T) is a group and, for every n ≥ 2 and every ergodic n-joining λ, the set {1, ..., n} can be split into subsets $s_1, \dots, s_k$ such that each $\lambda|_{\mathcal{B}^{s_i}}$ is off-diagonal and λ is their product [10].
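To make the off-diagonal picture concrete, one can check directly that the graph measure $\mu_S$ above is indeed a self-joining; a short verification in the notation just introduced, using $ST = TS$ and the T-invariance of µ:
\begin{align*}
\mu_S\big(T^{-1}A \times T^{-1}B\big)
  &= \mu\big(T^{-1}A \cap S^{-1}T^{-1}B\big)
   = \mu\big(T^{-1}A \cap T^{-1}S^{-1}B\big) \\
  &= \mu\big(T^{-1}(A \cap S^{-1}B)\big)
   = \mu\big(A \cap S^{-1}B\big)
   = \mu_S(A \times B),
\end{align*}
while $\mu_S(A \times X) = \mu(A)$ and $\mu_S(X \times B) = \mu(S^{-1}B) = \mu(B)$, so both marginals equal µ.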
If T is 2-fold simple and C(T) is trivial, i.e., $C(T) = \{T^i : i \in \mathbb{Z}\}$, then T is said to have the property of minimal self-joinings of order 2 ($T \in MSJ(2)$). Moreover, T is said to have minimal self-joinings of any order if T is simple and C(T) is trivial. The class of MSJ(2) transformations is contained in the class of transformations with finite joining rank. The joining rank of an ergodic transformation T, written jrk(T), is defined as the minimum of $r \in \mathbb{N}$ for which each ergodic r-fold self-joining ν has some pair i < j such that the two-dimensional marginal $\nu|_{\mathcal{B}_i \otimes \mathcal{B}_j}$ is a graph joining. The standard examples of transformations with finite joining rank greater than 2 are given by the powers of any weak mixing transformation T in MSJ(2). In addition, a non-mixing transformation with the minimal self-joining property is α-rigid. Indeed, we have the following

Lemma 1 ([38]). Let T be a non-mixing map in MSJ(2). Then T is α-rigid for some α ∈ ]0, 1[.

Proof. Since the set J(T, T) is compact and T is not mixing, there is a sequence $n_k \to \infty$ such that the sequence of diagonal measures $\Delta_{n_k}$ converges to some measure $\lambda \in J(T, T)$ with $\lambda \neq \mu \times \mu$. Since $T \in MSJ(2)$, by the ergodic decomposition we have
$$\lambda = \beta\, \mu \times \mu + \sum_{i \in \mathbb{Z}} a_i\, \Delta_{T^i},$$
where $a_i \geq 0$, $\beta \geq 0$ and $\sum_{i \in \mathbb{Z}} a_i = 1 - \beta$. Hence, for some i, we have $a_i > 0$. Thus, for the sequence $m_k = n_k - i$, we obtain
$$\liminf_{k} \mu(T^{m_k}A \cap A) \geq a_i\, \mu(A) \quad \text{for every measurable set } A,$$
i.e., T is α-rigid where $\alpha \overset{\mathrm{def}}{=} a_i$, and the proof of the lemma is complete.

Remark on the connection between joinings and Markov operators. It is well known and easy to see that for any joining λ between two dynamical systems (X, A, T, µ) and (Y, B, S, ν) there exists a Markov operator $J : L^2(X, \mu) \to L^2(Y, \nu)$ such that
$$\int_{X \times Y} f(x)\, g(y)\, d\lambda(x, y) = \int_Y Jf(y)\, g(y)\, d\nu(y), \qquad f \in L^2(X, \mu),\ g \in L^2(Y, \nu).$$
Here $J^*$ is the adjoint of J defined by $(Jf, g) = (f, J^*g)$ for $f \in L^2_0(X, \mu)$ and $g \in L^2_0(Y, \nu)$. The correspondence is one-to-one. The systems (X, A, T, µ) and (Y, B, S, ν) are disjoint (in the sense of Furstenberg) if the only such J is trivial, where trivial means $Jf = \int f \, d\mu$ for any $f \in L^2(X, \mu)$. This approach to problems in ergodic theory is developed, for example, in [39]. Using the same approach, Frączek and Lemańczyk [12] show that ergodic α-rigid maps with α > 0 are disjoint from all mixing maps. Indeed, let λ be any ergodic joining between an ergodic α-rigid map T and a mixing map S, and let J be the associated Markov operator. By the α-rigidity of T it is easy to see that there is a sequence of integers $(n_k)_{k \in \mathbb{N}}$ and a Markov operator P such that $(U_T^{n_k})$ converges weakly to $\alpha\,\mathrm{Id} + (1-\alpha)P$. Thus $J U_T^{n_k}$ converges weakly to $\alpha J + (1-\alpha)JP$. Since $J U_T^{n_k} = U_S^{n_k} J$ and S is mixing, we have $\lim_k (U_S^{n_k} Jf, g) = 0$ for each $f \in L^2_0(X, \mu)$ and $g \in L^2_0(Y, \nu)$. As a consequence, $\alpha(Jf, g) + (1-\alpha)(JPf, g) = 0$ for each $f \in L^2_0(X, \mu)$ and $g \in L^2_0(Y, \nu)$. This gives, for all $f \in L^2(X, \mu)$,
$$\alpha\, Jf + (1-\alpha)\, JPf = \int f \, d\mu.$$
Since the dynamical system $(X \times Y, \mathcal{A} \otimes \mathcal{B}, T \times S, \mu \times \nu)$ is ergodic and $JP\,U_T = U_S\,JP$, it follows that the operator $f \in L^2(X, \mu) \mapsto \int f \, d\mu$ is indecomposable (i.e., is an extreme point of the set of Markov operators). Therefore $Jf = \int f \, d\mu$ for each $f \in L^2(X, \mu)$, and the proof is complete.

In the following we shall establish the connection between the old problem of Banach on the existence of dynamical systems with simple Lebesgue spectrum and the MSJ property. For that, we recall the following conjecture.

Conjecture 6 ([22], p. 50). If the reduced maximal spectral type of some transformation with simple spectrum is absolutely continuous, then it is Lebesgue.

Theorem 7. If T is a transformation with finite joining rank and simple Lebesgue spectrum, then Conjecture 6 implies that T is in the class MSJ.

Proof.
By the King–Thouvenot theorem [29], T is an e-extension ($e \in \mathbb{N}^*$) of a power (say p) of some transformation S with the MSJ property. But $S^p$ is a factor of T. It follows that $S^p$ and S have simple Lebesgue spectrum. Since the multiplicity of $S^p$ is p, we have p = 1. Thus T is an e-extension of S. Now, by the classical spectral decomposition of a group extension, we get e = 1, and this concludes the proof of the theorem.

2. Main result: Beurling condition

In this section we introduce the following condition.

Definition. We say that the transformation T satisfies the "Beurling condition" if
$$\overline{\{U_T^n : n \in \mathbb{Z}\}}^{W} \cap \Big\{ \sum_{j \in \mathbb{Z}} a_j U_T^{\,j} \ :\ a_j \neq 0 \text{ for some } j, \text{ with } (a_j) \text{ satisfying the summability condition of Theorem 9 below} \Big\} \neq \emptyset.$$
This definition is inspired by early work of Beurling on a quasi-analytic class of mappings and on the uncertainty principle in harmonic analysis [4], combined with ideas coming from joining theory. The main result of this paper is the following theorem.

Theorem 8. If the transformation T satisfies the Beurling condition, then the spectrum of T is singular.

To prove this theorem we shall need the following lemma. We recall that a measure ν on the circle is called a Rajchman measure if $\lim_{|n| \to \infty} \widehat{\nu}(n) = 0$.

Lemma 2 (Translation lemma). Suppose µ is a probability measure on $S^1$ and $(n_k)_{k \in \mathbb{N}}$ is a sequence of distinct integers. Define $\mu_k$ by $d\mu_k = e^{i n_k \theta}\, d\mu(\theta)$, k = 1, 2, 3, .... If $(\mu_k)_{k \in \mathbb{N}}$ converges in the weak* topology to σ, then σ is singular with respect to any Rajchman measure ν. In fact, if $\mu = \mu_s + \mu_a$ is the Lebesgue decomposition of µ with respect to the Rajchman measure ν, then $|\sigma|(E) \leq \mu_s(E)$ for every Borel set E.

The proof of this lemma is the same as the proof given in [34, p. 66] in the case of Lebesgue measure; the key observation is that $\widehat{\mu_k}(n) = \widehat{\mu}(n - n_k)$, so the Fourier coefficients of the part of µ absolutely continuous with respect to ν are pushed to 0 in the limit. We will also need the following Beurling theorem, proved in [4].

Theorem 9 (Beurling's Theorem). Let f be a function on the torus defined by its Fourier series $f(\theta) = \sum_n a_n e^{in\theta}$, with coefficients $(a_n)$ satisfying Beurling's quasi-analyticity (summability) condition; if f vanishes on a set of positive Lebesgue measure, then f vanishes identically.

The proof of Theorem 8 combines the two statements: along a sequence $(n_k)$ realizing the Beurling condition, the translation lemma identifies the weak* limit of $e^{i n_k \theta}\, d\sigma^{(0)}_T$ as a measure singular with respect to Lebesgue measure, while the limit operator $\sum_j a_j U_T^j$ identifies the same limit as the measure with density $f(\theta) = \sum_j a_j e^{ij\theta}$ against $\sigma^{(0)}_T$. If the absolutely continuous part $\sigma_a$ of $\sigma^{(0)}_T$ were nonzero, f would vanish on a set of positive Lebesgue measure, and Beurling's theorem would force all the $a_j$ to be zero, a contradiction. Hence $\sigma_a = 0$ and the proof of the theorem is complete.

As an immediate consequence of Theorem 8 we obtain the following

Corollary 1. If the transformation T satisfies the restricted Beurling condition — the Beurling condition realized by an operator $\sum_j a_j U_T^j$ with non-negative coefficients — then T is α-rigid with singular spectrum.

Remark 2. Note that we have actually proved that T is spectrally disjoint from any mixing map S provided that condition (2.4) holds. Observe that (2.4) holds if f is an analytic function on the circle.

Applications

Using Theorem 8, we shall give in this section a simple proof of some well-known results on the singularity of the spectrum of some special rank one maps. More precisely, we shall give a simple proof of the singularity of the Chacon maps and of the staircase maps with bounded cutting parameter. Recall that for any ε > 0 one may construct a staircase map with bounded cutting parameter, say p, such that the α-rigidity constant is smaller than ε; in fact, the α-rigidity constant is 1/p. Let us recall the definition of rank one maps and in particular the staircase maps. Using the cutting and stacking method described in [15], [16], one can define inductively a family of measure preserving transformations, called rank one transformations, as follows. Let $B_0$ be the unit interval equipped with the Lebesgue measure. At stage one we divide $B_0$ into $p_0$ equal parts, add spacers and form a stack of height $h_1$ in the usual way. At the k-th stage we divide the stack obtained at the (k−1)-th stage into $p_{k-1}$ equal columns, add spacers and obtain a new stack of height $h_k$.
If during the k-th stage of our construction the number of spacers put above the j-th column of the (k−1)-th stack is $a_j^{(k-1)}$, $1 \leq j \leq p_{k-1}$, then
$$h_k = p_{k-1}\, h_{k-1} + \sum_{j=1}^{p_{k-1}} a_j^{(k-1)}.$$

Figure 1: the k-th tower.

Proceeding in this way, we get a rank one transformation T on a certain measure space (X, B, ν) which may be finite or σ-finite depending on the number of spacers we added. The construction of any rank one transformation thus needs two parameters: $(p_k)_{k=0}^{\infty}$ (cutting and stacking parameter) and $((a_j^{(k)})_{j=1}^{p_k})_{k=0}^{\infty}$ (spacers parameter). In the case of staircase maps the spacers parameter is given by $a_j^{(k)} = j - 1$, $1 \leq j \leq p_k$. The classical Chacon map [7] corresponds to the case $p_k = 3$ for every $k \in \mathbb{N}^*$. It is easy to see that the Chacon map is 1/3-rigid. More generally, it is easy to prove that the staircase with bounded cutting parameter (say p) is 1/p-rigid (in fact, for any measurable set A we have $\liminf_n \mu(T^{h_n}A \cap A) \geq \frac{1}{p}\mu(A)$). The following lemma may be proved using Theorem 8 and Remark 2; nevertheless, for the convenience of the reader, we present a different proof.

Lemma 3. Let T be a weak mixing transformation and assume that there is a sequence of integers $(n_k)$ such that $(U_T^{n_k})_{k \in \mathbb{N}}$ converges weakly to P(T), where P(z) is a nonzero analytic function on the circle. Then T is spectrally disjoint from any mixing transformation S.

Proof (Ryzhikov [41], [12]). Let J be any Markov operator satisfying $U_T J = J U_S$, with S a mixing transformation. Then $P(T)Jf = 0$ for each $f \in L^2_0(X)$. Since the maximal spectral type of T is continuous and P(z) is analytic, we have $\ker P(T) = \{0\}$. Thus Jf = 0 for each $f \in L^2_0(X)$, i.e., J is trivial, and the proof of the lemma is complete.

Corollary 2. The staircase maps with bounded cutting parameter are spectrally disjoint from any mixing maps.

Proof. From the definition of the staircase maps it is easy to prove that the sequence $T^{h_n}$ converges weakly to an operator of the form P(T), with P a nonzero analytic function on the circle, so the corollary follows from Lemma 3.

One may apply Lemma 3 also to the "historical" Chacon map [6], given by $p_k = 2$, $k \in \mathbb{N}^*$, and $a_1 = 1$, $a_2 = 0$. From the construction one may check that the sequence $(T^{h_k})_{k \in \mathbb{N}^*}$ converges weakly to $\sum_{k=0}^{+\infty} \frac{1}{2^{k+1}}\, T^{k}$. Thus it is easy to get from Lemma 3 the following result.

Corollary 3. The historical Chacon map is spectrally disjoint from any mixing map.

Remark. Combining the same methods with the continuous version of Beurling's theorem, we can extend Theorem 8 to flows. Now, applying the results of Frączek–Lemańczyk [12], [13], we can exhibit examples of maps for which the conditions of the theorem are satisfied.

3. Mathew–Nadkarni transformation

In this section we show that α-rigidity alone is not enough to ensure that the spectrum is singular. Indeed, we will show that the Mathew–Nadkarni transformation is an example of a 1/2-rigid transformation whose spectrum has a Lebesgue component. The Mathew–Nadkarni transformation is a two-point extension $T_\phi$ of a transformation with purely discrete spectrum, constructed so that the spectrum of $T_\phi$ has a Lebesgue component of multiplicity two; the proof can be found in [19] or [31]. We will prove the following theorem.

Theorem 11. The Mathew–Nadkarni transformation is 1/2-rigid.

Proof. Recall that the operator $U_{T_\phi}$ acts on $L^2([0,1) \times \mathbb{Z}_2)$, which has a direct sum decomposition $L_0 \oplus L_1$, where $L_0 = \{f \otimes 1 : f \in L^2([0,1))\}$ and $L_1 = \{f \otimes \chi : f \in L^2([0,1))\}$, with $\chi(g) = (-1)^g$ for $g \in \mathbb{Z}_2$. Let A be a Borel set and $\varepsilon \in \mathbb{Z}_2$; then
$$I_{A \times \{\varepsilon\}} = f_1 \otimes 1 + f_2 \otimes \chi,$$
where $f_1 = \frac12 I_A$ and $f_2 = \chi(\varepsilon) f_1$. Let $\sigma_d$ be the discrete part of the spectral measure $\sigma_{I_{A \times \{\varepsilon\}}}$. Then $\sigma_d \geq \sigma_{f_1 \otimes 1}$, a discrete measure of total mass $\|f_1\|^2 = \frac14 \mu(A)$ (since the base transformation has discrete spectrum on $L_0$), so that there exists a sequence of integers $(n_k)$ such that
$$\widehat{\sigma_d}(n_k) \longrightarrow \sigma_d(S^1).$$
Hence, by a standard argument, for any Borel set B in the σ-algebra of [0, 1) and any $\varepsilon \in \mathbb{Z}_2$,
$$\liminf_k \widetilde{\mu}\big(T_\phi^{n_k}(B \times \{\varepsilon\}) \cap (B \times \{\varepsilon\})\big) \geq \frac12\, \widetilde{\mu}(B \times \{\varepsilon\}).$$
This completes the proof of the theorem.

Remark 3. The Mathew–Nadkarni transformation is not weakly mixing, but using Ageev's construction [2] one can produce α-rigid transformations with continuous spectrum and a Lebesgue component of even multiplicity.
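The mechanism behind Theorem 11 can be isolated in a short computation; a sketch, assuming (as in the construction above) that the base transformation is rigid along a sequence $(n_k)$ and that the spectral measure of $U_{T_\phi}$ on $L_1$ is absolutely continuous — both hold for the Mathew–Nadkarni construction:
\begin{align*}
\widetilde{\mu}\big(T_\phi^{n_k}(A \times \{\varepsilon\}) \cap (A \times \{\varepsilon\})\big)
 &= \big\langle U_{T_\phi}^{n_k}(f_1 \otimes 1),\ f_1 \otimes 1 \big\rangle
  + \big\langle U_{T_\phi}^{n_k}(f_2 \otimes \chi),\ f_2 \otimes \chi \big\rangle \\
 &\longrightarrow\ \|f_1\|^2 + 0
  \;=\; \tfrac14\, \mu(A)
  \;=\; \tfrac12\, \widetilde{\mu}(A \times \{\varepsilon\}),
\end{align*}
since the cross terms vanish ($L_0 \perp L_1$ are $U_{T_\phi}$-invariant), $U_{T_\phi}^{n_k} \to I$ on $L_0$ by rigidity of the base, and the $L_1$ term tends to 0 by the Riemann–Lebesgue theorem. This is exactly 1/2-rigidity on sets of the form $A \times \{\varepsilon\}$, and such sets generate the σ-algebra.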
Let us remark also that the Mathew–Nadkarni construction contains a continuum of pairwise non-isomorphic dynamical systems [31]. It follows that one can produce a continuum of α-rigid transformations with 2-fold Lebesgue spectrum. V. V. Ryzhikov told us that Ageev had obtained α-rigid maps with Lebesgue spectrum using the examples of Helson–Parry; this result has never been published. Ryzhikov also communicated to us that Ageev used the same assertion, as was pointed out to us also by M. Lemańczyk: if T is rigid and the cocycle φ : X → Z₂ gives Lebesgue spectrum for $T_\phi$ (such a φ exists over each rigid map by Helson–Parry [18]), then $T_\phi$ is 1/2-rigid.

4. Substitution examples

In this section we review some examples of α-rigid transformations whose spectrum has a Lebesgue component. In particular, we will give an example with 0 < α ≪ 1/2 coming from substitution theory. A vast literature is devoted to substitutions, whose Bible is [33]. They appear as symbolic systems defined on a finite alphabet A = {0, 1, ..., k−1}; a substitution ξ is a mapping from A to the set A* of all finite words on A. It extends naturally to a morphism of A* by concatenation. We restrict ourselves to the case when ξ(0) begins with 0 and the length of $\xi^n(0)$ tends to infinity with n. The infinite sequence u beginning with $\xi^n(0)$ for all n is then called a fixed point of ξ, and the symbolic system associated to u is called the dynamical system associated to ξ. When ξ is primitive (i.e., there exists n such that a appears in $\xi^n(b)$ for all a, b ∈ A), the system is uniquely ergodic, and we can consider the measure-preserving system associated to ξ, to which we refer in short as "the substitution ξ". The composition matrix M of ξ is the matrix whose entries are $\ell_{ij} = O_i(\xi(j))$, where i, j ∈ A and $O_i(\xi(j))$ is the number of occurrences of i in ξ(j). If ξ is primitive, it follows from the Perron–Frobenius theorem that M admits a strictly positive simple eigenvalue θ such that θ > |λ| for any other eigenvalue λ, and there exists a strictly positive eigenvector corresponding to θ. It is easy to see (see [33]) that, for any a ∈ A, the sequence of k-dimensional vectors
$$\Big( \frac{O_0(\xi^n(a))}{\theta^n}, \cdots, \frac{O_{k-1}(\xi^n(a))}{\theta^n} \Big)$$
converges to a strictly positive eigenvector v(a) corresponding to θ. A classical result on primitive substitutions is the Dekking–Keane theorem [8]: the dynamical system arising from a primitive substitution is not mixing. Their proof, with the above notations, contains the following.

Theorem 12. The dynamical system arising from a primitive substitution ξ is rρ-rigid. Here r is the maximum of the measures of the cylinder sets [aa], a ∈ A, and ρ is the ℓ¹-norm of the strictly positive eigenvector $v(a_r)$ corresponding to the Perron–Frobenius eigenvalue θ of ξ, where $a_r$ is a letter for which the measure of $[a_r a_r]$ is r.

The constant r is straightforward to compute. In fact, let $\xi_2$ be the substitution defined on the alphabet $A_2 = \{(ab) : a, b \in A\}$ in the following way: if $\xi(ab) = \xi(a)\xi(b) = y_0 y_1 y_2 y_3$, then we set $\xi_2(ab) = (y_0 y_1)(y_1 y_2)$. Now it is easy to compute the normalized positive eigenvector corresponding to the dominant eigenvalue of the composition matrix $M_2$ of $\xi_2$; its entries give the measures of the 2-cylinders. M. Queffélec in [32] shows that the continuous part of the Rudin–Shapiro dynamical system is Lebesgue with multiplicity 2, and it is easy to prove that the Rudin–Shapiro substitution is 1/2-rigid.
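For orientation, the Perron–Frobenius data of the Rudin–Shapiro substitution itself are easy to write down; a small worked example, assuming the standard four-letter presentation $\xi(a) = ab$, $\xi(b) = ac$, $\xi(c) = db$, $\xi(d) = dc$:
$$M = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{pmatrix}, \qquad
M \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix},$$
so θ = 2 (each column of M sums to 2, since each ξ-image has length 2) and the normalized Perron eigenvector is $(\frac14, \frac14, \frac14, \frac14)$: each letter occurs with frequency 1/4 in the fixed point.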
The example announced above, with rigidity constant much smaller than 1/2, is a substitution ξ which has Lebesgue spectrum with multiplicity 2 in the orthocomplement of the eigenfunctions [33, p. 221]. We point out that one may use a standard computer program to compute the constant of α-rigidity of ξ, which is approximately equal to 0.3104979673 × 10⁻⁷.
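As an illustration of such a computation, here is a minimal sketch in Python of the procedure behind Theorem 12: estimating the 2-cylinder measures from a long prefix of the fixed point (unique ergodicity) and computing ρ by power iteration on the composition matrix. Since the substitution ξ with constant ≈ 0.31 × 10⁻⁷ is not reproduced in this excerpt, the code runs on the Thue–Morse substitution as a stand-in; the choice of substitution and the iteration counts are assumptions of the sketch, not data from the text.

```python
import numpy as np
from collections import Counter

# Stand-in primitive substitution (Thue-Morse); replace with the substitution
# of interest to reproduce a specific rigidity constant.
xi = {0: [0, 1], 1: [1, 0]}
letters = sorted(xi)

# Composition matrix M: entry (i, j) = number of occurrences of i in xi(j).
M = np.array([[xi[j].count(i) for j in letters] for i in letters], dtype=float)
theta = max(abs(np.linalg.eigvals(M)))  # Perron-Frobenius eigenvalue

# 2-block frequencies along a long prefix of the fixed point; by unique
# ergodicity these approximate the measures of the 2-cylinders [xy].
word = [0]
for _ in range(20):
    word = [c for letter in word for c in xi[letter]]
counts = Counter(zip(word, word[1:]))
total = len(word) - 1
r, a_r = max((counts.get((a, a), 0) / total, a) for a in letters)

# v(a_r) = lim_n M^n e_{a_r} / theta^n, by power iteration; rho = its l1-norm.
w = np.eye(len(letters))[a_r]
for _ in range(200):
    w = M @ w / theta
rho = w.sum()  # entries are non-negative, so the sum is the l1-norm

print(f"theta = {theta:.4f}, r = mu[{a_r}{a_r}] ~ {r:.6f}, rho = {rho:.4f}")
print(f"Dekking-Keane rigidity constant r*rho ~ {r * rho:.6f}")
```

For Thue–Morse this prints r ≈ 1/6 and ρ = 1, so the bound of Theorem 12 gives a 1/6-rigidity constant; substituting a different primitive substitution only requires changing the dictionary `xi`.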
Criterion and construct validity of the Beck Depression Inventory (BDI-II) to measure depression in patients with cancer: The contribution of somatic items

Background/Objective: Screening for depression in patients with cancer can be difficult due to overlap between symptoms of depression and cancer. We assessed the validity of the Beck Depression Inventory (BDI-II) in this population.
Method: Data were obtained in an outpatient neuropsychiatry unit treating patients with and without cancer. Psychometric properties of the BDI-II Portuguese version were assessed separately in 202 patients with cancer and 376 outpatients with mental health complaints but without cancer.
Results: Confirmatory factor analysis suggested that a three-factor structure model (cognitive, affective and somatic) provided the best fit to the data in both samples. Criterion validity was good for detecting depression in oncological patients, with an area under the ROC curve (AUC) of 0.85 (95% confidence interval [CI], 0.76–0.91). A cut-off score of 14 had a sensitivity of 87% and a specificity of 73%. Excluding somatic items did not significantly change the ROC curve for the BDI-II (difference in AUCs = 0.002, p = 0.9). Good criterion validity for the BDI-II was also obtained in the non-oncological population (AUC = 0.87; 95% CI 0.81–0.91), with a cut-off of 18 (sensitivity = 84%; specificity = 73%).
Conclusions: The BDI-II demonstrated good psychometric properties in patients with cancer, comparable to a population without cancer. Exclusion of somatic items did not affect screening accuracy.

Introduction

Patients with cancer frequently experience symptoms of depression, which can negatively affect long-term quality of life, treatment compliance, health service use, and mortality (Andersen et al., 2014; Chida et al., 2008). The reported prevalence of depression in patients with cancer varies according to the type and clinical characteristics of cancer, the conceptualization of depression, and the criteria and methods that are used for diagnosis (Massie, 2004). While prevalence over the first five years following a cancer diagnosis (Mitchell et al., 2011; Pitman et al., 2018) may range from 4% to 20%, depression remains under-diagnosed and is often left untreated in patients with cancer (Walker et al., 2014), calling for an urgent identification of appropriate screening and assessment tools for use in routine clinical practice in this field. To address this need, the National Comprehensive Cancer Network (NCCN) (National Institute for Clinical Excellence et al., 2004) and the American Society of Clinical Oncology (ASCO) (Andersen et al., 2014) have published guidelines emphasizing the importance of formally assessing depressive symptoms regularly across the trajectory of care. These recommendations highlight the use of standardized measures, validated for oncological populations, with several depression assessment tools proving to be effective in this context. The validation of self-reported measures of depression is an important contribution to this field. When used appropriately, such instruments are a cost-effective and equitable means of identifying depressive symptoms, far less time- and resource-consuming than structured interviews (Vodermaier et al., 2009; Wakefield et al., 2015). Additionally, the selection of self-reported measures should be based on existing validation data in the population of interest (Ziegler et al., 2011).
The most often used and recommended questionnaires for the oncological setting are the Hospital Anxiety and Depression Scale (HADS) (Zigmond & Snaith, 1983), the Patient Health Questionnaire (PHQ-9) (Kroenke et al., 2001), the Beck Depression Inventory (BDI-II) (Beck et al., 1996) and the Center for Epidemiologic Studies Depression Scale (CES-D) (Radloff, 1977). However, diagnosing depression in patients with cancer can be particularly challenging, as many symptoms of depression overlap with cancer-related symptoms and/or treatment side effects. Furthermore, some symptoms may actually represent a normative response when the patient is confronted with threats to life or physical integrity, bad news, aggressive treatments, and/or pain (Ha et al., 2019; Massie, 2004). The BDI-II is one of the most widely used self-report measures of depressive symptom severity. Validation studies have shown good to excellent psychometric properties across populations (Wang & Gorenstein, 2013). Severity cut-off scores were originally provided by Beck (Beck et al., 1996), allowing one to distinguish between minimal (0 to 13), mild (14 to 19), moderate (20 to 28) and severe (29 and greater) depression. Importantly, the BDI-II was developed in accordance with the depression diagnostic criteria defined in the DSM-IV, which recognizes the wide-ranging nature of depressive symptoms, generally categorized as cognitive, affective and somatic (American Psychiatric Association, 2000). However, and possibly due to the original intent of the BDI-II to measure depression globally, findings concerning its factor structure have been somewhat inconsistent. Several studies have examined the dimensionality of the BDI-II in a variety of samples, trying to replicate the structure proposed by the authors of the scale or proposing other, novel structures (Huang & Chen, 2015). While Beck et al. (1996) originally suggested a two-factor correlated model comprising cognitive and somatic-affective factors, at least two studies identified a single BDI-II factor, in accordance with the scoring instructions for the scale (Kim et al., 2002; Segal et al., 2008). Although Beck et al. (1996) reported an alternative two-factor model consisting of cognitive-affective and somatic factors, it was only developed because the first was not suitable for a student sample. On the other hand, the original two-factor model proposed by Beck et al. (1996) in a clinical outpatient sample, with cognitive and somatic-affective factors, has received support from other studies conducted with patients with physical illness (e.g. Arnau et al., 2001; Brown et al., 2012; Kojima et al., 2002; Viljoen et al., 2003). A three-factor model has also been suggested, including cognitive, affective and somatic factors (Beck et al., 2002). Clearly, much uncertainty remains regarding the latent structure of the BDI-II, which can be partially explained by the fact that the organization of items may vary according to the characteristics of the sample (Beck et al., 1996). This is particularly true when it comes to specific clinical populations, such as cancer patients and other vulnerable groups. In fact, even though the BDI-II was originally developed for use in the psychiatric setting, its use rapidly expanded to other contexts, including oncology. Specific concerns have been raised in the literature about the performance characteristics of the BDI-II in patients with cancer, since almost half of its items assess somatic symptoms.
For instance, a study using a sample of hospitalized oncological patients showed that the BDI-II is highly saturated with items describing somatic complaints, suggesting that, in this particular population, scores on these items may reflect the intensity of cancer-related somatic symptoms rather than depression symptoms (Wedding et al., 2007). On the other hand, several studies demonstrated that the BDI-II is able to accurately identify depression in a variety of samples of patients with cancer (Mitchell et al., 2012), with excellent internal consistency, good temporal validity and convergent validity with HADS-Depression (Mystakidou et al., 2007; Tobias et al., 2017). Studies that assessed the criterion validity of the BDI-II in oncological populations (Hopko et al., 2007; Katz et al., 2004; Warmenhoven et al., 2012) all found good to excellent sensitivity and specificity values for the BDI-II total score, but proposed different cut-off scores for the diagnosis of depressive disorders depending on the sample type. For instance, a cut-off score of 13 was proposed for patients with head and neck cancer (n = 60) (Katz et al., 2004); 16 for patients with advanced metastatic cancer (n = 46) (Warmenhoven et al., 2012); and 14 in a study with a heterogeneous, but smaller, sample of cancer types (n = 33) (Hopko et al., 2007). Data regarding the construct and criterion validity of the BDI-II in oncological populations, in comparison with samples without a cancer diagnosis, are thus lacking. Such a study would allow a more specific and detailed investigation of the differential contribution of somatic items to the validity of the BDI-II in patients with cancer. In the present study we validated the BDI-II for oncologic and psychiatric populations, assessing the latent structure of the BDI-II and how somatic items influence its screening accuracy in identifying depression in the oncological setting.

Method

Procedures and participants

Study procedures were reviewed and approved by the Champalimaud Foundation Ethics Committee, in Lisbon, Portugal. Data were collected between April 2013 and December 2019 during routine clinical visits to the outpatient neuropsychiatry clinic of the Champalimaud Clinical Center. Written informed consent was obtained from participants in accordance with the Declaration of Helsinki. The routine clinical protocol at admission comprised a battery of self-reported, pen-and-paper assessment instruments, completed by participants while waiting for a Psychology or Psychiatry appointment. Screening of affective symptoms was followed by a clinical assessment by a psychiatrist or a clinical interview with a psychologist. Patients were eligible for participation if they were at least 18 years of age; exclusion criteria for both samples included: dementia; illiteracy or inability to understand the study instructions; clinically significant focal structural lesion of the central nervous system; history or clinical evidence of chronic psychosis; acute episode of neuropsychiatric disease requiring hospitalization; and current abuse of or dependence on drugs or alcohol. Participants were then categorized into two groups: 1) confirmed diagnosis of cancer, with active disease and/or under any oncological treatment; and 2) no past or current diagnosis of cancer.

Measures

Sociodemographic information and exclusion criteria were assessed with structured questionnaires. Details on medical data, including cancer diagnosis and cancer characteristics, were retrieved from electronic clinical records.
Depressive symptoms were evaluated with the Portuguese version of the BDI-II (Campos & Gonçalves, 2011), a 21-item self-report questionnaire that assesses the severity of symptoms of depression occurring in the previous 15 days. Each item inquires about a symptom and provides four response statements, graded from 0 to 3 according to the severity of the symptom. The total score ranges from 0 to 63 and reflects the sum of the scores of all items. The BDI-II was validated for the Portuguese population in 2011 with two non-clinical samples: a community sample and a college student sample. The validation studies showed good internal consistency values (0.90 < α < 0.91), adequate convergent validity with the Center for Epidemiologic Studies Depression Scale (CES-D) (Radloff, 1977), and a two-factor structure consisting of Cognitive-Affective and Somatic factors (Campos & Gonçalves, 2011). The BDI-II was administered in a paper-and-pen format, filled in by patients directly on the protocol sheet, without the intervention of the clinician. In the subset of patients who had a Psychology appointment we also applied the MINI (Sheehan et al., 1998), a structured psychiatric interview based on DSM-IV diagnostic criteria, comprising modules for 15 psychiatric diagnoses or conditions, namely: major depressive disorder; dysthymia; suicidality; hypomanic or manic episode; panic disorder; agoraphobia; social phobia; obsessive-compulsive disorder; post-traumatic stress disorder; alcohol abuse or dependence; substance abuse or dependence; psychotic disorders; anorexia nervosa; bulimia nervosa; and generalized anxiety disorder. For this study we used a European Portuguese adaptation of the Brazilian Portuguese version of the MINI 5.0.0 (Amorim, 2000) to discriminate between participants with or without depression, with the purpose of assessing criterion validity. The MINI was chosen as the diagnostic standard for the validation process in order to avoid burdening patients with time-consuming measures, as previous studies demonstrated a shorter administration time compared with other interviews, while maintaining similarly good psychometric properties (Amorim, 2000). The MINI was only applied to patients who had been referred to a first-time clinical psychology session, where a brief psychological assessment by the psychologist is routinely performed. The remaining sample was referred to a first-time psychiatric consultation. In both cases (psychology or psychiatry appointments) patients filled in the BDI-II before starting the consultation.
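For readers who want to operationalise the scoring rules above, the following minimal Python sketch (our own illustration, not part of the study protocol; the function names are arbitrary) sums the 21 items and assigns Beck's original severity bands; the final comparison applies the cut-off of 14 proposed below for patients with cancer, here taken as score >= 14:

from typing import Sequence

SEVERITY_BANDS = [          # Beck et al. (1996) severity ranges
    (0, 13, "minimal"),
    (14, 19, "mild"),
    (20, 28, "moderate"),
    (29, 63, "severe"),
]

def bdi_total(item_scores: Sequence[int]) -> int:
    """Sum the 21 items, each rated 0-3; the total ranges from 0 to 63."""
    if len(item_scores) != 21 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("expected 21 item scores, each in 0..3")
    return sum(item_scores)

def severity(total: int) -> str:
    for lo, hi, label in SEVERITY_BANDS:
        if lo <= total <= hi:
            return label
    raise ValueError("total out of range 0..63")

scores = [1] * 14 + [0] * 7      # hypothetical respondent
total = bdi_total(scores)        # 14
print(total, severity(total), total >= 14)   # 14 mild True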
Statistical analysis

Statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS Version 26.0; IBM SPSS, Inc., Chicago, IL). All analyses were two-tailed, with p < 0.05 considered significant. Descriptive statistics were used to characterize the sample and psychometric data, including means and standard deviations, minimum and maximum absolute values, and percentages (for categorical data). Independent-samples t-tests were performed to compare age, education and BDI-II scores across groups, and chi-square (χ²) analysis for comparisons of gender. Several psychometric properties of the BDI-II were assessed. A confirmatory factor analysis was conducted using the structural equation modelling statistics package AMOS 26.0 (SPSS AMOS, Version 26; IBM SPSS, Inc., Chicago, IL, USA) to verify whether the three theory-driven factor models (Table 1) presented an adequate fit to the study sample data. Model 1, a one-factor model, is based on a global construct of depression, supporting the use of the BDI-II total score. Model 2, a two-factor model (Beck et al., 1996), divides symptoms into two factors: Cognitive₂ and Somatic-Affective. Finally, Model 3 represents a three-factor structure, with Cognitive₃, Affective and Somatic items as independent factors (Beck et al., 2002). To evaluate the goodness of fit of the tested factorial structures, we considered the following indices: χ²/df (the ratio of chi-square to degrees of freedom), the CFI (comparative fit index), the TLI (Tucker-Lewis index) and the RMSEA (root mean square error of approximation). Model fit was considered good for χ²/df < 3 (Arbuckle, 2009; Wheaton, 1987), CFI and TLI values above 0.95 (Bentler, 1990; Bentler & Bonett, 1980) and RMSEA values below 0.06 (Hu & Bentler, 1999; Marôco, 2014). Cronbach's alpha was used to measure the internal consistency of the BDI-II total scale and of each BDI-II subscale, depending on the theoretical model. To assess criterion validity, receiver operating characteristic (ROC) curves were calculated for the BDI-II total score and for the subscales of Models 2 and 3 (Table 1). Such curves plot the sensitivity and specificity of the scales for every possible cut-off point against the reference criterion, which for this study was the diagnosis of a depressive disorder according to the MINI. The area under the ROC curve (AUC) is a global measure of diagnostic accuracy, with a larger AUC indicating better accuracy. To guide interpretation, we considered AUC values of ≥0.9 as very good, ≥0.8 as good and ≥0.7 as fair (Rice & Harris, 2005). Optimal diagnostic cut-off scores were calculated and selected based on the highest Youden index (sensitivity + specificity - 1) (Hughes, 2015), which maximizes sensitivity (the probability that individuals with depression are correctly identified by the scale) and specificity (the probability that individuals without depression are correctly excluded by the scale). Based on the same method, positive predictive value (PPV), negative predictive value (NPV) and accuracy were also calculated to examine the BDI-II's predictive value regarding the diagnosis of depression (Trevethan, 2017). These analyses were performed using MedCalc (Version 19.0; MedCalc Software, Ostend, Belgium).
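As an illustration of the cut-off selection procedure just described, the sketch below (Python; synthetic scores and labels, not study data) evaluates every candidate cut-off and keeps the one maximising the Youden index J = sensitivity + specificity - 1:

import numpy as np

def youden_optimal_cutoff(scores, depressed):
    """Treat score >= cutoff as a positive screen; `depressed` holds 0/1 labels."""
    best = (None, -1.0, 0.0, 0.0)   # (cutoff, J, sensitivity, specificity)
    for c in np.unique(scores):
        positive = scores >= c
        sens = np.mean(positive[depressed == 1])
        spec = np.mean(~positive[depressed == 0])
        j = sens + spec - 1
        if j > best[1]:
            best = (c, j, sens, spec)
    return best

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 300)
raw = np.where(labels == 1, rng.normal(22, 8, 300), rng.normal(10, 6, 300))
scores = np.clip(np.round(raw), 0, 63)
print(youden_optimal_cutoff(scores, labels))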
Results

Sample characteristics

Sociodemographic and clinical characteristics of the samples are shown in Table 2. A total of 210 patients with a cancer diagnosis (PC), referred for a Psychology or a Psychiatry appointment, were included. BDI-II scores were also collected from 376 community-dwelling patients with no current or previous cancer diagnosis (non-PC) at their first Psychiatry or Psychology appointment. Comparisons between groups revealed that the PC sample was older and comprised more females than the non-PC group. No statistically significant differences were found for education. Comparisons between groups regarding the BDI-II total score and the scores of each dimension from our tested models are presented in Table 3. When we compared the BDI-II subscales from the two- and three-factor models across groups, we found that the Somatic dimension of Model 3 did not differ significantly between groups, in contrast to the remaining subscales and the BDI-II total score. The Cognitive dimension of both Model 2 and Model 3, the Somatic-Affective dimension of Model 2, as well as the Affective dimension of Model 3, had lower scores in the PC sample.

Table 2 Sociodemographic and clinical data from each sample. Mean and standard deviation for all variables, except for gender (presented as percentage of males). Differences were tested using chi-square for gender and independent-samples t-tests for the other variables (p-values). Note. Tumor sites summarized as "not specified" and tumor stages marked "unknown" refer to patients for whom that information was not available in their clinical files. Non-PC = patients without a cancer diagnosis; PC = patients with a cancer diagnosis.

Psychometric properties - dimensionality

Based on previous research on the BDI-II, we performed confirmatory factor analyses (CFA) to assess fit indices for the one-factor, two-factor and three-factor solutions for both the PC and non-PC samples, as shown in Table 4. The CFA results suggested that Model 3 (Cognitive₃, Affective and Somatic factors) has a good, and better, fit to the PC sample data than the two other models (χ²/df = 1.81, p < 0.001; CFI = 0.91; TLI = 0.89; RMSEA = 0.05). As shown in Fig. 1, the loadings for the items included in the Cognitive₃ subscale ranged from 0.50 (Failure) to 0.77 (Disconformity with oneself), while for the Affective subscale they ranged from 0.47 (Suicidal thoughts) to 0.84 (Loss of interest), and items in the Somatic subscale ranged between 0.37 (Fatigue) and 0.74 (Loss of energy). As for the non-PC sample, Model 3 was also an adequate fit to the data (χ²/df = 1.81, p < 0.001; CFI = 0.91; TLI = 0.89; RMSEA = 0.04), although the two remaining models also showed adequate fit values (Table 4). These results confirm that the latent structure of the BDI-II was similar across groups, with three specific factors (cognitive, affective and somatic) providing the best fit to the data.

Psychometric properties - internal consistency

Internal consistency of the BDI-II scores and sub-scores was then estimated using Cronbach's alpha for the three proposed models. A Cronbach's alpha of 0.91 was obtained for the BDI-II total score (one-factor model) in both the PC sample and the non-PC sample. This value indicates excellent internal consistency of the BDI-II total scale, with slightly lower values, as expected, for each subscale of the two-factor and three-factor models, with Cronbach's alpha ranging from 0.79 to 0.86 in the PC and non-PC samples (Table 3).
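For reference, Cronbach's alpha as used above can be computed directly from an item-score matrix via alpha = k/(k - 1) * (1 - sum of item variances / variance of the total score); the sketch below (Python, synthetic data rather than study data) illustrates this:

import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(1.5, 1.0, size=(500, 1))    # shared severity factor
items = np.clip(np.round(latent + rng.normal(0.0, 0.8, size=(500, 21))), 0, 3)
print(round(cronbach_alpha(items), 2))          # high alpha for strongly correlated items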
Psychometric properties - criterion validity

Ninety-four PC and 202 non-PC participants (42.9% and 54.7% of the total sample, respectively) completed the MINI psychiatric interview. Sociodemographic characteristics of this subsample are presented in Table 5. Among them, 48.6% and 59.9%, respectively, met diagnostic criteria for a current major depressive episode/disorder or dysthymia, which we will jointly designate as depressive disorders, and were included as such in all subsequent analyses. The two groups differed significantly in the severity of depression symptoms, with higher BDI-II scores in the non-PC sample reflecting the higher prevalence of depression in this group (Table 5). To assess criterion validity, we created receiver operating characteristic (ROC) curves using the MINI diagnoses as the discriminator between participants with and without depressive disorders among patients with a diagnosis of cancer (n = 53 and n = 41, respectively). An area under the curve (AUC) of 0.85 (95% confidence interval [95% CI]: 0.76, 0.91) was obtained for the PC sample when using the BDI-II total scale (Model 1). Further analysis of the ROC curve showed that scores above 14 points correctly identified depressive disorders with a sensitivity of 87% and a specificity of 73%. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) based on the optimal cut-off scores for maximum accuracy of each BDI-II factor structure are further described in Table 6. To compare the discriminatory capacity of the BDI-II total scale in the two samples, a pairwise comparison of ROC curves between the PC and non-PC samples was performed. Although the optimal cut-off in the non-PC sample (>18) differed from that in the PC population, the AUC of the non-PC ROC curve (0.87; 95% CI: 0.81, 0.91) was similar, and we did not find statistically significant differences between the ROC curves of the two groups (difference between areas [DBA] = 0.02, p = 0.7; Fig. 2a). These results suggest that the accuracy of the BDI-II total scale in detecting depressive disorders is similar in patients with cancer and in patients with psychiatric disorders without cancer, for whom the instrument was originally developed. Since Model 3 had a slightly better fit to our study sample data than Model 1, and in order to assess whether the somatic dimension of the BDI-II influences its criterion validity, the same analyses were repeated using a partial BDI-II score that excluded the items of the Somatic dimension of Model 3. For patients with cancer, the AUC of the ROC curve for the partial score was similar (0.85; 95% CI: 0.76, 0.92) to that of the BDI-II full score, with a cut-off of 4 achieving the highest combination of sensitivity (89%) and specificity (71%) for the diagnosis of depressive disorders. In the psychiatric sample, on the other hand, the AUC of the ROC curve for the partial score was 0.85, with a cut-off of 11 achieving the highest combination of sensitivity (77%) and specificity (76%). Importantly, the ROC curves of the partial score for the two populations were again similar (DBA = 0.002, p = 0.9), further demonstrating that somatic items do not impair the criterion validity of the BDI-II in patients with cancer (Fig. 2b).

Table 4 Fit indices (χ², df, χ²/df, p, CFI, TLI, RMSEA) of the one-factor, two-factor (Beck et al., 1996) and three-factor (Beck et al., 2002) models in the PC and non-PC (n = 376) samples. Non-PC = patients without a cancer diagnosis; PC = patients with a cancer diagnosis; CFI = comparative fit index; df = degrees of freedom; RMSEA = root mean square error of approximation; TLI = Tucker-Lewis index; χ² = chi-square; χ²/df = relative chi-square.

Fig. 1 Confirmatory factor analysis of the BDI-II three-factor model (Beck et al., 2002) in the oncological sample, with standardized parameter estimates and measurement errors.

Discussion

The purpose of this study was to validate the Portuguese version of the BDI-II for patients with cancer and, in particular, to assess the contribution of somatic items to its diagnostic accuracy. We demonstrate that the BDI-II is a valid measure to screen for and assess depressive disorders in this population, with reliability, construct validity and criterion validity comparable to what we found in patients with psychiatric disorders but no cancer. Furthermore, we found that scores on somatic items do not decrease the diagnostic accuracy of the BDI-II in patients with cancer.
Based on the fact that several somatic symptoms of depression are also commonly reported by patients with cancer, we performed confirmatory factor analyses (CFA) of the unidimensional model of the BDI-II, as well as of the two-factor (Beck et al., 1996) and three-factor (Beck et al., 2002) solutions. The three-factor model, consisting of cognitive, affective and somatic dimensions, had the best fit to the data collected in patients with cancer. Subsequent reliability analyses of the BDI-II total score and subscale scores in both the oncological and non-oncological samples demonstrated adequate to good internal consistency of the BDI-II subscales, although excellent internal consistency became evident when we used the BDI-II total score. These findings are consistent with reliability estimates reported in other studies conducted with clinical populations with other physical diseases (Wang & Gorenstein, 2013) and in particular with cancer (Mystakidou et al., 2007; Tobias et al., 2017). While internal consistency was lowest for the somatic dimension of the three-factor model, this was not specific to patients with cancer and also occurred in the control population. We also found that patients with cancer, when compared with the control sample, had significantly lower BDI-II total scores, mainly due to lower scores on the two dimensions that do not include somatic items, i.e. the cognitive and affective dimensions. Given the degree of physical burden associated with oncological disease, it may seem surprising that patients with cancer did not have substantially higher somatic symptom scores than psychiatric outpatients. In fact, other studies found that in cancer patients BDI-II scores were more saturated in somatic items when compared with non-somatic items, concluding that the BDI-II may be inadequate to screen for depression in patients with cancer (Jakšić et al., 2013; Tobias et al., 2017; Wedding et al., 2007). While it is not clear what underlies the absence of differences between patients with and without cancer regarding somatic items in our study, ROC analyses showed that somatic items do not compromise the screening accuracy of the BDI-II. Analysis of the BDI-II using only the cognitive and affective dimensions yielded AUC values similar to those obtained with the BDI-II total scale, with no significant loss of sensitivity, specificity, PPV or NPV, thus showing that somatic items do not compromise the BDI-II's criterion validity. Despite the inconsistencies of the existing literature on the BDI-II factor structure, the results of our factor analyses are in line with those previously reported for patients with cancer (Jakšić et al., 2013; Tobias et al., 2017). In fact, our results showed factorial similarity across groups for all models tested, with the three-factor model showing the best fit to the data in both groups. This three-factor model of the BDI-II has important pragmatic advantages. First, specific results from each of the three BDI-II factors may help to determine the specific nature of each patient's symptom profile. Second, it may facilitate targeted interventions across time, for instance cognitive therapy for a depressive disorder predominantly characterized by cognitive symptoms.

Table 3 (excerpt) Minimum, maximum and mean (SD) BDI-II subscale scores by group: Somatic-Affective (Model 2): PC range 0-34, mean 14.4 (7.1); non-PC range 0-34, mean 16.2 (7.4); p = 0.01. Cognitive₃ (Model 3): PC range 0-19, mean 4.7 (4.4); non-PC range 0-20, mean 6.9 (4.9); p < 0.001.
Finally, it may usefully guide the choice of optimal treatment at the individual level, since distinct symptoms of depression have been shown to respond differently to different treatments (Mallinckrodt et al., 2007; Paul et al., 2019). Nevertheless, it remains appropriate to use a global BDI-II score, not only for screening for depressive disorders but also to assess severity and monitor response to treatment. This is consistent with the original development of the BDI-II as a measure of the global construct of depression (Brouwer et al., 2013), and also with our results of enhanced reliability of the global score. A critical finding of this study was the confirmation that the BDI-II total scale accurately identifies depressive spectrum disorders in patients with cancer. To the best of our knowledge, this is the first study to suggest a specific BDI-II cut-off score for identifying depression in patients with cancer in general, including patients with diverse types of cancer at various stages. According to our findings, based on a structured psychiatric interview as the gold standard, an optimal cut-off value of 14 has good sensitivity and adequate specificity, with an 80.7% probability that patients with scores above that cut-off have a depressive spectrum disorder. Sensitivity and positive predictive value (Trevethan, 2017) further support the use of the BDI-II for screening in routine oncological practice. While lower than the cut-off of 18 that we found to be ideal for the psychiatric outpatients in our sample, this cut-off value of 14 is in line with the study conducted by Warmenhoven et al. in a population with an advanced cancer diagnosis (Warmenhoven et al., 2012), but not with two other studies exploring the criterion-related validity of the BDI-II in patients with cancer. Katz et al. (2004) suggested a slightly lower cut-off score of 13 (sensitivity = 92%; specificity = 90%) based on a sample of 60 patients with head and neck cancer, while Hopko et al. (2007), also assessing a group of patients with a variety of cancer types, suggested a cut-off value of 22. However, the latter suggestion was based on a limited sample of only 33 patients, 9 of them with no depression. This divergence in results shows that criterion validity studies conducted in specific cancer types are likely to be valid only for that very particular subpopulation, and that finding a cut-off that is more universally valid in the oncologic setting requires much larger samples comprising various types of cancer at diverse stages, such as the one described here. Furthermore, a detailed analysis of the BDI-II with only the cognitive and affective dimensions showed AUC values similar to those of the BDI-II total scale. Differences between sensitivity, specificity, PPV and NPV were also not significant. Importantly, these results show that somatic items do not compromise the BDI-II's criterion validity, suggesting that, more important than classifying physical symptoms as cancer-related or depression-related, is to value all of the symptoms reported and tailor patient-oriented interventions. The BDI-II in the oncological population proved to be as accurate as in the psychiatric population, as long as the appropriate cut-off value is used.
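The 80.7% figure quoted above follows, to within rounding, from the standard predictive-value formulas (Trevethan, 2017) applied to the reported sensitivity and specificity, with the prevalence of depressive disorders taken from the ROC analysis counts (53 of 94 PC participants); a minimal sketch:

def ppv_npv(sens, spec, prevalence):
    tp = sens * prevalence          # true-positive fraction
    fp = (1 - spec) * (1 - prevalence)
    fn = (1 - sens) * prevalence
    tn = spec * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = ppv_npv(0.87, 0.73, 53 / 94)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # PPV ~ 80.6%, NPV ~ 81.3%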
The strengths of our study include the use of a structured clinical interview assessing DSM-IV criteria, applied by certified psychologists in a routine clinical setting, a study sample representative of diverse cancer types and stages, and the inclusion of a comparison group of psychiatric outpatients without a diagnosis of cancer. Nonetheless, our study is not free of limitations. As we analyzed retrospective data collected in routine care, it was not possible to match the samples regarding age, education and gender. Although differences were found for age and gender between the two groups, we did not find differences regarding level of education. It is important to consider that patients with cancer are expected to be older than non-oncological samples, and our sample has considerably more patients with breast cancer, which contributes to an over-representation of the female gender. Notwithstanding, previous studies have reported no significant differences in BDI-II scores between different age or gender groups (de Sá Junior et al., 2019). A further limitation is our sample size for the criterion validity analysis, which is smaller in the cancer sample than in the psychiatric sample. Yet, our study still has an adequate sample size in the oncological group, considerably larger than in previous studies that assessed the criterion validity of the BDI-II. Finally, the use of the adapted Portuguese version of the MINI 5.0.0 can also be a limitation, since this was an unpublished version based on the Brazilian Portuguese version developed by Amorim (2000) and based on DSM-IV. In fact, no reference structured clinical interview validated for the Portuguese population is currently available. However, language and culture are very close between the two countries, the criteria for depressive disorders are the same in both countries, and they are very close in DSM-IV and DSM-5.

Therefore, clinicians involved in oncological and/or mental health practice can use the BDI-II in patients with cancer to monitor symptoms of depression during the course of the disease, independently of the cancer type or stage. Nevertheless, it is important to use the appropriate cut-off to interpret patients' scores. Moreover, the cut-off value proposed here should not be used if the BDI-II is to be applied to patients with dementia or any other condition that compromises patients' ability to understand the scale. Such patients were excluded from our study, and the psychometric properties of the scale thus remain unknown in that specific population. Also, clinicians should be aware that the BDI-II is not intended to be a diagnostic tool for depression, but rather a measure of depression symptom severity (Nejati et al., 2020) that can be used as a screening measure. In conclusion, this study demonstrated that the BDI-II is a valid measure to assess depression in the oncological population, with psychometric properties comparable to those in a psychiatric sample. Our results suggest that a BDI-II total score cut-off of 14 has good sensitivity, PPV and NPV, and fair specificity, in identifying depression in patients with cancer. Moreover, we showed that accuracy did not change with the omission of somatic items. Finally, our findings supported the use of a three-factor structure, with cognitive, affective and somatic dimensions contributing to a general depression score.
We believe that our findings, particularly the information about the latent structure of the BDI-II and the cut-off points adjusted for this population, can facilitate the screening and identification of depressive disorders in the oncological setting, prompting an earlier referral of individuals in need of specialized treatment to proper psychological or psychiatric care.
Strontium-modified calcium phosphate cements - approaches towards targeted stimulation of bone turnover

Making use of the potential of calcium phosphates to host a variety of ions in their crystal lattice, ion substitution of calcium phosphate bone cements has become the subject of intense investigation in recent years, since this approach makes it possible to stabilize a bone defect and, at the same time, to deliver therapeutic ions locally into a specific defect site. In this respect, significant attention has lately been given to strontium ions (Sr²⁺). Strontium possesses the unique potential to both stimulate new bone formation and inhibit cell-driven bone resorption, and has thus been used successfully in systemic osteoporosis therapy. Strontium doping of calcium phosphate bone cements might allow this dual effect to be exploited to promote local bone defect healing. The goal of this review is to provide an overview of the different routes that have been employed to obtain strontium-containing calcium phosphate bone cements and to describe their material characteristics as well as their biological properties based on cell culture and animal studies.

Introduction

Biomaterials, in the sense of materials intended to come into contact with living tissue or to be implanted into a living organism, are usually described on the basis of their specific degree of biocompatibility. Although various definitions of "biocompatibility" exist, most are based on the absence of a negative tissue reaction and/or of fibrous interface formation.1 For example, materials in contact with bone tissue are required to allow the formation of a strong bone-material interface in order to support the mechanical integrity of the host bone. Biomaterials for hard tissue repair have been in the focus of research for many years now. Calcium phosphates (CaP), in particular hydroxyapatite (HA) and brushite (DCPD), are widely used in bone graft applications as well as in coatings of metallic implants, since their chemical resemblance to the mineral part of natural bone offers excellent biocompatibility.1,2 Since some mixtures of calcium phosphates can, once in contact with water, undergo a hydraulic setting reaction into either HA or DCPD, a number of self-setting cements have been developed, many of which are in clinical use today. Calcium phosphate bone cements (CPC) in general can be shaped to a specific bone defect during implantation, are highly biocompatible and are bioresorbable.3 However, only recently have concepts beyond "simple" biocompatibility been developed: on the one hand, tissue engineering aims at the implantation of cell-seeded, functional biomaterial scaffolds; on the other hand, materials with an intrinsic potential to influence the host tissue have been proposed. Such "third-generation biomaterials" are meant to (locally) evoke a specific cellular response based on the cell-material interaction.4 These may be materials that contain and locally release drugs upon implantation, as well as materials composed of biologically effective components that are released as the implants degrade in the body. Much effort has been made to investigate the potential of CaP and CPC comprising biologically potent ions, such as silicate,5-8 magnesium9-11 and many others.2,12 Whilst drug delivery from CaP bone cements has been reviewed in detail recently,13 this review is focussed on cements comprising the biologically highly effective strontium ion (Sr²⁺).
Strontium has been proposed to possess both a bone-formation-stimulating and an anti-resorptive effect and is thus used in systemic osteoporosis therapy.14 Our aim is to provide a rationale for the development of Sr-laden CaP bone cements for the treatment of bone defects, an overview of the different routes that have been employed to obtain such cements, as well as their material characteristics and their in vitro and in vivo properties.

Strontium in bone

Strontium (Sr) is an alkaline earth metal accounting for approx. 0.02-0.03% of the earth's crust.15,16 Its concentration in drinking water varies with geographic region but is generally low, for example less than 1 mg ml⁻¹ in the United States.16 A typical diet contains about 2-4 mg Sr per day,15,16 and the secretion of Sr exceeds that of Ca, accounting for a fairly low bioavailability of orally ingested Sr of ~20%.17 Since Sr²⁺ is generally regarded as bone-seeking, and thus could remain in the body for a long time once ingested, interest in strontium metabolism arose from radioactive ⁹⁰Sr atmospheric contamination following the nuclear weapons tests conducted since 1945.15 Radioactive isotopes of Sr have also been used for diagnostic purposes.15 Strontium was discovered to follow the metabolic pathways and signalling principles known for calcium, although the response to Sr²⁺ tends to be weaker. Strontium shows a protein-binding capacity comparable to that of Ca and is mostly deposited in the bone mineral.15 In 1952, Shorr and Carter described an increased calcium absorption and thus bone formation under the influence of strontium lactate (SrC6H12O6·xH2O).18 A first clinical trial, in which patients with postmenopausal osteoporosis received up to 1.75 g d⁻¹ Sr in the form of strontium lactate over a period of between 3 months and 3 years, showed an increase in bone mass under Sr administration and a potential benefit in osteoporosis therapy without any side effects.19,20 Systematic studies revealed the bone-conserving effect of divalent strontium in ovariectomized (OVX) rats,21 until two clinical phase III trials showed a significant increase in bone mass under the influence of 2 g d⁻¹ S-12911 (strontium ranelate, SrRan) in postmenopausal women.22,23 Unlike most other drugs used in osteoporosis therapy, Sr exerts a dual effect on bone remodelling, being able to enhance osteoblast activity and thus increase new bone formation while at the same time inhibiting osteoclast activity and thus reducing cellular bone resorption (Fig. 1).24-27 Several in vitro studies confirmed this twofold effect of Sr on bone metabolism. For example, osteoblast-precursor proliferation and the expression of extracellular matrix proteins are increased by ~10⁻³ mM Sr²⁺, and osteoblast-mediated bone formation is stimulated in the presence of 2-5 mg ml⁻¹ (2.3-5.7 × 10⁻³ mM) Sr²⁺.28,29 Although the exact mechanism by which Sr affects bone cells remains unknown, it has been proposed that Sr acts on the calcium-sensing receptor (CaSR) expressed by bone cells as a calcium-like entity and thus interacts with the signalling pathways associated with the Ca-driven regulation of bone metabolism.30,31 Furthermore, an increase of β-catenin expression, indicating enhanced transcription of osteogenic factors, a decrease in Wnt-pathway inhibitors, as well as enhanced prostaglandin expression under the influence of Sr have been described.30,32 Interestingly, similar concentrations of Sr inhibit the formation of osteoclasts and their resorption activity, through either paracrine signalling33-35 or even the induction of osteoclast apoptosis.36
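As a quick plausibility check on the mass-to-molar conversions quoted above, the sketch below (our own illustration, not from the original review) converts an Sr mass concentration into millimolar units using the atomic mass of Sr (about 87.62 g/mol):

SR_MOLAR_MASS = 87.62   # g/mol

def mg_per_l_to_mm(mg_per_l):
    """Convert an Sr concentration in mg/l to mmol/l (mM)."""
    return mg_per_l / SR_MOLAR_MASS

print(round(mg_per_l_to_mm(2.0), 4))   # 2 mg/l of Sr is ~0.0228 mM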
Calcium phosphate bone cements

Fig. 1 Effects of Sr²⁺ ions on bone metabolism: stimulation of osteoblast-precursor proliferation and osteogenic differentiation (1), increase of bone mineralisation by osteoblasts (2), reduced osteoclast-precursor recruitment and osteoclastogenesis (3), decreased resorption activity and increased apoptosis of mature osteoclasts (4), as well as interaction with the osteoblast/osteoclast paracrine signalling (5).

First described by LeGeros as well as Brown and Chow in the 1980s, calcium phosphate cements (CPC) have become a frequently used biomaterial in clinical application today.37-39 CPC are hydraulic cements and are usually prepared by mixing one or more calcium orthophosphate powders (the so-called precursor) with an aqueous liquid. A list of CaP phases frequently used to prepare CPC is given in Table 1. Upon mixing the precursor with the liquid phase, a paste forms that will self-set and harden via a hydraulic reaction, characterised by dissolution of the precursor components and the precipitation of a thermodynamically more stable setting product. Mechanical integrity results from the entanglement of the newly formed crystals.41 Although a large number of CPC formulations have been proposed, all of them set into one of two possible end products at 37 °C: hydroxyapatite (Ca5(PO4)3(OH), HA), the most stable phase at pH values >4.2, or brushite (CaHPO4·2H2O, DCPD), which is the result of the setting reaction at pH <4.2.3 The chemical nature of the reaction can, in principle, be either an acid-base reaction (e.g. the brushite-forming reaction of the acidic MCPM with the slightly basic β-TCP) or the conversion of a metastable phase, such as amorphous calcium phosphate (ACP) or α-TCP, into a more stable one.3,40 The setting reaction leads to the hardening of the paste within a characteristic time span (the setting time).

Apatite-forming cements. Since the mineral part of natural bone is composed of Ca-deficient hydroxyapatite containing a variety of substitutions such as carbonate, magnesium, sodium, etc. (referred to as "bone apatite"),1 the choice of HA-forming cements for bone repair is self-evident. The classical apatite-forming cement of Brown and Chow is based on the reaction of basic tetracalcium phosphate with calcium hydrogen phosphate (eqn (1)):

Ca4(PO4)2O + CaHPO4 -> Ca5(PO4)3(OH)   (1)

A second category is cements derived from a one-component precursor that set via the conversion of a less stable CaP phase into hydroxyapatite upon mixing with water, e.g. the hydrolysis of α-TCP and its re-precipitation as CDHA, or the conversion of ACP into HA.45,46 Apatite-forming cements possess a very low solubility according to their setting product: CDHA has a slightly higher solubility than stoichiometric HA (Table 2). This leads to relatively low physico-chemical degradation.47 However, apatite cements can be resorbed in vivo via cellular activity (osteoclastic resorption),48 although the degree of resorption seems to depend on the actual setting product (HA or CDHA), the respective degree of crystallinity and the porosity. Mechanical characterisation revealed that compressive strength values of completely set HA-forming cements reach up to 70-80 MPa.40,41
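Eqn (1) was reconstructed from the phases named in the text; as a consistency check, the short sketch below (illustrative code, not from the original review) verifies its element balance and can easily be extended to eqn (3) given in the next section:

from collections import Counter

def count(formula, coeff=1):
    """Scale a dict of element counts by a stoichiometric coefficient."""
    return Counter({el: n * coeff for el, n in formula.items()})

TTCP = {"Ca": 4, "P": 2, "O": 9}            # Ca4(PO4)2O
DCPA = {"Ca": 1, "H": 1, "P": 1, "O": 4}    # CaHPO4
HA   = {"Ca": 5, "P": 3, "O": 13, "H": 1}   # Ca5(PO4)3(OH)

print(count(TTCP) + count(DCPA) == count(HA))   # True: eqn (1) is balanced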
Brushite-forming cements. Invented by Mirtchi and Lemaître in 1987, brushite is the second possible end product of hydraulic CPC reactions.57 The most common formulation is based on an equimolar mixture of β-TCP and MCPM, as given in eqn (3):

β-Ca3(PO4)2 + Ca(H2PO4)2·H2O + 7 H2O -> 4 CaHPO4·2H2O   (3)

Other formulations comprise β-TCP and phosphoric acid, or MCPM, tetracalcium phosphate and CaO, but all are based on an acid-base reaction mechanism.59,60 In general, brushite cements possess a much higher solubility (Table 2) and thus a higher degradation rate than HA under physiological conditions.61 Moreover, because of its metastable nature, at pH > 6 brushite hydrolyses into CDHA over time, which may lead to the release of orthophosphoric acid that could cause the inflammatory tissue response sometimes reported after implantation of large quantities of brushite-forming cements.62 Brushite can also be resorbed by cellular activity in vitro and in vivo.61,63 However, setting of brushite cements takes place at low pH values, and such acidification may not only complicate the cellular reaction in vitro but also hamper tissue integration in vivo. Brushite cements are mechanically slightly weaker than apatite-forming cements; compressive strength values of up to 60 MPa have been reported.40

Strontium-modified CaP cements

The potential of CPC to host various ions like Na⁺, K⁺, Mg²⁺, Sr²⁺, CO3²⁻, Cl⁻, etc. has been described before.41 Given the ability of some ions (calcium, phosphate, strontium, silicate, zinc and magnesium) to trigger bone cell responses, and their advantages over bioactive proteins like growth factors, such as lower cost and better stability, the local release of therapeutically potent ions from an implanted bone cement could have a big impact on bone healing strategies.13 In 2000, Li et al. published the development of a SrHA (strontium-substituted HA) containing resin-based bone cement with good mechanical and in vitro characteristics.64 Recently, the synthesis and characterisation of Sr-containing CPC (SrCPC) have become the focus of many studies. From a crystallographic point of view, the basis of all these developments is that strontium, like some other divalent cations, can be integrated into the crystal lattice of calcium phosphates on Ca²⁺ positions over a wide range of concentrations while preserving the respective crystal structure.56,65,66 It was reported that Sr²⁺ can substitute Ca²⁺ in a number of CaP phases, including amorphous calcium phosphate, hydroxyapatite, octacalcium phosphate and brushite.67 In hydroxyapatite, Sr²⁺ can occupy either the Ca(I) or Ca(II) position, although Ca(I) is only preferred at very low levels of substitution.68 Due to its larger ionic radius (Ca²⁺: 148 pm; Sr²⁺: 158 pm in hexagonal hydroxyapatite),69 strontium substitution causes a linear increase of the apatite lattice parameters.54,68,70 At higher concentrations, Sr substitution has been shown to decrease the crystallinity.68,71 In the case of brushite, substitution of Ca²⁺ by Sr²⁺ ions is not limited to specific Ca sites and causes an expansion of the unit cell volume and thus a shift of the X-ray diffraction peaks towards lower diffraction angles.67 The same effect can be found in α- and β-TCP.72-74 β-TCP was shown to possess the ability to host up to 80 at% strontium, which is accompanied by a linear increase of the lattice parameters.72
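The Vegard-type linear lattice expansion just described can be sketched as a simple interpolation between end-member lattice parameters; the numbers below are approximate literature values for hydroxyapatite and fully substituted Sr-apatite and should be treated as assumptions for illustration only:

HA_A, HA_C = 9.418, 6.884        # hydroxyapatite a- and c-axis, Angstrom (approx.)
SRAP_A, SRAP_C = 9.745, 7.265    # Sr-apatite a- and c-axis, Angstrom (approx.)

def lattice(x_sr):
    """Linear interpolation of the a- and c-axis for Sr/(Sr + Ca) = x_sr."""
    a = HA_A + x_sr * (SRAP_A - HA_A)
    c = HA_C + x_sr * (SRAP_C - HA_C)
    return a, c

for x in (0.0, 0.1, 0.5, 1.0):
    a, c = lattice(x)
    print(f"x_Sr = {x:.1f}: a = {a:.3f} A, c = {c:.3f} A")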
Lattice distortion induced by the larger Sr²⁺ ions has been found to be responsible for the increased solubility of substituted apatite and brushite.55,56 However, even fully substituted apatite (Ca-free strontium apatite) possesses a solubility higher than that of pure HA (Fig. 2), despite being as highly ordered as HA.55 Thus substitution could help to enhance in vivo degradability and, consequently, the osseointegration of SrCPC. Moreover, Sr can affect the conversion of α-TCP to CDHA and could therefore be used as a tool to modulate the setting characteristics.75 In particular, Sr²⁺ (and other divalent cations such as Mg²⁺, Ba²⁺, Zn²⁺ and Cu²⁺)76-78 has been shown to block α-TCP hydrolysis. In the context of CPC, this could decelerate conversion into CDHA and thus setting, a principle that was recently employed to prevent aqueous α-TCP pastes from setting and thus to obtain storable cement precursor pastes that can be activated upon mixing with a second, Ca²⁺-containing liquid.78 The presence of Sr²⁺ has further been shown to affect the rate of apatite nucleation;79 in particular, under the influence of Sr²⁺, apatite deposition on HA samples from a Ca²⁺- and Sr²⁺-containing solution has been demonstrated, but, interestingly, this effect was less pronounced if the substrate was SrHA.54

In principle there are four possible ways to obtain strontium-containing or strontium-substituted calcium phosphate cements: one can (a) add Sr-containing phases to an existing cement system, (b) substitute calcium phases in the cement precursor powder by their strontium analogues (e.g. substitution of CaHPO4 by SrHPO4), (c) use Sr-substituted reactive CaP phases as a component of the precursor (e.g. Sr-substituted α-TCP), or (d) use a strontium-salt-containing solution as the liquid phase during cement paste preparation (Fig. 3). As for CPC, setting of SrCPC depends on the solubility of the precursor powder components.80 During setting, the Sr component can either act as an inert filler, undergo dissolution and re-precipitation like the other precursor components, or crystallise from the liquid. Besides the respective solubility of the precursor phases, the overall (Ca + Sr)/P ratio of the powder has to be taken into account, since both an excess and a shortage of cations (Ca²⁺, Sr²⁺) can inhibit the setting reaction, alter the setting product or decrease the reaction yield. Regarding the end product of the setting reaction, cement matrices with embedded Sr-rich clusters (those systems where the Sr component does not undergo complete dissolution) can be distinguished from monophasic apatite matrices with Sr²⁺ substituted into the crystal lattice. Another mechanism has been proposed by Kuang et al., who attribute the initial hardening of the cement to a fast chelate reaction of Ca/Sr ions with carboxy groups from carboxylic acids (such as citric acid) in the cement liquid.81,82 Table 3 summarises different approaches towards the preparation of SrCPC and indicates the mono- or biphasic nature of the set cement.

Apatite-forming formulations

Certainly the most straightforward way to introduce Sr into a cement is the addition of a Sr-containing phase to the cement powder. Wang et al. intermixed SrCO3 with an ACP/DCPD cement precursor by intense milling and found a retarded setting of the cement into HA. The set cement was characterised by a smaller grain size, which can be attributed to the milling process. However, SrCO3 was still detected in the XRD spectra of the set cements, and thus Sr did not quantitatively substitute into the apatite lattice.87
Another cement was obtained by the addition of SrCO3 particles with a size of ~10 µm to a cement containing α-TCP, DCPA, HA and CaCO3. Here, no additional milling step was used, and therefore the set cement was characterised by relatively large SrCO3 clusters embedded in an apatite matrix.86 Furthermore, with the aim of mechanically reinforcing the cement, SrHA whiskers prepared via a hydrothermal synthesis route were added to a β-TCP/DCPD cement precursor by Shen and co-workers.92

Another approach to obtain Sr-containing HA cements is the substitution of Ca phases by their Sr analogues. For example, using the cementing reaction of DCPA/TTCP into hydroxyapatite upon mixing with phosphoric acid, DCPA can be substituted by its strontium analogue, strontium hydrogen phosphate (SrHPO4, DSPA). In this way, Sr-substituted apatite cements with a Sr/(Sr + Ca) ratio of up to 0.1 were prepared by Guo et al.83,84 Since mixtures of DCPD and CaCO3 can set into HA, one strategy to obtain SrHA cements is to replace CaCO3 in the precursor by SrCO3. Following this approach, precursors composed of DCPD and various CaCO3-SrCO3 mixtures were prepared by Tadier et al. to obtain cements with up to 8 wt% Sr.85 In contrast to the above-mentioned systems, the setting product was biphasic: the set cement was composed of a mixture of hydroxyapatite and SrCO3, and no SrHA could be detected. Similarly, no homogeneous Sr distribution was achieved with a precursor mixture of DCPD and amorphous calcium phosphate (ACP) in which the latter was partially replaced by amorphous strontium phosphate (ASP). Although total Sr/(Sr + Ca) ratios between 0.025 and 0.1 could be obtained, the set cement was composed of coexisting HA and SrHA.89 On the other hand, in an α-TCP/DCPA/HA/CaCO3 cement system setting into CDHA, substitution of CaCO3 by SrCO3 in the precursor powder mixture was shown to result in the formation of a monophasic, Sr-substituted apatite matrix and enhanced mechanical properties after setting.86 Baier et al. replaced both DCPA and CaCO3 by DSPA and SrCO3 in an α-TCP cement and obtained good in vivo results; however, no material characterisation of that cement was published.91 SrCO3 was also successfully used to gradually replace CaO in a DCPD/CaO cement, resulting in the formation of a SrHA matrix via several intermediate steps.88 However, the Sr/(Ca + Sr) ratio that can be achieved by substituting a Ca phase by its Sr analogue is limited to the fraction that the respective Ca phase contributes to the total precursor. Therefore, and despite the elaborate synthesis mostly required to obtain Sr-substituted CaP phases, which are often not commercially available, such phases can easily be used to introduce varying amounts of Sr into CPC. For example, the cement-forming reaction based on the hydrolysis of α-TCP into HA upon mixing with aqueous Na2HPO4 results in SrHA when α-TCP is substituted by Sr-α-TCP, although a reduced reactivity was found for Sr-α-TCP synthesised at high temperatures.73 In a more complex system based on ACP and DCPD, Sr-substituted ACP (SrACP), prepared by gradual replacement of Ca(NO3)2 by Sr(NO3)2 during ACP precipitation from an aqueous solution, allows the preparation of cements setting into SrHA.89 Similarly, SrACP prepared from DCPA, calcium hydroxide and SrCO3 via a dry mechanochemical approach was employed as a Sr source to obtain cements with Sr/(Sr + Ca) ratios between 0.025 and 0.2. Although SrCO3 residues were detected in the set cement at early time points, ongoing hydration led to a decrease in the SrCO3 content and substitution of Sr²⁺ ions into the apatite matrix.90
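To illustrate how the overall Sr/(Sr + Ca) molar ratio and the Sr weight fraction follow from a given precursor recipe, the sketch below uses a hypothetical DCPD/CaCO3/SrCO3 mixture (not a published composition); only the molar masses are fixed chemistry:

M = {"DCPD": 172.09, "CaCO3": 100.09, "SrCO3": 147.63, "Sr": 87.62}   # g/mol

def sr_fraction(m_dcpd, m_caco3, m_srco3):
    n_ca = m_dcpd / M["DCPD"] + m_caco3 / M["CaCO3"]   # mol Ca in the powder
    n_sr = m_srco3 / M["SrCO3"]                        # mol Sr in the powder
    x_sr = n_sr / (n_sr + n_ca)
    wt_sr = 100 * n_sr * M["Sr"] / (m_dcpd + m_caco3 + m_srco3)
    return x_sr, wt_sr

x, wt = sr_fraction(10.0, 2.0, 1.0)   # hypothetical 10 g DCPD, 2 g CaCO3, 1 g SrCO3
print(f"Sr/(Sr+Ca) = {x:.3f}, Sr content = {wt:.1f} wt%")   # ~0.080, ~4.6 wt%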
Based on a cement liquid containing various concentrations of SrCl2, cements setting into monophasic Sr-containing apatite can be prepared from α-TCP93 and α-TCP/gelatin75 as well as from DCPA/CaCO3 precursor powders.85 Due to the solubility of SrCl2 and the limited liquid-to-powder ratio (l/p), a maximum strontium content of 6 wt% could be achieved by the second method. During immersion in water, this cement was shown to release around 80% of the contained strontium within 21 days, indicating a higher solubility of the poorly crystalline, Sr-substituted apatite cement matrix formed during setting.85 Similarly, when strontium nitrate was mixed into the liquid phase of an α-TCP/TTCP cement, the resulting SrHA with up to 4.3 wt% Sr released more Sr than Ca.94

Brushite-forming cements

Most brushite-forming cement formulations comprising strontium are based on the synthesis of Sr-substituted precursor phases (Table 4). One approach is to synthesise Sr-substituted TCP as the reactive species. Alkhraisat et al.74 prepared Sr-β-TCP by sintering CaHPO4 with mixtures of CaCO3 and SrCO3, thus obtaining Sr-β-TCP with molar Sr/(Sr + Ca) ratios between 0 and 0.33. These were used in a hydraulically setting cement upon mixing with an equimolar amount of MCPM. The lattice parameters of the setting product, brushite, indicated a substitution of Ca²⁺ by Sr²⁺ ions in the set cement. After a small initial burst, a constant release of Sr ions from the cements was found during immersion in water, releasing up to 2.3 mg strontium per g cement within 15 days. However, no impact of the Sr modification on the mechanical properties was found. Another Sr-substituted CPC setting to brushite, based on α-TCP, was described by Pina et al.95,96 Firstly, Sr-β-TCP was synthesised via aqueous precipitation from calcium nitrate, diammonium hydrogen phosphate and strontium with a (Ca + Sr)/P molar ratio of 1.5, allowing the precipitation of TCP. The precipitate was subsequently treated thermally at 1500 °C to obtain Sr-α-TCP. To accelerate setting, an aqueous solution of citric acid was used as the cement liquid, and poly(ethylene glycol) or hydroxypropyl methylcellulose was added to enhance the washout resistance of the cement paste. The cement set into brushite with Sr²⁺ ions substituted into the crystal structure. In contrast to the cement described by Alkhraisat et al., this cement exhibited an increased compressive strength compared to a reference material prepared from Sr-free α-TCP. Based on the addition of Sr via the liquid phase, another cement that sets into Sr-brushite is based on the reaction of β-TCP and MCP in the presence of up to 10 wt% SrCl2 and traces of sodium pyrophosphate.56 In this system, a high Sr content led to a slight decrease in the diametral tensile strength of the set cements.

Cement setting and injectability

Setting of CPC was demonstrated to be fundamentally altered by the presence of Sr²⁺ ions by Guo et al. when SrHPO4 was added to a TTCP/DCPA system. Hydration was retarded significantly, which was attributed to the higher degree of supersaturation that is required to yield Sr-containing apatite crystals. Furthermore, the transformation rate into HA or SrHA, respectively, was reduced in the presence of Sr, albeit this could be partially compensated by a higher phosphoric acid content in the cement liquid.84
Cements comprising Sr phases as reactants to form SrHA tend to have prolonged setting times if SrCO3 or DSPA is used as the Sr source.81,83,86 Sr substitution in ACP, on the contrary, induced faster setting in SrACP/DCPA systems.90 The setting time of ACP/DCPD cements was reported to increase significantly with the addition of SrCO3 (with, at the same time, a decreasing viscosity of the cement paste).87 Hydrolysis of α-TCP into CDHA was shown to be decreased in the presence of Sr-containing solution.97 The addition of SrCO3 into α-TCP-based cements hardly affected the setting time, which could be due to the fact that SrCO3 did not participate in the setting reaction.86 Sr-substituted TCP, whether in the form of α-TCP setting to SrHA73 or α- or β-TCP in brushite-forming cements,74 was shown to slow down the cement setting.95 In most cases, the retarding effect of Sr on the nucleation and crystal growth of apatite results in the formation of much smaller crystals during cement setting.81,86,87,90 This is illustrated in Fig. 4, where smaller crystals are visible after the setting of an Sr-containing, ACP/DCPD-based cement comprising SrCO3 as the Sr source. However, some studies either reported no visible differences in the microstructure of the set cements83 or at least partially attributed differences in crystal size to variations in milling parameters during precursor preparation.86

Since bone defect treatment via minimally invasive surgical techniques (e.g. in spinal surgery or in order to reinforce osteoporotic bones) has gained considerable attention over the last few years, controlled injectability of both apatite and brushite cements is of interest.98,99 Besides the powder-to-liquid ratio, particle size distribution (controlled by the precursor milling time) and organic additives like poly(ethylene glycol) or methylcellulose, ion substitution also affects cement setting and thus injectability.96 By enhancing the reactivity of α-TCP, strontium shortens the setting time and thus impairs the injectability of brushite-forming cements based on Sr-α-TCP.96 In contrast, SrCO3-laden apatite cements were found to gain injectability with increasing Sr content due to a decrease in the viscosity of the cement paste.87 However, since injectability is also a function of the precursor particle size (and size distribution), no general conclusions can be drawn when comparing different studies.

Degradation and strontium-ion release

Degradation of CPC describes the physico-chemical dissolution of the set cements in aqueous environments and is usually quantified by the release of Ca²⁺ and phosphate ions. Obviously, this process is controlled by the solubility of the cement components (Table 2), but also by parameters like porosity, surface roughness and area, as well as material fragmentation. In general, brushite degrades much faster than hydroxyapatite. Depending on the experimental setup, precipitation of CaP crystals from Ca- and phosphate-containing immersion liquids can even lead to a virtually negative degradation rate, making it difficult to compare the effects of Sr modification on degradation in different studies. There is, however, general consensus that the lattice expansion induced by the larger Sr²⁺ ions upon integration into the apatite lattice increases solubility, and consequently higher degradation was demonstrated for SrHA and SrHA-forming cements.54,55,93,100 There are two mechanisms that can contribute to the release of Sr²⁺ from a SrCPC.
One is the diffusion of Sr2+ from soluble phases in the cement matrix via micropores in the cement caused by the cement liquid, and the second is the physico-chemical dissolution of the cement matrix itself. During immersion in large volumes of Ca2+- and Sr2+-free buffer solution (sink conditions), Sr modification by the addition of solid SrCO3 led to a reduced Ca2+ release in a system that set into a SrCO3-containing, biphasic HA matrix.85 Interestingly, Sr introduction into the same cement system via the liquid phase, resulting in a monophasic SrHA matrix, had the opposite effect and the Ca2+ release increased compared to the Sr-free control cement. Sr2+, on the other hand, was released to a greater extent from the biphasic cement containing SrCO3 clusters, where ≈80% of the initially contained Sr was released within 21 days of immersion in water.85 The latter effect was also found in an α-TCP-based cement system, where the cumulative Sr2+ release from a set biphasic HA/SrCO3 bulk cement accounted for only 1.5% of the initially added Sr but exceeded the release from a monophasic SrHA over 21 days in salt buffer solution.85,86 It can be hypothesised that this difference was due to Sr2+ release via dissolution of SrCO3 crystals on the one hand and either diffusion-controlled or degradation-associated Sr release on the other. In a diffusion-controlled system, a depletion of Sr at the sample surface should occur, and leaching of Sr2+ from the sample surface was indeed shown for monophasic SrHA cements.85,86 In SrCO3/HA cements, a decrease of the Sr concentration at the surface of samples immersed in liquid for 35 days was also present and was traceable even further into the bulk cement, suggesting that diffusion can be a limiting factor also in biphasic cements.86 For monophasic SrHA cements prepared using a Sr(NO3)2 solution, Leroux et al. proposed a two-stage degradation mechanism, comprising an initial phase where Sr2+ release exceeds Ca2+ release due to the dissolution of strontium nitrate traces in the set cement, followed by the congruent release of Sr2+ and Ca2+ according to the ratio contained in the cement during bulk dissolution. Still, only 2 and 4 wt% of the contained Ca and Sr, respectively, were released within one month.94

Although ion release and degradation can more easily be studied in ion-free buffer solutions, more complex immersion media are required to obtain results comparable to the in vitro (or in vivo) situation. Typical for α-TCP-based materials is a slight acidification and a depletion of the Ca2+ concentration in the immersion medium that arises from the presence of CDHA and the progressive formation of apatite crystals.101,102 Since material-mediated Ca depletion can have a significant impact on the cellular response,103,104 it is important to note that Sr modification can, in some systems, alter this effect. For an α-TCP/DCPA/CaCO3/HA-based apatite cement immersed in alpha minimum essential cell culture medium (αMEM), the Ca2+ concentration was shown to drop from 1.8 mM to ≈0.5 mM and remain low over 21 days despite regular medium changes. Strontium modification, either by the addition of SrCO3 crystals or by substitution of CaCO3 by SrCO3 in the precursor, resulted in a significant reduction of the Ca depletion. Sr was, at the same time, released in higher doses from monophasic SrHA-forming cements, resulting in concentrations of 0.05-0.1 mM compared to 0.025-0.05 mM for SrCO3/HA biphasic cements.105
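The two release mechanisms discussed above lead to qualitatively different kinetics, which is one practical way to tell them apart: bulk erosion gives an approximately constant (zero-order) rate, whereas transport through micropores follows a Higuchi-type square-root-of-time law until surface depletion sets in. The short Python sketch below contrasts the two regimes; the rate constants k0 and kd are arbitrary illustrative values, not parameters fitted to any of the cements reviewed here.

```python
import numpy as np

# Contrast of the two release regimes: zero-order (bulk erosion) versus
# diffusion-controlled (Higuchi-type sqrt-t). Rate constants k0 and kd
# are arbitrary placeholders, not values fitted to any cement.
t = np.linspace(0.0, 21.0, 211)                 # immersion time (days)
k0 = 0.11                                       # fraction released per day
kd = 0.17                                       # fraction per sqrt(day)

release_zero_order = np.clip(k0 * t, 0.0, 1.0)
release_diffusion = np.clip(kd * np.sqrt(t), 0.0, 1.0)

for day in (1, 7, 21):
    i = np.argmin(np.abs(t - day))
    print(f"day {day:2d}: zero-order {release_zero_order[i]:.2f}, "
          f"diffusion-controlled {release_diffusion[i]:.2f}")
```

An approximately constant release rate that tracks the Ca release, as in the brushite cements described next, therefore points towards bulk erosion rather than diffusion.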
For brushite-forming cements prepared from Sr-substituted β-TCP, a high, zero-order release of Sr was found during immersion in water. Up to 2.3 mg g−1 Sr was released, and the kinetics of Sr release resembled that measured for Ca, indicating that Sr was released by bulk erosion.74 In an approach where Sr was introduced into the cement via the liquid phase, an increase in solubility with increasing Sr content was described.56

It is obvious that degradation, and thus Sr2+ release, differs from any laboratory setup as soon as a SrCPC is implanted into a tissue defect. Besides the complexity of body fluids compared to any of the buffer solutions described above, diffusion processes within the tissue as well as (cell-driven) active ion transport occur. Still, several in vivo studies demonstrated an increased degradation of SrCPC compared to the respective Sr-free control groups and thus confirmed the above findings.106,107 However, Sr2+ release did in no case result in an increase of serum strontium levels to concentrations comparable to strontium ranelate treatment, suggesting that the released Sr2+ is accumulated in the surrounding bone.91

Mechanical characteristics of SrCPC

As all inorganic cements and ceramics, CPC in general exhibit brittle fracture behaviour. Compared to natural bone (cancellous bone: 5-15 MPa and 0.26-0.90 GPa, cortical bone: 133-193 MPa and 17-25 GPa compressive strength and compression modulus, respectively),1,108 most CPC possess poor mechanical strength and thus are mainly intended to be used in non-loadbearing situations. Brushite cements tend to have a lower compressive strength (10-60 MPa)109-111 than hydroxyapatite-forming ones (up to 75 MPa),109,112 depending mainly on the phase composition of the set cement as well as on the respective porosity.112 In SrCPC, contradictory results have been found regarding the mechanical characteristics; these are summarised in Table 5. It is important to note that the different preparation routes and properties of the set samples, such as porosity, do not allow a direct comparison of the mechanical characteristics. In general, the strength of the cements increases within the first days to weeks as hydration proceeds. In most cements setting into monophasic SrHA, strontium modification either increased the compressive strength83,86,90 or had no significant effect on the mechanical characteristics.81 Nevertheless, one study reported a decrease in the compressive strength of Sr-α-TCP cements compared to the Sr-free control.73 This was attributed to a decreased reactivity of the precursor, resulting from the high sintering temperature during Sr-α-TCP synthesis, which caused a larger crystal size and therefore a slower dissolution speed that was also obvious from the prolonged setting time. To explain the higher strength of most SrHA cements based on multi-component precursors, Yu et al. suggest an increase in Ca 2p and P 2p binding strength in the Sr-substituted apatite lattice as measured by XPS.90 Other parameters under discussion are the occurrence of crystal defects and the inhibitory effect of Sr on the apatite depositing rate.83
In their TTCP/DCPA/DSPA cement system, Guo et al. described a non-linear influence of the Sr content on the mechanical properties: while an optimum composition was found at a Sr/(Sr + Ca) ratio of 0.05, with an increase of the compressive strength up to 66 MPa compared to ≈50 MPa for the Sr-free reference cement, higher Sr contents resulted in a reduced compressive strength.83,84 A comparable optimum was described for DCPD-based cements where SrCO3 replaced the second precursor component, CaO.88 Here, up to a 1.5% molar fraction of SrCO3 increased the compressive strength of the set cement from ≈7 to ≈20 MPa. Again, higher substitution resulted in decreased strength values. In contrast, Kuang et al. found no effect on the mechanical properties for the same cement system with partial or complete substitution of DCPA by DSPA (Sr/(Sr + Ca) = 0-0.2), although the overall compressive strength was significantly lower (≈12 MPa after 28 days).81 Still, in this cement a polymer component had been added to the cement liquid, which might have interfered with the setting process. Full replacement of DCPA by DSPA and of CaCO3 by SrCO3, on the other hand, reduced the compressive strength of α-TCP/DCPA/CaCO3/HA cements.91 In contrast, in another monophasic system, up to 8.37 wt% strontium in SrHA derived from an α-TCP/DCPA/HA/CaCO3 precursor by substitution of CaCO3 by SrCO3 increased the compressive strength significantly from 32 to 53 MPa.86 In a SrACP/DCPD cement, higher compressive strength values were found for Sr/(Sr + Ca) ratios between 0.025 and 0.2 (up to 75 MPa after 10 days) compared to the Sr-free reference cement. Although residues of SrCO3 were present in the cement at early time points, hydration was shown to lead to a decrease in SrCO3 content and the integration of Sr in the apatite matrix.90 Sr modification via the cement liquid also results in the formation of SrHA; however, only a modest impact on the mechanical properties was described by Panzavolta et al., where 1 wt% substitution increased the compressive strength, whilst higher substitution resulted in decreased strength.75

In cement systems setting into HA-SrCO3 composite matrices, a decrease of the mechanical strength compared to unmodified cement control samples was reported. This was explained by the low binding strength between the SrCO3 residues and the surrounding apatite matrix, and by the role of these crystals as crack initiators under compressive loading.86 Wang et al. postulated a small amount of Sr substitution into the HA lattice from intermixed SrCO3 particles and thus a small increase in compressive strength, whilst at higher SrCO3 loading ratios the compressive strength also decreased.87 In contrast to residual SrCO3 clusters with poorly defined morphology, the addition of SrHA whiskers was shown to significantly increase the strength of set cements at concentrations of 2.5 and 5 wt%. This result was attributed to the entanglement of the elongated whiskers, although at higher contents (10 wt%) a decrease of the compressive strength was found.92 In brushite-forming cements based on Sr-β-TCP, the presence of Sr barely affected the compressive strength as well as the porosity, although a tendency to set into monetite instead of brushite was found at Sr contents up to 20 at%.74 In contrast, Pina et al. found an elevated compressive strength in Sr-α-TCP based cements, and attributed this to the reduced conversion of the precursor into the mechanically weaker setting product brushite.
However, the high solubility of brushite leads to fast degradation of the cement, which resulted in a decrease of strength over time.95 In general, the mono- or biphasic nature of the setting product of the respective cement system seems to be a key parameter for the mechanical properties: while the gain in strength caused by the crystal lattice deformation associated with Sr substitution can lead to a certain increase in both apatite and brushite cement strength, the presence of secondary clusters impairs the mechanical integrity of the set cement (Fig. 5).

Radiopacity

Apart from the therapeutic benefits of Sr modification and the possible improvement of mechanical properties, another rationale to introduce Sr phases into CaP bone cements is their increasing effect on the absorption of X-rays. This is of interest since high radiopacity is required to monitor the positioning of injectable cements during surgery and in follow-up imaging. Various species such as SrCl2, SrCO3, SrBr2, and SrF2 have been evaluated to modify CPC86-88,90,113,114 or calcium aluminium phosphate cements115 for that purpose. In most cases an increase of radiopacity with Sr content was shown. For example, Wang et al. demonstrated that 4-20% Sr substitution allows one to distinguish SrCPC from bone samples of the same thickness in radiographic images (Fig. 6).87 When comparing cements setting to SrHA and SrCO3/HA, the latter showed a smaller increase of radiopacity at similar Sr contents, which was attributed to the low resolution of the radiographic imaging that could not correctly represent the small local contrast variations to be expected in biphasic matrices.86
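The radiopacity gain can be rationalised with the Beer-Lambert law, I/I0 = exp(-(μ/ρ)ρt), combined with the mixture rule for the mass attenuation coefficient of the substituted apatite. The Python sketch below illustrates this; the elemental attenuation coefficients (nominally near 60 keV), density and sample thickness are round placeholder numbers, and real calculations should use tabulated data such as NIST XCOM.

```python
import math

# Beer-Lambert estimate of how Sr-for-Ca substitution in apatite raises
# X-ray attenuation. The coefficients below are placeholder round numbers.
MU_RHO = {"Ca": 0.45, "Sr": 1.10, "P": 0.32, "O": 0.19, "H": 0.32}  # cm^2/g
ATOMIC_MASS = {"Ca": 40.08, "Sr": 87.62, "P": 30.97, "O": 16.00, "H": 1.008}

def apatite_mu_rho(x_sr):
    """Mixture-rule mu/rho for (Ca(1-x)Sr(x))10(PO4)6(OH)2."""
    n = {"Ca": 10 * (1 - x_sr), "Sr": 10 * x_sr, "P": 6, "O": 26, "H": 2}
    mass = {el: n[el] * ATOMIC_MASS[el] for el in n}
    total = sum(mass.values())
    return sum(mass[el] / total * MU_RHO[el] for el in mass)

rho, thickness = 3.1, 0.5   # g/cm^3 and cm, both illustrative
for x in (0.0, 0.05, 0.20):
    transmission = math.exp(-apatite_mu_rho(x) * rho * thickness)
    print(f"x_Sr = {x:.2f}: I/I0 = {transmission:.3f}")
```

Because the heavier Sr atom carries a larger attenuation coefficient, the transmitted intensity falls monotonically with the degree of substitution, consistent with the radiographic observations cited above.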
Biological properties based on cell culture studies

Only a few data have been published on the in vitro characteristics of Sr-modified CPC. In general, Sr modification does not induce cytotoxicity.1,73-75,81,83,85-88,90,95,108 In cell cultures treated with extracts of SrHA,81,86 biphasic HA/SrCO3,86 and SrHA-whisker reinforced cements,92 a positive effect of the Sr doping and/or the Sr2+ released from the cements was found on cell proliferation and osteogenic differentiation. It was further stated that the decreased tendency of SrCPC to cause medium Ca2+ depletion, as well as a smaller variation in pH during SrCPC setting, may contribute to these findings.86 Only one study described a slight cytotoxic effect of highly concentrated SrHA extracts on the murine connective tissue cell line L929.83 Table 6 shows an overview of the in vitro effects of SrCPC. For example, Tadier et al. found a higher proliferation rate of osteoprogenitor cells on biphasic SrCO3/HA compared to monophasic SrHA cements that were derived from a DCPD/CaCO3/SrCO3 precursor. However, at the mRNA level there was no difference in osteogenic differentiation.85 In contrast, two different routes of Sr modification based on an α-TCP precursor, setting into SrHA and SrCO3/HA, respectively, were both shown to stimulate proliferation and osteogenic differentiation of primary human mesenchymal stem cells.105 Interestingly, proliferation was generally more enhanced in samples kept under basal cell culture conditions, whilst cells cultured in medium supplemented with osteogenic additives showed higher proliferation on monophasic SrHA cement matrices. This indicates different susceptibilities of cells towards Sr depending on their stage of differentiation.

No cytotoxicity (using MG63 osteoblast-like cells in an indirect culture setup) but enhanced proliferation and osteogenic differentiation (in a direct setup using MC3T3-E1 murine pre-osteoblasts) were found for TTCP/DCPA/DSPA based SrHA cements. A comparable cell morphology was found on SrCPC and Sr-free controls.107 Similarly, enhanced proliferation and osteogenic differentiation were observed for MG63 osteoblasts cultured on α-TCP/gelatine/DCPD cements where Sr was introduced via the liquid phase.75 Since the stimulatory effect of Sr modification was found in indirect cell culture setups as well as in direct contact cultures, it can be hypothesised that different aspects contribute to the positive cell reaction: release of Sr2+ from the cement, and altered ionic interaction between the cement and the cell culture medium, resulting in e.g. reduced Ca2+ depletion and reduced acidification of the medium.85,105 In Sr-containing brushite cements prepared from Sr-β-TCP and MCPM, no distinct influence of the Sr content on the proliferation and metabolic activity of human osteoblasts was found in a direct setup. The cement released up to 40 ppm (0.56 mM) strontium.74 However, this release test was performed during immersion in water and might not correctly represent the situation under cell culture conditions, as it is known that e.g. the protein content of immersion solutions can limit ion release.116 In comparison to Zn-doped brushite cements, additional doping with Sr was shown to increase the metabolic activity of murine osteoblast precursors.117 Therefore it can be concluded that the beneficial effects of Sr on both the proliferation and differentiation of osteogenic cells have been confirmed for SrCPC. To date, to the best of our knowledge, no data have been published on the osteoclastic resorption of Sr-containing CPC in vitro. Still, the anti-resorptive effects of Sr incorporated in hydroxyapatite have been demonstrated by Yang et al. using mineralised films prepared by precipitation from Ca- and Sr-containing liquids, which somewhat resemble the characteristics of set CPCs: osteoclastic activity, in terms of resorption pit area, was significantly decreased in the presence of different Sr2+ concentrations, while osteoblast proliferation increased in a Sr dose-dependent manner and osteoblast activity remained unaffected.118

In vivo studies on strontium-containing cements

Although the stimulating effect of systemically administered strontium ranelate on bone formation has been questioned recently,119 evidence exists that the local release of Sr2+ into a specific bone defect can support defect healing. One of the first studies using a Sr-containing bone cement in vivo was published by Ni et al. in 2006, although SrHA was only used as a filler in a resin-based cement.120 However, promising results were obtained, with the Sr-laden cement exhibiting an increased bonding strength to bone compared to the Sr-free PMMA cement control group. Positive results for new bone formation at the bone-implant area of SrHA-containing polymeric cements were also described by others.121-124 Wong et al. pointed out that neither the inflammatory response nor the necrotic reactions or fibrous encapsulation that occurred in the PMMA control group were found in the SrHA-PMMA cement group.123 In vivo studies focussing on SrCPC of different compositions are summarised in Table 7.
In an intramuscular implantation study, Dagang Guo et al. demonstrated an increase of material degradation with increasing Sr/(Ca + Sr) ratio using pre-set cylinders prepared from a TTCP/DCPA/DSPA cement with up to 10% strontium substitution.106 This was confirmed in bone contact when the same material was implanted into a femoral drill-hole defect. Furthermore, a tight bond of newly formed bone was found at the interface between the cement and the host bone. Similarly, Kuang et al. implanted pre-set TTCP/DCPA/DSPA based SrHA cement cylinders in a metaphyseal drill-hole defect model in rats.107 Again, enhanced degradation was found for SrHA compared to the Sr-free reference after 32 weeks; however, it remains unclear whether this was caused by higher physico-chemical dissolution or by cellular resorption. Early endochondral ossification after 2 weeks and higher new bone deposition (4 weeks post operation) indicated an enhanced osteo-regenerative potential of the SrHA cement, although only a small number of animals was used. In another study, it was shown that a SrHA cement accelerated graft healing in anterior cruciate ligament reconstruction using allogenic ligament grafts impregnated with a Sr-containing cement in comparison to a Sr-free control.125 Enhanced endochondral ossification was found in the 10% Sr-CPC group after 2 weeks, whilst after 4-16 weeks post operation progressive osteo-conduction was found in both groups. In a metaphyseal, 4 mm wedge-shaped defect bridged with an osteosynthesis plate, a significant increase of new bone formation in the SrCPC group against CPC and the empty defect was found after 6 weeks, together with an upregulation of the bone-specific markers bone morphogenetic protein-2, osteocalcin, osteoprotegerin and alkaline phosphatase, and increased collagen formation; Sr2+ was found in the newly formed bone tissue around the implant.

To study the effects of Sr-modified CaP intended to positively influence the bone metabolism in osteoporotic bone in vivo, the key issue is to select an appropriate animal model that resembles the human osteoporotic fracture situation.126 Baier et al. implanted an α-TCP/DSPA/SrCO3/HA cement in a metaphyseal drill-hole defect in ovariectomised rats in comparison to a Sr-free reference cement.91 It is interesting to note that the occurrence of discontinuities within the cement appeared to be key for cement integration into the bone tissue, since new bone formation was found at these sites and was even enhanced in the case of the Sr-laden cement. Moreover, BMD measurements revealed a local increase of bone in the cement-treated metaphyseal region, suggesting that Sr released from the cements led to a local bone-anabolic effect. This is supported by a recent study, where ToF-SIMS imaging was used to visualise the Sr distribution in histological sections of the metaphyseal region of rat femora where a SrHA cement based on α-TCP, DCPA, SrCO3 and HA had been implanted to bridge a wedge-shaped critical-size defect resembling a typical osteoporotic fracture (Fig. 7).127 Furthermore, histomorphometric analysis of bone formation as well as immunohistochemical analysis of bone-related markers confirmed an increased bone formation and remodelling activity in the Sr-HA group. Only one study focussing on the in vivo properties of Sr-containing brushite cement has been published so far: in a trabecular bone defect in pigs, Pina et al. demonstrated positive effects of Sr co-doping of a Zn-containing brushite cement. Sr doping was found to reduce the osteoclast density at the bone-implant interface and to increase new bone formation.117
Although data on the in vivo effects of SrCPC are sparse, the published data strongly suggest that Sr doping of CPC positively influences local bone regeneration. In particular, animal studies in osteoporotic models revealed promising results towards the utilisation of SrCPC in the clinical treatment of osteoporosis-related fractures.

Conclusions

A variety of approaches towards strontium-substituted calcium phosphate bone cements have been published in the last few years. Most of them are based on the use of Sr-containing phases that are intermixed with the conventional precursor powder or substitute CaP phases therein, whilst others use the cement liquid to introduce Sr2+ into the cement. The resulting cements can be categorised into those that set into mono- or multiphasic matrices, which are characterised by a homogeneous Sr distribution or contain Sr-rich clusters, respectively. In general, Sr-substituted CPC are characterised by an enhanced degradability both under laboratory conditions and in vivo. Given the differences in the phase composition and porosity of the cements, no general conclusion on the effects of Sr modification on the mechanical properties of the set cements can be drawn, but an increased strength has been found for many compositions setting into monophasic SrHA, whereas multiphasic cements tend to have lower strength. The stimulatory effect of Sr2+ released from the cements has clearly been shown in vitro, although the use of various cell types that may have different susceptibilities to Sr2+ concentrations, varying experimental setups and cement characteristics (e.g. porosity) prohibit general conclusions on the optimal degree of substitution. There remains a need to study the influence of Sr modification on bone-resorbing cells in vitro. However, evidence exists from a few in vivo studies that the potential of Sr2+ to stimulate bone formation can be used to locally enhance new bone formation without systemic side effects. Therefore, strontium-modified calcium phosphate bone cements are promising materials that could help to improve the clinical outcome in the regeneration of bone defects, in particular in the case of diseased, e.g. osteoporotic, bone.
A young lady with post-partum jaundice and right upper quadrant lump abdomen: an unusual etiology

Intraductal papillary mucinous neoplasm-biliary type is the biliary counterpart of intraductal papillary mucinous neoplasm-pancreatic type. We report a rare case of an intraductal papillary mucinous tumor arising from the extrahepatic biliary system. The diagnosis was established on histopathological analysis following endoscopic retrograde cholangiopancreatography-guided biopsy. Isolated papillary adenoma of the bile duct is extremely rare, and in this unusual case the patient was a 22-year-old young lady who had delivered a healthy infant 6 weeks previously.

Introduction

In contrast to well-documented flat-type neoplasms, biliary papillomatosis or intraductal papillary mucinous neoplasm-biliary type (IPMN-B) is a relatively rare and only recently recognized disease entity. Papillary neoplasms of the bile ducts are rare; the pathology is characterized by diffuse papillary proliferation of the bile duct epithelial cells. They account for 3-5% of cholangiocarcinomas and can arise from any portion of the intra- or extrahepatic bile ducts [1]. Biliary intraductal neoplasms are proposed to be of two types: flat and papillary [2,3]. Papillary cholangiocarcinoma is believed to have a better clinical course than non-papillary cholangiocarcinoma, just as malignant intraductal papillary mucinous neoplasm of the pancreas (IPMN-P) has a better prognosis than pancreatic ductal adenocarcinoma.

Case report

A 22-year-old young lady was referred to the Gastroenterology department by her local physician with a 4-week history of yellowish discoloration of urine and sclera. A lump in the upper abdomen and significant weight loss (approx. 5 kg) had been present for 3 weeks. There were no prodromal symptoms and she denied any history of pruritus or passage of light-colored stool. Her drug and family histories were unremarkable. She had had a normal full-term pregnancy with transvaginal delivery of a healthy infant 6 weeks previously. There were no complications and no evidence of jaundice or cholestasis during pregnancy. The patient had a sensation of a lump in the right upper hypochondrium which had been gradually increasing in size over the past 3 weeks. It was associated with colicky pain lasting for 15-20 min that then subsided gradually. The patient also had intermittent spikes of fever with chills for one week. On general examination there was pallor and jaundice but no lymphadenopathy. Tenderness was present in the right hypochondrium and a globular bulge was felt 4 cm below the right costal margin. Hepatomegaly was present, with the left lobe enlarged more than the right; the margins were sharp with firm consistency. Routine investigations confirmed normocytic normochromic anemia (hemoglobin 6.4 g/dL) with conjugated hyperbilirubinemia (total/direct bilirubin 10.5/6.8 mg/dL) and deranged liver function tests (Table 1). Ultrasonography showed a grossly dilated common bile duct with a large filling defect throughout, together with gross dilatation of the intrahepatic biliary radicals and common hepatic ducts. A contrast-enhanced computed tomography (CECT) study of the abdomen showed a grossly dilated gallbladder with moderate to gross dilatation of the intrahepatic biliary radicals and common bile duct (Fig. 1). A large polypoidal intraluminal multifocal enhancing lesion filling almost the entire common bile duct and cystic duct was also seen multifocally in the gallbladder.
Multiple small hypodense lesions seen in both lobes, showing peripheral rim enhancement and clustered around central intrahepatic biliary radical dilatations, were suggestive of cholangitic abscesses. Endoscopic retrograde cholangiopancreatography was performed under conscious sedation; after easy cannulation, a grossly dilated common bile duct was noted with obstruction and shouldering at the mid common bile duct on air cholangiogram. A biopsy was taken from the mass lesion and sent for histopathological examination. Two plastic biliary stents, 10 Fr × 10 cm, were placed across the mass. A choledochoscopic image was taken with an ultrathin neonatal endoscope (Fig. 2). Histopathological analysis of the soft tissue revealed multiple papillae lined by complex epithelial proliferation with severe dysplastic changes. Early invasion of the stroma was seen with neutrophilic infiltration and dilatation of blood vessels (Fig. 3A, B). These findings were consistent with an intraductal papilloma with severe dysplasia.

Discussion

Two types of biliary intraductal neoplasms preceding invasive cholangiocarcinoma have been identified so far: a flat-type neoplastic lesion called BilIN, which develops into non-papillary cholangiocarcinoma, and a papillary type called IPMN-B with malignant potential. IPMN-B comprises a histological spectrum that ranges from benign to malignant: adenoma, borderline tumor, carcinoma in situ, and invasive carcinoma [4]. The current WHO classification and some authors recognize biliary papillomatosis, as well as BilIN, or biliary epithelial dysplasia, as precursor lesions of cholangiocarcinoma [5-8]. The pathogenesis of the disease is not yet known, but has been thought to be related to chronic biliary ductal inflammation from pancreatic juice reflux, resulting in excessive proliferation of the bile duct epithelium followed by a dysplasia-carcinoma sequence [7]. Unlike traditional ductal adenocarcinoma, papillary tumors are of low-grade malignancy. They are usually limited to the mucosa or spread along a mucosal membrane, although they can invade the ductal wall at a later stage. Once papillary cholangiocarcinoma shows stromal invasiveness, its prognosis is as poor as that of non-papillary cholangiocarcinoma. Most papillary tumors are peripheral in location and are nodular or papillary in shape. IPMN-B involves the extrahepatic ducts in 58%, both extra- and intrahepatic ducts in 33% and intrahepatic ducts alone in 9% of cases [9]. IPMNs are classified into 4 subtypes on the basis of histology and mucin expression, as recently published in a consensus study [10]: 1) the gastric type, composed of columnar epithelial cells resembling cells of the gastric foveolae (MUC1-, MUC2-, MUC5+); 2) the intestinal type, characterized by epithelium resembling intestinal villous neoplasms; 3) the pancreatobiliary type; and 4) the oncocytic type. Biliary papillomatosis commonly affects adults older than 60 years, with a male:female ratio of 2:1 [11], but we report a case of IPMN-B in a young 22-year-old lady during the post-partum period. There is no literature available about a possible link between pregnancy hormones and IPMN-B. Cytological examination by epithelial brushing and biopsies during ERCP is an effective preoperative procedure for the diagnosis of papillary tumors of the bile duct. In conclusion, intraductal papillary tumor of the bile duct runs a benign course and will benefit from early surgical intervention. When massive localised dilatation of the intrahepatic duct is seen on CT without an obvious cause, papillary mucinous tumor of the bile duct should be considered.
ERCP is often diagnostic when a copious amount of mucus is present. Furthermore, to firmly establish the concept of IPMN-B, continued reports and studies are warranted.
A long, hard look at MCG-6-30-15 with XMM-Newton II: detailed EPIC analysis and modelling

The bright Seyfert 1 galaxy MCG-6-30-15 has provided some of the best evidence to date for the existence of supermassive black holes in active galactic nuclei. Observations with ASCA revealed an X-ray iron line profile shaped by strong Doppler and gravitational effects. In this paper the shape of the iron line, its variability characteristics and the robustness of this spectral interpretation are examined using the long XMM-Newton observation taken in 2001. A variety of spectral models, both including and excluding the effects of strong gravity, are compared to the data in a uniform fashion. The results strongly favour models in which the spectrum is shaped by emission from a relativistic accretion disc. It is far more difficult to explain the 3-10 keV spectrum using models dominated by absorption (either by warm or partially covering cold matter), emission line blends, curved continua or additional continuum components. These provide a substantially worse fit to the data and fail to explain other observations (such as the simultaneous BeppoSAX spectrum). This reaffirms the veracity of the relativistic 'disc line' interpretation. The short-term variability in the shape of the energy spectrum is investigated and explained in terms of a two-component emission model. Using a combination of spectral variability analyses the spectrum is successfully decomposed into a variable power-law component (PLC) and a reflection-dominated component (RDC). The former is highly variable while the latter is approximately constant throughout the observation, leading to the well-known spectral variability patterns. (Abridged)

INTRODUCTION

X-ray spectroscopy holds great promise for probing physics close to the accreting black holes thought to power Active Galactic Nuclei (AGN) and Galactic Black Hole Candidates (GBHCs). The high X-ray luminosities observed from these systems are thought to be generated close to the black hole (≲100 r_g, where r_g ≡ GM/c^2 is the gravitational radius), where the effects of strong gravity, such as gravitational redshift and light bending, will imprint characteristic signatures on the emerging X-ray spectrum. As it escapes towards the observer, the primary X-ray emission will interact with material in the innermost regions, modifying the observed X-ray spectrum. Reprocessing of the X-rays by an accretion disc will lead to a particular set of spectral features. Absorption, fluorescence, recombination, and Compton scattering in the surface layers of the X-ray illuminated disc produce a 'reflection' spectrum. Under a wide range of physical conditions the most prominent observables will be an emission line at ≈6.4 keV (the iron Kα fluorescence line) and a broad 'hump' in the spectrum which peaks in the range 20-40 keV (e.g. Guilbert & Rees 1988; Lightman & White 1988; George & Fabian 1991; Matt, Perola & Piro 1991). Observations of Seyfert 1 galaxies with Ginga (Pounds et al. 1990; Nandra & Pounds 1994) demonstrated the simultaneous presence of a ≈6.4 keV emission line and an up-turn in the spectrum above ∼10 keV, indicative of the reflection hump. The presence of the reflection hump has since been confirmed by CGRO (e.g. Zdziarski et al. 1995) and BeppoSAX (e.g. Perola et al. 2002).
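To make the scales concrete, the gravitational radius defined above can be evaluated for representative AGN masses; the Python snippet below does this, together with the light-crossing time of a ≲100 r_g emission region. The masses used are generic round values, not measurements of MCG-6-30-15 quoted in this paper.

```python
# The scales set by r_g = GM/c^2 for round black hole masses; the values
# below are generic illustrations, not measurements of MCG-6-30-15.
G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m s^-1
M_SUN = 1.989e30    # kg

for m in (1e6, 1e7, 1e8):                      # mass in solar units
    r_g = G * m * M_SUN / C**2                 # gravitational radius (m)
    print(f"M = {m:.0e} M_sun: r_g = {r_g:.2e} m, "
          f"light-crossing time of 100 r_g = {100 * r_g / C:.0f} s")
```

For a 10^6-10^7 solar-mass black hole this gives light-crossing times of minutes to hours, which is why rapid X-ray variability is naturally associated with the innermost regions.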
The bright Seyfert 1 galaxy MCG-6-30-15 (z = 0.007749) has received particular attention since ASCA observations revealed the presence of a broad, asymmetric emission feature at energies between 4-7 keV, identified with a highly broadened Fe Kα emission line. Similar features were subsequently observed in other Seyfert 1 galaxies (Mushotzky et al. 1995; Nandra et al. 1997). The profile of the line can be explained in terms of fluorescent emission from the surface of an accretion disc extending down to ≲6 r_g about a supermassive black hole (SMBH): the relativistic 'disc line' model (Fabian et al. 1989; Stella 1990; Laor 1991). For comparison, the emission lines seen in other wavebands (e.g. broad optical lines) are thought to originate at distances ≳10^3 r_g. The broad Fe Kα line therefore potentially offers a powerful diagnostic of the physical conditions in the immediate environment of the black hole (see Fabian et al. 2000; Nandra 2001 and Reynolds & Nowak 2003 for recent reviews). Repeated observations of MCG-6-30-15 with ASCA (Iwasawa et al. 1996; Shih et al. 2002), BeppoSAX (Guainazzi et al. 1999), RXTE (Lee et al. 1999, 2000; Vaughan & Edelson 2001), Chandra (Lee et al. 2002) and XMM-Newton (Wilms et al. 2001, hereafter W01, and Fabian et al. 2002, hereafter paper I) have confirmed the presence of the broad, asymmetric emission feature. However, while this is clearly not an artifact of any particular instrument or observation, there are uncertainties associated with the detailed modelling of the spectrum. The most challenging of these is that MCG-6-30-15, along with many other Seyfert 1s, shows complex absorption in its X-ray spectrum. This complicates the process of identifying the correct underlying continuum and thereby measuring the superposed line emission. Indeed, the presence of relativistically broadened emission lines in the X-ray spectra of other Seyfert 1 galaxies has recently become something of a cause célèbre (see e.g. Lubinski & Zdziarski 2001; Inoue & Matsumoto 2001; Branduardi-Raymont et al. 2001; Pounds & Reeves 2002; Page, Davis & Salvi 2003; Pounds et al. 2003a,b).

The long (∼320 ks) XMM-Newton observation of MCG-6-30-15 is of great importance to studies of the broad iron line, since MCG-6-30-15 offers the best established example of a relativistic line profile. In this paper the X-ray spectrum and spectral variability of MCG-6-30-15 are examined using the long XMM-Newton observation. The interpretation of the spectrum in terms of relativistic Fe Kα emission is investigated by fitting a range of models, including both simple phenomenological models and models accounting for the detailed physics expected in photoionised accretion discs. These are compared to models that do not include relativistic effects in order to test the strength of the argument for strong gravity in MCG-6-30-15. A variety of techniques are employed to characterise the spectral variability of the source and thereby further constrain the range of viable spectral models. The two XMM-Newton observations are considered in the context of the long-term behaviour of MCG-6-30-15 using RXTE monitoring observations spanning more than six years.

The rest of this paper is organised as follows. Section 2 gives details of the XMM-Newton and RXTE observations and their data reduction. Section 3 briefly reviews the RXTE monitoring of MCG-6-30-15 and shows how the two XMM-Newton observations fit in with the long-timescale properties of the source.
Section 4 discusses the spectral fitting results using models that include relativistic effects (Section 4.6) and exclude relativistic effects (Section 4.7). In Section 5 the spectral variability of the source is examined using flux-flux plots (Section 5.1), flux-resolved spectral analysis (Section 5.2), rms spectra (Section 5.3), time-resolved spectral analysis (Section 5.4) and Principal Component Analysis (PCA; Section 5.5). Section 6 discusses differences between the long XMM-Newton observation of 2001 and the previous observation taken in 2000. Section 7 summarises the new results from the present work as well as the other analyses of these XMM-Newton data. Section 8 presents a discussion of the main results and the main conclusions are given in Section 9. Some details of the EPIC spectral calibration are investigated in Appendix A.

XMM-Newton observations

XMM-Newton (Jansen et al. 2001) has observed MCG-6-30-15 twice. The first observation was performed during revolution 108 (2000 July 11-12) and the second observation was performed during revolutions 301-303 (2001 July 31 - August 5). In both cases the source was observed on-axis and all instruments were operating nominally. The first results from the 2000 XMM-Newton observation were presented in W01 and the first results from the 2001 XMM-Newton observation were presented in paper I. During the 2000 observation the pn camera was operated in small window mode, the MOS2 camera in full frame mode and the MOS1 camera in timing mode. During the 2001 observation both the EPIC MOS cameras and the EPIC pn camera were operated in small window mode. For the present analysis only the EPIC data taken in small window mode were used (i.e. only the pn from the 2000 observation, and the pn and both MOS from the 2001 observation). The pn small window mode uses a 63 × 64 pixel window (≈4.3′ × 4.4′ on the sky) with the source positioned less than an arcminute from the CCD boundary. The MOS small window mode uses a window of 100 × 100 pixels (≈1.8′ × 1.8′) with the source image approximately centred within the window. The small window modes allow CCD frames to be read out every 5.7 ms for the pn and 0.3 s for the MOS. These short frame times help lessen the effects of photon pile-up, which can distort the spectrum of bright sources (Ballet 1999; Lumb 2000). All three EPIC instruments used the medium filter.

XMM-Newton data reduction

The extraction of science products from the Observation Data Files (ODFs) followed standard procedures using the XMM-Newton Science Analysis System (SAS v5.4.1). The EPIC data were processed using the standard SAS processing chains to produce calibrated event lists. These removed events from the position of known defective pixels, corrected for Charge Transfer Inefficiency (CTI) and applied a gain calibration to produce a photon energy for each event. Source data were extracted from circular regions of radius 40″ from the processed images, and background events were extracted from regions in the small window least affected by source photons. These showed the background to be relatively low and stable throughout the observations, with the exception of the final few ks of each revolution, where the background rate increased (as the spacecraft approached the radiation belts at perigee). Data from these periods were ignored. The total amount of 'good' exposure time selected was 64 ks from the pn during the 2000 observation, and for the 2001 observation 315 ks and 228 ks from the MOS and pn, respectively. (The lower pn exposure is due to the lower 'live time' of the pn camera in small window mode, ∼71 per cent; Strüder et al. 2001.)
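The screening of the high-background intervals near perigee amounts to thresholding a background light curve and keeping the quiet bins as good time. A minimal Python sketch of this kind of filter is given below; the synthetic light curve, bin size and threshold are invented for illustration and do not reproduce the actual SAS screening products.

```python
import numpy as np

# Minimal sketch of background-flare screening: flag time bins whose
# background rate exceeds a threshold and keep the rest as 'good' time
# intervals. Light curve, bin size and threshold are all invented here.
rng = np.random.default_rng(0)
t = np.arange(0, 320_000, 100)                    # time bins (s)
bkg = rng.poisson(2.0, t.size).astype(float)      # quiescent background rate
bkg[-500:] += np.linspace(0.0, 30.0, 500)         # rise towards perigee

good = bkg < 5.0                                  # counts-per-bin threshold
print(f"kept {good.sum() * 100 / 1000:.0f} ks of "
      f"{t.size * 100 / 1000:.0f} ks as good time")
```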
The ratios of event patterns as a function of energy showed there is negligible pile-up in the pn data, but the MOS data suffer slightly from pile-up. This may lead to a slight distortion of the MOS spectra. In order to minimise this effect only photons corresponding to single pixel events (EPIC pattern 0) were extracted from the MOS cameras. The spectrum of single pixel events should be least affected by pile-up (Molendi & Sembay 2003). For the pn, single pixel (pattern 0) and double pixel (patterns 1-4) events were extracted separately for the spectral analysis and fitted simultaneously (the spectrum of single pixel events provides better energy resolution). Response matrices were generated using RMFGEN v1.48.5 and ancillary responses were generated with ARFGEN v1.54.7. For the 2001 observation source spectra were grouped such that each energy bin contains at least 2,000 counts for the pn and 400 counts for the MOS. (This grouping was used instead of the more traditional N = 20 as it results in fewer spectral bins, and therefore reduces the computation time for each fit iteration, but does not compromise the intrinsic spectral resolution; the data contain sufficient counts that even with this heavy binning the instrument resolution is oversampled.) For the shorter 2000 observation the pn spectrum was grouped to contain at least 500 counts per energy bin. Background-subtracted light curves were extracted from the pn camera in 1000 s bins using both single and double pixel events.

RXTE observations

MCG-6-30-15 has been regularly monitored by RXTE since 1996. The RXTE Proportional Counter Array (PCA) consists of five collimated Proportional Counter Units (PCUs), sensitive to X-rays in a nominal 2-60 keV bandpass and with a total collecting area of ∼6250 cm^2. However, due to operational constraints there are usually fewer than five PCUs operated during any given observation. Only data from PCU2 were analysed here as this is the one unit that has been operated consistently throughout the lifetime of the mission. In the following analysis, data from the top (most sensitive) layer of the PCU array were extracted using SAEXTRCT v4.2d. Poor quality data were excluded using standard selection criteria. (The acceptance criteria were as follows: TIME_SINCE_SAA > 20 min; Earth elevation angle ELV > 10°; offset from the optical position of MCG-6-30-15 OFFSET < 0.02°; and ELECTRON2 < 0.1. This last criterion removes data with high anti-coincidence rates in the propane layer of the PCU.) The PCA background was estimated using the latest version of the 'combined' model (Jahoda et al. 2000). Light curves were initially extracted from the STANDARD-2 data with 16 s time resolution. The data were rebinned into quasi-continuous observations lasting typically 1-2 ks.
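The minimum-counts grouping applied to the EPIC spectra above is a simple greedy pass over the channels. The following Python sketch is a simplified stand-in for the standard grouping tools (the real products also carry quality and systematic-error information); the input spectrum is synthetic.

```python
import numpy as np

def group_min_counts(counts, min_counts=2000):
    """Group adjacent channels until each bin holds at least min_counts.

    Returns a list of (start, stop) channel slices. A simplified
    stand-in for the standard grouping tools used in the text."""
    groups, start, acc = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            groups.append((start, i + 1))
            start, acc = i + 1, 0
    if acc > 0 and groups:              # fold any remainder into last bin
        groups[-1] = (groups[-1][0], len(counts))
    return groups

# Illustrative spectrum: counts falling off with channel number.
counts = (5000 * np.exp(-np.arange(1024) / 300)).astype(int)
print(f"{len(group_min_counts(counts))} grouped bins from 1024 channels")
```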
X-RAY HISTORY OF MCG-6-30-15

Figure 1 shows the RXTE light curve and hardness ratio around the times of the two XMM-Newton observations. From this it is clear that the source displays strong X-ray variability, but most is confined to short-term 'flickering' rather than longer-term trends in the data. A counter-example can be seen around MJD 52,000, when there appears to have been a prolonged period of lower flux and variability, during which the spectrum hardened slightly. However, both XMM-Newton observations seem to have caught MCG-6-30-15 in its 'typical' state; there are no outstanding secular changes in either flux or hardness ratio to suggest anything other than normal behaviour in the source during the two XMM-Newton observations. The same conclusion is reached when the RXTE monitoring extending back to 1996 is considered. Papadakis et al. (2002) discuss the RXTE monitoring observations in more detail.

SPECTRAL FITTING ANALYSIS

This section describes the results of the spectral fitting analysis. The raw spectra from the 2001 observation are plotted in Fig. 2. A variety of models, both including and excluding relativistic effects, were fitted to the data. The goal of the analysis was to find the models that can accurately match the data, and to test whether models including relativistic effects were preferred. This analysis expands upon the analyses of the same observation presented in paper I, in that the latest software and calibration were used (as of 2003 May), a wider variety of spectral models was tested, and more information from other observations was used to better constrain the range of the model fits.

Limits of spectral modelling

The high signal-to-noise of these spectra means that systematic errors in the detector response model can dominate over the statistical errors (due to photon noise). A brief analysis of some calibration data is discussed in Appendix A. This demonstrated that significant instrumental features should be expected close to the K-edges of O and Si and the M-edge of Au. Elsewhere the spectrum shows no sharp instrumental residuals larger than ∼5 per cent. Therefore, in the spectral fitting outlined below, the fitting is primarily carried out over the 3-10 keV spectral range, which excludes these features. Appendix A also demonstrates that there is a strong discrepancy between the spectral slopes derived from each of the EPIC cameras (see Molendi & Sembay 2003). Therefore the spectral slope of the power-law continuum was always allowed to vary independently between the three EPIC cameras. With these precautions in place, various spectral models were compared to the data as described below. The models were fitted to the four spectra (pn:singles, pn:doubles, MOS1 and MOS2) simultaneously using XSPEC v11.2 (Arnaud 1996). The quoted errors on the derived model parameters correspond to a 90 per cent confidence level for one interesting parameter (i.e. a Δχ^2 = 2.71 criterion), unless otherwise stated, and fit parameters are quoted for the rest frame of the source.

A first look at the spectrum

The count spectra shown in Fig. 2 give only a poor indication of the true spectrum of the source since they are strongly distorted by the response of the detector. In order to gain some insight into the broad-band shape of the source spectrum, without recourse to model fitting, the EPIC data for MCG-6-30-15 were compared to the spectrum of 3C 273 (see Appendix A). The raw EPIC pn:s data (shown in Fig. 2) were divided by the raw pn:s spectrum of 3C 273. This gave the ratio of the spectra for the two sources. 3C 273 was chosen as it is a bright source and has a relatively simple spectrum in the EPIC band, i.e. a hard power-law plus a smooth soft excess modified by Galactic absorption, and in particular does not contain any strong, sharp spectral features such as lines or edges. Thus the ratio of the MCG-6-30-15 to 3C 273 spectra will factor out the broad instrumental response to give a better impression of the true shape of the MCG-6-30-15 spectrum. The top panel of Fig. 3 shows this ratio. The advantage of this plot is that it gives an impression of the MCG-6-30-15 spectrum and is completely independent of spectral fitting. The disadvantages are that it is in arbitrary units and only gives a very crude representation of the true flux spectrum of MCG-6-30-15, because instead of being distorted by the instrumental response, it is now distorted by the spectrum of 3C 273.
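The ratio step just described, together with the model renormalisation described next, amounts to two array operations on spectra defined over a common channel grid. The Python sketch below makes this explicit; the arrays and the 3C 273 flux model are toy placeholders, and the procedure inherits all the caveats about finite energy resolution noted in the text.

```python
import numpy as np

def fluxed_spectrum(target_counts, calib_counts, calib_model_flux):
    """Divide out the (shared) instrumental response using a calibration
    source, then rescale by a flux model for that source. All arrays are
    defined on the same channel grid; this is not a deconvolution and
    ignores the finite energy resolution, as noted in the text."""
    ratio = target_counts / calib_counts       # response largely cancels
    return ratio * calib_model_flux            # arbitrary -> flux units

# Toy stand-ins for the EPIC pn:s spectra (shapes only, not real data).
rng = np.random.default_rng(2)
energy = np.linspace(0.5, 10.0, 500)                  # keV
mcg6 = rng.poisson(1e4 * energy**-1.9).astype(float)
q3c273 = rng.poisson(1e4 * energy**-1.6).astype(float)
model_3c273 = 2e-3 * energy**-1.6                     # placeholder flux model
flux_mcg6 = fluxed_spectrum(mcg6, q3c273, model_3c273)
```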
A more accurate, but model-dependent, method for obtaining a 'fluxed' spectrum of MCG-6-30-15 is to normalise the ratio by a spectral model for 3C 273. This technique is very similar to the method routinely used in optical spectroscopy of normalising the target spectrum using a standard star spectrum to obtain a wavelength-dependent flux calibration. The MCG-6-30-15/3C 273 ratio was multiplied by a spectral model for 3C 273 (defined in flux units). The spectral model comprised a hard power-law plus two blackbodies to model the soft excess, modified by Galactic absorption (see Appendix A). This process transformed the raw MCG-6-30-15 count spectrum into a 'fluxed' spectrum, without directly fitting the MCG-6-30-15 data. (The accuracy of this procedure does, however, depend on the accuracy of the 3C 273 spectral modelling.) The fluxed spectrum for MCG-6-30-15 is shown in the bottom panel of Fig. 3. Note that this procedure is not a formal deconvolution of the spectrum (see Blissett & Cruise 1979; Kahn & Blissett 1980) and so does not correctly take into account the finite detector spectral resolution. Nevertheless, it does provide a simple yet revealing impression of the source spectrum. Specifically, the plot clearly reveals the strongest spectral features in MCG-6-30-15, namely the strong warm absorption concentrated in the ∼0.7-2 keV region and the iron line peaking at ≈6.4 keV.

Soft X-ray absorption

In the spectral fitting analysis described below, the absorption due to neutral gas along the line-of-sight through the Galaxy is accounted for assuming a column density of N_H = 4.06 × 10^20 cm^-2 (derived from 21 cm measurements; Elvis, Wilkes & Lockman 1989). Accurately modelling the intrinsic absorption provided a more challenging problem. MCG-6-30-15 is known to possess a complex soft X-ray spectrum, affected by absorption in partially ionised ('warm') material (Nandra & Pounds 1992; Reynolds et al. 1995; Otani et al. 1996) and possibly also dust (Lee et al. 2001; Turner et al. 2003a; Ballantyne, Weingartner & Murray 2003). However, the detailed structure of the soft X-ray spectrum, as seen at improved spectral resolution with the grating instruments on board Chandra and XMM-Newton, remains controversial (Branduardi-Raymont 2001; Lee et al. 2001; Sako et al. 2003; Turner et al. 2003a). In the present analysis the complicating effects of the warm absorber were mitigated by concentrating on the spectrum above 3 keV. However, this will not entirely eliminate the effects of absorption. In the 3-10 keV range absorption by Z ≤ 14 ions will be limited, since their photoionisation threshold energies are less than 3 keV for all charge states (the ionisation energy for H-like Si^+13 is 2.67 keV; Verner & Yakovlev 1995). Similarly, the L-shell of ionised Fe can significantly absorb the spectrum around 1 keV due to complexes of absorption lines and edges (e.g. Nicastro, Fiore & Matt 1999; Behar, Sako & Kahn 2001). These transitions all occur below 2.05 keV (the ionisation energy of the L-edge in Li-like Fe^+23). Thus above 3 keV the only opacity from low-Z ions and Fe L will be due to the roll-over of their superposed photoelectric absorption edges.
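The threshold energies quoted above can be checked to first order from the hydrogenic scaling E ≈ 13.6 Z^2 eV for H-like ions, which the following snippet evaluates; exact thresholds (e.g. the 2.67 keV value for H-like Si cited from Verner & Yakovlev) differ slightly from this zeroth-order estimate.

```python
# Zeroth-order check of the K-shell thresholds quoted above, using the
# hydrogenic scaling E = 13.6 eV * Z^2 for H-like ions. Screening and
# relativistic corrections shift the exact values by a few per cent.
RYDBERG_EV = 13.6

for name, z in (("Si", 14), ("S", 16), ("Fe", 26)):
    print(f"H-like {name} (Z = {z}): "
          f"threshold ~ {RYDBERG_EV * z**2 / 1000:.2f} keV")
```

The Si value comes out at 2.67 keV, in line with the figure quoted in the text, while Fe lands near 9.2 keV, confirming that H-like iron is the only low-ionisation-potential species whose K-shell opacity falls inside the hard band.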
The dominant effect of the warm absorber on the spectrum above 3 keV (λ < 4.13 Å) is thus to impose some subtle downward curvature on the continuum towards lower energies. The only significant absorption features occurring directly in the 3-10 keV energy range will be due to K-shell transitions in the abundant Z > 14 elements (S, Ar, Ca, Fe and Ni). Absorption in the 2-5.5 keV range could be due to S, Ar and Ca, while Fe and Ni may absorb above 6.40 keV and 7.47 keV, respectively. Comparison with the high quality Chandra HETGS spectrum of the Seyfert 1 galaxy NGC 3783, which shows a very strong warm absorber (Kaspi et al. 2002), suggests that absorption due to S, Ar and Ca is likely to be weak (see the top panel of their figure 1 and also Blustin et al. 2002; Behar et al. 2003). This is confirmed by an analysis of the Chandra HETGS spectrum of MCG-6-30-15 (J. Lee, priv. comm.; see also section 4.5.3).

Turner et al. (2003a) fitted the RGS spectrum from the 2001 XMM-Newton observation using a multi-zone warm absorber model plus absorption by neutral iron (presumably in the form of dust). Although the transmission function of this absorption model recovers above a few keV, it nevertheless predicts that the absorption remains significant at higher energies (a ∼10 per cent effect at 3 keV) due to the combined low-energy edges discussed above. Therefore, to account for the absorption in MCG-6-30-15, this multi-zone absorption model was included in the spectral fitting, with the column densities and ionisation parameters kept fixed at the values derived from fits to the RGS data (these are in any case very poorly constrained by the EPIC data above 3 keV). The values used are tabulated in Table 1 of Turner et al. (2003a; model 2). The photoelectric edges were included but not the absorption lines. This absorption model therefore accounts for the subtle spectral curvature imposed by the warm absorber. Resonance absorption lines from S, Ar, Ca and Ni are unlikely to contribute significant equivalent width. The possibility of resonance line absorption by Fe is explored separately below.

Iron K-shell absorption

The other noteworthy effects of the warm absorbing material above 3 keV are due to the K-shell of iron. Iron in F-like to H-like ions (Fe^+17 to Fe^+25) can absorb through Kα resonance lines in the 6.4-6.9 keV range (Matt 1994; Matt, Fabian & Reynolds 1997), which would be poorly resolved (if at all) in the EPIC spectra. Sako et al. (2003) estimated the resonance lines could produce a total absorption equivalent width of EW ∼ −30 eV in MCG-6-30-15, based on an analysis of the Fe L-shell transitions present in the RGS spectrum from the 2000 XMM-Newton observation. This absorption could alter the shape of the observed emission around the Fe Kα emission line and was therefore accounted for in the analysis (paper I briefly explored the possible effect of iron resonance absorption). The other major source of opacity is K-shell photoelectric absorption of photons above 7.1 keV (Palmeri et al. 2002). This could affect the continuum estimation on the high-energy side of the line, and therefore the apparent line profile, if not properly accounted for (see the discussion in e.g. Pounds & Reeves 2002). The edge due to neutral iron is negligible in MCG-6-30-15; the Galactic and intrinsic (dust) neutral absorption gave τ ≲ 0.01.
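The smallness of the neutral Fe edge can be verified with an order-of-magnitude estimate, τ ≈ N_H A_Fe σ_K. In the snippet below the adopted cold column, solar-type iron abundance and near-threshold K-shell cross-section are round numbers rather than values taken from this paper, so the result should be read only as confirming that τ is far below 0.01.

```python
# Order-of-magnitude check of the neutral Fe K-edge depth. The column,
# iron abundance and cross-section are round illustrative numbers, not
# values fitted in this paper.
N_H = 1.0e21       # cm^-2, generous cold column (Galactic plus dust-related)
A_FE = 3.16e-5     # Fe abundance by number relative to H (solar-like)
SIGMA_K = 3.4e-20  # cm^2, approximate Fe K-shell cross-section at threshold

tau = N_H * A_FE * SIGMA_K
print(f"tau(Fe K) ~ {tau:.1e}")  # ~1e-3, consistent with tau << 0.01
```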
The warm absorbing material contains Fe in a wide range of charge states, leading to a blurring together of the many K-edges over the ∼7.1-9 keV range (see Palmeri et al. 2002). In the Turner et al. (2003a) absorption model the total optical depth due to these edges (estimated by comparing the model transmission below the Fe I edge to that at the minimum of the resulting absorption trough) is only τ_max ∼ 0.02. Using the ionic columns for Fe derived by Sako et al. (2003) gave similar results (τ_max ∼ 0.01 around the K-edges). This suggests that while the line-of-sight material does produce iron K-edge absorption, it is rather shallow and spread over the 7-9 keV range. The resulting distortion of the transmitted spectrum is far smaller than the statistical errors on the data above 7 keV and thus is of little consequence to the spectral fitting. For completeness, the absorption due to dust and warm gas in MCG-6-30-15 (as derived by Turner et al. 2003a) was included in the absorption model used below. Furthermore, the 7-10 keV EPIC data were searched for additional Fe absorption (see section 4.5.1), such as Fe XXV-XXVI edges, which would be undetectable in the lower energy grating spectra (due to the lack of L-shell electrons).

Reflection continuum

Observations of MCG-6-30-15 extending beyond 10 keV with Ginga (Pounds et al. 1990; Nandra & Pounds 1994), RXTE (Lee et al. 1998, 1999) and BeppoSAX (Guainazzi et al. 1999; paper I) revealed an upturn in the spectrum and indicated the presence of a reflection continuum. Particularly relevant to the present analysis is that BeppoSAX observed MCG-6-30-15 simultaneously with the 2001 XMM-Newton observation. The data from the PDS instrument provided spectral information extending up to ∼100 keV. Paper I presented the results of fitting the BeppoSAX data with reflection models. These results strongly indicated that the reflection continuum is strong (R ≳ 2).

Narrow iron emission line

As discussed in Section 1, various missions have resolved the broad emission feature spanning the 4-7 keV range. In each observation the spectrum can be fitted in terms of a disc line. There could in principle also be a contribution to the reflection spectrum from more distant material (≫100 r_g), which would produce a much narrower, symmetric line. Such lines have been resolved in the spectra of many other Seyfert 1 galaxies (e.g. Yaqoob et al. 2001). In the case of MCG-6-30-15 the variations in the line profile observed by ASCA during a low-flux period (the 'deep minimum') suggested the contribution to the line from distant material was small (Iwasawa et al. 1996). The higher resolution Chandra HETGS spectrum (Lee et al. 2002) was able to resolve the peak of the line emission near 6.4 keV and confirm that the contribution due to distant material (i.e. an unresolved line 'core') is weak (EW ≲ 20 eV). The 2000 XMM-Newton observation of MCG-6-30-15 also showed evidence for narrow line emission (W01). Therefore the possibility of an unresolved iron emission line is allowed in the spectral models considered below.

Figure 4 shows the residuals when the EPIC spectra are fitted with a simple model comprising a power-law plus neutral and warm absorption (as discussed in section 4.3.1). The only free parameters were the power-law slope (photon index Γ) and normalisation. The spectral interval immediately around the Fe K features (5-8 keV) was ignored during the fitting, and the model was subsequently interpolated over these energies.
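The fit-and-interpolate procedure just described can be sketched as a masked power-law fit in log-log space. The Python fragment below uses synthetic data in place of the EPIC spectra and omits the absorption model, so it illustrates only the logic of excluding the 5-8 keV band and extending the best-fitting continuum across it.

```python
import numpy as np

# Sketch of the fit-and-interpolate procedure: fit a power-law with the
# 5-8 keV Fe K band masked, then evaluate the model across the gap.
# Synthetic data stand in for the EPIC spectra; no absorber is included.
rng = np.random.default_rng(1)
energy = np.linspace(3.0, 10.0, 200)                 # keV
data = 1e-2 * energy**-2.0 * (1 + 0.05 * rng.standard_normal(200))

mask = (energy < 5.0) | (energy > 8.0)               # exclude the line band
# A power-law is linear in log-log space: log F = log N - Gamma * log E.
slope, lognorm = np.polyfit(np.log(energy[mask]), np.log(data[mask]), 1)
model = np.exp(lognorm) * energy**slope              # spans 5-8 keV too
residual_ratio = data / model                        # as plotted in Fig. 4
print(f"fitted photon index Gamma = {-slope:.2f}")
```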
The residuals clearly reveal the strong iron Kα emission line peaking near 6.4 keV. The emission line appears asymmetric in this ratio plot, extending down to at least ∼5 keV (this red wing on the line is far broader and stronger than that expected from the Compton shoulder from cold gas; Matt 2002).

Fitting the Fe line core

As a first step towards modelling the iron features, the 5−8 keV region was examined in detail. A model was built in terms of a power-law continuum (absorbed as described in section 4.3.1) plus three Gaussian lines. The best-fitting parameters are shown in Table 1. The lines were added to model the resolved line emission (line 1), the unresolved absorption at ≈6.7 keV (line 2) and the unresolved 6.4 keV emission line (line 3). Modelling the line with the single resolved Gaussian emission line provided a reasonable fit, but the inclusion of the unresolved absorption and emission lines significantly improved the quality of the fit. The inclusion of the absorption line was an attempt to account for possible Fe resonance absorption lines (section 4.3.2). Figure 5 shows the spectrum around the peak of the Fe emission fitted with the three line model (only EPIC pn:s data are shown but all four spectra were included in the fitting). The resolved emission line accounted for the majority of the flux from the core of the Fe emission line, and was significantly resolved with a velocity width FWHM ≈ 4.5 × 10^4 km s^−1. Fitting the four EPIC spectra individually with the same model gave consistent results; in each case the emission line was very significantly resolved and the measured widths were consistent with one another.

Figure 4. Ratios of EPIC spectra to the absorbed power-law model. The model was fitted to the 3−5 and 8−10 keV data. Note the model underestimates the flux on the red side of the line (∼5−6.4 keV) while the flux is overestimated above 7 keV. Also note the narrow 'notch' at ≈6.7 keV noticeable in the pn:s spectrum.

The large velocity width of the emission line core is robust to the details of the underlying continuum fitted over this fairly narrow energy range (it remained significantly resolved after including additional cold gas and/or Fe K edge absorption in the model; see also section 4.5.1). Forcing the entire emission line to be narrow (i.e. unresolved) gave an unacceptable fit (∆χ^2 = +266.2 compared with the resolved line). In addition to the ≈6.4 keV emission there could also be Fe lines at energies ≈6.7 and ≈6.9 keV, corresponding to He-like and H-like ions, respectively. The simultaneous presence of iron lines at these three energies (convolved through the EPIC response) could in principle conspire to make the Fe emission mimic a single, resolved line. See Matt et al. (2001) and Bianchi et al. (2003) for examples where such line blends have been observed in EPIC spectra of Seyferts. Allowing for the presence of three unresolved lines (at energies of ≈6.4, ≈6.7 and ≈6.9 keV) also gave an unacceptable fit. The Fe emission was thus unambiguously resolved by EPIC. This is of great interest because the resolved line core is considerably broader than those of some other Seyfert 1 galaxies (e.g. Yaqoob et al. 2001; Kaspi et al. 2002; Pounds & Reeves 2002; Page et al. 2003a; Page et al. 2003c) and suggests an origin close to the central SMBH.
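As a quick numerical check, anticipating the virial-type scaling r/rg ∼ (c/σ)^2/q used in the next paragraph (Krolik 2001), the following sketch uses only the quantities quoted in the text; the Gaussian FWHM-to-σ conversion factor of 2.355 is an assumption of the sketch:

```python
C_KMS = 2.998e5                      # speed of light [km/s]

fwhm = 4.5e4                         # measured FWHM of the line core [km/s]
sigma = fwhm / 2.355                 # rms width, assuming a Gaussian profile

# r/rg ~ (c/sigma)^2 / q, with q set by the assumed geometry (Krolik 2001).
for label, q in [("random circular orbits", 3), ("Keplerian disc, i = 30 deg", 8)]:
    r_over_rg = (C_KMS / sigma) ** 2 / q
    print(f"{label:27s}: r ~ {r_over_rg:.0f} rg")
```

This recovers the r ∼ 80 rg and r ∼ 30 rg estimates quoted below.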
Neglecting for the time being the contribution of any strongly redshifted component to the line profile, the width of the resolved core can be used to infer the radial distance of the line emitting gas under the assumption that the material is gravitationally bound (and emits at 6.40 keV in the rest frame). In this case the rms velocity width of the line (σ) is determined by its proximity to the SMBH (r/rg) and the geometry according to r/rg ∼ (c/σ)^2/q, where q depends on the geometry (Krolik 2001). Assuming randomly oriented circular orbits (q = 3) places the line emitting material at r ∼ 80 rg, while assuming a Keplerian disc (inclined at i = 30°; q = 8) requires the material to be at r ∼ 30 rg. The resolved line core thus suggests emission from near the central SMBH, necessitating the use of models that account for the relativistic effects operating in this region. As a first step in this direction, the Gaussian line model of section 4.4 was refitted over the full 3−10 keV range; with the emission line energy fixed at 6.4 keV this gave a bad fit (χ^2 = 983.1/777 dof). Allowing the energy of the broad Gaussian to be free improved the fit (χ^2 = 862.1/776 dof), but the Gaussian became extremely broad and redshifted (E = 5.9 ± 0.2 keV and σ ∼> 0.7 keV). Fitting with two Gaussians to model the broad line (to account for the resolved core and also a highly redshifted component), as well as the narrow 6.4 keV emission and ≈6.7 keV absorption, provided a good fit (χ^2 = 730.7/773 dof). The two broad Gaussians had the following parameters: E1 = 6.38 (+0.06/−0.04) keV, σ1 = 352 (+106/−50) eV, EW1 ≈ 120 eV and E2 = 4.9 (+0.2/−1.5) keV, σ2 = 1.0 (+0.4/−0.2) keV, EW2 ≈ 150 eV. The energy and width of the second Gaussian further suggest the presence of a highly broadened and redshifted component to the Fe emission. However, such a claim is dependent on the assumed form of the underlying continuum.

Dependence on the continuum model

The simple absorbed power-law continuum model used above (Fig. 4) clearly overestimated the continuum above 7 keV (on the 'blue' side of the line). These residuals are a direct result of fitting the continuum with an over-simplistic model, and specifically could be caused by one of the following: (a) additional K-shell absorption by Fe, (b) intrinsic curvature in the continuum, (c) a continuum curved by excess absorption, or (d) excess emission on the red side of the line (3−5.5 keV). The last two possibilities would lead to the true continuum slope over the 3−10 keV range being underestimated (the fit was driven by the better statistics in the 3−5 keV region). The first three of these alternative possibilities (a−c), illustrated in figure 6, are briefly discussed below and found to be unsatisfactory. In the subsequent analyses of sections 4.6 and 4.7 it was assumed that there is excess emission extending from the Fe line to lower energies (i.e. possibility d), and this was modelled using relativistic and non-relativistic emission features, respectively.

Additional iron K absorption?

As discussed in section 4.3.2, the Fe K-shell absorption edges expected on the basis of the grating spectra have been included in the absorption model. It remains possible that there is absorption from additional Fe that was not accounted for in this model. The 7−10 keV EPIC spectrum was therefore searched for additional absorption by fitting the data over only this limited energy range with a power-law and an absorption edge.
Although this model is very simple, over this limited energy range the underlying continuum should not deviate noticeably from a power-law, and with the limited resolution and statistics available at these energies an edge can provide an approximate description of additional Fe absorption. The data constrain the depth of any such edge to be τ < 0.06 (90 per cent confidence) over the energies expected for Fe K (7.11−9.28 keV). In particular, the depth of edge allowed at the energies expected for Fe XXV−XXVI (at the rest-frame velocity of MCG-6-30-15) is τ ∼< 0.03, ruling out the presence of a large column of such gas (which would have gone undetected in the low energy grating spectra). The best-fitting edge model gave E = 7.31 ± 0.14 keV and τ = 0.036 ± 0.024. Figure 7 shows the confidence contours for the edge parameters in this 'sliding edge' search. Including this edge in the model made virtually no difference to the residuals around the iron line; in particular, the apparent asymmetry remained. That said, the best-fitting optical depth should perhaps be treated as an upper limit due to the likely presence of Fe Kβ emission at E ≈ 7.06 keV, which must accompany the Kα emission. Allowing for the possibility of line emission at E = 7.06 keV resulted in the edge depth becoming consistent with zero, with a limit of τ ∼< 0.06. Including the Fe K edge absorption contained in the model discussed in section 4.3.1 further reduced the depth of any excess Fe absorption. A caveat is that if a large column of Fe is present with a range of intermediate charge states, the resulting absorption will not resemble an edge (Palmeri et al. 2002). However, the low energy grating data do not support the presence of such a large column of ionised Fe in addition to that already included in the model (Lee et al. 2001; Sako et al. 2003; Turner et al. 2003a).

Intrinsic continuum curvature?

Paper I presented analyses of the simultaneous BeppoSAX PDS spectrum of MCG-6-30-15. After allowing for the reflection continuum, the underlying power-law did not show any evidence for intrinsic curvature. In particular, the energy of any exponential cut-off in the continuum was constrained to be Ecut ∼> 100 keV. Thus any subtle spectral curvature due to the continuum deviating from a power-law at high energies (as predicted by thermal Comptonisation models) will not affect the 3−10 keV bandpass. As a check for the possible effect of more drastic curvature in the 3−10 keV band, the EPIC spectrum was fitted with a broken power-law model, after excluding the region immediately around the Fe K features (5−8 keV). This provided a good fit (χ^2/ν = 0.92). The break in the continuum occurred at E = 4.85 (+0.86/−0.20) keV, where the slope changed from Γ1 = 2.00 (+0.02/−0.04) below to Γ2 = 2.14 (+0.06/−0.03) above the break energy (the slopes are given for the pn:s spectrum; similar differences were derived from the other spectra). This form of spectral break, with the spectral slope increasing by ∆Γ ≈ 0.15 above 5 keV, is highly unusual and unexpected when compared to other Seyfert 1 spectra. Furthermore, the strong reflection component seen in the BeppoSAX PDS data (section 4.3.3) should lead to a flattening of the spectral slope at higher energies.
Indeed, when the broken power-law model was interpolated across the 5−8 keV region it predicted a broad iron line extending down to at least 5 keV (the energy of the continuum break), but it matched the continuum slope at energies above the line poorly (the model was too steep compared to the data above 7 keV). An intrinsically curved/broken 3−10 keV continuum is therefore considered highly unlikely.

Excess soft X-ray absorption?

The warm absorbing gas could perhaps impose curvature on the 3−5.5 keV continuum in excess of that already accounted for in the absorption model (section 4.3.1). Significantly higher columns of O or Si ions, for example, would have the effect of increasing the opacity in the high energy tail of the absorption edges occurring at energies ∼< 3 keV. This can be quite effectively modelled by simply allowing the parameters of one of the warm absorbing zones (namely the column density and ionisation parameter) to be free in the fitting. The data were therefore fitted in the 3−10 keV range, again excluding the Fe K region (5−8 keV), using a power-law modified by the absorption model of section 4.3.1 but allowing the parameters of one of the warm absorber zones to vary. The result was that the column density of the low-ionisation absorber increased, to better fit the slight curvature in the 3−5 keV band. The fit was good (χ^2/ν = 0.92), but when the model was interpolated into the Fe K region the residuals again suggested a strongly asymmetric emission feature (as shown in Figure 8). The inclusion of extra absorption below 3 keV therefore did not alter the requirement for an asymmetric, red emission feature. Moreover, this model is disfavoured as it severely under-predicted the flux level at lower energies when extrapolated below 3 keV. Very similar results were obtained after allowing the other (higher ionisation) absorbing zones to vary during the fitting. The other possibility, namely that the 3−5.5 keV spectrum is significantly affected by S, Ar and/or Ca absorption, was also explored. Absorption by the He- and H-like states of these ions will not affect the spectrum below 2 keV (unless they are accompanied by significant columns of M- and/or L-shell ions; Behar & Netzer 2002), and therefore the soft X-ray EPIC and RGS data do not strongly constrain their presence. From the EPIC data, the upper limit on the optical depth of an absorption edge is τmax ∼< 0.02 throughout most of this energy range. The only exception is at ≈4 keV, where the addition of a weak edge (τ ≈ 0.03) did significantly improve the fit. Thus it remains plausible that some absorption by, for example, highly ionised argon (e.g. Ar XVII) may contribute weakly in this region. However, this feature must be considered with caution. An examination of the EPIC calibration data for 3C 273 (see Appendix A) revealed similar residuals at the same (observed frame) energy. This feature could thus be due to an instrumental effect at the ≈3 per cent level (possibly due to Ca residuals on the mirror surfaces; F. Haberl, priv. comm.). In any event, the effect on the derived iron line parameters was negligible.

Models including strong gravity

The spectral fits detailed above demonstrated that the Fe emission line was significantly resolved, implying emission from near the SMBH. They did not directly address the issue of a highly redshifted component to the line. Furthermore, MCG-6-30-15 shows a strong reflection continuum (section 4.3.3), which needs to be accounted for in the spectral modelling.
Therefore, in this section the 3−10 keV spectrum is fitted with models accounting for emission from an X-ray illuminated accretion disc extending close to the SMBH (see also W01, paper I and references therein). The code of Ross & Fabian (1993) was used to compute the spectrum of emission from a photoionised accretion disc. The input parameters for the model were as follows: the photon index (Γ) and normalisation (N) of the incident continuum, the relative strength of the reflection compared to the directly observed continuum (R), and the ionisation parameter (ξ = 4πFX/nH). This model computes the emission (including relevant Fe Kα lines), absorption and Compton scattering expected from a disc in thermal and ionisation equilibrium. To account for Doppler and gravitational effects, the disc emission spectrum was convolved with a relativistic kernel calculated using the Laor (1991) model. The calculation was carried out in the Kerr metric appropriate for the case of a disc about a maximally spinning ('Kerr') black hole. The model uses a broken power-law to approximate the radial emissivity (as used in paper I). The parameters of the kernel were as follows: inner and outer radii of the disc (rin, rout), inclination angle (i), and three parameters describing the emissivity profile of the disc, namely the indices (qin, qout) inside and outside the break radius (rbr). In addition, narrow Gaussians were included at ≈6.4 keV to model a weak, unresolved emission line and at ≈6.7 keV to model the unresolved absorption feature. This emission spectrum (power-law, reflection, narrow emission line) was absorbed using the model discussed in section 4.3.1. This model gave an excellent fit to the data (χ^2 = 652.3/771 dof). The residuals are shown in Fig. 9. The best-fitting parameters corresponded to strong reflection (R = 1.48 (+0.31/−0.06)) from a weakly ionised disc (log(ξ) ∼< 1.4). The parameters of the blurring kernel defined a relativistic profile from a disc extending in to rin = 1.8 ± 0.1 rg, with a steep (i.e. very centrally concentrated) emissivity function within ≈3 rg and a flatter emissivity beyond. Specifically, the emissivity parameters were qin = 6.9 ± 0.6, qout = 3.0 ± 0.1 and rbr = 3.4 ± 0.2 rg, and the inclination of the disc was i = 33 ± 1°. The narrow emission and absorption lines at 6.4 keV and ≈6.7 keV had equivalent widths of ≈10 eV and ≈ −10 eV, respectively. This model is very similar to the best-fitting disc model obtained in paper I from the MOS and BeppoSAX data. The only notable difference between the two models is that the emissivity is more centrally concentrated in the present model. This is primarily a consequence of including the EPIC pn data in the fitting, which better defined the red wing of the line, and of allowing for narrow Fe Kα absorption and emission lines, which subtly altered the appearance of the line profile. The above model includes emission from well within 6 rg, the innermost stable circular orbit (ISCO) for a non-rotating ('Schwarzschild') black hole. In order to test whether a Schwarzschild black hole (or, specifically, rin ≥ 6 rg) is incompatible with the data, the ionised disc model was refitted after convolution with a diskline kernel appropriate for emission around such a black hole (Fabian et al. 1989). The disc was assumed to be weakly ionised (log(ξ) = 1), the inner radius was fixed at rin = 6 rg in the fitting, and the emissivity was described by an unbroken power-law.

Figure 9. Residuals from fitting the EPIC spectra with a variety of models (for clarity only the EPIC pn:s data are shown). The models comprise an absorbed power-law continuum plus: (a) broad Gaussian emission line, (b) two broad Gaussians, (c) relativistically blurred emission from an accretion disc and (d) broad Gaussian and partially covered continuum. Also included in all these models are an unresolved absorption feature at ≈6.7 keV and an unresolved, neutral iron emission line at 6.4 keV.
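For reference, the following is a sketch of the broken power-law emissivity profile used in the blurring kernel, evaluated with the best-fitting parameters quoted above. The normalisation is arbitrary and continuity at the break radius is imposed by construction; this is an illustration, not the published code:

```python
import numpy as np

def emissivity(r, q_in=6.9, q_out=3.0, r_br=3.4):
    """Broken power-law disc emissivity, continuous at r_br (arbitrary units)."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= r_br,
                    r ** -q_in,
                    r_br ** (q_out - q_in) * r ** -q_out)

radii = np.array([1.8, 3.4, 10.0, 100.0])   # radii in units of rg
eps = emissivity(radii)
print(eps / eps[0])                         # emissivity relative to rin = 1.8 rg
```

With these parameters the emissivity falls by a factor of ∼80 between 1.8 and 3.4 rg, making concrete the statement that the line flux is very centrally concentrated.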
This Schwarzschild model provided an acceptable fit to the data (χ^2 = 698.9/775 dof), albeit substantially worse than the model including emission from within 6 rg (∆χ^2 = 46.6 between the two models). Thus a model in which the broad iron line and reflection spectrum originate from an accretion disc about a Schwarzschild black hole cannot be ruled out on the basis of the 3−10 keV EPIC spectrum. However, the model including emission down to rin ≈ 1.8 rg was preferred for two reasons. Firstly, it provided a much better fit to the data. Secondly, the derived strength of the reflection spectrum for the rin ≈ 1.8 rg model (R ∼ 1.4−1.8) was more compatible with that derived from the independent analysis of the BeppoSAX PDS data (R ∼> 2; section 4.3.3). The best-fitting value for the Schwarzschild model was only R = 0.77 ± 0.04.

Models excluding strong gravity

The previous section discussed models in which the spectrum is shaped by strong gravity effects. In this section some trial spectral models are discussed that attempt to explain the spectrum without recourse to strong gravity. In particular, the core of the iron line emission is modelled in terms of a resolved Gaussian and an unresolved emission line, both centred on 6.4 keV (see section 4.4). This approximates the distortion of the Fe emission line due to Doppler but not gravitational effects. The asymmetry around the line, in particular the broad excess on the red side, is instead modelled in terms of additional emission components rather than allowing for highly redshifted iron line emission. The possibilities explored are as follows: (i) a blackbody, (ii) a blend of broadened emission lines from elements other than Fe, and (iii) a partially covering absorber.

(i) Blackbody emission could provide a broad bump not dissimilar from an extremely broad line. A model comprising a power-law, three Gaussians (resolved and unresolved Fe emission lines and the unresolved absorption feature at ≈6.7 keV) and a blackbody gave a reasonable fit to the data (χ^2 = 767.0/775 dof). The blackbody temperature was kT ≈ 1.1 keV and it produced a broad excess in flux from below 2 keV to ∼6 keV, i.e. on the red side of the line. This model is therefore formally acceptable, but the fit is considerably worse than with the relativistic disc models.

(ii) Along with the Kα emission from Fe there may also be Kα emission from e.g. S, Ar, Ca or Cr that would also be Doppler broadened. An additional broad Gaussian line was included in the model with a centroid energy in the range 3−5.5 keV and a width fixed to be the same as that of the resolved Fe line. This provided an unacceptable fit (χ^2 = 920.6/780 dof). Adding a third broad Gaussian yielded an acceptable fit (χ^2 = 809.7/778 dof). The additional lines were centred on ≈3.7 keV and ≈5.0 keV. The first of these could plausibly be due to Ca XIX−XX, but may be caused by a calibration artifact (see section 4.5.3). No strong X-ray line is expected at ≈5.0 keV.
The line equivalent widths were ≈105 eV and ≈60 eV, at least an order of magnitude higher than expected for fluorescence from low-Z elements (e.g. Matt et al. 1997; Ross & Fabian 1993).

(iii) The presence of partially covered emission (i.e. a patchy absorber; Holt et al. 1980) can cause 'humps' in the 2−10 keV spectral range. In such a model the observed spectrum is the sum of absorbed and unabsorbed emission. A neutral partial covering absorber (with column density, NH, and covering fraction, fc, left as free parameters) was applied to the power-law continuum in the spectral model. Such a model can provide a reasonable fit to the data (χ^2 = 745.7/775 dof) with NH ≈ 1.4 × 10^23 cm^−2 and fc = 0.27 ± 0.03.

This analysis demonstrated that alternative emission components can reproduce the broad spectral feature observed in the 3−10 keV EPIC spectrum. However, these models all give best-fitting χ^2 values considerably worse than the relativistic disc model discussed in section 4.6. Possibilities (i) and (ii) are rather ad hoc and physically implausible. A partially covering absorber seems somewhat more reasonable, since partial covering has been claimed for other Seyfert galaxies. However, such a model cannot explain the high energy data, in particular the need for a strong reflection component (section 4.3.3). Furthermore, allowing for partial covering in the model did not eliminate the need for a strongly Doppler broadened iron line. As a final test of the possible impact of partial covering on fitting the relativistic disc models, the model from section 4.6 was refitted including partial covering. The addition of a partially covering absorber to the best-fitting reflection model did not provide a significant improvement to the fit (∆χ^2 = 2.6 for two additional free parameters). The derived covering fraction of the absorber was poorly constrained (0.21 ∼< fc ∼< 1.0) with a column density NH = 5.1 (+4.9/−0.4) × 10^21 cm^−2. The inclusion of the partial coverer did not substantially alter the derived values of the relativistic profile parameters (specifically rin = 1.8 ± 0.1 rg). This analysis suggests that the possible presence of partially covering absorption does not alter the derived parameters of the relativistic emission line. Furthermore, the inclusion of the partial coverer resulted in a lower value for the reflection strength (R ≈ 1.1), in contradiction with the PDS spectrum (R ∼> 2; see section 4.3.3).

SPECTRAL VARIABILITY

The rapid X-ray variability of MCG-6-30-15 has long been known to show complex energy dependence (e.g. Nandra, Pounds & Stewart 1990; Matsuoka et al. 1990; Lee et al. 1999; Vaughan & Edelson 2001). Vaughan et al. (2003a) used Fourier methods to examine differences between broad energy bands as a function of timescale. In this section the spectral variability is examined on a finer energy scale, at the expense of cruder timescale resolution, in order to probe the details of changes due to specific spectral components. The EPIC data were used over the 0.2−10 keV band, as the calibration issues discussed above (see also Appendix A) will not affect these analyses.

Flux-flux analysis

In this section the spectral variability is investigated using 'flux-flux' plots. Taylor et al. (2003) used the RXTE data of MCG-6-30-15 to investigate the correlations between the fluxes in different energy ranges; this section parallels their analysis.
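Before turning to the data, the following is a minimal sketch of the binned linear fitting used in this section, applied to simulated count rates. An ordinary least-squares line is used here for simplicity; the published fits also propagated errors on both axes, and all the numerical values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a two-component source: a variable normalisation N_V plus a
# constant component that contributes relatively more in the hard band.
n_v = rng.uniform(0.5, 2.0, 300)
soft = 2.0 * n_v + rng.normal(0.0, 0.05, n_v.size)        # F(E1), 1-2 keV
hard = 1.0 * n_v + 0.4 + rng.normal(0.0, 0.05, n_v.size)  # F(E2), 3-10 keV

# Bin the flux-flux relation (20 points per bin) before fitting, as in the text.
order = np.argsort(soft)
soft_b = soft[order].reshape(-1, 20).mean(axis=1)
hard_b = hard[order].reshape(-1, 20).mean(axis=1)

m, c = np.polyfit(soft_b, hard_b, 1)
print(f"gradient m = {m:.2f}  (softness ratio of the variable component)")
print(f"offset   c = {c:.2f}  (constant emission in the hard band)")
```

With these inputs the fit recovers m ≈ 0.5 and c ≈ 0.4, i.e. the gradient measures the colour of the variable component while the positive offset directly measures the constant hard-band emission.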
Figure 10 shows the count rates in a soft band (1−2 keV) against the simultaneous count rates in a hard band (3−10 keV) from the 1000 s binned EPIC pn light curves. Clearly the two sets of count rates (hereafter fluxes) show a very significant correlation. This is true for both XMM-Newton observations. The data were fitted with a linear model (of the form F(E2) = mF(E1) + c, where F(E1) and F(E2) represent the fluxes in the two bands) accounting for errors in both axes. This resulted in a formally unacceptable fit (rejection probability > 99.99 per cent). However, the poorness of the fit is a result of intrinsic scatter in the correlation rather than any non-linearity. This intrinsic scatter in the relation between different energy ranges is a manifestation of both the difference in shape of the power spectral density (PSD) function and the low coherence between variations in different energy bands. To overcome this, the data were binned such that each bin contained 20 points, and errors were calculated in the usual fashion (equation 4.14 of Bevington & Robinson 1992). The binned data, which reveal the average flux-flux relation, provided a reasonable fit to a linear function (rejection probabilities of 33 and 89 per cent for the 2000 and 2001 datasets, respectively). As concluded by Taylor et al. (2003), the tight linearity of the flux-flux correlation (at least over the relatively short timescales probed by each XMM-Newton observation) strongly indicates that the flux variations are dominated by changes in the normalisation of a spectral component with constant softness ratio m (i.e. the gradient of the flux-flux correlation). In addition there must be a second spectral component that varies little and contributes more in the 3−10 keV band than in the 1−2 keV band, and therefore produces the positive constant offset c. In other words, the flux in each band is the sum of a variable component and a constant component:

F(E) = NV V(E) + NC C(E),   (1)

where NV and NC are the normalisations of the variable and constant spectra V(E) and C(E), respectively. The fact that the relation between F(E1) and F(E2) is linear means that the gradient gives m = V(E2)/V(E1) and the offset gives c = C(E2) − mC(E1). As can be seen from the figure, the relation changed between the 2000 and the 2001 observations; these changes in slope and offset imply that the shapes of the two spectral components differed slightly between the two XMM-Newton observations. The flux-correlated changes in the spectrum are caused by changes in the relative contributions of these two spectral components. This 'two component' interpretation of the X-ray spectral variability of Seyfert 1s has been discussed by e.g. Matsuoka et al. (1990). The offsets of the flux-flux relations in a set of energy bands can be used to infer the shape of the constant component C(E) (shown in figure 11); the most prominent feature of the inferred constant component occurs in the iron K band, where it contributes ∼40 per cent of the total flux. There are two assumptions implicit in this analysis. The first is that the variable component has a constant spectral shape V(E) when averaged as a function of flux (which is implied by the linearity of the flux-flux relations). The second is that there is a negligible constant component in the 1.0−1.5 keV band (which was used as the comparison band), i.e. C(E1) = 0. If the 1.0−1.5 keV emission does contain a non-zero constant component (C(E1) > 0) then the absolute scale of the inferred spectrum would increase, but its spectral form would remain approximately the same. The maximum possible amount of constant emission in the 1.0−1.5 keV band is given by the minimum flux in that band, i.e. 0 ≤ C(E1) ≤ min[F(E1)].
This can therefore be used to place an absolute limit on the strength of the constant component: Cmax(E2) = c + m min[F(E1)]. This upper limit is also indicated in figure 11. It should be noted that the constant component C(E) must be affected by the soft X-ray absorption. The lack of features associated with absorption in figure 11 (in particular the lack of a spectral jump at ∼0.7 keV) demonstrates that the constant component is affected by absorption exactly as is the total spectrum (hence the absorption was factored out by normalising the spectrum C(E) by the total spectrum). This spectrum can be identified with the Reflection-Dominated Component (RDC) of Fabian & Vaughan (2003) and is henceforth referred to as the RDC.

Flux-resolved spectra

A more conventional way to examine how the spectrum of the source evolves with flux is to extract energy spectra from different flux 'slices'. This was done for the 2001 XMM-Newton data by dividing the pn data into ten flux intervals such that the total number of photons collected from the source in each flux interval was comparable (∼5 × 10^5). The highest and lowest flux slices differ by a factor of 3 in count rate, and all ten are marked on the light curve in Figure 12. The ten spectra extracted from the flux slices were binned in an identical fashion. There were clear, systematic spectral changes between the ten flux intervals; these are illustrated in two different ways in figures 13 and 14. Figure 13 shows the ten spectra as ratios to a (Γ = 2) power-law model (modified only by Galactic absorption). This illustrates the softening of the 2−10 keV spectrum as the source gets brighter. In addition, the strength of the iron line relative to the continuum weakens as the source brightens (i.e. the equivalent width decreases with increasing continuum luminosity), meaning the iron line becomes a lower contrast feature at high fluxes. Figure 14 shows nine of the ten spectra as ratios to the highest flux spectrum (f10). This again shows that the spectrum became harder (in the 2−10 keV range) and the iron line became relatively stronger as the source became fainter. It is also interesting to note that at energies below ∼2 keV the spectrum becomes softer as the source becomes fainter, contrary to the trend at higher energies. In addition, there is no strong feature in the spectral ratios at ∼0.7 keV, indicating that the fractional strength of the ∼0.7 keV spectral jump remains constant with flux. This is consistent with the jump being due to absorption with a constant optical depth. These average changes in the detailed spectral shape as a function of flux can be explained using the simple two-component model discussed above. The inferred constant emission comprises a soft excess below ∼1 keV, a harder tail above ∼2 keV, and a strong iron line. One would expect the total (variable + constant) spectrum to change exactly as observed: as the total flux decreases, the contribution from the constant component, relative to the variable component, increases. Therefore, at low fluxes, the spectrum becomes softer below ∼1 keV, harder above ∼2 keV, and displays a more prominent iron line.

Average EPIC difference spectrum

The five highest flux spectra were combined, as were the five lowest flux spectra, to produce two spectra representing the source during flux intervals above and below the mean (hereafter 'high' and 'low' fluxes). Subtracting the low flux spectrum from the high flux spectrum will remove the contribution from the constant component.
As discussed by Fabian & Vaughan (2003), under the assumption that the total spectrum can be accurately described (on average) by the superposition of two emission components, one variable and one not (both modified by absorption), the difference spectrum gives the spectrum of only the variable component (modified by absorption):

Fhigh(E) − Flow(E) = (NV,high − NV,low) V(E).

Figure 15 shows the EPIC pn difference spectrum (compared to a power-law model fitted in the 3−10 keV range) from the 2001 observation. A power-law (modified only by Galactic absorption) gave an excellent fit to the 3−10 keV data (χ^2 = 28.1/60 dof) and yielded a photon index of Γ = 2.20 ± 0.05. Extrapolating this power-law down to lower energies reveals the signature of strong absorption, although the exact amount depends on the assumed shape of the underlying continuum (as illustrated by the upper and lower datasets shown in the figure). It should also be noted that if the variable spectral component steepens at low energies (i.e. contains a soft excess), the inferred absorption profile at lower energies could be systematically underestimated even further (see section 6 of Turner et al. 2003a).

Figure 13. EPIC pn spectra from each of the count rate slices (shown in figure 12) compared to a Γ = 2 power-law model (modified by Galactic absorption). The data have been shifted upwards to aid clarity. It can be seen that as the flux decreases from f10 to f1 the overall spectrum becomes harder (above 2 keV) and the strength of the iron feature decreases (relative to the continuum model).

Figure 14. EPIC pn spectra from the count rate slices shown as ratios to the highest count rate spectrum (f10). Similar to figure 13, the spectra can be seen to become harder and show relatively stronger iron features as the count rate decreases.

Leaving aside these uncertainties on the inferred spectrum at lower energies, it remains true that above 3 keV the average spectrum of the varying component V(E) can be described accurately as a single power-law (Γ ≈ 2.2). In particular, it should be noted that the difference spectrum (unlike the spectrum of the constant component described above) does not possess any obvious features in the iron K region. The upper limit on the flux of the line core (section 4.4) present in the difference spectrum is a factor ∼20 lower than that present in the time-averaged spectrum. A further investigation of the flux-resolved spectra revealed that the difference spectrum is consistent with a power-law at all flux levels. Nine difference spectra were calculated by subtracting the lowest flux spectrum (f1) from each of the nine higher flux spectra (f2−f10). These nine difference spectra were fitted over the 3−10 keV range with an absorbed power-law model.

Figure 15. EPIC pn difference spectra produced by subtracting the low flux spectrum from the high flux spectrum. The difference spectrum is shown as a ratio to a power-law (modified by Galactic absorption) fitted across the 3−10 keV range. The data/model ratio calculated using the best-fitting power-law slope is shown in black (circles). Above and below this lie the ratios calculated assuming the 90 per cent lower and upper limits on the slope of the 3−10 keV power-law. The data were rebinned for clarity.

In all nine cases the simple power-law model was found to provide a good fit to the data (χ^2/ν ≈ 1.0), with no evidence for strong systematic residuals around the Fe K band. The limit on the depth of possible Fe edges in the 7.1−8 keV region was τ ∼< 0.07, consistent with the limits obtained in section 4.5.1.
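A minimal sketch of the difference-spectrum construction follows; the count rates and errors are placeholders, and the power-law fitting itself was of course done in a standard spectral-fitting package:

```python
import numpy as np

# Placeholder high- and low-flux spectra in matching energy bins.
high = np.array([120.0, 90.0, 60.0, 40.0, 25.0])   # count rate per bin
low = np.array([60.0, 45.0, 32.0, 22.0, 14.0])
err_high = 0.1 * np.sqrt(high)                     # illustrative errors
err_low = 0.1 * np.sqrt(low)

# F_high(E) - F_low(E) = (N_V,high - N_V,low) V(E): the constant
# component C(E) cancels, leaving the shape of the variable component.
diff = high - low
err_diff = np.hypot(err_high, err_low)             # add errors in quadrature

for d, s in zip(diff, err_diff):
    print(f"{d:6.1f} +/- {s:.2f}")
```

The key point the subtraction makes explicit is that any non-varying emission, however complex, drops out bin by bin, so the difference spectrum is a direct estimate of V(E).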
In the discussion below this variable component of the spectrum will be referred to as the variable Power-Law Component (PLC; Fabian & Vaughan 2003).

rms spectra

The root mean squared (rms) spectrum measures the variability amplitude as a function of energy and can in principle reveal which spectral components are associated with the strongest variability (see Edelson et al. 2002 and Vaughan et al. 2003b). Figure 16 shows two rms spectra calculated from the 2001 XMM-Newton observation of MCG-6-30-15, in both cases clearly showing a strong dependence of the variability amplitude on energy. The two rms spectra show the variability amplitude on different timescales. The upper spectrum shows the fractional rms amplitude integrated over the entire observation in the standard fashion (eqn. 1 of Edelson et al. 2002) using 1000 s binned light curves. As the variability is dominated by long timescale changes (Vaughan et al. 2003a), this reveals the energy dependence of variations occurring on timescales comparable to the length of the observation (∼100 ks). The lower spectrum shows the fractional rms of the point-to-point deviation (i.e. the rms difference between adjacent time bins, as defined by eqn. 3 of Edelson et al. 2002). This only measures fluctuations between neighbouring time bins and so probes the energy dependence of the variability on short timescales comparable to the bin size (∼1 ks). The errors were calculated as in eqn. 2 of Edelson et al. (2002) but, as discussed in their appendix, should be considered only approximations since they strictly assume the light curves are drawn from independent Gaussian processes. Matsumoto et al. (2003) previously used the longest ASCA observation of MCG-6-30-15 to obtain rms spectra on different timescales. As is clear from the figures, the rms spectra show a broad peak at ∼1 keV and a localised depression around the energy of the iron K emission line, strongly suggesting a suppression of the variability due to the presence of the emission line (see also Inoue & Matsumoto 2001). The exact energy dependence of the variability amplitude differs between the two timescales (evident from the ratio of the two rms spectra), as should be expected if the shape of the PSD is a function of energy (Vaughan et al. 2003a).

Modelling the rms spectrum

In the previous sections it was shown how the spectral variability is consistent with a model comprising a variable component V(E) (the PLC; see Figure 15) and a constant component C(E) (the RDC; see Figure 11). Given these empirically derived spectral components it is possible to construct a model for the rms spectrum shown in Figure 16 (see also Inoue & Matsumoto 2001). It is assumed that the spectrum is given by equation 1, where both V(E) and C(E) include absorption effects. The (normalised) rms spectrum can be expressed as Fvar(E) = σ[F(E)]/F(E), where σ[F(E)] and F(E) represent the absolute rms amplitude and the time-average of the spectrum over the observation, respectively (and likewise for the two spectral components V(E) and C(E)). In terms of the model being discussed it is assumed that C(E) is not variable and so σ[NC C(E)] = 0, which means σ[F(E)] = σ[NV] V(E). Therefore:

Fvar(E) = (σ[NV]/NV) × [1 − NC C(E)/F(E)],   (3)

and the term NC C(E)/F(E) is the fractional contribution of the constant component shown in Figure 11. The above analysis assumes that the variable component changes only in normalisation and not in shape (i.e. σ[NV V(E)]/NV V(E) = σ[NV]/NV), but is independent of the actual form of the spectrum V(E). The linearity of the flux-flux relations implies the variable component has a constant softness ratio with flux, and therefore that its spectrum V(E) is independent of flux, validating this assumption. The first term on the right-hand side of equation 3 is thus simply a normalisation.
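The following is a sketch of this two-component rms model; the constant-component fraction used below is an illustrative toy curve, not the measured one from Figure 11, and the variable-normalisation rms is an assumed value:

```python
import numpy as np

# Toy fractional contribution of the constant component, N_C C(E)/F(E);
# the real curve is the one shown in Figure 11.
energy = np.array([0.3, 0.7, 1.0, 2.0, 4.0, 6.4, 9.0])             # keV
const_frac = np.array([0.35, 0.30, 0.15, 0.20, 0.30, 0.45, 0.35])  # illustrative

frac_rms_pl = 0.25   # sigma[N_V]/N_V of the variable normalisation (assumed)

# Equation 3: the predicted fractional rms is suppressed wherever the
# constant (reflection-dominated) component carries more of the flux.
fvar_model = frac_rms_pl * (1.0 - const_frac)

for e_kev, f in zip(energy, fvar_model):
    print(f"{e_kev:4.1f} keV: predicted Fvar = {100 * f:.1f} per cent")
```

Even with this toy curve the model produces the qualitative shape described below: a peak near 1 keV, where the constant component is weakest, and a dip near 6.4 keV, where the iron line in the constant component suppresses the variability.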
Implicit in this analysis is the assumption that the absorption function is not time-variable. If the absorption did vary this would introduce another term into the rms spectral model (Inoue & Matsumoto 2001), but such a term is not required by the data and thus it was assumed that the absorption did not vary. The histogram in Figure 16 shows the resulting rms spectral model when the constant component derived in section 5.1 (Figure 11) is used. This very simple model clearly reproduces (to within ∼1 per cent in Fvar) the energy dependence of the variability amplitude on these timescales. The peak at ∼1 keV is a result of the constant component C(E) being weakest at ∼1 keV, and the suppression at ∼6.4 keV results from the strong iron line present in the constant component. The short timescale rms spectrum has a subtly different shape and thus is not so well reproduced using this simple model. However, it is known that at high frequencies the variability in different energy bands shows different PSD shapes and incoherent variability. Thus the short timescale rms spectrum will be affected by these additional effects, although their physical origin is not clear. This could perhaps be explained if the RDC (or perhaps the warm absorber) varies on short timescales but these changes are 'washed out' over longer timescales.

A low-flux rms spectrum

The shape of the rms spectrum is clearly a function of timescale, as expected given the energy-dependent PSD (Vaughan et al. 2003a). It also shows subtle changes with time when the rms spectra of each revolution of XMM-Newton data are compared (an rms spectrum for the first revolution was shown in paper I). These may be caused by subtle changes in the RDC spectrum. As the relative strength of the RDC should be highest when the overall source flux is low, the low flux intervals of the light curve are the most promising places to search for changes in the RDC. The rms analysis described above was repeated using only the last 45 ks of data from rev 0303. During this time interval the source flux was rather low and in fact reached its minimum for the observation (Fig. 12). The rms spectra from this interval are shown in Fig. 17.

Figure 17. The rms spectra, as shown in Fig. 16, including only the low flux interval towards the end of the observation.

The low-flux rms spectrum, particularly on the longer timescale, shows a similar overall shape to the rms spectrum from the entire observation (upper data in Figs. 16 and 17). Interestingly, the prediction of the two component model discussed above does not match these data so well. In particular, the model predicts either too little rms around the iron line or too much in the soft X-ray band. This could indicate that at low fluxes some variations in the shape/strength of the RDC were detected. However, this could also result from a random effect caused by the energy-dependent PSD (Vaughan et al. 2003a,b). More observations at very low flux levels could confirm whether the RDC shows rapid variability.

Time-resolved spectra

For completeness, the spectrum was examined in consecutive 10 ks time slices. Fig. 18 shows the spectra extracted from each of these time slices.
Clearly the strength of the iron line (relative to the continuum model) changes throughout the light curve. However, the only obvious, visible change is that described above: as the overall X-ray flux decreases, the iron line becomes more prominent because it varies less than the continuum. For example, the spectrum for segment 303:l, which has one of the lowest fluxes, also has the most prominent line. It is possible that there are short-lived spectral features that would be 'washed out' in the time-averaged spectrum. Turner, Kraemer & Reeves (2003b) discuss the possible existence of transient, redshifted iron lines. However, given that 32 independent spectra were examined here, the likelihood of an apparently significant feature appearing purely by chance is quite high. For example, the probability of detecting a feature, considered detected at 99.7 per cent confidence in one individual spectrum, in one of the 32 spectra is ∼0.1 (using PN ≈ 1 − (1 − P1)^N). By the same argument, only features found at > 3.5σ significance in any individual spectrum should be considered significant detections from the entire dataset (i.e. probability of false detection PN ∼< 0.01). There are no clear examples of strong, sharp features in these data except for the obvious iron line. Earlier analyses demonstrated that these time-resolved spectra can be fitted using relatively simple reflection models. In addition, there were no sharp features seen in the rms spectra (section 5.3) that would indicate a preferred energy for transient features.

Principal Component Analysis

The method of Principal Component Analysis (PCA) was used to try to isolate independently varying components present in the time-resolved spectra discussed above. PCA is a powerful statistical tool used widely in the social sciences, where it is often known as factor analysis. In practice PCA gives the eigenvalues and eigenvectors of the correlation matrix of the data. The data can be thought of as an m × n matrix comprising m observations of the n variables, in which case the correlation matrix is a symmetric n × n matrix. The eigenvectors can be thought of as defining a new coordinate system in the n-dimensional parameter space which best describes the variance in the data. The first Principal Component, or PC1 (the eigenvector with the highest eigenvalue), marks the direction through the parameter space with the largest variance. The next Principal Component (PC2) marks the direction with the second largest amount of variance. The motivation behind PCA is to extract the (hopefully few) dominant correlations from a complex dataset. Francis & Wills (1999) provide a brief introduction to PCA as applied to quasar spectra, while Whitney (1983) and Deeming (1964) illustrate more general applications of PCA in astronomy. If the variance in the spectra is caused by only a few independently varying spectral components then PCA should reveal them in its eigenvectors. The application of PCA to the spectral variability of AGN was first discussed by Mittaz, Penston & Snijders (1990). The first few Principal Components (those representing most of the variance in the data) should reveal the shapes of the relevant spectral components. The weaker Principal Components might be expected to be dominated by the photon noise in the spectra (which should be uncorrelated between each of the n energy bins in the m spectra and so should not distort the shape of the first Principal Components).
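The following is a minimal sketch of the PCA step just described, applied to a simulated m × n matrix of spectra (a variable power-law plus a constant, iron-line-like component plus noise; all shapes and amplitudes are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 32, 21                                   # spectra x energy bins
energy = np.linspace(3.0, 10.0, n)

plc_shape = energy ** -0.2                      # variable component (EF(E) form)
rdc_shape = 0.2 + 0.3 * np.exp(-0.5 * ((energy - 6.4) / 0.4) ** 2)  # toy RDC
norms = rng.uniform(0.5, 2.0, m)                # varying PLC normalisations
spectra = np.outer(norms, plc_shape) + rdc_shape + rng.normal(0.0, 0.01, (m, n))

# PCA: eigen-decomposition of the n x n correlation matrix of the energy bins.
corr = np.corrcoef(spectra, rowvar=False)
evals, evecs = np.linalg.eigh(corr)
evals, evecs = evals[::-1], evecs[:, ::-1]      # sort into descending order

print(f"variance carried by PC1: {100 * evals[0] / evals.sum():.0f} per cent")
pc1 = evecs[:, 0]                               # PC1 = dominant spectral shape
print("PC1 (flat profile => power-law-like component):", np.round(pc1[:5], 2))
```

Because only the power-law normalisation varies in this simulation, PC1 carries essentially all the variance and has a flat (featureless) shape, which is exactly the behaviour found for the real data below.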
This method was applied to the m = 32 time-resolved spectra over the 3−10 keV range (binned such that each spectrum contained the same n = 21 energy bins). The spectra were unfolded using a Γ = 2 power-law model to convert them from counts to flux units (in EF(E) form). In the absence of sharp features in either the source spectrum or the detector response this should provide a reasonable estimate of the 'fluxed' data. The first three Principal Components are shown in Figure 19, along with the mean and rms spectra derived from the same data. As can clearly be seen, PC1 has a relatively flat spectrum and describes the vast majority of the variance in the data (96 per cent). The first Principal Component can therefore be identified with the variable PLC discussed above. The remaining Principal Components each describe less than 1 per cent of the variance and most likely represent only the photon noise in the data. Thus the PCA confirms the above analyses and suggests the spectral variations in MCG-6-30-15 are the result of a single, varying continuum component (the PLC). Nearly identical results were found when the ranked data were used to produce the correlation matrix (which then represented the matrix of rank-order correlation coefficients).

THE 2001-2000 DIFFERENCE SPECTRUM

Figure 20 shows the difference spectrum produced by subtracting the 3−10 keV EPIC pn:s spectrum taken in 2000 from the spectrum taken in 2001. A power-law provided a good fit (χ^2 = 51.7/62 dof) with Γ = 2.18 ± 0.09 (compare with the results of section 5.2.1). The only significant residuals appeared as an excess at ≈6.6 keV. Including a narrow Gaussian improved the fit (χ^2 = 44.1/60 dof), with a best-fitting energy E = 6.63 ± 0.12 keV. Including an absorption edge in the model did not significantly improve upon this fit. Figure 21 shows the ratio of the two spectra, confirming that the two spectra differ around E ≈ 6.65 keV. An unresolved emission feature in the difference and ratio spectra could be due to a change in either the flux of the broad Fe emission line or the depth of the Fe Kα absorption. In order to produce an apparent emission feature, in the first case only the blue wing of the emission line must be stronger in the 2001 observation, while in the second case the optical depth of the absorption must be deeper in the 2000 observation. An examination of the two spectra individually suggested the latter is more likely. A close examination of changes in the RGS spectrum between the two observations would clarify this issue.

REVIEW OF XMM-Newton RESULTS

In this section the main results obtained from the long XMM-Newton observation of MCG-6-30-15 are summarised prior to the discussion in the following section.

• The two XMM-Newton observations (taken in 2000 and 2001) sampled fairly typical 'states' of the source. The former observation sampled a period of lower flux than the latter. However, the long-term RXTE monitoring shows this can be attributed to short-term variability and does not imply a systematic difference between the states of the source during the two observations (section 3).

• The high energy spectrum obtained from BeppoSAX shows a strong Compton-reflection signature (paper I), as did the earlier XMM-Newton/RXTE observation (W01). There was no evidence for a cut-off or roll-over in the continuum out to ∼100 keV.

• The RGS spectrum shows complex absorption by O VII, O VIII and Fe I, as well as by a range of other ions (Turner et al. 2003a).
The opacity is concentrated mainly below ∼3 keV but still has an important effect on the 3−10 keV spectrum (section 4.3.1); see also Lee et al. (2001).

• The fluorescent iron line is strong and broad (section 4.4). The bulk of the line flux is resolved with EPIC. The emission peak concentrated around 6.4 keV is resolved with a width FWHM ≈ 4.5 × 10^4 km s^−1, strongly indicating an origin within ∼< 100 rg. There is also a significant, asymmetric extension to lower energies that indicates strong gravitational redshifts. In addition there is a weak, intrinsically narrow core to the line emission (sections 4.3.4 and 4.4; see also Lee et al. 2002).

• The best-fitting model for the 3−10 keV EPIC spectrum explains the iron emission as arising from the surface of a relativistic accretion disc. The strong reflection explains the strength of both the iron line and the Compton reflection continuum. The best-fitting model includes emission down to ≈1.8 rg (section 4.6; W01; paper I).

• The variations in X-ray luminosity show many striking similarities with those seen in GBHCs such as Cygnus X-1. In particular, the power spectral density (PSD) function is similar to that expected by simply rescaling the high/soft state PSD of Cyg X-1. The continuum variations are energy dependent: the PSD is a function of energy and the hard variations are delayed with respect to the soft variations. Similar results have been found in other Seyfert 1s (NGC 7469, Papadakis, Nandra & Kazanas 2001; NGC 4051, McHardy et al. 2003).

• Previous observations with ASCA (Iwasawa et al. 1996; Shih et al. 2003) and RXTE (Lee et al. 2001; Vaughan & Edelson 2001) showed the photon index of the 2−10 keV continuum to be correlated with its flux. The XMM-Newton observations confirm this and demonstrate that the trend is reversed below ∼1 keV, where the spectrum hardens with increasing flux (section 5.2). The average variability amplitude is highest in the range ∼1−2 keV and lowest at energies around the iron line (section 5.3).

• The variable spectrum can be decomposed into two components, a variable power-law component (PLC; section 5.2.1) and a relatively constant Reflection-Dominated Component (RDC; section 5.1). The spectral variability (at least on timescales ∼10 ks) can be explained almost entirely by variations in the relative strengths of these two components, caused solely by changes in the normalisation of the PLC (section 5.3; see also Shih et al. 2003 and Fabian & Vaughan 2003). An analysis of the flux-flux plots (section 5.1; see also Taylor et al. 2003) and the difference spectra (section 5.2.1) shows the slope of the PLC remains approximately constant with flux.

• The EPIC spectrum indicates there is resonance absorption by ionised Fe at ≈6.7 keV (sections 4.3.2 and 4.4). This was predicted based on the presence of soft X-ray warm absorption (Matt 1994; Sako et al. 2003) and has been observed in at least one other high quality EPIC spectrum (NGC 3783). This resonance absorption appeared to vary between the two XMM-Newton observations (section 6).

DISCUSSION

Fitting the iron line in MCG-6-30-15

The broad iron line in MCG-6-30-15 has been unambiguously resolved using XMM-Newton/EPIC (W01; paper I; this paper), confirming the previous results from ASCA (Iwasawa et al. 1996), BeppoSAX (Guainazzi et al. 1999) and Chandra (Lee et al. 2002). However, deriving reliable line profile parameters is a considerable challenge even with the exceptionally high quality XMM-Newton + BeppoSAX data.
From the whole of the 320 ks observation the EPIC cameras collected about 6 × 10^3 counts from the core of the line, peaking at 6.4 keV, and about three times as many from the broad red tail of the line emission. The foremost problem is that there are no regions of the X-ray spectrum unaffected by either the warm absorption (see section 4.3.1) or the broad emission components. Thus it is not possible to determine the underlying continuum without simultaneously constraining the other spectral components. Often-used, simple techniques for determining the continuum, e.g. fitting a power-law over the 3−5 keV and 7−10 keV data (Nandra et al. 1997), will give only a crude first approximation of the continuum. In a source such as MCG-6-30-15, which possesses a strong warm absorber, the absorption will cause the spectrum to curve even above ∼3 keV (see section 4.3.1 and Fig. 3). In the present paper this effect was dealt with by including an absorption model derived from fits to the RGS data (section 4.3.1) and also allowing for additional absorption when fitting the EPIC data (section 4.5). In addition to the distortion imposed on the continuum, the other effect of warm absorption that has a significant impact on iron line studies is resonant line absorption at ≈6.4−6.9 keV (section 4.3.2). The EPIC data indicate the presence of such absorption; allowing for the Fe resonance absorption had a subtle but significant effect on the best-fitting emission line models (see section 4.4). In particular, the line profile is slightly broader and bluer after accounting for the line absorption. Photoelectric absorption by Fe at energies ∼7.1−9.3 keV could, in principle, also be caused by the warm absorbing gas and further confuse the continuum estimation (see e.g. Pounds & Reeves 2002). However, in the case of MCG-6-30-15 this is not a significant effect. The total opacity due to the Fe K edges, as predicted from fits to the soft X-ray RGS data, is negligible (section 4.3.2), and spectral analysis of the EPIC data confirms this by ruling out the presence of deep Fe edges (section 4.5.1). The same is not true of all Seyfert 1 galaxies. The columns of Fe ions derived using RGS data by Blustin et al. (2002) for NGC 3783, and by Sako et al. (2001) for IRAS 13349+2438, predict strong K edges with a total optical depth τmax ∼ 0.05. Further complications include the presence of the reflection continuum (section 4.3.3) and possible weak, narrow components to the line emission (section 4.3.4). The approach used in the present paper was an attempt not to 'unambiguously isolate' the iron line but to model it after allowing for these complicating factors. In order to limit the range of models and free parameters, constraints obtained from other instruments (such as the RGS and BeppoSAX spectra) were used where possible. The result of this spectral fitting confirms the previous analyses in W01 and paper I: the best-fitting emission model comprises a power-law continuum plus strong reflection from a weakly ionised, relativistic accretion disc (section 4.6). This model requires the disc emission to be centrally concentrated and to extend in to ∼2 rg. Attempts to 'explain away' the red wing of the line using enhanced absorption or broken continua (section 4.5) did not substantially alter the main results.
Additionally, alternative emission components such as iron line blends did not fit the data (section 4.4), while emission from soft X-ray lines or blackbodies led to physically implausible results (section 4.7). The only non-standard model that could fit the data with plausible parameters was the partial covering model (section 4.7). However, this model is strongly at odds with the higher energy data. Thus a partially covered continuum does not provide a satisfactory explanation of the X-ray spectrum of MCG-6-30-15. It remains possible that the source contains some partially covered emission in addition to the disc reflection; fitting the data with a relativistic disc while also allowing for partial covering suggested that the possible impact of the partial covering is minimal (section 4.7).

Spectral variability of MCG-6-30-15

The spectral variability of the source can be explained in terms of a two-component model (section 5.1; see also McHardy et al. 1998; Shih et al. 2002; Taylor et al. 2003). This is the simplest model consistent with the flux-flux analysis. In this model the two spectral components are a power-law component (PLC) and a reflection dominated component (RDC), which carries most of the iron line flux. Both of these emission spectra are viewed through the warm absorbing gas. Flux variations are caused primarily by changes in the normalisation of the PLC, with the RDC remaining relatively constant. Such a model explains many aspects of the spectral variability, including the linearity of the flux-flux plots (section 5.1), the rms spectrum (section 5.3), the correlation between the 2−10 keV spectral slope and flux (section 5.2), and the lack of strong iron line variations (Reynolds 2000; Vaughan & Edelson 2001; paper I). The relative constancy of the RDC causes variations above ∼2 keV (and particularly around the iron line) to be suppressed, and variations below ∼1 keV are also increasingly suppressed (section 5.3). This model was previously applied to direct fitting of the time-resolved EPIC spectra, where it gave a quite reasonable description of the time-varying spectrum. The spectrum of the PLC was uncovered using the high−low difference spectrum (Fabian & Vaughan 2003; section 5.2) and the spectrum of the RDC was obtained by isolating the constant offset of the flux-flux plots (section 5.1). This latter technique revealed the spectrum of the constant RDC independently of any spectral fitting, simply by analysing the light curves using flux-flux plots. The result clearly revealed a spectral form strongly suggestive of reflection, with a prominent iron line. The RDC also shows a 'soft excess' below ∼1 keV, which could also be emission from the reflecting disc if it is weakly ionised. The model derived from fitting the 3−10 keV EPIC spectra is in rough qualitative agreement with the RDC spectrum obtained independently (see Fig. 22). Thus the model derived from spectral fitting (section 4) is in broad agreement with the model invoked to explain the spectral variability (section 5).

Variability constraints on complex absorption models

The warm absorption spectrum does not appear to vary substantially (specifically, the optical depths of the absorption features remain approximately constant). For example, ratio plots of the 'high' and 'low' spectra (section 5.2) show smooth profiles over the deepest absorption features, as does the rms spectrum (section 5.3).
This result is consistent with the studies of the deep warm absorber in NGC 3783 using Chandra (Netzer et al. 2003) and XMM-Newton (Behar et al. 2003), which showed that the X-ray absorption remained unchanged despite large changes in source luminosity.

[Fig. 11 caption: The dotted curve shows the prediction based on the model derived from fitting the EPIC spectra in the 3−10 keV range only. Clearly the model has qualitatively the same shape, although the details do not match, particularly at lower energies. However, given that the model was derived without using the spectral data below 3 keV, and there is an intrinsic uncertainty in the normalisation of the RDC (section 5.1), the agreement is rather interesting.]

Furthermore, the spectral variability constrained the possible effect of warm or partial absorption on the iron line. Neither of the two spectral components isolated through the spectral variability analysis resembles the absorbed part of the continuum in the partial covering model. The depth of any line-of-sight Fe K edge was constrained by the difference spectrum analysis (section 5.2) to be τ ≲ 0.02 at 7.1 keV and τ ≲ 0.07 at 8.0 keV. Fitting the partial covering model to the various difference spectra gave constraints comparable to those based on the time-averaged spectrum (section 4.7). As discussed above, the effect of including such additional absorption (either warm or partial) on the derived iron line parameters was negligible. Alternative models for the spectral variability of MCG-6-30-15, in which the variations are caused primarily by changes in the warm absorber, are in stark contrast with the data. Such a model predicts a power-law flux-flux relation (since the flux is given by f(E, t) = S(E) exp(−τ(E, t)), where S(E) is the underlying emission spectrum and τ(E, t) is the time-variable absorption optical depth). Such a model would also predict strong features in the rms spectrum corresponding to the deepest edges in the absorption spectrum (paper I). Both of these were ruled out.
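To make the distinction concrete, the toy simulation below generates band fluxes under the two competing models: a varying power law plus constant reflection component versus a constant spectrum behind a varying absorber. All light-curve parameters here are invented; the point is only the functional form, a linear relation f1 = a*f2 + b for the two-component model versus a power law f1 ∝ f2**(k1/k2) for the varying absorber.

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.uniform(0.5, 2.0, 500)           # arbitrary PLC normalisation history

# Model A: varying PLC + constant RDC (two-component model).
# s1, s2 are PLC count rates in two bands; r1, r2 the constant RDC rates.
s1, s2, r1, r2 = 1.0, 0.5, 0.3, 0.6       # illustrative band fluxes
fA1, fA2 = s1 * t + r1, s2 * t + r2       # linear: fA1 = (s1/s2)*fA2 + const

# Model B: constant emission behind a variable absorber,
# f(E,t) = S(E) * exp(-c(t) * k(E))  =>  f1 proportional to f2**(k1/k2).
k1, k2 = 2.0, 0.5                         # band-averaged opacities (assumed)
c = rng.uniform(0.0, 1.0, 500)            # absorber column history
fB1, fB2 = np.exp(-c * k1), np.exp(-c * k2)

for name, f1, f2 in [("two-component", fA1, fA2), ("absorber", fB1, fB2)]:
    beta = np.polyfit(np.log(f2), np.log(f1), 1)[0]
    print(f"{name}: log-log slope = {beta:.2f}")
# The absorber model returns slope k1/k2 = 4 exactly, i.e. a power law;
# the two-component model's constant offset makes its flux-flux relation
# linear rather than a pure power law -- the signature seen in the data.
```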
Lack of iron line variations

Although the model outlined above explains many aspects of the EPIC spectrum and its variability, there are some serious outstanding problems. The most significant of these is how the RDC flux, including the iron line, remains so steady in the presence of large changes in the flux of the PLC. In standard accretion disc/reflection models the reflection spectrum (including the iron line) is fundamentally driven by the PLC luminosity. Thus, since the RDC is thought to originate close to the SMBH (and the source of the PLC), the fluxes of the two components should be tightly coupled. The lack of correlations between the iron line and continuum is a long-standing issue (Iwasawa et al. 1996; Lee et al. 2000; Reynolds 2000; Vaughan & Edelson 2001). These XMM-Newton observations demonstrate more clearly than before that not just the line but the bulk of the reflected emission (the RDC) shows little short-timescale variability. The physics responsible for the reduced variability of the RDC is unclear. During the 2001 XMM-Newton observation the total EPIC count rate from the iron line was ∼0.1 ct s⁻¹, with ∼0.02 ct s⁻¹ coming from the resolved, quasi-Gaussian core of the line and the rest coming from the broad, red wing. As a result of this low count rate, compounded by the modelling difficulties, it is difficult to place firm limits on weak and/or extremely rapid (faster than ∼10⁴ s) line variations. Intriguingly, there are claims that variations in the line/RDC become apparent only when the source flux is very low (see also Iwasawa et al. 1996). One possibility is the progressive ionisation of the disc surface, as discussed by e.g. Reynolds (2000), Nayakshin & Kazanas (2002) and Ballantyne & Ross (2002). In this scenario an increase in the ionising luminosity leads to little change in the flux of the line because more of the Fe ions in the surface layers of the disc become fully stripped of electrons. Under some circumstances this model can explain the lack of iron line variations (Reynolds 2000). However, the spectral fitting analysis (section 4) suggests the emitting region of the disc is only weakly ionised. Time-resolved spectral fitting of the XMM-Newton data using an ionised disc model showed that ionisation effects could not reproduce the constancy of the RDC. A second reflector that was intrinsically constant was also required. Thus, ionisation effects alone are insufficient to explain the lack of iron line variations. One intriguing alternative possibility is that gravitational light bending, which may be strong if the disc extends to ∼2 r_g, is (partially) responsible. Such a model may naturally explain the enhanced reflection strength (R ≳ 2; section 4.3.3) and iron line. An interesting and related question is: how does the PLC retain its spectral shape whilst undergoing changes of a factor of ∼5 in luminosity? Both direct time-resolved spectral fitting and flux-flux analyses (section 5.1; Taylor et al. 2003) show the slope of the PLC to stay remarkably constant. Similar results have been seen in ultrasoft Seyfert galaxies (e.g. Vaughan et al. 2002), where the primary power-law slope stays constant despite large-amplitude luminosity variations. This poses an interesting challenge to models of the origin of the primary X-rays, which typically predict some luminosity-correlated variation in slope (e.g. Zdziarski et al. 2003). The observations are suggestive of some kind of 'Compton thermostat' (Haardt, Maraschi & Ghisellini 1994; Pietrini & Krolik 1995). Alternatively, if the apparent variations in the PLC flux are due to geometrical effects near the SMBH, the PLC might be expected to vary achromatically.

Inner edge of the disc

The intricacies of modelling the complex spectrum of MCG-6-30-15 have made it difficult to tap the potential of the iron line profile for probing the region of strong gravity around the SMBH. An often-asked question is: how far in towards the SMBH does the disc extend? The answer to such a question may reveal the spin of the SMBH (e.g. Stella 1990; Martocchia et al. 2002). The strength and shape of the red wing of the line are determined by the emissivity of the innermost disc, and so are the primary diagnostics of the position of the disc's inner boundary. The problem is that the very broad, low-contrast tail of the line is difficult to discern from the spectral curvature caused by the warm absorber (see Fig. 3). Thus it is not possible to unambiguously determine the inner disc edge (unless the transmission of the warm absorber is known to very high accuracy). For example, the method of Bromley, Miller & Pariev (1998) works by identifying the 'minimum energy' of the red wing of the redshifted line emission. However, no such energy can be identified without detailed, simultaneous modelling of the absorption and the reflection continuum. This does not mean that all is lost.
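A rough sense of why the 'minimum energy' of the red wing probes the inner radius comes from the gravitational redshift alone. The sketch below uses the simple Schwarzschild factor E_obs = E_rest √(1 − 2 r_g/r), with r_g = GM/c², deliberately ignoring Doppler shifts, inclination and the spin-dependent metric; the numbers are order-of-magnitude illustrations, not the relativistic line profiles actually fitted in section 4.6.

```python
import numpy as np

# Gravitational redshift of the 6.4 keV Fe K line versus emission radius,
# using the Schwarzschild factor only (no Doppler/beaming, no spin metric).
# This is an order-of-magnitude illustration, not a Laor-type line model.
E_REST = 6.4                           # keV, neutral Fe K-alpha
for r in [2.5, 3.0, 6.0, 10.0, 20.0]:  # radius in units of r_g = GM/c^2
    e_obs = E_REST * np.sqrt(1.0 - 2.0 / r)
    print(f"r = {r:5.1f} r_g  ->  E_obs ~ {e_obs:.1f} keV")
# Emission confined to r >~ 6 r_g cannot redshift the line much below
# ~5 keV, whereas a wing extending down to ~3-4 keV requires emission
# from only a few r_g -- which is why the red wing is so diagnostic.
```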
The method of fitting the reflection emission while simultaneously fitting/constraining the absorbed continuum did require emission in to ∼2 r_g (W01; paper I; section 4.6). Importantly, this result was robust to various possible effects that might bias the continuum determination (sections 4.5 and 4.7). This is strong but not conclusive evidence that the SMBH is spinning. An alternative approach to fitting the red wing of the line (perhaps isolating it through variability, if the red wing shows variations on very short timescales not yet probed) would bolster the determination of its profile. The second and more fundamental objection is that some emission is possible from within 6 r_g even if the SMBH is not spinning (see Reynolds & Begelman 1997; Young, Ross & Fabian 1998; Krolik & Hawley 2002; Merloni & Fabian 2003). Further refinement of the theory, combined with a further increase in observational sensitivity, should be able to settle this issue (see Young & Reynolds 2000; Yaqoob 2001).

Why do only some Seyferts show broad lines?

Recent, high-quality Chandra and XMM-Newton observations of bright Seyfert galaxies seem to have produced a mixed bag of iron lines. Examples of strong, narrow lines have been found in NGC 3783 (Kaspi et al. 2002), NGC 5548 (Pounds et al. 2003a), NGC 4151 and NGC 3227. Examples of highly broadened iron lines have been seen in MCG-5-23-16 (Dewangan et al. 2003), NGC 3516 (Turner et al. 2003c), IRAS 18325-5926 (Iwasawa et al. in prep.), Q 0056-363 (Porquet & Reeves 2003), IRAS 13349+2438 (Longinotti et al. 2003) and Mrk 766 (Mason et al. 2003; Pounds et al. 2003b). The line in MCG-6-30-15 is the broadest of the known broad lines. There are some objects for which the line profile is in dispute (e.g. Mrk 205 and Mrk 509; see Pounds et al. 2001; Reeves et al. 2001; Page et al. 2003c). Even more confusing is the possible detection of rapidly variable yet narrow iron lines in Mrk 841 and NGC 7314 (Yaqoob et al. 2003). Recent studies of Galactic black hole candidates (GBHCs) have also revealed relativistically broadened iron lines in a few cases (Miller et al. 2002, 2003). This raises two obvious, related questions. The first is: why do some Seyferts show strong, broad lines while others show only narrow lines (with any highly broadened component being weak or absent)? The second question is: how can some Seyferts not have broad lines? The high luminosities of AGN in general argue that there must be a large amount of mass flowing deep into the potential well of the SMBH. Furthermore, the rapid X-ray variability argues that the X-ray source is spatially compact and probably also located close to the SMBH. There should therefore be a substantial amount of relatively cool accreting matter in the region of strong gravity around the SMBH, close to the primary X-ray source. These are the only two ingredients thought necessary to produce a broad iron line (Fabian et al. 1989; Reynolds & Nowak 2003). The absence of broad lines in many Seyfert 1s is therefore quite puzzling, and requires either that there simply is not enough cool matter extending close to the SMBH to produce the line, or that the line photons are somehow lost (absorbed/scattered) on their way out of the inner regions. It is important to note that there could be some observational biases present. By their very nature, highly broadened lines are difficult to isolate (as discussed above). The broad line in MCG-6-30-15 is perhaps the clearest example, in part due to the strength of the reflection spectrum.
The high level of reflection (R ≳ 2) means that the equivalent width of the broad line is rather high (EW ∼ 400 eV). Other Seyfert galaxies with more typical reflection strengths (R ∼ 1) would be expected to have comparably weaker broad lines (EW ∼ 150 eV; George & Fabian 1991). Even with the high throughput of the EPIC cameras on-board XMM-Newton, a typical-length observation (∼40 ks) of a bright Seyfert 1 might not clearly reveal a redshifted line with EW ∼ 150 eV. This is especially true if the continuum is absorbed. Thus, most observations of Seyfert 1s are unlikely to show broad lines; the lack of an obvious broad line in the spectrum does not rule out its existence. Such observations can, however, constrain the strength of highly broadened lines (e.g. upper limits on the equivalent width of a Laor profile) and so place useful limits on the line emission. This will help address the question of whether broad lines are a rare or common occurrence.

Iron resonance absorption

The fits to the iron line allowing for resonant absorption suggest the presence of an additional absorber whose dominant effect is Fe XXV Heα absorption at ≈6.7 keV. This absorption appears to have varied between the 2000 and 2001 XMM-Newton observations (section 6). Such an absorption system can be modelled (using the CLOUDY grids of Turner et al. 2003a) with a high-column (N_H ≳ 10²² cm⁻²), high-ionisation (log ξ ≈ 3) warm absorber. This is similar to the results obtained for another bright Seyfert 1, NGC 3783, except that this very high ionisation absorption does not significantly affect the low-energy spectrum. Similar features may have been observed in other Seyferts (NGC 3516, Nandra et al. 1999; IRAS 13349+2438, Longinotti et al. 2003). Another approach is to estimate the column density using just the equivalent width (EW ∼ 10 eV) of the absorption line and the assumption that He-like Fe is the dominant ion in the absorbing material. Using the oscillator strength of the Fe XXV Heα line from Nahar & Pradhan (1999), the column density can be estimated assuming the line is unsaturated (linear part of the 'curve of growth'; Spitzer 1978). This gives N(Fe XXV) ∼ 10¹⁷ cm⁻². If the line is saturated the column density will be higher. The predicted depth of the corresponding Fe XXV absorption edge (at 8.83 keV) is τ ∼ 0.05, consistent with the limits obtained from the data (section 4.5.1). Assuming solar abundances and that half of all the Fe ions are in the He-like state, this column corresponds to N_H ≳ 4 × 10²¹ cm⁻². This compares reasonably to the estimate derived from the CLOUDY model, but should really be considered only a lower limit on the total column density since it does not account for other (lower oscillator strength) lines, possible line saturation or emission-line filling.
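The back-of-the-envelope column estimate quoted above can be reproduced in a few lines. The sketch below applies the linear curve-of-growth relation N = 1.13 × 10²⁰ W_λ/(f λ²), with W_λ and λ in Å (Spitzer 1978); the oscillator strength f ≈ 0.7 and the solar Fe abundance used here are assumed round values for illustration, not the precise Nahar & Pradhan (1999) or CLOUDY inputs.

```python
# Linear curve-of-growth estimate of the Fe XXV column from the measured
# equivalent width of the Fe XXV He-alpha absorption line (Spitzer 1978):
#   N [cm^-2] = 1.13e20 * W_lambda[A] / (f * lambda[A]**2)
# Assumed inputs: f ~ 0.7 for the He-alpha resonance line and a solar
# Fe/H ~ 4.7e-5 by number -- illustrative round values only.

HC_KEV_A = 12.398                 # h*c in keV * Angstrom
E_LINE = 6.7                      # keV, Fe XXV He-alpha
EW_EV = 10.0                      # measured equivalent width in eV
F_OSC = 0.7                       # assumed oscillator strength
FE_H = 4.7e-5                     # assumed solar Fe abundance by number

lam = HC_KEV_A / E_LINE                       # line wavelength in Angstrom
ew_a = (EW_EV / (E_LINE * 1e3)) * lam         # EW converted from eV to Angstrom
n_fe25 = 1.13e20 * ew_a / (F_OSC * lam**2)    # He-like Fe ions per cm^2
n_h = 2.0 * n_fe25 / FE_H                     # if half of all Fe is He-like

print(f"N(Fe XXV) ~ {n_fe25:.1e} cm^-2")      # ~1e17 cm^-2, as in the text
print(f"N_H       ~ {n_h:.1e} cm^-2")         # ~(4-6)e21 cm^-2 lower limit
```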
The Future

The long XMM-Newton observation of MCG-6-30-15 has yielded a vast amount of detailed information (on absorption and emission spectra, variability timescales and delays, and spectral changes). Only by making full use of the long and simultaneous observations (CCD and grating X-ray spectra) possible with XMM-Newton (and simultaneous BeppoSAX observations) could all of these be achieved. Comparable observations have been performed or planned for a small sample of other Seyferts (for example NGC 3783; Behar et al. 2003), which will shed light on the detailed X-ray properties of Seyferts in general. Complementary to these are the Chandra long-looks that have been particularly enlightening for warm absorber studies (e.g. Kaspi et al. 2002). Such observations also demonstrate the limits of present instrumentation. For example, without higher resolution, high S/N spectroscopy it is difficult to realise the diagnostic potential of the 6.4−6.9 keV iron resonance absorption lines. The presence of such a large column of highly ionised iron makes MCG-6-30-15 a good choice for high-resolution Fe-band spectroscopy with the XRS on-board Astro-E2. This has the capability (with resolution ∆E ∼ 7 eV and good throughput) to measure the depths and velocity shifts of some of the individual resonance lines. Indeed, the combination of the high-resolution XRS spectrum of the iron absorption/emission, the high-energy HXD spectrum, which will constrain the continuum and reflection, and the high-throughput XIS spectrum should provide the next major advance towards making full use of the information contained in the X-ray spectra of Seyferts. Aside from future missions, the present data indicate that the most interesting observations are often made when the source is very faint. The spectral analysis presented above showed that when MCG-6-30-15 is faintest, the iron line is strongest relative to the continuum, making it a much higher contrast feature. The rare observations that show apparent changes in iron line properties tend to occur during periods of unusually low flux (Iwasawa et al. 1996). Observations of NGC 4051 during its prolonged 'low states' have revealed other interesting phenomena, such as the warm absorbing gas seen in emission. Thus it would seem that XMM-Newton and Chandra observations of bright Seyfert 1s, taken during periods of unusually low activity, may prove particularly revealing.

CONCLUSIONS

The long XMM-Newton observation of MCG-6-30-15 has confirmed the existence of a highly broadened iron line. The main results of the present investigation are as follows:
(i) It is not yet possible, given the present state of knowledge of the detailed properties of the warm absorber, to unambiguously disentangle the red wing of the line from the absorbed continuum.
(ii) Nevertheless, the best-fitting model requires the reflection/line emitting region to extend inwards to ∼2 r_g. This result was robust to various possible biases in the continuum determination.
(iii) Alternative models for the spectral region around the iron line, not allowing for the effects of strong gravity, gave unsatisfactory results.
(iv) The spectral variability of MCG-6-30-15, including the correlation between the 2−10 keV slope and flux and the reduced variability of the iron line, has been explained using a two-component model.
(v) The relative constancy of the reflection-dominated component (RDC), in the presence of large variations in the power-law component (PLC), is the primary cause of the spectral variations. The reasons for the lack of RDC variations are unclear. A model invoking gravitational light bending near the SMBH can qualitatively explain the suppressed variations, the relative strength of the RDC and the small inner disc radius.
Human Figure Drawing (HFD) Test is affected by Cognitive Style

Drawn figures are widely regarded as an ideal instrument for self-expression. The HFD (Human Figure Drawing) test is an abbreviated test that was developed to evaluate various psychological states, in particular psychic status, including psychiatric illness and personality state. Reliance on the drawings therefore requires proof that they carry no bias due to cognitive differences between subjects. In the present study it is demonstrated that drawing tests are influenced, to a certain extent, by the subject's cognitive style. Although the present study was limited in scope, the results indicate the need for re-examination of the reliability limits of the test.

Introduction

In the medical field, psychological assessment has relevance as a tool for decision-making relating to differential diagnosis, the type of treatment required, and prognosis. In this perspective, the purposes of psychological assessment are to identify the psychological and neurological repercussions caused by the disease or injury process [1]. As opposed to medical tools, one of the central assumptions of this procedure is that personality can be assessed through the unconscious mind. From a psychoanalytic perspective, an indirect approach such as projective drawing is an effective instrument that helps in viewing the inner world, unconscious defenses and conscious resistance. Moreover, the projective drawing test is a tool that also allows obtaining information concerning the sensitivity and maturity of the patient, as part of the integrated personality and his interpersonal interaction with the environment [2]. In recent decades, widespread use of projective tests as tools for psychodiagnostic and neuropsychological assessment has become popular. Today there is no doubt that the drawing test is a tool that allows obtaining information about the vulnerabilities, the maturity of the patient's personality integration, and his interpersonal interaction with the environment [3]. Goodenough [4] reported, as early as the 1920s, many clinical findings that can be produced from children's drawings, from which mental states beyond the level of intelligence can be deduced. The test can also provide a great deal of information about cognitive function, since it requires visual-spatial skills, attention and concentration, and accurate perception of visual stimuli. The HFD (Human Figure Drawing) test is an abbreviated test that was developed several decades later, with the aim of evaluating various psychological states, such as psychosocial status [5]. The practice of using human figure drawings (HFDs) to assess intellectual and psychological ability is pervasive among psychologists and therapists in many countries [6]. Recently, a positive relationship was found between the size of the drawing and the self-esteem of secondary-school students; a positive correlation was also found between the size of a drawn figure and its details and the degree of self-esteem in men [7]. Historically, the DAP (Draw-A-Person) test, or HFD, was developed as a tool for assessing the level of intelligence of children. Actually, much earlier, Goodenough [4] had reported numerous clinical findings that can be drawn from such tests beyond the level of intelligence. Consequently, Machover expanded the use of the DAP test into a tool that allows obtaining personal information about the patient [8], and one of the most significant subsequent studies attested to the reliability of the test [9].
Koppitz [10] defined objective signs, emotional indicators, that reflect the concerns or fears of the child. The test can also provide neurological information about the subject's cognitive function, such as agnosia and apraxia, because it requires visual-spatial skills, spatial orientation, attention, concentration, accurate perception of the visual stimulus, and motor functions [11]. In light of studies that found differences on the HFD test between demented elderly individuals and healthy subjects, a study was conducted in an adult population to see whether the validity of the dementia-screening (MMSE) test could be raised using the HFD. The subjects were asked to draw a man, and the drawn body parts were examined against a list of parts whose absence may predict dementia. The authors concluded that although the HFD test cannot replace the existing tests for dementia, it contributes to raising sensitivity and to the integration of the findings [11]. It has been well established that art activities are suitable for creating a context through which a patient can bring up feelings and express himself [7]. However, Goodenough [4] suggested that the artistic qualities per se of the drawing are a negligible factor. Nevertheless, there may be cognitive variables that projective tests, particularly the HFD test, currently tend to ignore. Cognitive styles, as defined by Witkin [12], refer to individual differences in the way people perceive, think, learn and interact with their environment. By definition, cognitive style affects how information is represented, so information will be represented in different ways depending on the cognitive style, and this will also affect information processing and reaction [13]. One delegate of cognitive style is the verbal-visual axis. Accordingly, when visual individuals process information, they spontaneously experience a mental sequence of images representing the information, or mental visual information then arises associatively. By contrast, when verbal individuals read, see or hear something, they process information literally or by verbal associations. For example, verbal people can represent verbal thoughts by writing words on a page, while visual people do so with sketches. Verbal people can create images if they try, but it is not a natural situation for them [14]. Accordingly, it is common, for example, to attribute visual thinking ability to designers, whose working representations are not only verbal but also include a variety of shapes and patterns [15]. Evidence for this assumption came from correlating acceptance tests for architecture with the candidates' cognitive style. Candidates were given a design task a week before the interview and were asked to create a model town with a conceptual design. After the interview, their cognitive style was examined using a computerized task. The results demonstrated that cognitive style is a strong and reliable predictor of success in the admission tests, as well as of success in the first year of design studies [16]. Additionally, studies have found that the distribution between verbal and visual is not dichotomous but a continuum, and only about 20-30% of the population has a visual cognitive style, while the majority has a verbal style [17,18]. Based on previous studies relating cognitive style to design capabilities [16,19,20], it was assumed here that there would be differences in the number of drawn items in the HFD on the basis of verbal-visual tendency, and that cognitive style is an intervening variable in HFD test results. The hypothesis was that individuals with a visual cognitive style would present more information in the HFD test, including organs and accessories, and thus would be ranked higher than individuals with a verbal cognitive style.
Subjects

The study involved a total of 70 healthy subjects (by self-report) aged 20-30, undergraduate students in the Department of Psychology and Behavioral Sciences at Ariel University. The study was conducted in accordance with the departmental ethics committee requirements. All subjects signed a consent form and were rewarded after participation with a credit coupon counting towards their degree requirements.

Tools

HFD test: The participant was provided with a pencil and a blank sheet of paper and was told to make the best possible drawing of the whole figure of a man. The test was not timed. To be included in the study, a subject's drawing had to contain the basic organs (body, head, eyes, mouth, nose, legs, and arms). We relied on the body-details list of the human figure drawing from [11]. The total score was calculated over the items beyond the basic ones (body, head, eyes, mouth, nose, legs, arms), with each organ/item worth one point.

Visual/Semantic Categorization Test: designed to assess the participant's cognitive style [17], it consists of twenty-four stimuli. Each stimulus includes four word items, with two types of exceptions: one item (of the four word items) is exceptional on a functional basis, and the other is exceptional on a visual basis. For example, in the cluster 'WATERMELON; BALL; CARDS; BALLOON', 'WATERMELON' is exceptional functionally (the other three are games or toys), and 'CARDS' is exceptional visually (the other three are characterized by a spherical shape; in Hebrew there is a special word for 'playing cards'). It is important to emphasize that the explanations given to the participants did not include information about the existence of the two exceptions. For each participant, a functional-semantic/visual index was calculated [16], indicating cognitive-style type, and its intensity was calculated as well. The index, called the cognitive index, was calculated according to the formula (S − V)/(S + V): the sum of the semantic selections (S) minus the sum of the visual choices (V), divided by the sum of the semantic and visual answers together. The index values range from 1 to −1; a negative result indicates the presence of a visual style, while a positive result points to the dominance of a semantic style. Accordingly, the closer the index is to the extreme values of the scale, the clearer and more significant the reflected thinking style.
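A minimal sketch of the scoring step described above, computing the cognitive index (S − V)/(S + V) from a participant's 24 categorization choices and classifying the style; the array of choices here is invented for illustration.

```python
def cognitive_index(choices):
    """Cognitive index (S - V) / (S + V) from a list of per-stimulus
    choices, each coded 'S' (semantic exception) or 'V' (visual one)."""
    s = sum(c == "S" for c in choices)
    v = sum(c == "V" for c in choices)
    return (s - v) / (s + v)

# Hypothetical participant: 9 semantic and 15 visual selections out of 24.
example = ["S"] * 9 + ["V"] * 15
idx = cognitive_index(example)
style = "visual" if idx < 0 else "verbal/semantic"
print(f"index = {idx:+.2f} -> {style} style")   # -0.25 -> visual style
```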
Results

Consistent with previous research, and according to the Visual/Semantic Categorization Test (VSCT; [17,18]), out of seventy individuals only eighteen (25.7%) showed a visual cognitive-style tendency, while the other 52 individuals had a verbal/semantic cognitive style. Comparing the number of HFD items drawn by each cognitive-style group revealed a significant difference (t(68) = 2.0; p = 0.05), the visual group drawing on average slightly more items (16.7 ± 2.4) than the verbal cognitive-style group (15.1 ± 2.7). Examining the relationship between the intensity of cognitive style and the amount of detail drawn by the participants revealed a significant correlation (R = 0.25; p = 0.04): the stronger a participant's visual style, the more detail he or she draws when drawing a human figure, and vice versa; the stronger the participant's verbal cognitive tendency, the fewer details he draws.

Discussion

Projective analytic theory is based on the assumption that deep and often unconscious feelings and motives may be accessed through various means of self-expression. Drawing tests (the HFD and House-Tree-Person (HTP) tests) are widely used as part of projective tests for the assessment of psychological state. The drawing of the human figure is widely regarded as an ideal instrument for self-expression [21]. The popularity of the test is not in question, owing to its easy administration and scoring. Though the interpretation of the test targets the psychic aspects of the person, researchers have found the test effective in assessing neurological intactness, visual-motor coordination [22], cognitive development [23] and learning disabilities. Thus, reliance on the drawings requires proof that they carry no bias due to cognitive differences between subjects. In the present study it is demonstrated that drawing tests are influenced, to a certain extent, by the subject's cognitive style. The findings are consistent with previous studies that found an association between cognitive style and high visual design capabilities [15,16], as well as artistic abilities [24]. The differences obtained in the present study were clear, distinct and significant. It can be argued that, despite the apparent significance, the differences are not large. However, it should be noted that the test here was very shallow and relied only on the quantity of drawn items. Naturally, in order to better understand the contribution of cognitive style, thorough research is required examining not only the quantity but also the quality of details and their projection onto emotional conclusions. The results are important at the conceptual level, although the study is of course limited by the small number of subjects, which is expressed in a relatively small number of visual-style participants.
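The group comparison reported in the Results can be reconstructed from the summary statistics alone. The sketch below uses scipy's two-sample t-test from summary data; note that with the rounded means and SDs quoted above it returns t ≈ 2.2 rather than exactly the reported t(68) = 2.0, the gap being attributable to rounding of the published values.

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics quoted in the Results section (rounded values).
t_stat, p_val = ttest_ind_from_stats(
    mean1=16.7, std1=2.4, nobs1=18,   # visual-style group
    mean2=15.1, std2=2.7, nobs2=52,   # verbal/semantic-style group
    equal_var=True,                   # pooled-variance test, df = 68
)
print(f"t(68) = {t_stat:.2f}, p = {p_val:.3f}")
# ~ t(68) = 2.23, p = 0.029 with the rounded inputs; the paper reports
# t(68) = 2.0, p = 0.05, consistent given rounding of the means/SDs.
```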
Probing the origin of excitonic states in monolayer WSe2

Two-dimensional transition metal dichalcogenides (TMDCs) have spurred excitement for potential applications in optoelectronic and valleytronic devices; however, the origin of the dynamics of excitons, trions, and other localized states in these low-dimensional materials is not well understood. Here, we experimentally probed the dynamics of excitonic states in monolayer WSe2 by investigating the temperature- and polarization-dependent photoluminescence (PL) spectra. Four pronounced PL peaks were identified below a temperature of 60 K at near-resonant excitation and assigned to exciton, trion and localized states from excitation-power-dependence measurements. We find that the localized states vanish above 65 K, while the exciton and trion emission peaks remain up to room temperature. This can be explained by a multi-level model developed for conventional semiconductors and applied to monolayer TMDCs for the first time here. From this model, we estimated a lower bound on the exciton binding energy of 198 meV for monolayer WSe2 and explained the vanishing of the localized states. Additionally, we observed a rapid decrease in the degree of circular polarization of the PL with increasing temperature, indicating relatively strong electron-phonon coupling and impurity-related scattering. Our results reveal further insight into the excitonic states in monolayer WSe2, which is critical for future practical applications.

Monolayer transition metal dichalcogenides (TMDCs), a new class of two-dimensional (2D) materials analogous to graphene, have received considerable attention in recent years due to their unique optical and electronic properties [1][2][3]. Unlike in other semiconductors and few-layer TMDCs, electrons and holes in monolayer TMDCs are tightly bound together at the energy-degenerate ±K valleys, as a consequence of the reduced dielectric screening and strong Coulomb interactions 4,5, giving rise to valley excitons and trions. The exciton binding energy of monolayer TMDCs has been predicted to be in the range of 0.5 eV to 1 eV 4,6, which is more than one order of magnitude larger than in conventional semiconductors such as GaAs 7,8. Due to this large binding energy, excitons remain stable even at room temperature and are hence predicted to play a key role in future optoelectronic and valleytronic applications 3. Determination of the binding energy is also critical to provide insight into other physical properties of excitons such as the Bohr radius and many-body interactions. To date, a number of experimental techniques have been used to determine the exciton binding energy; however, the values obtained from theoretical calculations 6,9 and different experimental methods 5,10-12 are inconsistent and may also depend upon specific sample preparation conditions. In addition to the determination of the exciton binding energies of monolayer TMDCs, the control of valley exciton and trion dynamics is also of great importance and has been widely explored by various experimental methods, such as electrical gating 13, optical pumping 14,15 and the application of a magnetic field 16. However, our understanding of the fundamental properties of the intricate excitonic features in monolayer TMDCs remains incomplete. Various possible mechanisms, including impurity-related scattering, interaction with phonons and carrier-carrier interactions, still need to be systematically addressed.
Moreover, the optical studies of excitonic features that have been reported so far [17][18][19][20][21] are not consistent with each other and also show striking material variations. Here, we experimentally investigated the evolution of multiple PL emission peaks in monolayer WSe2 in order to provide further insight into the behavior and origin of the excitonic and localized states in monolayer TMDCs, as well as their associated binding energies. A multi-level model commonly employed for conventional semiconductors such as zinc oxide nanowires 22 was employed here to describe the temperature-dependent behavior of excitonic and localized states and reveal a lower bound on the exciton binding energy in monolayer WSe2. At a temperature of T = 10 K, we observed a clear neutral exciton emission at ~1.75 eV and trion emission at ~1.72 eV, yielding a large trion binding energy of ~30 meV. Temperature-dependent PL measurements showed that both the exciton and trion emissions persisted at room temperature as a result of their large binding energies, whereas other localized, defect-related emission states vanished above T = 65 K. The evolution of the PL emission with temperature in monolayer WSe2 revealed a combined effect of large binding energies and strong electron-phonon interactions. Moreover, an observed difference in the temperature-dependent degree of circular polarization between WSe2 and MoS2 indicated stronger electron-phonon coupling and impurity-related scattering in monolayer WSe2. For these studies, monolayer WSe2 flakes were exfoliated from a bulk crystal (2D Semiconductors Inc.) onto a 285 nm SiO2/Si substrate using the well-established micro-mechanical exfoliation technique 23. The thickness of the SiO2 layer was chosen to offer the best optical contrast between thin WSe2 flakes and the substrate, thus increasing the visibility of the single-layer sheets 24. After promising thin flakes were identified under an optical microscope, as shown in Fig. 1(a), and confirmed by Raman spectroscopy with a typical Raman mode at ~250 cm−1 for single-layer WSe2 25, we further used atomic force microscopy (AFM) to measure the layer thickness. Figure 1(b) shows the AFM image of the area indicated in the optical image in Fig. 1(a). The height profile shown in Fig. 1(c) was taken along the white dashed line in Fig. 1(b), revealing a thickness of 0.7 nm for monolayer WSe2, in agreement with previous studies 25,26. For low-temperature measurements, the sample was mounted in a cryostat (Janis ST-500 modified with an extension snout) cooled by liquid helium, and all of the following measurements were carried out in a confocal microscopy set-up. The experiments were performed using two different excitation lasers. One was a 488 nm continuous-wave (cw) Argon laser, and the other was a femtosecond laser at 632 nm generated by a tunable frequency-doubled optical parametric oscillator (OPO) pumped by a Ti3+:sapphire pulsed laser. For the experiments requiring circularly polarized excitation, the laser first passed through a Glan-Thompson polarizer and then a broadband quarter waveplate. The laser power was maintained below 30 μW, which is in the linear absorption regime as shown in Fig. 2, to avoid any heating or saturation effects. The laser beam was focused onto the sample via a 50×, 0.65 NA Nikon microscope objective with a laser spot size of ~1 μm.
To identify the right-handed (σ+) and left-handed (σ−) circularly polarized PL signals, the emission from the sample was sent through a quarter waveplate followed by a linear polarizer. The beam was then focused at the entrance slit of a spectrometer and detected by a charge-coupled device (CCD) camera. Scattered laser light was blocked by a suitable long-pass filter placed immediately before the spectrometer entrance slit. PL spectra from monolayer and bulk WSe2 flakes were first measured at a temperature of T = 10 K using the 488 nm cw excitation laser. Five pronounced PL peaks were observed from monolayer WSe2 at low temperature, as displayed in the inset of Fig. 2(a), which was significantly different from the spectral characteristics of WSe2 monolayers at room temperature, where only one broad peak was observed at 1.65 eV 25,26. Additionally, a PL spectrum from bulk WSe2 taken at a temperature of T = 10 K is also shown in the inset. Compared with the PL spectrum from the monolayer, the red-shifted, weak PL indicates the transition to an indirect bandgap in bulk WSe2. The origin of the multiple emission peaks in monolayer WSe2 is further discussed below by observing the excitation power dependence and the evolution of the PL spectra from T = 10 K to room temperature. The excitation power dependence of the five PL peaks from monolayer WSe2 at T = 10 K is shown in Fig. 2, where the peaks are labeled as indicated in the inset. The solid lines are fits to the data using a power law, I ∝ P^α, where I is the PL peak intensity for a given excitation power P. The extracted exponent factor α for the five peaks was 1.0, 1.0, 1.6, 1.3 and 0.8, in order from peak 1 to 5, revealing insight into the different dominant recombination processes for each peak. At T = 10 K, peaks 1 and 2 are primarily attributed to the radiative recombination of excitons and trions 16,17,19,27, respectively, because the photon emission rate was observed to be linearly dependent on the excitation power (I ∝ P). This linear dependence is expected from the first-order rate equation for the radiative recombination process 28. For peaks 3 and 4, several earlier reports have observed similar emission features; however, the nature of these peaks appears to depend on specific experimental conditions. While Wang et al. 17 and Jones et al. 27 assigned these peaks as localized states, You et al. 19, under much higher pulsed laser excitation, have demonstrated the observation of biexciton emission. In our experiment, both peaks 3 and 4 were observable under cw excitation even with relatively low power (<30 μW), as shown in the low-temperature PL spectrum in the inset of Fig. 2(a); thus defect-related localized-state transitions are the most likely origin of these peaks. When photo-excited electron-hole pairs are trapped in potential wells, which may be created by lattice defects or residual impurities commonly introduced during the mechanical exfoliation process 29, localized states may form within the bandgap with an emission energy below the exciton and trion emission energies. An exponent factor α between 1 and 2 is expected for these bound exciton transitions 28, which is consistent with our observations. Several mechanisms could explain the sub-linear power dependence of peak 5 17,18,27,30. Among those explanations, phonon side-band emission is one possible mechanism 27, which can be interpreted as the radiative recombination of electrons and holes separately localized at different spatial sites 31.
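Extracting the exponent α from a power-dependence series is a straight-line fit in log-log space. A minimal sketch, using synthetic intensities for an invented peak rather than the measured data:

```python
import numpy as np

# Power-law fit I = C * P**alpha, linearised as log I = alpha*log P + log C.
# The "measured" intensities below are synthetic (alpha = 1.3 plus noise),
# standing in for the integrated intensity of one PL peak vs laser power.
rng = np.random.default_rng(0)
power_uw = np.array([1, 2, 5, 10, 20, 30], dtype=float)   # excitation, uW
intensity = 50.0 * power_uw**1.3 * rng.normal(1.0, 0.03, power_uw.size)

alpha, log_c = np.polyfit(np.log(power_uw), np.log(intensity), 1)
print(f"alpha = {alpha:.2f}")
# alpha ~ 1  -> free exciton/trion recombination; 1 < alpha < 2 -> bound
# (localized) excitons; alpha ~ 0.5 -> separately localized carriers.
```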
Considering the rate equations for free and localized carriers, one can derive that the PL intensity arising from the recombination of localized electron-hole pairs is I ∝ P^0.5 31. The abundant defects and impurities common in exfoliated monolayer WSe2 may explain why the extracted exponent factor of 0.8 for peak 5 is slightly larger than predicted theoretically for phonon sideband emission. An alternative explanation for the origin of peak 5 may be excitons bound to isoelectronic defects in the silicon substrate 30, where isoelectronic traps dominate the recombination process. The polarization of the PL offers opportunities for optically manipulating the valley index in monolayer TMDCs. However, the degree of PL polarization under 488 nm (~2.54 eV) excitation was relatively low, at ~10%, even at T = 10 K, because the excitation was far from the exciton emission energy of monolayer WSe2, as shown above 14. Thus, in all of the following measurements, the excitation was performed using a 632 nm pulsed laser (~1.96 eV), which is closer to the neutral exciton emission energy, leading to a larger degree of PL polarization at low temperatures. Under this excitation, three pronounced peaks were identified at a temperature of T = 10 K and are shown in Fig. 3(a), along with the evolution of the PL spectra with increasing temperature. In the temperature range between 30 and 50 K, four PL peaks were clearly observed. Peak 1 at ~1.75 eV and peak 2 at ~1.72 eV were recognized as the neutral exciton and trion emission, respectively, consistent with previous reports 27 and the cw excitation measurement presented above. A trion binding energy of ~30 meV is obtained from the clear separation between the exciton and trion peaks. Because of this relatively large trion binding energy, trions could theoretically survive at room temperature, where the thermal energy is k_BT ≈ 25 meV. The other two peaks, with lower photon energies, were recognized as localized states. As the temperature increased, all of the peaks red-shifted and followed each other closely, reflecting a decreasing bandgap. The intensity of peaks 3 and 4 significantly decreased above a temperature of 50 K and eventually vanished, whereas clear exciton and trion peaks remained in the temperature range between 50 and 150 K. As the temperature further increased beyond 150 K, the neutral exciton peak dominated the PL spectrum, with a long low-energy tail that may be a signature of the existence of trion emission at room temperature. The dominance of the exciton emission peak at room temperature is a consequence of the extremely large exciton binding energy of monolayer WSe2 compared with conventional semiconductors 5; this means that electrons and holes are tightly bound together and can hardly escape due to thermal fluctuations. Conversely, the other peaks have smaller binding energies and can be thermalized more easily with increasing temperature. To further clarify the underlying dynamics responsible for the evolution of the emission peaks, we fitted the multiple PL peaks to an integrated Lorentz function and then extracted the photon energies of peaks 1, 2 and 4 as a function of temperature, as shown in Fig. 3(b). The evolution of peak 3 is not shown here because it only existed at three measured temperatures.
The solid lines are fits to a modified Varshni equation describing the temperature dependence of a semiconductor bandgap 13,32,33: E_g(T) = E_g(0) − S⟨ħω⟩[coth(⟨ħω⟩/2k_BT) − 1], where E_g(0) is the transition energy at T = 0 K, S is a dimensionless constant describing the strength of the electron-phonon coupling, and ⟨ħω⟩ represents the average acoustic phonon energy involved in electron-phonon interactions. From the fits, we extracted S ≈ 2.33, ⟨ħω⟩ ≈ 14.1 meV for the exciton, and S ≈ 2.32, ⟨ħω⟩ ≈ 14.1 meV for the trion. This electron-phonon coupling constant S of monolayer WSe2 is larger than reported for other monolayer TMDCs, such as MoS2 (S ≈ 1.82) and MoSe2 (S ≈ 1.93) 32. This difference may originate from the relatively smaller effective mass in the intervalley transition for monolayer WSe2 34, leading to a stronger electron-phonon coupling. Additionally, it should be noted that the exciton-phonon interaction also plays a significant role in the shift of the exciton peak energies 35. When the exciton moves within the crystal lattice, it interacts with phonons via scattering processes. At low temperatures, phonons are relatively inactive; therefore, the scattering is primarily dominated by phonon absorption. As the temperature increases, phonon emission and absorption contribute equally to the exciton scattering; thus, the exciton energy shifts as a result of this exciton-phonon interaction 35. It has also been calculated that beyond the Debye temperature the exciton energy has a linear temperature dependence, whereas it is independent of temperature in the low-temperature limit, consistent with the sole contribution of phonon absorption 35. However, this calculation only provides a general trend, and the actual dependence is largely associated with the material properties, which can also be modified by other effects such as thermal expansion and impurities in the material. Further analysis of the temperature-dependent data reveals insight into the exciton and trion binding energies. By fitting the four emission peaks to an integrated Lorentz function, we extracted the PL intensities of peaks 1, 2 and 4 labeled in Fig. 3(a) and plotted them as a function of 1/T, as shown in Fig. 3(c). As the temperature increased, the intensities of peaks 1 and 2 first gradually increased and then dramatically decreased, whereas the intensity of peak 4 only decreased with temperature. The multi-level model for the temperature dependence of the PL peak intensities is given by 22 I(T) = I(0)[1 + A exp(−E₁/k_BT)]/[1 + B exp(−E₂/k_BT)], where I(0) is the PL intensity at T = 0 K, k_B is the Boltzmann constant, and A and B are fitting parameters. E₁ describes the activation energy that causes the increase in PL intensity with increasing temperature, whereas E₂ represents the activation energy for the normal thermal quenching process at higher temperatures. By fitting the experimental data, we obtained E₁ ≈ 0.1 meV, E₂ ≈ 198 meV for the exciton, and E₁ ≈ 3 meV, E₂ ≈ 54 meV for the trion. The value of E₂ for the exciton represents the thermal energy that is needed for the normal thermal quenching process as the temperature is increased up to 300 K, which is smaller than the previously reported exciton binding energy of 370 meV for monolayer WSe2 obtained from two-photon PL excitation spectroscopy 5 and 790 meV obtained from optical reflectivity/absorption spectra 12. Since our sample had not yet been completely thermally quenched, the E₂ value obtained here represents only a lower bound on the exciton binding energy of monolayer WSe2.
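As a sanity check on the intensity model reconstructed above, the sketch below evaluates I(T) with activation energies of the order quoted for the trion (E₁ ≈ 3 meV, E₂ ≈ 54 meV); the prefactors A and B are invented, since they are not quoted in the text, so the curve only illustrates the qualitative rise-then-quench behaviour.

```python
import numpy as np

K_B = 8.617e-2  # Boltzmann constant in meV/K

def pl_intensity(T, I0, A, E1, B, E2):
    """Two-activation-energy PL intensity model: the E1 term produces the
    low-T rise (carrier release from traps); E2 drives thermal quenching."""
    rise = 1 + A * np.exp(-E1 / (K_B * T))
    quench = 1 + B * np.exp(-E2 / (K_B * T))
    return I0 * rise / quench

T = np.array([10, 30, 60, 100, 150, 200, 300], dtype=float)
# E1, E2 taken from the trion fit; I0, A, B are assumed illustrative values.
I = pl_intensity(T, I0=1.0, A=3.0, E1=3.0, B=400.0, E2=54.0)
for t, i in zip(T, I):
    print(f"T = {t:5.0f} K   I/I(10 K) = {i / I[0]:.2f}")
# The intensity first rises as trapped carriers are released (E1), then
# collapses once k_B*T becomes comparable to the quenching energy E2.
```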
An additional reason for the discrepancy could be variations in the number of impurities and defects in our exfoliated sample, as well as interactions between the carriers and the heavily doped silicon substrate. As noted above, peaks 3 and 4, corresponding to localized states, were easier to thermalize than the exciton and trion states. At low temperatures, a certain number of carriers can be captured by these localized, trapped states. As the temperature increased, the trapped carriers could be released again from the localized states and recombine radiatively, leading to the increased PL intensity of the exciton and trion peaks below T = 60 K. Moreover, between 50 K and 100 K the trion peak even surpassed the exciton peak, because carriers from peaks 3 and 4 were thermalized into the trion peak, resulting in a dramatic increase in its PL intensity. The value of E₁ for the trion (~3 meV) was consistent with the temperature T ≈ 65 K at which peaks 3 and 4 vanished from the measured PL spectrum. Next, we turn our attention to the circular polarization of the emission peaks and their evolution with increasing temperature. The degree of circular polarization of peaks 1, 2 and 4 as a function of temperature is shown in Fig. 4. Here, the degree of circular polarization is given by 14,15 P = [I(σ+) − I(σ−)]/[I(σ+) + I(σ−)], where I(σ+) and I(σ−) correspond to the PL intensities of the σ+ and σ− polarization components, respectively. As observed in Fig. 4, under near-resonant circularly polarized excitation the trion peak generally has a larger degree of circular polarization than the exciton peak; however, they both follow a similar trend with increasing temperature. While the exciton and trion emissions show relatively large values of 25% and 37%, respectively, the localized states also show a small circular polarization of 13% at a temperature of T = 10 K. The observed circular polarization of these localized states is consistent with previous reports 17,27; however, its origin is presently not well understood. One possible mechanism could be related to a partial transfer of the valley polarization from optically generated electron-hole pairs to the localized electrons or holes 38. Additionally, we also observed variations in the degree of circular polarization between monolayer WSe2 and MoS2. In contrast to monolayer MoS2, which displays a flat plateau with a circular polarization of ~31% below T = 90 K 15, the degree of circular polarization of monolayer WSe2 decreased dramatically already at temperatures below T = 50 K for both the exciton and trion peaks. Beyond T = 50 K, the circular polarization gradually reduced further, indicating the domination of phonons in the intervalley scattering at high temperatures. As revealed by the fitting parameters of the modified Varshni equation above, the electron-phonon coupling strength S of monolayer WSe2 is stronger than that of monolayer MoS2, due to its relatively smaller effective mass in the intervalley transitions. As a consequence, the difference in the degree of circular polarization between WSe2 and MoS2 is likely due to the relatively stronger electron-phonon coupling and lower Debye temperature of monolayer WSe2, causing phonons to be involved in the intervalley scattering at much lower temperature than in monolayer MoS2. Therefore, the degree of circular polarization of monolayer WSe2 displayed a significant drop below T = 50 K, as opposed to showing a temperature independence similar to that of monolayer MoS2.
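The polarization-resolved spectra reduce to this formula channel by channel; a minimal sketch, with made-up σ+/σ− peak intensities standing in for the measured components:

```python
import numpy as np

def circ_polarization(i_plus, i_minus):
    """Degree of circular polarization P = (I+ - I-)/(I+ + I-),
    computed element-wise for two polarization-resolved intensities."""
    i_plus, i_minus = np.asarray(i_plus, float), np.asarray(i_minus, float)
    return (i_plus - i_minus) / (i_plus + i_minus)

# Hypothetical integrated peak intensities (arb. units) at T = 10 K:
# exciton, trion, localized state -- chosen to reproduce ~25%, 37%, 13%.
i_sigma_plus = np.array([125.0, 137.0, 113.0])
i_sigma_minus = np.array([75.0, 63.0, 87.0])
print(np.round(circ_polarization(i_sigma_plus, i_sigma_minus), 2))
# -> [0.25 0.37 0.13]
```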
Moreover, abundant impurities and vacancies, as well as the effect of the heavily doped substrate, also play roles in determining the degree of circular polarization, which may account for the absence of a plateau at low temperatures. Finally, we applied a small in-plane magnetic field of ~0.35 T to the WSe2 monolayer at T = 10 K. The degree of circular polarization did not show any visible change, which agrees with previous reports 15 and further demonstrates that we are indeed probing the valley polarization in monolayer WSe2. To summarize, we have experimentally investigated the PL spectra of mechanically exfoliated monolayer WSe2 and probed the dependence of the intensity and energy of the exciton and trion emission, as well as of the localized states, on temperature and excitation power. Contrary to other monolayer TMDCs such as MoS2 and MoSe2, the temperature dependence of the valley polarization in monolayer WSe2 under near-resonant circularly polarized excitation lacks a flat plateau at low temperatures, which indicates stronger electron-phonon coupling and impurity-related scattering in monolayer WSe2. We have also successfully applied a multi-level model developed for conventional semiconductors to monolayer TMDCs, which explains the dynamics of various excitonic states and revealed a lower bound on the exciton binding energy. The insight into the excitonic and localized states in monolayer WSe2 provided by these experiments is an important step towards materials optimization for potential future optoelectronic device applications such as photodetectors and photovoltaic cells.
Overcoming resistance to BRAFV600E inhibition in melanoma by deciphering and targeting personalized protein network alterations

BRAFV600E melanoma patients, despite initially responding to the clinically prescribed anti-BRAFV600E therapy, often relapse, and their tumors develop drug resistance. While it is widely accepted that these tumors are originally driven by the BRAFV600E mutation, they often eventually diverge and become supported by various signaling networks. Therefore, patient-specific altered signaling signatures should be deciphered and treated individually. In this study, we design individualized melanoma combination treatments based on personalized network alterations. Using an information-theoretic approach, we compute high-resolution patient-specific altered signaling signatures. These altered signaling signatures each consist of several co-expressed subnetworks, which should all be targeted to optimally inhibit the entire altered signaling flux. Based on these data, we design smart, personalized drug combinations, often consisting of FDA-approved drugs. We validate our approach in vitro and in vivo, showing that individualized drug combinations that are rationally based on patient-specific altered signaling signatures are more efficient than the clinically used anti-BRAFV600E or BRAFV600E/MEK targeted therapy. Furthermore, these drug combinations are highly selective, as a drug combination efficient for one BRAFV600E tumor is significantly less efficient for another, and vice versa. The approach presented herein can be broadly applicable in aiding clinicians to rationally design patient-specific anti-melanoma drug combinations.

INTRODUCTION

The rates of melanoma have been rapidly increasing (NIH, www.cancer.org). Melanoma is one of the most common cancers in young adults, and the risk for melanoma increases with age (NIH, www.cancer.org). However, alongside the rapid increase in incidence, there has also been rapid clinical advancement over the past decade, with targeted therapy and immunotherapy having become available to melanoma patients 1. Melanoma is associated with a great burden of somatic genetic alterations 2, with the primary actionable genomic finding being an activating mutation in the BRAF gene, BRAFV600E, occurring in ~50% of all melanomas 2,3. Nearly a dozen new treatments have been approved by the Food and Drug Administration (FDA) for unresectable or metastatic melanoma harboring the BRAFV600E mutation, among them vemurafenib (a BRAFV600E inhibitor), cobimetinib (a MEK MAPK inhibitor), and a combination of dabrafenib and trametinib (a BRAFV600E inhibitor and a MEK MAPK inhibitor, respectively) 1. While targeted therapy revolutionized melanoma treatment, the high hopes were soon met with disappointment, as it became evident that most patients treated with BRAFV600E inhibitors eventually relapse and their tumors become resistant to the treatment [4][5][6]. Various combination treatments have been suggested to overcome the acquired resistance to BRAFV600E inhibitors 4,5,7,8. Nevertheless, BRAFV600E and MEK inhibitors remain the only targeted agents approved by the FDA for melanoma. In this study, we design patient-specific targeted treatments for melanoma based on individualized alterations in signaling protein networks, rather than on genomic/protein biomarkers.
Attempting to treat patients based on the identification of single biomarkers or signaling pathways may overlook tumor-specific molecular alterations that have evolved during the course of the disease, and the consequently selected therapeutic regimen may lack long-term efficacy owing to only partial targeting of the tumor imbalance. We have shown that different patients may display similar oncogene expression levels while carrying biologically distinct tumors that harbor different sets of unbalanced molecular processes 9. Therefore, we suggest exploring the cancer data space utilizing an information-theoretic approach that is based on surprisal analysis [9][10][11], to unbiasedly identify the altered signaling network structure that has emerged in every single tumor 9,10. Our thermodynamic-like viewpoint holds that tumors are altered biological entities, which deviate from their steady state due to patient-specific alterations. Those alterations can manifest in various manners that depend on environmental or genomic cues (e.g., carcinogens, altered cell-cell communication, mutations, etc.) and give rise to one or more distinct groups of co-expressed oncoproteins in each tumor, named unbalanced processes [9][10][11]. A patient-specific set of unbalanced processes constitutes a unique signaling signature and provides critical information regarding the elements in this signature that should be targeted. Each tumor can harbor several distinct unbalanced processes, and therefore all of them should be targeted in order to collapse the altered signaling flux in the tumor 10,11. We have demonstrated that, with comprehensive knowledge about the patient-specific altered signaling signature (PaSSS) in hand, we can predict efficacious personalized combinations of targeted drugs in breast cancer 10. Herein, we decipher the accurate network structure of co-expressed functional proteins in melanoma tumors, hypothesizing that the PaSSS identified will guide us on how to improve the clinically used BRAFV600E-targeted drug combinations. Our aim was to examine the ability of PaSSS-based drug combinations to reduce the development of drug resistance, which frequently follows BRAFV600E inhibition in melanoma. To this end, we studied a dataset consisting of 353 BRAFV600E and BRAFWT skin cutaneous melanoma (SKCM) samples, aiming to gain insight into the altered signaling signatures that have emerged in these tumors. A set of 372 thyroid carcinoma (THCA) samples was added to the dataset, as these tumors frequently harbor BRAFV600E as well, thereby enabling the study of the commonalities and differences between tumor types that frequently acquire the BRAFV600E mutation. We show that 17 distinct unbalanced processes are repetitive among the 725 SKCM and THCA patient-derived cancer tissues. Each tumor is characterized by a specific subset of typically 1-3 unbalanced processes. Interestingly, we demonstrate that the PaSSS does not necessarily correlate with the existence of BRAFV600E; namely, different tumors can harbor different signatures while both carrying the mutated BRAF, and vice versa: tumors can harbor the same altered signaling signature regardless of whether they carry BRAFV600E or BRAFWT. These data suggest that examination of the BRAF gene alone does not suffice to tailor effective medicine to the patient.
SKCM and THCA patients harboring BRAF V600E can respond differently to the same therapeutic regimen, or rather benefit from the same treatment even though their BRAF mutation status differs. We experimentally demonstrate our ability to predict effective personalized therapy by analyzing a cell line dataset and tailoring efficacious personalized combination treatments to BRAF V600E -harboring melanoma cell lines. The predicted PaSSS-based drug combinations were shown to have an efficacy superior to drug combinations that were not predicted to target the individualized altered signaling signatures, and to combinations used in the clinic, both in vitro and in vivo. We show that an in-depth resolution of individualized signaling signatures allows inhibiting the development of drug resistance and melanoma regrowth: while melanoma models develop drug resistance several weeks following initial administration of the clinically used combination, dabrafenib + trametinib, individualized PaSSS-based drug combinations achieve a longer-lasting effect and show high selectivity.

RESULTS

An overview of the experimental-computational approach

Biomarker analysis in melanoma relies mainly on the identification of mutations in the BRAF gene 12 . If mutation/upregulation of the mutant BRAF V600E is identified (Fig. 1, left), the patient will likely be treated with a BRAF V600E inhibitor (e.g., vemurafenib 13 or dabrafenib 14 ), possibly concurrently with an inhibitor of MEK MAPK (e.g., trametinib 15 ). The combination of BRAF V600E and MEK MAPK inhibitors was shown to be superior to BRAF V600E inhibition alone and to delay or prevent the development of drug resistance 7 . However, the biomarker analysis utilized in clinics lacks information about the altered signaling network and, for example, may overlook additional or alternative protein targets that, if targeted by drugs, may enhance the efficacy of the treatment (Fig. 1, left). We utilize an information-theoretic approach that is based on surprisal analysis (see "Methods" section) 9-11 to gain information regarding the patient-specific signaling signature (PaSSS) that has emerged in every individual tumor (Fig. 1, right). Based on proteomic analysis of the samples, we identify the set of altered protein-protein co-expressed subnetworks, or unbalanced signaling processes, that has arisen as a result of constraints (environmental or genomic) that operate on the tumor, and then design a combination of targeted drugs that is predicted to collapse the tumor-specific altered signaling signature (Fig. 1, right; see "Methods" section) [9][10][11] . We obtained from the TCPA database (The Cancer Proteome Atlas Portal, http://tcpaportal.org) a dataset containing 353 skin cutaneous melanoma (SKCM) and 372 thyroid cancer (THCA) samples (725 samples in total). The thyroid cancer samples were added to the dataset for two main reasons: (1) to increase the number of samples in the dataset, thereby increasing the resolution of the analysis; (2) THCA tumors frequently harbor the BRAF V600E mutation, and we were therefore interested in examining the commonalities and differences between the altered signaling signatures that emerged in SKCM and THCA tumors.

Fig. 1 Conventional biomarker analysis vs. patient-specific signaling signature analysis. Genetic/protein biomarker analysis relies on the evaluation of the expression levels of common cancer-type-associated genes or proteins (left).
The design of a drug combination is done according to an inference of the state of the surrounding signaling network, based on previous knowledge (left). In contrast, patient-specific signaling signature (PaSSS) analysis involves proteomic analysis of hundreds of cancer-associated proteins and unbiased identification of the altered signaling signature in every sample, i.e., identification that does not depend on previous knowledge of melanoma-related signaling pathways. This enables rationally designing personalized combinations of targeted drugs that are based on the patient-specific, uniquely rewired signaling network (right).

17 unbalanced processes repeat themselves throughout 725 SKCM and THCA tumors

The analysis of the dataset revealed that the 725 SKCM and THCA tumors can be described by 17 unbalanced processes (Supp. Fig. 1; the amplitudes for each process in each patient and the importance of each protein in the different processes can be found in Supp. Data 1; the protein composition of each process is presented in Supp. Data 2), i.e., 17 distinct unbalanced processes suffice to reproduce the experimental data (Supp. Fig. 2 and "Methods" section). Unbalanced processes 1 and 2, the two most significant unbalanced processes, which appear in the largest number of tumors, distinguish well between SKCM and THCA tumors, as can be seen in the 2D plots of λα(k) values (i.e., amplitudes of each process in every tumor; Fig. 2a, c, e). Unbalanced process 1 (Supp. Data 2) appears almost exclusively in THCA tumors (372 THCA tumors harbor unbalanced process 1, vs. 46 SKCM tumors; Fig. 2a, e), while unbalanced process 2 characterizes almost exclusively SKCM tumors (331 SKCM tumors harbor unbalanced process 2 vs. only 4 THCA tumors; Fig. 2c, e). Unbalanced process 1 involves upregulation of proteins that have been previously linked to THCA: LKB1 16 , fibronectin 17,18 , Bcl-2 19 , and claudin 7 20 (Fig. 2b). Unbalanced process 2 is characterized by the upregulation of proteins that have been implicated in melanoma, such as Stat5α 21 , Akt 22 , cKit 23 , Her3 24 , and ATM 25 (Fig. 2d). As can be seen in the graph in Fig. 2c, unbalanced process 2 was assigned a positive amplitude in all 331 SKCM tumors in which it appears, while in 4 THCA tumors it was assigned a negative amplitude (see also Supp. Data 1). This means that the proteins that participate in this unbalanced process deviate in opposite directions in the two types of tumors (importantly, this remark denotes only the partial deviation that occurred in these proteins due to unbalanced process 2; some of these proteins may have undergone additional deviations due to the activity of other unbalanced processes. See Supp. Data 2 and "Methods" section). Although unbalanced process 2 appears in a significant number of BRAF V600E SKCM patients (Fig. 2c, d), it does not include pS(445)BRAF and downstream signaling.

Fig. 2 Unbalanced processes 1 and 2 distinguish well between SKCM and THCA tumors when plotted in 2D. The majority of THCA tumors harbor unbalanced process 1 (a), while the majority of SKCM tumors harbor unbalanced process 2 (c). Unbalanced processes 1 and 2 are shown in panels b and d. Note that red proteins are upregulated, and blue proteins are downregulated, given that the amplitude of the process is positive. In tumors where the amplitude is negative, the direction of change is opposite. e A 2D plot showing λ2(k) against λ1(k) for all SKCM and THCA patients.
The plot clearly shows the separation between SKCM and THCA patients in this 2D space. Note, however, that every tumor is characterized by a set of unbalanced processes (a PaSSS), and that unbalanced processes 1 and 2 alone do not suffice to describe the complete tumor-specific altered signaling signatures.

The finding that unbalanced process 2 does not include pS(445)BRAF corresponds to a recent characterization of melanoma tissues 3 and suggests that the signaling signatures of BRAF V600E tissues may diverge over time and acquire additional signaling routes that are not necessarily related to the original driver mutations, such as BRAF V600E or its downstream MEK MAPK signaling. Unbalanced process 2 can also be found in BRAF WT patients (Fig. 2c). See, for example, patient TCGA-YG-AA3P (Fig. 3). The signature of this patient did not include additional processes. A total of 181 SKCM patients harbor this signaling signature, consisting only of unbalanced process 2: 107 of them harbor BRAF WT , and 74 of them harbor BRAF V600E (Fig. 3). In contrast, no THCA patients harbor this signature (Fig. 3). The finding that BRAF WT and BRAF V600E SKCM patients can, in some cases, harbor the same altered signature suggests that these patients can also benefit from the same combination of targeted drugs. Although unbalanced processes 1 and 2 distinguish well between SKCM and THCA patients (Fig. 2a, c, e), these processes alone do not suffice to describe the PaSSS of all patients. Our analysis suggests that to decipher the altered signaling signature in every patient, 17 unbalanced processes should be considered. Hence, 2D plots may overlook important therapeutic information. When we inspect the patients in the context of a 17-dimensional space, where each dimension represents an unbalanced process, we find that not all SKCM patients harbor unbalanced process 2 and that those who do harbor this process may harbor additional unbalanced processes as well (Fig. 3, Supp. Fig. 3 and Supp. Data 1). We have shown that mapping the patients into a multidimensional space (a 17D space in our case) allows deciphering the set of unbalanced processes, namely the PaSSS, in every tumor. This mapping is crucial for the design of efficacious treatments 10 . The SKCM patient TCGA-GF-A2C7, for example, is characterized by a PaSSS consisting of unbalanced processes 2 and 4 (Fig. 3). Only 5 SKCM patients were found to be characterized by this set of unbalanced processes, all of which harbor BRAF V600E (Fig. 3). The SKCM patient TCGA-EB-A85I was found to harbor a PaSSS consisting of unbalanced processes 1, 6, and 10 (Fig. 3). This patient harbors a one-of-a-kind tumor, as no other patients in the dataset harbor this altered signaling signature (Fig. 3). The PaSSS of THCA patient TCGA-DJ-A1QD includes unbalanced processes 1 and 4 (Fig. 3). This signature characterizes 38 THCA patients, 16 of them BRAF WT and 22 of them BRAF V600E (Fig. 3). These THCA patients may benefit from a combination of drugs that target central protein nodes in unbalanced processes 1 and 4, regardless of whether they harbor BRAF V600E or not. No SKCM patients harbor this altered signaling signature (Fig. 3). Another interesting finding is that SKCM and THCA patients may harbor the same PaSSS, as is the case of the signature consisting of unbalanced process 1, shared by 3 SKCM patients and 142 THCA patients (Fig. 3 and Supp. Data 1). All these patients may be treated with the same drug combination, targeting key proteins in unbalanced process 1, e.g., LKB1 and fibronectin (Fig. 2b).
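To illustrate the kind of 2D amplitude plot described above (λ2(k) against λ1(k), Fig. 2e), the following is a minimal Python sketch, not the authors' code. It assumes a hypothetical file "amplitudes.csv" with columns sample_id, cancer_type ("SKCM"/"THCA"), lambda_1, and lambda_2; the ±2 noise band follows the threshold quoted in the "Methods" section.

```python
# Minimal sketch (not the authors' script): scatter of per-tumor process
# amplitudes lambda_1(k) vs lambda_2(k), colored by cancer type, as in Fig. 2e.
# "amplitudes.csv" is a hypothetical input file (see lead-in for its columns).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("amplitudes.csv")
colors = {"SKCM": "tab:orange", "THCA": "tab:blue"}

fig, ax = plt.subplots(figsize=(5, 5))
for cancer_type, group in df.groupby("cancer_type"):
    ax.scatter(group["lambda_1"], group["lambda_2"],
               s=12, alpha=0.6, label=cancer_type,
               color=colors.get(cancer_type, "gray"))

# Noise thresholds of +/-2: amplitudes inside this band are treated as
# insignificant for the corresponding process (see "Methods").
for t in (-2, 2):
    ax.axvline(t, ls="--", lw=0.8, color="gray")
    ax.axhline(t, ls="--", lw=0.8, color="gray")

ax.set_xlabel(r"$\lambda_1(k)$ (unbalanced process 1)")
ax.set_ylabel(r"$\lambda_2(k)$ (unbalanced process 2)")
ax.legend(frameon=False)
plt.tight_layout()
plt.show()
```

A plot of this kind separates the two cohorts along the two dominant amplitude axes, but, as the text stresses, the full 17-dimensional amplitude vector is needed to read off each patient's PaSSS.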
The altered signaling signatures identified in SKCM and THCA are almost mutually exclusive

To explore the entire dataset in terms of the set of unbalanced processes that each patient harbors, we assigned to each patient a patient-specific barcode, denoting the PaSSS, i.e., the set of active unbalanced processes in the specific tumor. The resulting barcodes proved almost mutually exclusive between the SKCM and THCA cohorts (Supp. Data 4). This finding corroborates our previous studies of signaling signatures in cancer 10 and underscores the need for personalized cancer diagnosis that is not biased by, e.g., the anatomical origin of the tumor.

Patient-specific barcodes guide the rational design of personalized targeted combination therapy

We have previously shown the predictive power of our analysis in determining effective patient-tailored combinations of drugs that target key proteins in every unbalanced process 10,11 . Utilizing the maps of the unbalanced processes identified in the dataset herein (Supp. Fig. 1), we predicted process-specific protein targets for each process (Supp. Data 5). Each individual patient is predicted to benefit from a therapy that combines drugs against all the unbalanced processes active in the specific tumor (Fig. 4, Supp. Data 5). As mentioned above, SKCM patients can in some cases benefit from the same combination therapy, regardless of their BRAF mutational status. This is the case for patients TCGA-EB-A553 (carrying BRAF V600E ) and TCGA-BF-AAOX (carrying BRAF WT ), which were found to harbor tumors characterized by the same barcode of unbalanced processes and were therefore predicted to benefit from the same treatment, where pMAPK and cKit are targeted simultaneously (Fig. 4).

Fig. 3 Examples of patient-specific sets of active unbalanced processes. Each patient typically harbors a set of 1-3 active unbalanced processes. Our results show that a specific set of active processes does not necessarily distinguish between BRAFV600E− and BRAFV600E+ patients, or between SKCM and THCA patients.

Patient TCGA-EB-A97M carries BRAF WT , as does patient TCGA-BF-AAOX (Fig. 4). However, unbalanced process 6, which is active in patient TCGA-BF-AAOX, is inactive in patient TCGA-EB-A97M (Fig. 4). In addition, patient TCGA-EB-A97M harbors three active unbalanced processes that are not active in the tumor of patient TCGA-BF-AAOX: processes 3, 4, and 8 (Fig. 4). Therefore, the list of proteins that should be targeted in order to collapse the tumor differs in these patients (Fig. 4).

Fig. 4 Patient-specific altered signaling signatures, or barcodes, can guide the design of personalized combination therapies. For each tumor, processes with amplitudes exceeding the threshold values (see "Methods" section) were selected and included in patient-specific sets of unbalanced processes. Those sets were converted into schematic barcodes. The sign of the amplitude denotes the direction of the imbalance, i.e., the same unbalanced process can deviate in opposite directions in different patients. Central upregulated proteins from each process were suggested as potential targets for personalized drug combinations.

We obtained from the GDC Data Portal (https://portal.gdc.cancer.gov/) data regarding genomic mutations that often occur in SKCM 26 (Supp. Data 6). We selected 6 mutually exclusive mutations (including BRAF V600E ; Supp. Fig. 4). We found that SKCM patients harboring the same genomic mutations were characterized by various barcodes according to PaSSS analysis (Supp. Fig. 4) and may thus demand distinct treatments. This result supports the notion that analysis of genomic biomarkers alone may overlook patient-specific aberrations.

A375 and G361 BRAF-mutated melanoma cell lines harbor distinct altered signaling signatures

To experimentally validate our hypothesis that BRAF V600E -harboring cells may benefit from drug combinations that are designed based on the PaSSS identified at the time of diagnosis, we turned to analyze a different dataset containing 290 cell lines originating from 16 types of cancer, including blood, bone, breast, colon, skin, uterus, and more (see "Methods" section).
The cell lines were each profiled for the expression levels of 216 proteins and phosphoproteins using a reverse-phase protein array (RPPA). PaSSS analysis of this cell line dataset revealed that 17 unbalanced processes were repetitive in the 290 cell lines (Supp. Data 7, Supp. Data 8, Supp. Fig. 5 and "Methods" section). We randomly selected two melanoma cell lines, A375 and G361, for experimental validation. Both cell lines harbor the mutated BRAF V600E . In the clinic, patients bearing tumors with BRAF V600E would all be treated similarly, with BRAF inhibitors alone or in combination with MEK inhibitors 7,15 . Since all of the proteins that participate in a certain unbalanced process undergo coordinated changes and the vast majority of them are functionally connected based on STRING (STRING database; Supp. Fig. 5), we assumed that targeting one or two central nodes in a process should suffice to inhibit the altered signaling flux through the specific unbalanced process. We further hypothesized that an effective drug combination should consist of drugs that, together, target all the unbalanced processes that are active in the tumor. We have recently demonstrated that targeting one central node leads to reduced flux through the process in which it participates, while leaving other processes essentially unaffected 11 . Therefore, we searched unbalanced processes 1, 3, and 6 for upregulated central nodes that can be targeted by drugs, preferably FDA-approved ones (the full lists of proteins that participate in the different unbalanced processes are presented in Supp. Data 8, and the images of the unbalanced processes, including the functional connections according to STRING, can be found in Supp. Fig. 5). An upregulation of pMEK1/2, GAPDH, and PKM2 was attributed to unbalanced process 6 (Fig. 5b, Fig. 6b), while unbalanced process 3 was characterized by an upregulation of PDGFRβ (Fig. 5b), and unbalanced process 1 involved upregulation of pS6K and pS6 (Fig. 5b, Fig. 6b). We therefore predicted for A375 cells that a combination of trametinib (a pMEK1/2 inhibitor, commonly used for melanoma in clinics; it also inhibits pS6 27,28 ) and dasatinib (a multi-kinase inhibitor that also targets PDGFRβ) should effectively target the three unbalanced processes that constitute the PaSSS of these cells (Fig. 5b). The selection of a multi-kinase inhibitor, dasatinib, to inhibit PDGFRβ instead of a more specific kinase inhibitor was motivated by reports showing that induced expression of certain biomarkers, such as PRKCA and CAV1, was associated with the efficient activity of dasatinib in tissues 29,30 . PKC and CAV1 were associated with unbalanced process 3 along with PDGFRβ, and therefore we selected dasatinib to target this unbalanced process. Based on the PaSSS of G361, trametinib should effectively target both unbalanced processes, 1 and 6 (Fig. 6b).
However, unbalanced process 6 was assigned a relatively high amplitude in G361 cells (Supp. Data 7). Thus, we decided to combine trametinib with 2-deoxyglucose (2-DG; see below).

Fig. 5 Even though A375 cells harbor BRAF V600E , as do G361 cells, they were found to be characterized by a different set of active unbalanced processes, or PaSSS. a Barcode representing the PaSSS of A375 cells, namely the set of active unbalanced processes based on PaSSS analysis. b Zoom-in images of the unbalanced processes active in A375 cells, and the drugs targeting the central proteins in each process. The upregulated proteins are colored red and the downregulated proteins are colored blue. c, d Survival rates of cells in response to different therapies. The cells were treated with the predicted combination (*) to target A375, the treatments used in the clinics for BRAF-mutated melanoma malignancies, monotherapies of each treatment, and the predicted combination used to target the BRAF-mutated melanoma cell line G361. The combination predicted to target A375 was more efficient than any other treatment. e Results of the survival assay (shown in panels c and d) are shown as a heatmap. f Western blot results after treatment with different therapies. The predicted combination depletes the signaling in A375 cells as represented by a decrease in phosphorylation levels of pS6, pERK, pAkt, pPKM2, and pPDGFRβ. Akt remains active when the cells are treated with monotherapies (trametinib or dabrafenib) and with the combination therapies dabrafenib + trametinib or trametinib + 2-deoxyglucose (the predicted combination for G361). g A375 cells were treated as indicated for 72 h and then the viability of the cells was measured in an MTT assay. The effect of the predicted combinations (marked in the figure with asterisk signs) was superior to combinations and single drugs expected to partially inhibit the cell line-specific altered signaling signature.

The predicted drug combinations are cell line-specific and highly efficacious

In A375 cells, trametinib, dabrafenib, and dasatinib killed up to ~80% of the cells when administered as monotherapies in a range of concentrations between 1 nM and 1 µM (Fig. 5c, e). Erlotinib was used as a negative control, as it was predicted not to target any major node in the PaSSS of A375 cells, and indeed killed only up to ~15% of the cells (Fig. 5c, e). 2-DG, which was predicted to only partially target the unbalanced flux, namely one of the three unbalanced processes active in A375 cells (unbalanced process 6; Fig. 5a, b), killed up to ~30% of the cells when administered as monotherapy (Fig. 5c, e). The clinically used drug combination, trametinib and dabrafenib, was more effective than each drug alone and killed up to ~85% of the cells (Fig. 5d, e). However, we predicted that the clinically used combination would not be optimal in A375 cells, because neither trametinib nor dabrafenib was predicted to target unbalanced process 3 (Fig. 5a, b). Indeed, when trametinib and dabrafenib were administered to the cells, pPDGFRβ was not inhibited (Fig. 5f), suggesting that unbalanced process 3 remained active in A375 cells (Fig. 5b).

Fig. 6 b Zoom-in images of the unbalanced processes active in G361 cells, and the drugs targeting the central proteins in each process. The upregulated proteins are colored red and the downregulated proteins are colored blue. c, d Survival rates of cells in response to different therapies.
The cells were treated with the predicted combination (*) to target G361, the treatments used in the clinics for BRAF-mutated melanoma malignancies, monotherapies of each treatment, and the predicted combination used to target the BRAF-mutated melanoma cell line A375. The combination predicted to target G361 was more efficient than any other treatment. e Results of the survival assay (shown in panels c and d) are shown as a heatmap. f Western blot results after treatment with different therapies. The predicted combination depletes the signaling in G361 cells as represented by a decrease in phosphorylation levels of pS6, pERK, and pAkt. Akt remains active when the cells are treated with dabrafenib or dabrafenib + trametinib. g G361 cells were treated as indicated for 72 h and then the viability of the cells was measured in an MTT assay. The effect of the predicted combination (marked with an asterisk sign) was superior to combinations and single drugs expected to partially inhibit the cell line-specific altered signaling signature.

Interestingly, the combination of trametinib and dabrafenib, or trametinib alone, also invoked an upregulation of pAkt (Fig. 5f). We hypothesize that this can be explained by the fact that pAkt is anti-correlated with pMEK in unbalanced process 6 (Fig. 5b), and therefore in cases where the unbalanced flux is only partially inhibited, the levels of pAkt can increase when pMEK is inhibited. This result corresponds to previous findings showing that MEK inhibitors may induce Akt activation 31 . Dasatinib, however, abolished the functional activity of PDGFRβ, but did not decrease the levels of pS6 and pS6K from process 1, pERK2 (a MEK substrate that participates in process 6; Supp. Data 8), or pPKM2 from process 6 (Fig. 5f), suggesting that only unbalanced process 3 was inhibited by dasatinib (Fig. 5b). p53 is anti-correlated with pMEK in unbalanced process 6 (Fig. 5b) and was upregulated as well when trametinib was added to A375 cells (Fig. 5f). Overall, these results strengthen the notion of the independence of the unbalanced processes in A375 cells and underscore the need for concurrent inhibition of patient-specific active unbalanced processes in cancer. Indeed, our predicted combination for A375, trametinib and dasatinib (Fig. 5b), was highly efficacious and killed up to ~95% of the cells (Fig. 5d, e). Trametinib and dasatinib, when combined, diminished pS6, pERK, pS6K, and pPKM2 signaling, lowered the levels of pPDGFRβ, and increased p53 levels (Fig. 5f). We tested the effect of the combination predicted for G361 cells, trametinib and 2-DG, on A375 cells, and found that it was less effective in inhibiting the intracellular signaling (Fig. 5f) and cell survival (Fig. 5d, e) as compared with the drug combination predicted specifically for the PaSSS of A375. We assume that leaving certain elements in the unbalanced signaling untargeted may not only enrich the cells/subpopulations harboring the untargeted processes but also invoke other, previously undetected pathways (e.g., subpopulations that were initially small and undetectable and increased during treatment, or formed anew during treatment), thereby leading to a switch from one signaling state to another.
In G361 cells, trametinib and 2-DG, both predicted by PaSSS analysis to target the unbalanced signaling flux in G361 cells, demonstrated efficient killing of G361 cells, achieving up to ~65% and ~75% killing, respectively, when administered to the cells as monotherapies at 1 µM (trametinib) and 1 mM (2-DG) (Fig. 6c, e). Dasatinib, which was highly effective in A375 cells, demonstrated a very weak effect in G361 cells, killing only ~20% of the cells when administered at 1 µM (Fig. 6c, e). Erlotinib was used as a negative control, as it was not expected to target any of the unbalanced processes active in G361 cells (Fig. 6b), and indeed killed only up to ~10% of the cells (Fig. 6c, e). When we tested combinations of drugs, we found that when G361 cells were treated with a combination of trametinib and dabrafenib, the combination was superior to each drug administered alone and reached ~90% killing of the cells when both drugs were administered at 1 µM (Fig. 6d, e). However, despite the relatively strong effect, this combination evoked pAkt (Fig. 6f), suggesting that some altered signaling pathways remained active in the cells. The results of our analysis denoted that unbalanced process 6 was active with a relatively high amplitude in G361 cells (Supp. Data 7). We therefore assumed that the reason the combination of trametinib and dabrafenib did not abolish the unbalanced signaling flux entirely is that unbalanced process 6 was not effectively shut off, allowing some metabolic activity and possibly signaling rearrangements. We predicted that the addition of 2-DG to trametinib would more effectively collapse the PaSSS that emerged in G361 cells, because 2-DG targets GAPDH and PKM2, two additional central upregulated nodes in unbalanced process 6 (Fig. 6b). We indeed found that the combination of trametinib and 2-DG killed the cells almost completely when trametinib and 2-DG were added at 1 µM and 2 mM, respectively (Fig. 6d, e). The combination of trametinib and 2-DG also effectively turned off the cellular signaling, as represented by the inhibition of pS6, pAkt, pS6K, and pERK, while each drug alone or the other combinations we tested failed to do so, leaving some of the elements of the signaling active (Fig. 6f). For example, 2-DG alone reduced pPKM2 levels but did not influence the levels of pS6 and pERK (Fig. 6f), suggesting that process 1 remained active and process 6 was only partially inhibited (Fig. 6b). When tested in an MTT assay (assessing the metabolic activity of the cells), the predicted combinations demonstrated higher efficacy and selectivity and were superior to other drug combinations or to each inhibitor alone (Figs. 5g, 6g). Interestingly, note that while the survival assay shows that treatment of G361 cells with dabrafenib + trametinib resulted in ~90% killing of the cells (Fig. 6d, e), the results of the MTT assay showed that the treated cells remained highly metabolically active (Fig. 6g), suggesting that the treatment with dabrafenib + trametinib leaves the surviving cells metabolically viable. The PaSSS-based prediction, trametinib + 2-DG, however, led to significant inhibition of G361 cell survival as well as viability (Fig. 6d, e, g).

As opposed to common therapies used in clinics, the rationally designed cell line-specific drug combinations prevented the development of drug resistance in vitro

We hypothesized that since our predicted drug combinations target the main altered processes simultaneously, they may delay or prevent the development of drug resistance (Fig. 7a).
To test this hypothesis, G361 and A375 cells were treated twice a week with single inhibitors or with different combinations of inhibitors, for 4 weeks. In G361 cells, 1 nM of trametinib demonstrated little to no effect on the survival of the cells (Fig. 7b). 1 µM of dabrafenib killed up to ~92% of the cells at day 21, and then the cells began to regrow, even though the drug was still administered to the cells twice a week (Fig. 7b). 2 mM of 2-DG killed up to ~78% of the cells at day 7, and then the cells began to regrow regardless of the presence of the drug (Fig. 7b). Combined treatment with trametinib and dabrafenib, a combination expected to partially target the altered signaling signature (Fig. 6a, b), effectively killed up to ~96% of the cells at day 21, but then the cells began to regrow at day 28 in the presence of the drugs (Fig. 7b). However, when the cells were treated with the G361 PaSSS-based combination, trametinib and 2-DG (Fig. 6a, b), the cells continued to die until they reached a plateau at day 14, and no regrowth of the cells was evident (Fig. 7b). Similar results were obtained in A375 cells: all monotherapies led to cellular regrowth after several weeks of treatment (Fig. 7c). Combined treatment with trametinib and dabrafenib achieved 88% killing at day 3, but then the cells grew until they reached 20% survival at day 28 (Fig. 7c). Trametinib and 2-DG killed 55% of the cells at day 3 with an increase in effect over time, reaching 18% survival at day 28 (Fig. 7c). The A375 PaSSS-based combination, trametinib and dasatinib (Fig. 5a, b), demonstrated a significant killing effect that became stronger with time, reaching near-complete killing of the cells at 28 days (Fig. 7c). These results clearly show that the PaSSS-based combinations predicted for each melanoma cell line prevent cellular regrowth in vitro. Thus, targeting the actual altered signaling state identified in the melanoma cells, and not necessarily the primary driver mutations, can be especially effective in disturbing the signaling flux and preventing cellular regrowth.

The predicted drug combinations were superior to clinically used therapies in vivo

We turned to examine the effect of the PaSSS-predicted drug combinations in murine models. The cells were injected subcutaneously into NSG mice and then treated 6 times a week for up to 4 weeks (Fig. 8; Supp. Fig. 7 shows that the mice demonstrated no significant weight loss during treatment). A375 tumors that were treated with trametinib alone or with the combination trametinib + 2-DG (predicted to be efficient for G361 but not for A375 cells (Figs. 5, 6)) demonstrated slightly reduced growth relative to vehicle-treated tumors (Fig. 8a). When A375 tumors were treated with the clinically used combination, trametinib + dabrafenib, a stronger effect was observed (Fig. 8a).

Fig. 7 Development of resistance to different therapies. a The development of resistance to different types of therapies is shown in the illustration. The cells were treated with different therapies twice a week and then checked for cell survival. b G361 cells were treated with monotherapies, dabrafenib + trametinib, or trametinib + 2-DG, twice weekly for 28 days. The cells exhibited signs of drug resistance after 28 days. However, resistance development was not evident in cells that were treated with trametinib + 2-DG. c A375 cells were treated with the monotherapies, trametinib + dasatinib, dabrafenib + trametinib, or trametinib + 2-DG, twice weekly for 28 days.
Development of resistance was evident after 21 days, but not in cells treated with trametinib + dasatinib.

Fig. 8 A375 (a), G361 (b), and A2058 (c) cells were injected subcutaneously into mice, and once tumors reached 50 mm³, treatments were initiated. In all cases, the PaSSS-based drug combinations predicted to target the cell line-specific altered signaling signature significantly inhibited tumor growth and demonstrated an effect superior to drug monotherapies/combinations predicted to partially target the PaSSS. Asterisks (*) denote the cell-specific drug combination predicted for each cell line (red for A375, black for G361, and blue for A2058).

PaSSS analysis predicted that trametinib + dabrafenib would achieve partial inhibition of the altered signaling in A375 cells (Fig. 5a, b) and that adding dasatinib to trametinib should achieve a more efficient inhibition of the intracellular signaling that has emerged in A375 cells (Fig. 5a, b). Indeed, the combination trametinib + dasatinib demonstrated an effect superior to all other treatments and significantly inhibited the growth of A375 tumors (Fig. 8a, Supp. Fig. 6e). Trametinib alone, or in combination with dasatinib or dabrafenib, was predicted to partially target the PaSSS of G361 cells (Fig. 6a, b), and indeed these treatments demonstrated a reduction in tumor growth relative to vehicle treatment (Fig. 8b). However, the PaSSS-based combination, trametinib + 2-DG, demonstrated the strongest effect, achieved significant inhibition of G361 tumor growth (Fig. 8b), and reduced the signaling flux (Supp. Fig. 6c). To further validate the PaSSS-based concept presented in this study, we selected an additional BRAF V600E cell line, A2058. The signaling signature of A2058 consists of a single unbalanced process, unbalanced process 1 (Supp. Data 7 and 9), which is active in A375 and G361 as well (unbalanced process 1 is represented by the level of the central node, pS6K, in Supp. Fig. 6a). In contrast, process 6 (active in G361 and A375; represented by pMEK) and process 3 (active in A375; represented by PDGFR) were not found to be active in A2058 (Supp. Fig. 6a, Supp. Data 7 and 9). Thus, we predicted that A2058 malignancy should be treated with trametinib monotherapy. Figure 8c demonstrates that a low concentration of trametinib (0.5 mg/kg) was most effective (also Supp. Fig. 6d), and intriguingly more effective than a higher concentration of trametinib (1 mg/kg, Fig. 8c), corresponding to previously published results showing that high concentrations of trametinib were ineffective in A2058 melanoma 32 . We hypothesize that the administration of higher concentrations of trametinib may be followed by activation of anti-apoptotic pathways, as was reported earlier 33 . Adding 2-DG did not significantly change the growth rate of the tumor, while adding dabrafenib decreased the success of the treatment (Fig. 8c). Interestingly, adding either 2-DG or dabrafenib to trametinib led to increased pAkt and pS6 levels (Supp. Fig. 6d), suggesting again that random addition of drugs (i.e., not based on personalized signatures) to the treatment may evoke different, sometimes undesired, signaling feedback responses. These results point to the significantly higher efficiency of the PaSSS-predicted combinations relative to drug combinations used in clinics. Moreover, we demonstrated the selectivity of the individualized treatments: the predicted and very effective combination for one BRAF V600E melanoma malignancy was significantly less effective for the other, and vice versa (Fig. 8).
Our results underscore the need for personalized treatment for each melanoma patient. Although the predicted combinations achieved an effect superior to the combination used in clinics, they did not flatten the tumor growth curves in all cases. This raises the possibility that certain subpopulations in the tumors were overlooked. This can result from the growth of subpopulations that were initially very small and were therefore undetected in bulk proteomics. Alternatively, such subpopulations may form during the course of treatment due to new unbalanced processes that are induced in response to environmental changes (e.g., communication between cancer cells and stroma, which cannot be detected in in vitro assays) 34 . Taking several biopsies during treatment may resolve such expanded, initially undetected cellular subpopulations and help to adjust the personalized treatment accordingly.

DISCUSSION

With the accelerated gain of knowledge in the field of melanoma therapy and cancer research, it is becoming clear that tumors evolving from the same anatomical origins cannot necessarily be treated the same way 35 . Intertumor heterogeneity results in various response rates of patients to therapy [36][37][38] . Herein we extend this notion and show that even tumors that were initially driven by the same oncogenes, specifically BRAF V600E -driven melanoma tumors, often evolve in different molecular manners 39 , giving rise to distinct altered signaling signatures, or PaSSSs (patient-specific altered signaling signatures), at the time of biopsy. We show that 17 altered molecular processes are repetitive among the 725 SKCM and THCA tumors. Each tumor is characterized by a specific PaSSS, i.e., a subset of ~1-3 unbalanced processes. Accordingly, each patient is assigned a unique barcode, denoting this PaSSS. We show that the collection of 725 tumors is described by 138 distinct barcodes, suggesting that the cohort of patients consists of 138 types of cancer, rather than only 4 types (SKCM or THCA; BRAF WT or BRAF V600E ). These 138 types of tumors, each representing a barcode, or a sub-combination of 17 unbalanced processes, are mapped into a multi-dimensional space consisting of 17 dimensions. Once the tumor-specific information is transformed into a multi-dimensional space, treating these thousands of tumors becomes within arm's reach. The specific barcode assigned to each patient allows the rational design of patient-tailored combinations of drugs, many of which already exist in clinics. We found that 353 BRAF V600E and BRAF WT melanoma tumors are described by 87 distinct barcodes of unbalanced processes and that 372 BRAF V600E and BRAF WT THCA tumors are described by 54 barcodes. Interestingly, the barcodes appeared to be almost mutually exclusive between SKCM and THCA tumors (Supp. Data 4). While this finding suggests that the molecular processes underlying SKCM and THCA tumor evolution may have organ-specific differences, the large numbers of cancer type-specific barcodes and of barcodes describing single patients underscore the need for personalized diagnosis and treatment. We show that tumors harboring BRAF V600E can harbor distinct PaSSSs, and, in contrast, that tumors can harbor the same PaSSS regardless of whether they carry BRAF V600E or BRAF WT . We therefore deduce that profiling melanoma patients according to their BRAF mutational status is insufficient to assign effective therapy to the patient.
Since the unbalanced processes each harbor a specific group of co-expressed altered proteins, they should all be targeted simultaneously to reduce the altered signaling flux in the tumor. We demonstrate this concept experimentally by analyzing a cell line dataset and predicting efficiently targeted drug combinations for three selected BRAF V600E melanoma cell lines, G361, A375, and A2058. We show that although all cell lines contain the mutated BRAF V600E , they harbor distinct barcodes and demand different combinations of drugs (Figs. 5-8). We demonstrate that our PaSSS-based combinations were significantly more efficient than the drug combination often prescribed clinically to BRAF V600E patients, dabrafenib + trametinib (Figs. 5-8). Moreover, we demonstrated the selectivity of the PaSSS-based drug combinations. The highly efficient PaSSS-based drug combination for one melanoma malignancy can be significantly less efficient for another melanoma, and vice versa. We note, however, that the PaSSS-based drug combinations did not achieve complete flattening of the tumor growth curves in vivo in some of the cases (Fig. 8). We hypothesize that an approach that, for example, tests the tumor several times during the treatment, to examine whether small, previously undetected cellular subpopulations have expanded due to, for example, stroma-tumor communication, might be required 34 . A more holistic approach that combines immunotherapy might be beneficial as well. This is a highly interesting topic that is currently under study in our laboratory. The results reported here highlight the urgent need for the design of personalized treatments for melanoma patients based on individualized alterations in signaling networks rather than on initial mutational events. Furthermore, the study establishes PaSSS analysis as an effective approach for the design of personalized cocktails comprising FDA-approved drugs. Personalized targeted cocktails, which may be further combined with immunotherapy strategies, are expected to provide long-term efficacy for melanoma patients.

METHODS

Datasets

This study utilized a protein expression dataset consisting of 353 skin cutaneous melanoma (SKCM) samples and 372 thyroid carcinoma (THCA) samples. The samples were selected from a large TCPA dataset containing 7694 cancer tissues from various anatomical origins (PANCAN32, level 4; The Cancer Proteome Atlas Portal, http://tcpaportal.org). Each cancer tissue was profiled on a reverse-phase protein array (RPPA) for 258 cancer-associated proteins. After filtering out proteins that had NA values for a significant number of patients, 216 proteins remained for further analysis. The dataset for the cancer cell lines was downloaded from the TCPA portal (The Cancer Proteome Atlas Portal, http://tcpaportal.org). The data were already published by Li et al. 40 . A part of the original dataset containing 290 cell lines from 16 types of cancers was selected, including breast, melanoma, ovarian, brain, blood, lung, colon, head and neck, kidney, liver, pancreas, bone, and different types of sarcomas, stomach-esophagus, uterus, and thyroid cancers. The cell lines in the dataset were profiled for 224 phospho-proteins and total proteins using RPPA.

Surprisal analysis

Surprisal analysis is a thermodynamic-based information-theoretic approach [41][42][43] . The analysis is based on the premise that biological systems reach a balanced state when the system is free of constraints [44][45][46].
However, when under the influence of environmental and genomic constraints, the system is prevented from reaching the state of minimal free energy and instead reaches a state that is higher in free energy (in biological systems, which are normally under constant temperature and constant pressure, minimal free energy equals maximal entropy). Surprisal analysis can take as input the expression levels of various macromolecules, e.g., genes, transcripts, or proteins. However, be the alterations environmental or genomic, it is the proteins that constitute the functional output in living systems; therefore, we base our analysis on proteomic data. The varying forces, or constraints, that act upon living cells ultimately manifest as alterations in the cellular protein network. Each constraint induces a change in a specific part of the protein network in the cells. The subnetwork that is altered due to the specific constraint is termed an unbalanced process. The system can be influenced by several constraints, thus leading to the emergence of several unbalanced processes. When tumor systems are characterized, the specific set of unbalanced processes is what constitutes the tumor-specific signaling signature. Surprisal analysis discovers the complete set of constraints operating on the system in any given tumor, k, by utilizing the following equation 47:

$$\ln X_i(k) = \ln X_i^0(k) - \sum_{\alpha} G_{i\alpha}\,\lambda_\alpha(k)$$

where i is the protein of interest, Xi0 is the expected expression level of the protein when the system is at the steady state and free of constraints, and the sum ΣGiαλα(k) represents the sum of deviations in the expression level of protein i due to the various constraints, or unbalanced processes, that exist in the tumor k. The term Giα denotes the degree of participation of protein i in the unbalanced process α, and its sign indicates the correlation or anti-correlation between proteins in the same process (Supp. Data 1). Proteins with significant Giα values are grouped into unbalanced processes (Supp. Fig. 1, Supp. Data 2) that are active in the dataset 10 . The term λα(k) represents the importance of the unbalanced process α in the tumor k (Supp. Data 1). The partial deviations in the expression level of protein i due to the different constraints sum up to the total change in expression level relative to the balanced-state level, ΣGiαλα(k). For complete details regarding the analysis, please refer to the SI of references 11,47 .

Determination of the number of significant unbalanced processes

The analysis of the 725 patients provided a matrix of λα(k) values, in which each row corresponds to an unbalanced process and contains its λα(k) values for all 725 patients (Supp. Data 1). However, not all unbalanced processes are significant. Our goal is to determine how many unbalanced processes are needed to reconstruct the experimental data, i.e., the smallest n for which

$$\ln X_i(k) \approx \ln X_i^0(k) - \sum_{\alpha=1}^{n} G_{i\alpha}\,\lambda_\alpha(k).$$

To find n, we performed the following two steps. (1) Reproduction of the experimental data by the unbalanced processes was verified. We plotted the truncated sums ΣGiαλα(k) for α = 1, 2, …, n against ln Xi(k) for different proteins i and for different values of n, and examined the correlation between them as n was increased. An unbalanced process α = n was considered significant if it improved the correlation significantly relative to α = n − 1 (Supp. Fig. 2) (see reference 9 for more details). (2) Processes with significant amplitudes were selected.
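In practice, decompositions of this form are commonly obtained from a singular value decomposition (SVD) of the log-expression matrix. The following is a minimal sketch of that computation in Python, not the authors' implementation; the placeholder data, the sign/scale convention for G and λ, and the correlation check are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' code) of the surprisal-analysis
# decomposition ln X_i(k) = ln X_i^0(k) - sum_alpha G_{i,alpha} * lambda_alpha(k),
# computed via SVD of a (proteins x samples) matrix of log expression levels.
import numpy as np

rng = np.random.default_rng(0)
ln_X = np.log(rng.lognormal(size=(216, 725)))   # placeholder: 216 proteins x 725 tumors

U, S, Vt = np.linalg.svd(ln_X, full_matrices=False)

# One common convention: G holds per-protein weights, lam holds per-sample
# amplitudes; the alpha = 0 term approximates the steady-state ln X^0.
G = U                                           # (proteins x terms), G[:, a] = G_{i,alpha=a}
lam = S[:, None] * Vt                           # (terms x samples), lam[a, k] = lambda_a(k)

def reconstruct(n_processes: int) -> np.ndarray:
    """Reconstruct ln X from the steady-state term plus n unbalanced processes."""
    n = n_processes + 1                         # +1 for the alpha = 0 steady-state term
    return G[:, :n] @ lam[:n, :]

# Significance check in the spirit of the paper: does adding process n
# improve the correlation between reconstructed and measured ln X_i(k)?
for n in range(1, 6):
    r = np.corrcoef(reconstruct(n).ravel(), ln_X.ravel())[0, 1]
    print(f"{n} unbalanced processes: correlation r = {r:.4f}")
```

With real data, the correlation plateaus once all significant processes are included; in the authors' dataset this happened at 17 processes, as described below.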
To calculate threshold limits for the λα(k) values (presented in Supp. Data 1 and Supp. Fig. 3), the standard deviations of the levels of the 10 most stable proteins in this dataset (i.e., those with the smallest standard deviation values) were calculated. These fluctuations were considered baseline fluctuations in the patient population that are not influenced by the unbalanced processes. Using the standard deviation values of these proteins, the threshold limits were calculated as described previously 48 . The analysis revealed that from α = 18 onward, the importance values λα(k) become insignificant (i.e., do not exceed the noise threshold), suggesting that 17 unbalanced processes are enough to describe the system. For more details, see references 10,47 .

Generation of functional subnetworks

The functional subnetworks presented in Figs. 2, 5, 6, and Supp. Figs. 1 and 5 were generated using a Python script as described previously 10 . Briefly, the goal was to generate a functional network according to the STRING database, where proteins with negative G values are marked blue and proteins with positive G values are marked red, to easily identify the correlations and anti-correlations between the proteins in the network. The script takes as input the names of the genes in the network and their G values, obtains the functional connections and their weights from the STRING database (string-db.org), and then plots the functional network (using the matplotlib library).

Barcode calculation

The barcodes of unbalanced processes were generated using a Python script (a minimal sketch of this thresholding appears after this section). For each patient, the λα(k) values (α = 1, 2, 3, …, 17) were normalized as follows: if λα(k) > 2 (and is therefore significant according to the calculated threshold values), it was normalized to 1; if λα(k) < −2 (likewise significant), it was normalized to −1; and if −2 < λα(k) < 2, it was normalized to 0.

Cell culture

The BRAF-mutated melanoma cell lines A375, G361, and A2058 were obtained from the ATCC and grown in DMEM (G361 and A2058) or RPMI (A375) medium. The media were supplemented with 10% fetal calf serum (FCS), L-glutamine (2 mM), 100 U/ml penicillin, and 100 µg/ml streptomycin, and the cells were incubated at 37 °C in 5% CO2. The cell lines were authenticated at the Biomedical Core Facility of the Technion, Haifa, Israel.

Antibodies and western blot analysis

The cells were seeded into 6-well plates (~1.5 × 10^6 cells/well) and grown in complete growth medium. A375 cells were treated the next day as indicated for 48 h in a partial starvation medium (RPMI medium with 1.2% FCS). G361 and A2058 cells were treated in complete growth medium for 24 h. The dead cells were collected from the medium. The adherent cells were then treated with IGF for 15 min. The cells were then lysed using hot sample buffer (10% glycerol, 50 mmol/L Tris-HCl pH 6.8, 2% SDS, and 5% 2-mercaptoethanol), and western blot analysis was carried out. The lysates were fractionated by SDS-PAGE and transferred to nitrocellulose membranes using a transfer apparatus according to the manufacturer's protocols (Bio-Rad). Blots were developed with an ECL system according to the manufacturer's protocols (Bio-Rad).
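The barcode normalization described above maps each patient's amplitude vector onto a ternary code. Here is a minimal Python sketch of that thresholding, not the authors' script; the example amplitude values are hypothetical, and the fixed ±2 cutoff stands in for the dataset-derived thresholds.

```python
# Minimal sketch of the barcode normalization described in "Barcode
# calculation" (not the authors' script): amplitudes above +2 map to 1,
# below -2 map to -1, and values inside the noise band map to 0. In the
# paper the threshold is derived from the most stable proteins.
import numpy as np

def barcode(lambdas: np.ndarray, threshold: float = 2.0) -> np.ndarray:
    """Convert one patient's amplitudes lambda_alpha(k) into a -1/0/1 barcode."""
    codes = np.zeros_like(lambdas, dtype=int)
    codes[lambdas > threshold] = 1
    codes[lambdas < -threshold] = -1
    return codes

# Example: hypothetical amplitudes for the 17 unbalanced processes of one patient.
lam_k = np.array([0.3, 5.1, -0.8, 2.7, 0.1, -3.4] + [0.0] * 11)
print(barcode(lam_k))   # -> [ 0  1  0  1  0 -1  0 ... 0]
# Patients sharing a barcode harbor the same set of active unbalanced
# processes and are predicted to benefit from the same drug combination.
```

The sign of each nonzero entry preserves the direction of the imbalance, so two patients with the same active processes but opposite amplitude signs receive different barcodes, consistent with the paper's observation that a process can deviate in opposite directions in different tumors.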
Silencing CAPN2 Expression Inhibited Castration-Resistant Prostate Cancer Cells Proliferation and Invasion via AKT/mTOR Signal Pathway

The mRNA expression of CAPN2 was higher in CRPC cells (DU145 and PC3) than in non-CRPC cells. Silencing CAPN2 expression could inhibit DU145 and PC3 cell proliferation through cell cycle arrest at the G1 phase. Knockdown of CAPN2 suppressed the migration and invasion capacity of CRPC cells by reducing matrix metalloproteinase-2 (MMP-2) and MMP-9 activation, as well as by repressing the phosphorylated protein levels of AKT and mTOR. In addition, we found that the expression of CAPN2 was elevated in Pca tissues compared with normal control tissues. We therefore show the important roles of CAPN2 in the development and progression of CRPC cells, suggesting a new therapeutic intervention for treating castration-resistant prostate cancer patients.

Introduction

Castration-resistant prostate cancer (CRPC) arises from prostate cancer, one of the most commonly diagnosed human cancers in men and the second leading cause of cancer-related death in Western men [1]. Recently, its incidence and mortality have been continuously rising among Chinese men [2]. As the disease progresses, invasion and metastasis become the main characteristics of the malignant tumor. Despite currently available treatments, prostate cancer can become metastatic and castration-resistant, and advanced metastatic CRPC may be incurable [3]. Therefore, a novel progression and prognostic CRPC marker providing valuable information for patients is urgently needed. Cancer metastasis is a multistep process, including invasion and metastasis of tumor cells, alterations of the tumor microenvironment, and degradation of the extracellular matrix (ECM). Degradation of the ECM is considered to play a crucial role in the formation of tumor metastases [4]. Matrix metalloproteinases (MMPs) such as MMP-2 and MMP-9 contribute to ECM degradation, leading to tumor invasion and metastasis. The calpain system is a large family of calcium-activated proteases known to play important roles in cell proliferation, migration, and invasion [5]. CAPN2, also known as m-calpain, is a member of the calpain family. To date, the two major isoforms, μ-calpain (CAPN1) and m-calpain (CAPN2), have been studied most fully [6]. In 1996, Shiba et al. first reported the relationship between the calpains and cancer [7]. More recent evidence has shown an important role of CAPN2 in both carcinogenesis and tumor progression [8][9][10][11]. The high expression of CAPN2 in basal-like or triple-negative breast cancer was significantly associated with the clinical outcome of patients and was confirmed in an independent cohort of patients [8]. In addition, knockdown of CAPN2 mRNA expression reduced breast cancer cell invasion by regulating invadopodia dynamics [12]. Decreasing CAPN2 expression using shRNA or chemical inhibition of its activity reduced glioblastoma cell invasion by 90% [11]. Moreover, silencing CAPN2 inhibited the migratory and invasive potentials of hepatocellular carcinoma cells by attenuating MMP secretion [10]. Furthermore, CAPN2 expression was also determined in CRPC samples, and its mRNA expression was significantly increased in metastatic prostate cancer compared with normal prostate by cDNA microarray [13,14]. These results strongly suggest that targeting CAPN2 may reduce tumor metastasis and act as a potential treatment for specific cancers.
Although the mechanism of calpain in tumor progression has been well elucidated, little is known about how CAPN2 regulates CRPC cell migration and invasion. In this study, we used qRT-PCR to determine CAPN2 levels in CRPC cell lines. Using siRNA-based knockdown of CAPN2, we demonstrate that CAPN2 expression is required for CRPC cell migration and invasion through reducing MMP-2 and MMP-9 levels. Furthermore, the inhibition of CRPC cell proliferation by silencing CAPN2 expression was determined by flow cytometry. In addition, we used qRT-PCR, Western blot, and immunohistochemistry analyses to investigate CAPN2 levels in clinical Pca tissues. Our results suggest that CAPN2 plays important roles in the invasive and metastatic potential of CRPC cells and may act as a candidate target for human CRPC diagnosis and therapy.

Clinical Samples and Cell Culture. Pca tissues and nonmalignant prostate tissues were obtained from patients who underwent radical prostatectomy at the First Affiliated Hospital of Nanjing Medical University, China. All the samples were immediately frozen in liquid nitrogen after surgery and stored at −80 °C until further analysis. Only samples containing >70% tumor cells were used for the extraction of total RNA. The use of clinical samples was approved by the medical ethics committee of our hospital (Table 1). The human prostate cancer cell lines (22RV1, LNCaP, DU145, and PC3) were purchased from the Cell Bank Type Culture Collection of the Chinese Academy of Sciences (Shanghai, China). The CRPC cell lines DU145 and PC3 were cultured in F-12K Nutrient Mixture (Gibco, USA), and 22RV1 and LNCaP were cultured in RPMI-1640 (Gibco, USA), all supplemented with 10% fetal bovine serum (FBS, Gibco, USA), in a humidified atmosphere containing 5% CO2 at 37 °C.

RNA Isolation and qPCR. According to the manufacturer's instructions, total RNA was isolated from tissues and cells using Trizol (Invitrogen, USA). RNA concentration was measured using a NanoDrop (Thermo Scientific). For CAPN2, RNA was reverse-transcribed into cDNA using a PrimeScript One-Step RT-PCR Kit (Takara, Dalian, China) in accordance with the manufacturer's instructions. The primers were: CAPN2 (5′-GTTCTGGCAATACGGCGAGT-3′, forward; 5′-CTTCGGCTGAATGCACAAAGA-3′, reverse) and β-actin (5′-ACTGGAACGGTGAAGGTGAC-3′, forward; 5′-AGAGAAGTGGGGTGGCTTTT-3′, reverse). The qRT-PCR program was as follows: 50 °C for 2 minutes, 95 °C for 5 minutes, and 40 cycles of 95 °C for 15 seconds and 60 °C for 60 seconds. The reactions were performed and analyzed using an Applied Biosystems StepOne Plus Real-Time PCR System (Applied Biosystems, USA). All reactions were run in triplicate and normalized to the internal control, β-actin.
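qRT-PCR data normalized to a reference gene such as β-actin are commonly quantified with the 2^(−ΔΔCt) method. The paper states only that reactions were normalized to β-actin, so the following Python sketch and its Ct values are an illustrative assumption, not the authors' exact calculation.

```python
# Minimal sketch of relative quantification by the 2^(-delta-delta-Ct)
# method, a common way to analyze qRT-PCR data normalized to beta-actin.
# Illustration only; the Ct values below are hypothetical.
def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression of the target gene (treated/test vs. control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize to beta-actin
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: CAPN2 Ct of 24.1 vs. beta-actin 17.9 in one cell line, and 26.8
# vs. 18.0 in a comparison line -> roughly 6-fold higher CAPN2 mRNA.
print(fold_change(24.1, 17.9, 26.8, 18.0))   # ~6.06
```

In practice each ΔCt would be averaged over the triplicate reactions before computing ΔΔCt, and fold changes would be compared across groups with the statistical test described below.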
Cell Proliferation Assay. Forty-eight hours after transfection, cells were seeded into 96-well plates at a density of 2,000 cells/well and cultured for 24, 48, 72, and 96 hours. Cell Counting Kit-8 (CCK-8; Dojindo Molecular Technologies, Japan) was used to determine cell proliferation according to the manufacturer's protocol. Absorbance was detected at a wavelength of 450 nm. Three wells were measured for cell viability in each treatment group.

Cell Cycle Analysis. The cell cycle distribution was analyzed by flow cytometry (Becton Dickinson). Forty-eight hours after transfection, cells were harvested, washed twice with ice-cold phosphate-buffered saline, and fixed with 70% ethanol at −20 °C overnight. Then, cells were incubated with 50 μg/mL propidium iodide and 1 mg/mL RNase for 30 min at room temperature. Finally, the treated cells were analyzed. At least 100,000 cells were acquired for each sample. The experiments were performed in triplicate.

Protein Isolation and Western Blot. CRPC cell lines and Pca tissues were lysed using radioimmunoprecipitation assay buffer (Keygene, Nanjing, China) supplemented with protease inhibitors at 4 °C for 30 min. The protein samples were electrophoresed by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis, transferred onto a polyvinylidene fluoride membrane (Millipore), and then blocked for 1 hour with 5% nonfat milk at room temperature. The membranes were incubated overnight at 4 °C with primary antibodies against CAPN2 (Abcam, UK), MMP9, AKT, p-AKT, mTOR, p-mTOR (Cell Signaling Technology, USA), MMP2 (Sigma, Sweden), and GAPDH (Bioworld Technology, USA). A horseradish peroxidase-conjugated secondary antibody was incubated with the membrane for 1 h after three washes with Tris-buffered saline containing 0.1% Tween. The blots were detected using a chemiluminescence detection reagent (Thermo Scientific). Protein levels were determined by normalization to GAPDH.

Immunohistochemistry. Clinical Pca tissues and adjacent nonmalignant tissues were fixed in formalin, embedded in paraffin, and cut into 5 μm thick consecutive sections. After deparaffinization and antigen recovery (in sodium citrate solution, pH 6.0, 20 min, 98 °C), the sections were washed three times in 0.01 mol/L phosphate-buffered saline (PBS) for 5 min each and blocked for 1 h in 0.01 mol/L PBS containing 0.3% Triton X-100 and 5% BSA. CAPN2 primary antibody (1:200) was incubated at 4 °C overnight. The next day, the sections were washed with PBS and incubated with PBS containing horseradish peroxidase-conjugated IgG (1:500) for 1.5 h at room temperature. Then, the slides were developed by diaminobenzidine reaction. Immunohistochemistry for each sample was observed under a microscope and repeated three times.

Cell Invasion Assay. For the chemotactic cell invasion assay, the lower surface of the membrane was fixed in 95% ethanol and stained with crystal violet. Five random fields were counted. All of the experiments were performed in triplicate.

Cell Migration Assay. The cells transfected with si-CAPN2 or NC were grown to 90% confluency in medium containing 10% FBS. The cell monolayers were then wounded by scratching with a sterile 200 μL tip. After scratching, cells were incubated in serum-free medium. The wound was captured at 0, 24, and 48 hours under a microscope equipped with a camera.

Statistical Analyses. Results are expressed as mean ± standard deviation (SD). Differences between groups were assessed with Student's t-test. P < 0.05 was considered statistically significant. All of the statistical calculations were performed using SPSS software (version 13.0; SPSS).

Results

CAPN2 Was Highly Expressed in CRPC Cell Lines. To investigate CAPN2 mRNA expression, we used qRT-PCR and showed that CAPN2 was significantly upregulated in the CRPC cell lines (DU145 and PC3) compared with the other cell lines (P < 0.05; Figure 1(a)).

Effects of CAPN2 on Cell Proliferation of Human CRPC Cells. We confirmed CAPN2 expression in the CRPC cell lines (Figure 1(b)). We further determined the effects of CAPN2 on the growth of CRPC cells. DU145 and PC3 cells were transfected with si-CAPN2 or NC (Figure 1(d)). After siRNA transfection for 48 h, a CCK-8 assay showed that downregulation of CAPN2 expression markedly inhibited the growth of CRPC cells at 72 and 96 hours (P < 0.05; Figure 2(a)).
Furthermore, we used flow cytometry to analyze the cell cycle distribution and to investigate and characterize the effects of CAPN2 on cell growth. The percentages of DU145 and PC3 cells transfected with si-CAPN2 in the G0/G1 phase were higher than those of the NC group (P < 0.05; Figure 2(b)). These results show that silencing of CAPN2 could inhibit the proliferation of CRPC cells.

CAPN2 Affects Cell Invasion and Migration. After siRNA transfection for 48 h, we used a chemotactic cell invasion assay and a wound-healing assay to estimate whether CAPN2 was involved in the invasive and metastatic processes of CRPC cells. In the wound-healing assay, cell migration was slower in cells transfected with si-CAPN2 than in NC-transfected cells (Figure 3(a)). Similarly, knockdown of CAPN2 significantly suppressed tumor cell invasion in DU145 and PC3 cells compared with the NC (Figures 3(b) and 3(c)). Together, these data show that reducing CAPN2 expression significantly repressed the migratory and invasive abilities of CRPC cells in vitro.

Effects of CAPN2 on Expression of MMP-2 and MMP-9 in CRPC Cells. As previously described, MMPs play an important role in degrading the ECM, which is required for tumor cell migration and invasion. Western blot analysis revealed a positive correlation between CAPN2 and MMP-2/-9: si-CAPN2 markedly decreased the expression of MMP2 and MMP9, with GAPDH serving as a loading control (Figure 4(a)). Our results suggest that downregulation of MMP-2/-9 might be involved in the reduced migration and invasion of DU145 and PC3 cells after treatment with si-CAPN2.

Effects of CAPN2 on the AKT/mTOR Pathway. The AKT/mTOR pathway is known to play an important role in the proliferation, survival, and motility of tumor cells [15,16]. Therefore, we used Western blotting to investigate the possible effects of CAPN2 on the regulation of AKT/mTOR signaling. The CAPN2 protein level was effectively reduced in the treated cells compared with cells treated with control siRNA. The phosphorylated protein levels of AKT and mTOR were significantly repressed after transfection with si-CAPN2 for 48 hours, whereas total AKT and mTOR protein levels were unaffected (Figure 4(b)). Furthermore, in a rescue experiment, treatment with SC79 (a phosphorylation activator of AKT) abolished the inhibitory effects of si-CAPN2 on the invasion and proliferation of CRPC cells (Figures 5(a) and 5(b)). The levels of phosphorylated AKT and mTOR were also restored after incubation with SC79 (Figure 5(c)). In summary, CAPN2 promoted the invasion and growth capability of CRPC cells by activating the AKT/mTOR signaling pathway.

CAPN2 Was Highly Expressed in Pca Tissues. To investigate CAPN2 mRNA expression, we used qRT-PCR and showed that CAPN2 was significantly upregulated in Pca tissues compared with normal control tissues (P < 0.05; Figure 6(a)). In addition, Western blot and immunohistochemistry analyses revealed that the CAPN2 protein level was higher in tumor samples than in adjacent nonmalignant tissues collected from the same patients (Figures 6(b) and 6(c)). These findings suggest that CAPN2 is correlated with tumor progression in prostate cancer.

Discussion. Accumulating evidence indicates that CAPN2 plays a crucial role in the progression and prognosis of various cancers, including prostate cancer [8-10, 13, 17]. Interestingly, an early study reported that calpain levels were not altered in human prostate tumors [18].
In contrast, another study found that CAPN2 mRNA was significantly upregulated in prostate carcinomas compared with normal control prostate tissues [14]. In the present study, we detected the mRNA expression of CAPN2 in CRPC cell lines (DU145 and PC3) using qRT-PCR analysis. However, the underlying mechanism in CRPC remains poorly understood. Therefore, our study aimed to show the possible effects of CAPN2 on CRPC cells and provide new insights into the progression of CRPC. CRPC progression is a multistep process, in which the capacity to degrade the extracellular matrix represents a crucial step for tumor invasion and metastasis [19]. The matrix metalloproteinases (MMPs) are known to break down the ECM. Numerous studies have reported important roles of MMPs in CRPC metastasis mechanisms: MMP-2 and MMP-9 protein have been associated with aggressive CRPC and act as significant prognostic factors in human prostate cancer [20][21][22], and downregulation of MMP-2/-9 protein levels could reduce the migration and invasion of the treated CRPC cells [23][24][25]. Previous results have shown that CAPN2 might affect the invasive and metastatic capability of tumor cells in HCC by attenuating MMP protein levels [10]. Therefore, our study is the first to show the relationship between CAPN2 expression and MMPs in prostate cancer cells. We used siRNA-mediated silencing to reduce CAPN2 expression, and our findings indicate that si-CAPN2 suppresses invasive ability and wound-healing capacity. Furthermore, knockdown of CAPN2 expression in vitro also produced a significant reduction in MMP2 and MMP9 protein expression, suggesting a strong association between CAPN2 and MMPs. MMP activity is mediated and controlled by the expression of the relevant genes and by enzymatic reactions. Several lines of evidence indicate that AKT and its downstream factor mTOR can affect MMP-2 and MMP-9 expression [26][27][28]. The RAS/MAPK/ERK and AKT/mTOR signaling pathways have been verified to play critical roles in the development and progression of human carcinomas [26,29,30]. Furthermore, treatment of cancer cells with calpain inhibitors reduced the levels of truncated retinoid X receptor-α, which can act to promote cancer cell proliferation and survival through AKT activation [31]. Therefore, we further examined the possible effects of CAPN2 on the AKT/mTOR signaling pathway. In our study, the results showed that low CAPN2 expression could significantly repress the phosphorylated protein level of AKT, as well as that of mTOR, while no significant difference was found in total AKT and mTOR protein. In addition, CAPN2 knockdown also affected the growth of the treated cells by arresting them in the G0/G1 phase of the cell cycle. All these results suggest that downregulation of CAPN2 might suppress CRPC cell proliferation and invasion by reducing MMP-2/-9 through the AKT/mTOR pathway. Furthermore, we observed that CAPN2 mRNA expression was upregulated in Pca tissues compared with nonmalignant samples. Analogously, in comparison with the adjacent normal tissues, the CAPN2 protein level was significantly upregulated in the tumor tissues, as shown by Western blot and immunohistochemistry analyses. This indicates that CAPN2 might act as an oncogenic biomarker and promote prostate cancer progression. In conclusion, our data indicate that silencing CAPN2 expression could inhibit the growth, migratory, and invasive abilities of CRPC cells by reducing MMP2 and MMP9 enzyme activities.
We also found that the AKT/mTOR pathway was involved in the effect of CAPN2 on the transfected cells. We believe that CAPN2 may provide a new therapeutic strategy for treating prostate cancer patients.

Figure 6: CAPN2 expression in CRPC tissues and CRPC cells was measured by qRT-PCR and Western blot. (a) CAPN2 expression in CRPC samples was upregulated compared with the nontumor tissues. The median of each triplicate was used to calculate the relative CAPN2 concentration using the comparative 2^−ΔΔCt method. (b) CAPN2 protein expression in three matched normal/tumor samples was detected by Western blot analysis. GAPDH was used as an internal control. N and T represent normal and tumor tissues, respectively. (c) Immunohistochemistry staining to examine CAPN2 expression in Pca tissues and adjacent normal tissues (×400 magnification).
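For reference, the comparative 2^−ΔΔCt calculation cited in the Figure 6 caption can be sketched as follows; the Ct values and gene pairing are illustrative only, not data from this study:

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the comparative 2^-ddCt method."""
    d_ct_sample  = ct_target_sample - ct_ref_sample    # normalize to reference gene (e.g., GAPDH)
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                 # calibrate to the control sample
    return 2 ** (-dd_ct)

# Hypothetical Ct values: CAPN2 vs. GAPDH in tumor and matched normal tissue
print(fold_change(22.1, 18.0, 24.6, 18.2))  # ~4.9-fold upregulation
```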
Novel Scaffolds for Modulation of NOD2 Identified by Pharmacophore-Based Virtual Screening Nucleotide-binding oligomerization domain-containing protein 2 (NOD2) is an innate immune pattern recognition receptor responsible for the recognition of bacterial peptidoglycan fragments. Given its central role in the formation of innate and adaptive immune responses, NOD2 represents a valuable target for modulation with agonists and antagonists. A major challenge in the discovery of novel small-molecule NOD2 modulators is the lack of a co-crystallized complex with a ligand, which has limited previous progress to ligand-based design approaches and high-throughput screening campaigns. To that end, a hybrid docking and pharmacophore modeling approach was used to identify key interactions between NOD2 ligands and residues in the putative ligand-binding site. Following docking of previously reported NOD2 ligands to a homology model of human NOD2, a structure-based pharmacophore model was created and used to virtually screen a library of commercially available compounds. Two compounds, 1 and 3, identified as hits by the pharmacophore model, exhibited NOD2 antagonist activity and are the first small-molecule NOD2 modulators identified by virtual screening to date. The newly identified NOD2 antagonist scaffolds represent valuable starting points for further optimization. Introduction Nucleotide-binding oligomerization domain-containing protein 2 (NOD2) is an intracellular innate immune receptor that belongs to the pattern recognition receptor (PRR) superfamily [1]. These receptors act as immune sentinels, orchestrating the first line of defense against invading pathogens by recognizing and responding to conserved pathogenassociated molecular patterns (PAMPs), such as nucleic acids and bacterial cell wall components [2]. NOD2, which is expressed in both hematopoietic (T cells, B cells, macrophages, dendritic cells) and some non-hematopoietic cells (Paneth cells, goblet cells, enterocytes), responds primarily to "muropeptides", specific substructures of bacterial peptidoglycan, with muramyl dipeptide (MDP; Figure 1) considered as the smallest fragment of peptidoglycan still capable of activating NOD2 [3][4][5]. NOD2 consists of two effector N-terminal caspase recruitment domains (CARDs), a central nucleotide binding and oligomerization domain (NOD), and a C-terminal leucinerich repeat (LRR) domain involved in ligand binding [1]. Following ligation of MDP, NOD2 undergoes self-oligomerization and recruitment of receptor interacting serine/threonine kinase RIP2, which, in turn, leads to the activation of nuclear factor-kB (NF-κB) and mitogen-activated protein kinase (MAPK) pathways. The resulting inflammatory response is characterized by the production of proinflammatory cytokines and the activation of antigen-presenting cells [6,7]. While the NOD2-mediated immune response is critical for mounting a successful defense against bacteria, conversely, dysregulation of this response can have deleterious effects. Several genetic variants of NOD2, characterized by aberrant overaction, have been linked to the development of inflammatory disorders, such as Blau syndrome, and cancer [8]. Given the central role of NOD2 in immunosurveillance, as well as its genetic association with inflammatory diseases, modulation of NOD2 with small-molecules has significant clinical potential. 
Due to their capacity to stimulate innate immunity, as well as contribute to the generation of adaptive immune responses, NOD2 agonists have been widely highlighted as vaccine adjuvants [9]. Notably, MDP was first identified as the active component of Freund's complete adjuvant [10]. In recent decades, considerable efforts have been made to explore and optimize the structure-activity relationship of MDP derivatives. While only slight modifications of the dipeptide moiety of MDP are permissible, the N-acetylmuramic acid carbohydrate moiety offers more opportunities for chemical modifications [11]. For example, it has been shown that the entire carbohydrate ring is dispensable and can be replaced by suitable aromatic/heteroaromatic groups. The resulting compounds, also known as desmuramylpeptides, have shown potent NOD2 agonist activity in vitro and enhanced adjuvant activity in vivo ( Figure 1, compound SG8) [12][13][14]. Conversely, the clinical potential of selective NOD2 antagonists is less explored and no compound has yet been introduced into the clinic. Nevertheless, inhibition of NOD2 signaling offers a viable alternative to potent and broad-range anti-inflammatory drugs for treatment of NOD2-associated diseases [8]. For example, the NOD2-mediated innate immune response was highlighted as an important factor in the pathophysiology of atherosclerosis [15,16]. It has also been shown that the proinflammatory response of NOD2 in the liver promotes hepatocarcinogenesis following activation by gut-derived bacterial PAMPs [17]. Although no clinical trials have been conducted to date, combination therapy with NOD2 antagonists and paclitaxel has shown a beneficial anticancer effect in vivo [18,19]. Although a variety of NOD2 agonists and antagonists have been reported, our understanding of the binding modes of these compounds remains limited. In 2016, a crystal structure of rabbit NOD2 in its ADP-bound inactive form was resolved, providing valuable structural insights into the function of NOD2 for the first time [20]. However, to date, no crystal structure of NOD2 in complex with a ligand has been reported, severely hindering rational structure-based drug design approaches. Therefore, the identification of new NOD2 modulators has been limited to ligand-based design approaches and high-throughput screening (HTS) campaigns. Consequently, besides MDP and its structurally closely related derivatives, no other NOD2 agonistic scaffolds have been identified so far [11]. Given that only slight changes to the dipeptide moiety of MDP are permitted, all NOD2 agonists share the predominantly peptide structure of MDP, which is prone to metabolic instability and rapid elimination [21,22]. Similarly, while dual NOD1/2 antagonists, such as benzodiazepine [18], benzofused five-membered sultam [23], quinazoline [24], and indole [25] derivatives have been reported, the only NOD2 selective antagonists discovered to date are based on the benzimidazole structure of GSK669 (Figure 1, compound GSK669 and its derivative SG84 [26]), which was first discovered by an HTS campaign [27]. Therefore, novel scaffolds capable of modulating the activity of NOD2 are required. In this work, we implemented homology modeling, molecular docking, and pharmacophore modeling to identify novel structural classes of NOD2 modulators. 
Prioritization of twelve compounds from virtual screening for biological evaluation was based on docking and rescoring using molecular mechanics with a generalized Born and surface area solvation (MM-GBSA) method. Surprisingly, while all hits were devoid of NOD2 agonist activity, two compounds showed an inhibitory effect on the MDP- and SG8-induced activation of NOD2, which was not due to cytotoxicity. Although our in silico investigation was initially directed toward the discovery of new NOD2 agonists, the successful discovery of two NOD2-active screening hits demonstrates that our approach is capable of identifying novel NOD2 modulators. The identified scaffolds represent promising starting points for further optimization and discovery of novel NOD2 antagonists.

Virtual Compound Library Preparation. Three virtual compound libraries were prepared for virtual screening: (1) a selected set of experimentally determined NOD2-active compounds, (2) a calculated set of decoy molecules, and (3) a set of commercially available compounds for hit identification via virtual screening with optimized pharmacophore models. For the NOD2 actives library, twelve NOD2 agonists of the muropeptide and desmuramylpeptide structural classes were manually selected from scientific publications (Supplementary Table S1) [3,4,[12][13][14]28,29]. A library of 600 decoy molecules was generated based on the structures of these NOD2 agonists by submitting their structures to the DUD-E decoy generator [30]. This resulted in 50 decoy molecules per compound with similar 1D physicochemical properties, but dissimilar 2D topologies, in comparison to the 12 active compounds. A third library of 556,000 compounds from commercial providers was prepared based on the diversity sets from Enamine, Asinex, ChemBridge, Maybridge, LifeChemicals, Vitas-M and KeyOrganics. Libraries were downloaded in SDF format and merged, and duplicates were removed using the LigandScout Database Merger and Duplicates Remover nodes, as implemented in the Inte:Ligand Expert KNIME Extensions [31]. For each of the three libraries, a maximum of 200 conformations were generated for each molecule using LigandScout's iCon algorithm with the default "BEST" settings (max. number of conformers per molecule: 200, timeout (s): 600, RMS threshold: 0.8, energy window: 20.0, max. pool size: 4000, max. fragment build time: 30) [32]. Each library was saved in LDB (LigandScout database) format using LigandScout's idbgen algorithm with default settings (write all properties and remove duplicates).

Pharmacophore Modeling. Ligand-based pharmacophore modeling: Twelve NOD2 agonists of the muropeptide and desmuramylpeptide structural classes (Supplementary Table S1) were used for the creation of five ligand-based pharmacophore models in LigandScout 4.4 Expert [33]. A maximum of 200 conformations were generated for each compound, as described above. The models were generated using the following ligand-based pharmacophore creation settings: Scoring function: pharmacophore fit and atom overlap; Pharmacophore type: shared feature pharmacophore; Feature tolerance scale factor: 1.0; Maximum number of result pharmacophores: 5. A coat of exclusion volume spheres was also generated around the alignment of the ligands. All ligands of the training set were automatically aligned to the generated pharmacophore models. The resulting ligand-based pharmacophore models were inspected visually and tested for their performance in distinguishing the active and decoy molecules.
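For readers without a LigandScout license, the conformer-enumeration step above can be approximated with open-source tooling. The following is a minimal RDKit sketch (an illustrative analogue of, not a substitute for, the iCon "BEST" settings used here; the input file name is a placeholder):

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Load molecules from an SDF library (path is illustrative)
supplier = Chem.SDMolSupplier("library.sdf", removeHs=False)

for mol in supplier:
    if mol is None:
        continue  # skip unparseable records
    mol = Chem.AddHs(mol)
    # Generate up to 200 conformers, pruning near-duplicates by RMSD
    params = AllChem.ETKDGv3()
    params.pruneRmsThresh = 0.8  # analogous to the 0.8 RMS threshold above
    cids = AllChem.EmbedMultipleConfs(mol, numConfs=200, params=params)
    AllChem.MMFFOptimizeMoleculeConfs(mol)  # quick force-field refinement
    print(Chem.MolToSmiles(mol), len(cids), "conformers")
```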
Structure-based pharmacophore modeling: The best scoring poses of MDP and SG8 were loaded into the structure-based panel of LigandScout 4.4 Expert. For both poses, a structure-based pharmacophore model was generated, which represented the interactions of the ligand with the residues in the binding pocket. The created models were aligned, and a shared feature pharmacophore model was generated, which incorporated only pharmacophore features present in both models. A coat of exclusion spheres was added to prevent steric clashes with the residues in the binding pocket. The hydrophobic feature was marked as optional to reduce the restrictiveness of the model. Finally, the model was tested for performance in distinguishing between active and decoy molecules. Protein Preparation and Homology Modeling The homology model of human NOD2 was constructed with the Prime module of the Schrödinger software suite [34]. The 2.34 Å resolution crystal structure of rabbit NOD2 in its ADP-bound inactive state (PDB ID: 5IRN), which has 86% sequence homology to human NOD2, was used as the template [20]. Prior to homology modeling, the template structure was prepared using the Protein Preparation Workflow system [35]. Briefly, force-field atom types and bond orders were assigned, missing side chains were added using Prime, correct protonation states were assigned using a pH value of 7.4, the hydrogen bonding network was optimized to address any overlapping hydrogens, and a restrained minimization with a root mean square deviation (RMSD) value of 0.30 Å using the Optimized Potentials for Liquid Simulations 4 (OPLS4) force-field was performed to relieve any strain and alleviate backbone clashes. Water molecules and ADP were removed from the resulting structure. Sequence alignment with the human NOD2 target sequence (UniProtKB ID: Q9HC29) was performed in the Multiple Sequence Viewer using the MUSCLE (MUltiple Sequence Comparison by Log-Expectation) algorithm as implemented in Schrödinger software. The construction of the model was performed using the knowledge-based method. As none of the loop regions absent from the template structure were located in the vicinity of the putative ligand-binding site, missing loops were not built. The gaps were capped with N-methyl-amide (NMA) groups at the C-termini and acetyl (ACE) groups at the N-termini. The overall quality of the resulting model was evaluated with PROCHECK [36], ERRAT [37] using the Structural Analysis and Verification Server (SAVES, https://saves.mbi.ucla.edu/; Accessed date: 24 November 2021). A Ramachandran plot was generated in Maestro. Molecular Docking Three-dimensional models of compounds were built using the LigPrep module of Schrödinger software [38]. Input chiralities were retained, protonation states were generated with Epik at a physiological pH 7.4 [39], and the resulting structures were optimized using the OPLS4 force-field. A docking receptor grid was generated centering the docking box at the centroid of the following residues: Arg877, Trp931, and Ser933. The size of the docking box was set to 30 × 30 × 30 Å. The hydroxyl group of Ser933 and the thiol group of Cys961 were allowed to rotate during docking. Compounds were docked with the standard precision (SP) Glide docking methodology [40]. Ligands were docked flexibly, while the protein was kept rigid. 
Sampling of the ligand conformational space was enhanced fourfold; 50,000 poses per ligand were retained after the rough scoring stage, and 1000 poses per ligand were kept for energy minimization. Following final docking, 100 poses per ligand were passed to post-docking minimization, and the top ten scoring poses were inspected manually.

Virtual Screening. Selected ligand- and structure-based pharmacophore models were used as queries for virtual screening of a library of 556,000 commercially available compounds. The settings included: Scoring function: relative pharmacophore fit; Screening mode: match all query features; Retrieval mode: get best matching conformation; Max. number of omitted features: 0; and Check exclusion volumes: checked. The hits were ranked according to their relative pharmacophore fit score; higher-ranking hits have a fit score closer to 1.0. The retrieved hits were transferred to Schrödinger software, prepared using LigPrep, and docked as described in the Molecular Docking section above.

Binding Free Energy Calculation. The binding free energies of the top-scoring docked poses of all screening hits were calculated with the MM-GBSA method in Prime. For the analysis, the variable-dielectric generalized Born model (VSGB) was used as the continuum solvation model and OPLS4 was used as the force-field. The docked ligand and all residues within 5.0 Å of the ligand were minimized. The binding free energy was calculated as the difference between the energy of the minimized receptor-ligand complex and the energies of the minimized structures of the unbound ligand and receptor: ΔG_bind = E_complex − (E_receptor + E_ligand).

Cytotoxicity. HEK-Blue NOD2 cells were seeded (40,000 cells/well) in 96-well plates in 100 µL culture medium and treated with the screening hits (500 µM) or with the corresponding vehicle (0.2% DMSO; control cells). After 18 h of incubation (37 °C, 5% CO2), the metabolic activity was assessed using the CellTiter 96 Aqueous One Solution cell proliferation assay (Promega, Madison, WI, USA), according to the manufacturer's instructions. The experiments were run in duplicates and repeated as two independent biological replicates.

NOD2-NF-κB Reporter Assay. HEK-Blue NOD2 cells were seeded (25,000 cells/well) in 96-well plates in 100 µL HEK-Blue detection medium (Invivogen, San Diego, CA, USA). To test for NOD2 agonism, the cells were treated with the screening hits or with the corresponding vehicle (0.2% DMSO). MDP and SG8 (1 µM) were used as the positive controls. After 18 h of incubation (37 °C, 5% CO2), secreted embryonic alkaline phosphatase (SEAP) activity was determined spectrophotometrically as absorbance at 630 nm (BioTek Synergy microplate reader; Winooski, VT, USA). The experiments were run in duplicates and repeated as two independent biological replicates. To test for NOD2 antagonism, the cells were first pre-treated for 1 h with the screening hits before the addition of MDP or SG8 (1 µM). After 18 h of incubation (37 °C, 5% CO2), SEAP activity was determined as above. The experiments were run in duplicates and repeated as four independent biological replicates.

Statistics. Data analysis was performed using Prism software (version 9; GraphPad Software, CA, USA).

Ligand-Based Pharmacophore Modeling. In our first approach to in silico discovery of novel NOD2 modulators, we employed ligand-based pharmacophore modeling to identify the structural features required for molecular recognition of ligands by NOD2.
Twelve representative, highly active NOD2 agonists from the muropeptide and desmuramylpeptide structural classes were selected as a training set (Supplementary Table S1) for the creation of a ligand-based pharmacophore model using LigandScout 4.4 Expert software [33]. Cellular stability studies have shown that the ethyl ester groups of the desmuramylpeptides S5-S12 primarily allow for cellular internalization and do not actively contribute to NOD2 binding [13]. Therefore, the hydrolyzed free acid forms of these compounds were used to create the model. Up to 200 low-energy conformations were generated for each ligand. Based on their alignment, a model was generated that included only pharmacophore features that were present in the entire training set. Finally, a coat of exclusion spheres was added around the alignment to represent restricted space inaccessible to any potential screening hit. The constructed model, which consisted of seven pharmacophore features (three hydrogen bond acceptors, two hydrogen bond donors, one negative ionizable feature, and one hydrophobic feature; Figure 2a), was then tested for its ability to discriminate between the 12 active and 600 decoy molecules generated by the DUD-E server [30]. The model performed well, retrieving all active compounds and only one decoy (Figure 2b). However, it proved too restrictive and returned no hits when used as a query for virtual screening of a library of 556,000 commercially available compounds. Simplification of this model by omitting some of the pharmacophore features or excluding the exclusion volumes coat improved the number of hits identified, but also reduced the specificity of the model, resulting in a higher number of identified decoys as false positives. Furthermore, there are currently no data on the exact binding modes of NOD2 ligands that would allow for absolute discrimination between essential and redundant pharmacophore features. In the absence of previous data to guide the simplification of the model, key pharmacophore features could therefore be excluded, resulting in a lower sensitivity of the model. Structure-Based Pharmacophore Modeling To address the limitations described above and to investigate the potential binding modes of selected NOD2 agonists, a homology model of human NOD2 was first constructed using the crystal structure of rabbit NOD2 (PDB ID: 5IRN) as a template [20]. Although this structure was resolved in its ADP-bound apo form, a potential ligand-binding pocket was identified via mutational studies on the concave surface of the leucine-rich repeat (LRR) domain. Interestingly, an unknown electron density that could not be assigned to any of the molecules used in the purification and crystallization of the protein was found to occupy this pocket. These findings are further supported by surface plasmon resonance (SPR) binding experiments using immobilized MDP and a recombinant functional LRR domain of NOD2 [41]. In both studies, Arg877, Trp931, and Ser933 were highlighted as critical residues for MDP recognition. Analysis of the aligned sequences of rabbit and human NOD2 revealed a high 86% sequence homology and complete conservation of the putative ligand-binding residues. A homology model was built using the Prime module of the Schrödinger software suite (Supplementary Figure S1) and the quality of the model was validated using PROCHECK and ERRAT. 
Assessment of the generated Ramachandran plot shows that only four residues were in the disallowed region and none of them were in the vicinity of the binding pocket (Supplementary Figure S2) making our model reliable for structure-based modeling. A docking grid was generated, centering the box on Arg877, Trp931, and Ser933, and MDP and SG8 were docked as representative potent NOD2 agonists of the muropeptide and desmuramylpeptide structural classes. The predicted binding models showed a striking similarity between the orientations of the dipeptide moieties of both molecules (Figure 3). The negatively charged carboxylate groups of D-glutamic acid and D-isoglutamine anchored the compounds in the pocket via electrostatic interactions with Arg823 and Arg877. The carbonyl oxygens of L-alanine and L-valine in MDP and SG8, respectively, formed a hydrogen bond with Ser933, while the hydrophobic side-chains of these amino acids fit tightly into the hydrophobic side pocket formed by Trp887, Val915, Cys941, and other adjacent residues. Additional hydrogen bonding of both compounds was also observed with Trp931 and Glu959. In addition, both compounds also exhibited unique interactions. The N-acetyl group of MDP formed an additional hydrogen bond with Arg877, which is consistent with previous docking and SPR studies that indicated that Arg877 interacts with both the dipeptide and the carbohydrate moieties of MDP [41]. Conversely, the trans-ferulic acid moiety of SG8 extends beyond the main pocket and forms an additional hydrogen bond with Lys986, while the methoxy group is oriented towards a hydrophobic side pocket. These data agree with our earlier in vitro investigations of desmuramylpeptide NOD2 activity, where a derivative with an unsubstituted aromatic ring exhibited weaker but still nanomolar activity. This suggests that the interactions of the hydroxy and methoxy groups are not essential for NOD2 activation but do contribute to tighter binding [13]. The complementary nature of the interaction patterns of MDP and SG8 prompted us to construct a structure-based pharmacophore model that would accurately represent the joint interaction fingerprint of both ligands. In contrast to the ligand-based approach described above, structure-based pharmacophore modeling is based on the probing of possible interaction points between the ligand and the protein. A pharmacophore feature is only added to the model when a complementary binding partner in the correct geometry is identified in the binding site. A structure-based pharmacophore model was built for the docked poses of MDP and SG8 in LigandScout. Following their alignment, only features present in both models were retained and a coat of exclusion spheres, based on the positions of the binding site residues, was added to prevent steric clashes of potential screening hits with the protein. The constructed model incorporated five pharmacophore features: a negative ionizable feature in the vicinity of the positively charged Arg823 and Arg877, two hydrogen bond acceptors predicted to interact with Trp931 and Ser933, a hydrogen bond donor predicted to interact with Glu959, and a hydrophobic feature, which was designated as optional to reduce the restrictiveness of the model (Figure 4a,b). Namely, it has been reported previously that replacement of L-alanine in MDP with its more hydrophilic analogs, such as L-serine, reduces but does not abolish the activity of NOD2, suggesting that this feature is dispensable for NOD2 activation [28]. 
The validation library of 12 active and 600 decoy molecules was used again to evaluate the performance of the model. As illustrated by Figure 4c, the model correctly identified all active compounds, while it also retrieved eleven decoys as false positives. Despite the slightly lower specificity in comparison to the ligand-based pharmacophore model, the enrichment factor (EF = 51) was still satisfactory. Importantly, the model was less restrictive and had a higher hit rate when used as a query for virtual screening of the library of commercially available compounds.

Virtual Screening. The optimized structure-based pharmacophore model found 79 hits in the library of 556,000 compounds, corresponding to an overall hit rate of 0.014%. In an additional filtering step, the hits were docked to the homology model using the same parameters as for the docking of MDP and SG8. Because the scoring functions of docking algorithms sacrifice accuracy in favor of computational efficiency, the calculated docking scores only allow for rough discrimination between active and inactive compounds and generally correlate poorly with experimental results. To reduce the proportion of false-positive hits, the docked poses were therefore rescored with the MM-GBSA method, which estimates binding free energies using molecular mechanics and continuum solvent models [42]. Twelve compounds were prioritized for purchase and experimental evaluation based on the calculated binding affinities, scaffold diversity, and manual inspection of the predicted binding modes. Only compounds predicted to interact with at least three of the four residues identified by our structure-based pharmacophore model were considered for evaluation. Table 1 summarizes the structures of the purchased hits with their pharmacophore fit scores, docking scores, and binding free energies calculated using the MM-GBSA method. All compounds contained a carboxylic acid group to match the negative ionizable feature of the pharmacophore model. Only 11 and 12 matched the optional hydrophobic feature, as indicated by their higher relative pharmacophore fit scores (0.916-0.929).

In Vitro Biological Evaluation of Virtual Screening Hits. The purchased compounds were evaluated using the commercially available HEK-Blue NOD2 reporter cell line. This cell line, based on HEK293 cells, stably expresses human NOD2 as well as an NF-κB-inducible secreted embryonic alkaline phosphatase (SEAP) reporter gene. Activation of NOD2, and subsequently of NF-κB, induces the expression and secretion of SEAP, the levels of which can be detected colorimetrically. Interestingly, although none of the compounds activated NOD2 at the highest concentration tested (500 µM; Supplementary Figure S3), two compounds, 1 and 3, exhibited a weak inhibitory effect on the activation of NOD2 by MDP and SG8 (Figure 5). Pre-treatment of HEK-Blue NOD2 cells with 1 (500 µM) reduced the observed NF-κB transcriptional activity after stimulation with MDP and SG8 by 71% and 63%, respectively. This NOD2 antagonistic activity was comparable to a 10 µM concentration of the previously reported benzimidazole NOD2 antagonist SG84 [26]. Slightly weaker antagonism of NOD2 was observed after pre-treatment with 3 (500 µM), which reduced the responses elicited by MDP and SG8 by 52% and 30%, respectively. Both compounds displayed dose-dependent antagonism of NOD2, with some activity still evident at 200 µM. Importantly, the observed antagonistic effect was not due to cytotoxicity.
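The enrichment factor can be recomputed from the screening counts given above. Note that EF definitions vary between tools: the generic early-recognition form below yields roughly 27 for these counts (12 actives and 11 decoys retrieved from 612 molecules), whereas the EF = 51 quoted above follows LigandScout's own definition:

```python
def enrichment_factor(tp: int, n_selected: int, n_actives: int, n_total: int) -> float:
    """EF = (fraction of actives among selected) / (fraction of actives in library).

    Definitions vary slightly between tools, so this generic form need not
    reproduce LigandScout's reported EF exactly.
    """
    hit_rate_selected = tp / n_selected
    hit_rate_library = n_actives / n_total
    return hit_rate_selected / hit_rate_library

# Structure-based model: 12 actives + 11 decoys retrieved from 12 + 600 molecules
print(round(enrichment_factor(tp=12, n_selected=23, n_actives=12, n_total=612), 1))
```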
Neither compound reduced the measured metabolic activity of HEK-Blue NOD2 cells at the highest concentration tested (500 µM), as determined by the 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium (MTS) method (Supplementary Figure S4).

Binding Modes of Compounds 1 and 3 in the Putative NOD2 Ligand-Binding Pocket. Our docking study identified four key pharmacophoric features shared by MDP and SG8, namely, electrostatic interactions within the positively charged pocket formed by Arg823 and Arg877, and three hydrogen bonds with Trp931, Ser933, and Glu959. The docked poses of 1 and 3 were examined in detail and compared with those of MDP and SG8 to investigate whether the predicted binding modes were consistent with the constructed pharmacophore model. No constraints were defined during the docking protocol in order not to influence the orientation of the docked ligands. Both screening hits contain an aromatic carboxylate motif that is predicted to occupy the positively charged pocket, where it interacts with Arg823 and Arg877 via electrostatic or cation-π interactions and with Trp907 via π-π stacking (Figures 6 and 7). A hydrogen bond is formed between Ser933 and the ether oxygen of 1, while the amide carbonyl oxygen forms a hydrogen bond with Trp931. In addition, the hydroxyl group on the five-membered ring forms two hydrogen bonds with Glu963 and Lys989. Interestingly, although the predicted binding conformation of 1 correlated well with its orientation in the pharmacophore model (Figure 6c), the nature of the identified interaction with Glu959 differed between the two methods and, consequently, between the docked poses of 1 and MDP/SG8. Namely, instead of the hydrogen bonding formed between Glu959 and MDP/SG8, an electrostatic interaction forms between this residue and the cationic tertiary amine of 1.

Figure 5. The NOD2 antagonistic activity of virtual screening hits 1 and 3. HEK-Blue NOD2 cells were pre-treated for 1 h with 1 or 3 before 18 h stimulation with SG8 (1 µM) or MDP (1 µM). A previously reported NOD2 antagonist, SG84, was used as the positive control [26]. The data are means ± SEM of four independent experiments and are shown as % inhibition in comparison to the response observed after stimulation with SG8 or MDP alone.

A similar binding orientation was predicted for 3 (Figure 7a,b). In addition to the electrostatic, cation-π, and π-π interactions of the aromatic carboxylate motif with Arg823 and Arg877, three hydrogen bonds form, corresponding to the hydrogen-bonding network formed by MDP and SG8. Namely, Ser933 interacts with the carbonyl oxygen of 3, while its heteroaromatic imidazole ring forms two additional hydrogen bonds with Trp931 and Glu959. Remarkably, the conformation of 3 generated by LigandScout, which was identified during virtual screening as matching the pharmacophore model, is nearly identical to the pose of 3 generated during docking (Figure 7c). Analysis of the docked poses of 1 and 3 therefore showed that both screening hits are able to form an interaction fingerprint similar to that of the compounds that served as the basis for constructing the pharmacophore model. Interestingly, the binding mode of 1 is complementary to that of SG8, while the docked pose of 3 correlates better with that of MDP (Figure 8). All four compounds show highly similar positioning of the pivotal negatively charged carboxylic acid in the positively charged pocket and of the hydrogen bond acceptor interacting with Ser933.
However, while both screening hits interact with a similar set of residues within the binding site, there are some differences in the nature of these interactions that may provide an explanation for the contrasting modes of action of these NOD2 modulators. For example, in addition to the electrostatic interaction of 1 with Glu959 mentioned above, both screening hits form a π-π stacking interaction with Trp907, an interaction not available to the D-isoGln and D-Glu moieties of MDP and SG8, respectively. In contrast to the hydrogen bond formed between Arg877 and the N-acetyl group of the carbohydrate moiety of MDP, 3 interacts with this residue via a cation-π interaction. Finally, the positioning of the hydrogen bond acceptors interacting with Trp931 differs between the screening hits and MDP/SG8. It is known that upon binding MDP or its derivatives, including SG8, NOD2 undergoes extensive ligand-induced conformational changes, which ultimately lead to downstream signal transduction. We therefore hypothesize that although our structure-based pharmacophore modeling approach correctly identified the structural features responsible for binding to the putative NOD2 ligand-binding site, it lacks the fine structural details that would enable NOD2 activation. SPR and backscattering interferometry (BSI) experiments have previously demonstrated that the individual components of MDP (dipeptide and carbohydrate) bind to purified NOD2 with nanomolar affinity but lack the NOD2-activating capacity of the entire MDP molecule [43]. This study thus showed that strong binding is necessary but not sufficient for NOD2 agonism. Similarly, conjugates of paclitaxel with an MDP analogue containing a muramic acid moiety synergistically induced TNF-α and IL-12 production in murine peritoneal macrophages [44]. Interestingly, when the muramic acid moiety was replaced with a cinnamic acid derivative, the resulting conjugate exhibited NOD2 antagonist activity with potent antitumor activity due to suppression of the inflammatory tumor microenvironment [19]. This shows that subtle structural differences can influence the mode of action of NOD2 modulators. One of the major limitations in the rational design of NOD2 modulators remains the lack of a crystal structure of NOD2 in complex with a ligand. While many potent NOD2 agonists have been developed, the vast majority are closely based on the structure of MDP and other muropeptides. As exemplified by our initial ligand-based pharmacophore modeling approach, their low structural diversity hinders the elucidation of the key structural features involved in NOD2 binding. Without means to discriminate between redundant and essential features, the generated pharmacophore model contained too many features and was therefore deemed too restrictive for virtual screening. We successfully circumvented these limitations by employing a hybrid docking-pharmacophore modeling approach. Owing to its low hit rate, the constructed structure-based model performed well as a primary screen to reduce the number of potentially active compounds and, consequently, the computational burden of the subsequent docking and MM-GBSA rescoring steps. Notably, as can be seen in Table 1, the MM-GBSA-predicted binding energies of both active hits were among the three best of all the purchased screening hits, showing that the MM-GBSA rescoring step provided a better correlation with the experimental results compared to the pharmacophore fit and Glide docking scores alone.
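For clarity, the MM-GBSA rescoring that drove hit prioritization reduces to the energy difference defined in the Methods, ΔG_bind = E_complex − (E_receptor + E_ligand); a minimal sketch with hypothetical energies:

```python
def mmgbsa_dg_bind(e_complex: float, e_receptor: float, e_ligand: float) -> float:
    """dG(bind) = E(minimized complex) - [E(receptor) + E(ligand)]."""
    return e_complex - (e_receptor + e_ligand)

# Hypothetical MM-GBSA energies (kcal/mol) for one docked pose
dg = mmgbsa_dg_bind(e_complex=-12450.3, e_receptor=-12010.8, e_ligand=-382.1)
print(f"dG_bind = {dg:.1f} kcal/mol")  # more negative -> stronger predicted binding
```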
Although the MM-GBSA method is computationally demanding, its use is therefore worthwhile, provided that an appropriate pre-filtering step, such as our pharmacophore model, is applied to the screening library prior to its use. Further experimental studies would be required to conclusively determine whether 1 and 3 interact directly with NOD2 or exert their inhibitory effect on one of the downstream signaling proteins. However, given their low structural complexity, the two novel scaffolds represent valuable starting points for further optimization to expand the currently limited library of NOD2 antagonists.

Conclusions. In conclusion, we have successfully deployed a hybrid docking and pharmacophore modeling approach to NOD2 ligand discovery. Docked poses of previously reported NOD2 agonists were used as the basis for the construction of a structure-based pharmacophore model. Two compounds, 1 and 3, which were identified by the constructed model, exhibited an inhibitory effect on MDP- and SG8-induced NOD2 activation, thus providing valuable novel NOD2 antagonist scaffolds suitable for further optimization. Interestingly, the modes of action of the compounds used in the construction of the model and of the identified virtual screening hits do not match. This suggests that, while our approach has identified the structural requirements for NOD2 binding, it does not provide sufficient structural detail for NOD2 activation. Nevertheless, it represents a promising starting point for further optimization and, to the best of our knowledge, is the first successful virtual screening campaign for the discovery of NOD2 modulators.
Development of IoT Smart Greenhouse System for Hydroponic Gardens

This study focused on the development of a smart greenhouse system for hydroponic gardens that adapts the Internet of Things and is monitored through a mobile application, as one solution to the negative effects of the world's booming population, the never-ending shrinking of arable lands, and the drastic effects of climate change on our environment. To achieve the goal of the study, the researchers created an actual hydroponic greenhouse system with fully developing plants and automation for examining and monitoring the water pH level, light, water and greenhouse temperature, and humidity, all linked to ThingSpeak. The developed SMART greenhouse monitoring system was tested and evaluated to confirm its reliability, functionality, and usability under the ISO 9126 evaluation criteria. The respondents, who included casual plant owners and experts in hydroponic gardening, were able to test and evaluate the prototype and the mobile application used to monitor the parameters, with readings of 7.77 for pH level, 83 for light, 27.94 °C for water temperature, 27 °C for greenhouse temperature, and 75% for humidity, and with a descriptive rating of Very Good for both software and hardware (mean average of 4.06), which means that the developed technology is useful and recommended. The SMART Greenhouse System for Hydroponic Gardens can be used as an alternative tool, solution, and innovative technique against food shortages due to climate change, land shortages, and poor farming environments. The proponents highly suggest the use of solar energy to power the pump, improving the prototype wiring, using a higher-end Arduino model to accommodate more sensors and devices for a larger arsenal of collected data, enclosing the device to ensure safety, and updating the mobile application with bug fixes and an e-manual for the whole system.

INTRODUCTION The fast change in climate, every nation's increasing population, and the never-ending shrinking of arable lands now necessitate innovative techniques to assure long-term agricultural and food security (Velazquez et al., 2022). Dait (2022) stated that climate change is one of the most alarming occurrences worldwide, with a severe impact mainly in the Philippines because of its insufficient capability to adapt to global warming, which will cause lower agricultural production. Agriculture is very important in the Philippine economy. It employs around 40% of Filipino workers and provides an average of 20% of the country's GDP. This production is mostly derived from agribusiness, which accounts for over 70% of total agricultural output. Crop cultivation is the most important agricultural activity. Even so, the value of agricultural production fell by 2.5 percent from January to September 2021. Accordingly, a crisis in Philippine agriculture has resulted from a significant drop in productivity, inefficiency, high production costs, and insufficient government support for the industry, among other factors; crop cultivation, designed as an alternative way of solving these types of problems, also comes with shortcomings (Briones, 2021). As a result, greenhouse agriculture is viewed as a highly viable alternative and long-term solution to future food shortages because it allows farmers and users to regulate their local environment and grow crops all year long, even in extreme weather conditions (Rayhana et al., 2020).
As cited in the study of Acharya et al. (2021), the most important crops with high potential for improving farmers' income, owing to growing demand, are vegetables, which can be planted in all possible places and ways. Accordingly, a large number of vegetables can easily be grown in a hydroponics system. Hydroponics is a method of soil-free gardening and is known as a new method of farming, as stated by Patel et al. (2018). The hydroponic system encourages plant growth by using water instead of soil, and the features that the environment provides are extremely important for the plant to develop properly. Greenhouse farming is now one of the world's fastest-growing businesses: diverse crops are produced in any season using a house-like structure made of glass or plastic materials, where the roof is usually covered with transparent material to maintain the climatic conditions necessary for plant development (such as temperature, humidity, and illumination) and to protect the plants from pests, illness, and bad environmental circumstances. As noted in the study of Ortner and Agren (2019), plant upkeep must be prioritized, and pH level, oxygen, sunshine exposure, and water level must all be considered to provide a healthy and productive environment for the plant. In this regard, a monitoring system built on a microcontroller such as the Arduino, combined with the integration and adaptation of Internet of Things technology, would be a great tool for addressing several operational and management issues and could make a real difference. The ever-evolving Internet of Things (IoT) technologies, with which some people are already broadly familiar, include smart sensors and devices, network topologies that can be applied in a variety of situations, large-scale data analytics, and intelligent, reliable decision-making, as stated in the study of Lakshmanan et al. (2020). These are a few of the advancements of IoT that are thought to be the answer to the key challenges facing greenhouse farming, such as crop management; water, pH level, and nutrient sufficiency; and climate control within the greenhouse. Furthermore, if implemented widely, IoT may be employed in a variety of situations to help farmers, croppers, and planters grow their businesses, particularly if the system is managed over wireless communication. Wired systems that manage greenhouse farms tend to be more complex and harder to maintain than wireless communication, which is now being used in smart agriculture to replace cable systems that were difficult to install and run. The goals of this study are to: (a) design and develop an IoT-based controlling and monitoring apparatus for a greenhouse for hydroponic gardens; (b) design and develop a controlling and monitoring Android mobile application for the greenhouse system using MIT App Inventor; (c) construct an IoT-based greenhouse hydroponic system that can monitor the pH level, light, and water and greenhouse temperature, including humidity; (d) develop an IoT-based greenhouse hydroponic system that manipulates the pH level and lowers the water temperature; and (e) test and evaluate the greenhouse system using ISO 9126 in terms of its functionality, reliability, and usability. Figure 1 shows the conceptual framework of the study, which includes the possible Input, Process, and Output of the research.
The input focuses on the initial investigation of the study, which includes the knowledge, hardware, and software requirements. The process part of the framework covers the development of the system, from planning up to summarizing it. Lastly, the output is the objective of the study.

Research Design. The proponents of this study employed applied research to quantify and grasp the issues and problems that individuals experience when cultivating plants in a climate-dependent setting (Kothari, 2008). The data gathered were analyzed statistically and methodically using the quantifiable data taken from the various tests within the study. Understanding the theories that support and facilitate the study allowed the researchers to visualize the difficulties with existing farming systems and how they may be remedied by developing an IoT-based Smart Greenhouse System for Hydroponic Farming. In terms of fully developing plants in this greenhouse system, the researchers examined and monitored the benefits and drawbacks of having this automated system linked to ThingSpeak. The hydroponic greenhouse sits on a six-by-twelve-foot plot of land and has a height of eight feet on the right side and six feet on the left side. The researchers wanted to discover which components and characteristics would be suitable and most appropriate for a more reliable agricultural system; employing the applicable strategy, a thorough investigation was carried out to see how items were altered to provide and aid the data collected.

Data Gathering Procedure. The proponents conducted the data-gathering procedure by observing and taking note of the changes and results that the product itself exhibited. The data range from the pH level of the water to its light, temperature, and oxygen levels, and finally to testing whether the structure and support of the greenhouse farm itself hold its weight. The results for these specific variables were gathered through a dedicated sensor for each variable, with one sensor assigned to each component the proponents aimed to test within the product. In this way, through the help of ThingSpeak, the researchers recorded the reliable data displayed in the later part of this document, which is then explained thoroughly.

Project Construction, System Model, Schematic Diagram, Control System, and Sensors Flowchart. To construct the outcome of the study, the proponents strictly followed the identified project design model and diagrams. Figure 2 shows the overall flow of how the project was constructed from the beginning up to the end, where the outcome of the study was obtained. Figure 3 shows the system model of the IoT Smart Greenhouse System and how it works automatically. The device is connected through the cloud via ThingSpeak, which is associated with the Arduino Mega that checks the pH level of the water and the plant's oxygen, light, temperature, and humidity levels through its specific sensors. Figure 4 shows the schematic diagram of the system. It includes the connection of four sensors to the Arduino Mega: DHT11, LDR module, DS18B20, and PH4502C. It also shows the connection of the relay module and the pump. The control system manages the thresholds of the system: it controls the quality of the system's outcome and ensures that the system does what it is intended to do.
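To make the threshold behavior of the control system concrete, the decision flow can be sketched as below. This is a Python simulation with stubbed sensor reads and assumed threshold values; the actual firmware runs in C/C++ on the Arduino Mega:

```python
import random
import time

PH_MAX = 8.0           # hypothetical upper pH threshold
WATER_TEMP_MAX = 31.0  # hypothetical upper water-temperature threshold (deg C)

# Stubbed sensor reads; on the real device these map to the PH4502C and
# DS18B20 inputs of the Arduino Mega.
def read_ph():         return random.uniform(6.5, 8.5)
def read_water_temp(): return random.uniform(26.0, 33.0)

def set_relay(name: str, on: bool):
    print(f"relay '{name}' -> {'ON' if on else 'OFF'}")

def control_loop(cycles: int = 3):
    for _ in range(cycles):
        ph, water_c = read_ph(), read_water_temp()
        # Relay-driven submersible pumps fire when thresholds are exceeded
        set_relay("ph_pump", on=(ph > PH_MAX))
        set_relay("cooling_pump", on=(water_c > WATER_TEMP_MAX))
        time.sleep(1)  # the device samples far less frequently

control_loop()
```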
Project Testing and Evaluation Procedures. The main purpose of this stage is to ensure that the developed project meets the expected functionality, reliability, and usability of the system as required. Santelices (2013) stated that alpha and beta testing are used to validate and verify that a system meets the technical requirements that guided its design and development. The test includes a step-by-step procedure on how the hardware components are interconnected with each other through circuits and how the developed software is integrated into the hardware so that they work as one system. Testing and operation procedures were performed and shown to the evaluators during the evaluation to ensure that all features of the system function and perform according to the required specifications. To validate the outcome of this study, the proponents conducted two processes: (1) the testing results from the prototype were compared with the results from commercialized devices, and (2) a test survey was conducted in which the proponents picked a total of ten (10) respondents to assess the prototype's functionality, dependability, and usability based on the convenience sampling technique, since this number is proven sufficient to construct a foundation of solid evaluation findings (Graglia, 2022). The survey contains questions about the functionality, usability, and reliability of the prototype. The ISO 9126 survey was adapted for the system's questionnaire, since its results reflect the acceptability of the system in terms of being functional, dependable, and usable. Some questions adapted from MARS, the Mobile Application Rating Scale, and a few questions created by the proponents were used to create the mobile application questionnaire; the proponents used the MARS criteria for the mobile application. The survey employs a five-point Likert scale, as recommended for comparable surveys, as part of the convenience sampling method (Pollfish, 2021). The Likert scale shown in Table 1, with its equivalent interpretation and mean score rating, was used in the project evaluation; it was designed to capture rating ranges after averaging the scores (Doctor & Benito, 2019). Furthermore, this study used descriptive statistics in the interpretation and analysis of the data collected, including frequencies and weighted means. A simple frequency count was applied in tallying responses, while a weighted mean was utilized to determine the average response for every criterion description. The weighted average mean for each of the three criteria was also computed by adding all the weighted means and dividing by the total number of descriptions for every criterion. Lastly, an overall weighted mean was calculated to obtain the overall quality characteristics of the system as perceived by the respondents.

Working SMART Greenhouse in a Small Farm. This study designed a working greenhouse for a small hydroponic farm. The structure shown in Figure 6 is 6 ft high on one side and 8 ft on the other, 12 ft long, and 6 ft wide. For the hydroponic system of the greenhouse, the materials used are 10 feet of black PVC pipe, 3 inches in diameter, for the plants, and 10 feet of blue PVC pipe, half an inch in diameter, for connecting the black pipes to the source.
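The weighted-mean aggregation described in the evaluation procedure above can be expressed directly in code; the Likert tallies below are hypothetical, not the study's survey data:

```python
def weighted_mean(tally):
    """tally[i] = number of respondents answering (i + 1) on the 5-point scale."""
    total = sum(tally)
    return sum((score + 1) * n for score, n in enumerate(tally)) / total

# Hypothetical tallies for three criterion descriptions (10 respondents each)
criteria = {"functionality": [0, 0, 2, 5, 3],
            "reliability":   [0, 1, 2, 4, 3],
            "usability":     [0, 0, 1, 6, 3]}

means = {name: weighted_mean(t) for name, t in criteria.items()}
overall = sum(means.values()) / len(means)   # overall weighted mean
print(means, round(overall, 2))
```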
Furthermore, the whole system is built around an Arduino Mega placed at the center of the system, which acts as the brain; the sensors, relay, pumps, and ESP8266 are connected to it so the system can be controlled automatically. Figure 7 shows the actual presentation of the whole system, which is located inside the greenhouse. It shows the front and side views of the greenhouse; the hardware components are located beside the drum of the main pump. The DHT-22 humidity sensor is positioned alongside the light sensor to measure the temperature and humidity of the greenhouse. The pH sensor, with the pH probe attached, is placed inside the drum of the main pump, where it senses the pH level of the water in the hydroponic garden. The relay modules are connected to the Arduino and are triggered automatically when the temperature or pH reading reaches a set level, at which point the submersible pumps are turned on through the relay module.

Controlling and Monitoring Android Mobile Application of the SMART Greenhouse

To achieve the required control and monitoring functions of the SMART greenhouse, the ThingSpeak platform is needed. ThingSpeak is a cloud-based, open-source Internet of Things analytics tool that allows developers to gather, visualize, and analyze live data streams (a minimal upload sketch is given after this section). Figures 10a-10c show the components and stages of the developed mobile application used to monitor the hydroponic garden within the greenhouse. The mobile application was developed using ThingSpeak and the MIT App Inventor software to achieve the goal of the study.

Tables 2-6 present actual sample data logs for monitoring the temperature, humidity, pH level, light level, and water temperature of the SMART greenhouse, recorded by the researchers during the testing stage. The data displayed during testing are explained below.

Table 2 shows that the highest temperature recorded at the time of testing was 34 °C, on May 25, 2022, at 10:31 AM; the rise was due to sunny weather at that time. The lowest temperature recorded was 26 °C. Table 3 shows the humidity sample logs during the testing period of the prototype. At 5:48 PM on May 24, 2022, the highest humidity (84%) was recorded, and on May 25, 2022, the lowest (46%) was recorded. The humidity of the place depends on the weather that day, which is why those two readings differ. Table 4 shows the pH levels recorded on the trial days. The highest recorded value was 7.19 and the lowest was -4.09. The negative result was caused by improper calibration of the pH sensor during testing; pH sensor calibration is required to correctly identify the pH level of the water. Table 5 shows the light level sample logs. The highest recorded value was 1013 on May 23, 2022, at 7:08 PM, and the lowest was 80 on May 24, 2022, at 6:10 PM. The weather is one factor affecting the light level of the greenhouse; others are the time the data were taken and the ambient light around the greenhouse.

Table 6. Water temperature data from ThingSpeak. On May 25, 2022, at 10:31 AM, the highest reading, 32.69 °C, was recorded, and on May 23, 2022, at 4:26 PM, the lowest (26.06 °C) occurred. The change in temperature was due to the climate at that time.
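For reference, a minimal ESP8266 routine for pushing one set of readings to a ThingSpeak channel is sketched below. The ThingSpeak update endpoint (GET /update?api_key=...&fieldN=...) is the documented public API; the Wi-Fi credentials, write key, and field-number mapping are placeholders, not values from the study.

#include <ESP8266WiFi.h>

const char* WIFI_SSID = "greenhouse-net";   // placeholder
const char* WIFI_PASS = "********";         // placeholder
const char* WRITE_KEY = "YOUR_WRITE_KEY";   // ThingSpeak channel write API key

// Push one set of readings to the ThingSpeak update endpoint. The free
// ThingSpeak tier accepts at most one update every 15 seconds.
void sendToThingSpeak(float airT, float hum, float ph, int light, float watT) {
  WiFiClient client;
  if (!client.connect("api.thingspeak.com", 80)) return;
  String url = String("/update?api_key=") + WRITE_KEY +
               "&field1=" + airT + "&field2=" + hum + "&field3=" + ph +
               "&field4=" + light + "&field5=" + watT;
  client.print(String("GET ") + url + " HTTP/1.1\r\n"
               "Host: api.thingspeak.com\r\n"
               "Connection: close\r\n\r\n");
}

void setup() {
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(500);  // wait for Wi-Fi
}

void loop() {
  // In the real build the readings would come from the Arduino Mega over
  // serial; fixed sample values are used here for illustration.
  sendToThingSpeak(28.5, 72.0, 7.1, 560, 29.3);
  delay(20000);  // respect the 15-second minimum update interval
}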
Additionally, the temperature inside the greenhouse also affects the temperature of the water. As displayed in Tables 2-6, the testing results are within the parameter ranges required for adequate growth of the hydroponic plants: pH of 6.5 to 8, greenhouse temperature of 26 °C to 29 °C, water temperature of 28 °C to 31 °C, and humidity of 70%, with trials done once a day.

Results of Trial Testing of the Prototype in Comparison to a Commercialized Device

To validate the outcome of the study, the proponents ran parallel series of tests on the prototype and a commercialized device and noted the comparison of the results. The computed percentage difference between the test results of the project prototype and the commercialized device was used to verify the success of the study (the formula is given after this section). Table 7 displays the results of the testing done by the proponents for the water pH level and temperature, greenhouse temperature, humidity, and light. As shown in the table, the data gathered from the prototype and the commercialized device were close to each other. For the pH level, the results fall within the required range of 6.8 to 8, as stated by Judith (2019). In the trials for water temperature, the results were close to each other and met the required range of 28 °C to 31 °C according to Robles (2022). For the greenhouse temperature data in Table 7, the results are within the 26 °C to 29 °C range, and a percentage difference no greater than 3.5% shows the precision of the prototype, according to Robles (2022). In terms of humidity, the required basis for good plant growth is no lower than 70% (AdvancedNutrients.com, 2017); the prototype's 6.8% percentage difference suggests that the humidity sensor used in the prototype is less reliable than the commercialized device. For the light trials in Table 7, the percentage difference did not exceed 2.4%, so the data gathered from the sensors can be considered precise. Using the percentage difference formula, the data support the reliability of the prototype's sensors.

Table 8 shows the different sensors included in the greenhouse system and the three trials conducted to determine whether the sensors gather data within the standard value range. Almost all the sensors read values within range; however, during trial 3 the PH4502C sensor read a negative value outside the range, because the calibration of the PH4502C was not working properly at that time. Overall, the table shows that the sensors read the data well; hence, the sensor system works successfully.

Evaluation Results

To evaluate the prototype's functionality, reliability, and usability, the proponents chose a total of ten (10) respondents, as this is enough to build a basis for reliable evaluation results (Graglia, 2022). Through the convenience sampling technique, the survey uses a five-point Likert scale, as advised for similar questionnaires (Pollfish, 2022). The responses indicate the suitability of the system in terms of being functional, reliable, and usable, using the ISO 9126 survey as an adaptation for the system's questionnaire.
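The paper does not reproduce the percentage-difference formula it applies; the standard form consistent with the comparisons reported above is

$$\text{percentage difference} = \frac{|P - C|}{(P + C)/2} \times 100\%$$

where P is the prototype reading and C is the corresponding commercialized-device reading.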
The questionnaire for the mobile application was adapted from MARS (the Mobile Application Rating Scale, developed at the Queensland University of Technology), with a few questions written solely by the proponents.

Table 9 shows the evaluation results for the system's functionality. Based on the data gathered, as rated on the Likert scale, the respondents agreed that the system functions according to its intended purpose. Table 10 displays the respondents' ratings of the system's reliability; the interpretation ranges from agree to strongly agree, indicating that the system is reliable when used correctly. Table 11 shows the results for the system's usability; overall, the respondents agreed that the system is simple and easy enough to understand and use. Table 12 used MARS to test the different aspects of the mobile application when downloaded and in use; in terms of engagement, the respondents agreed that the mobile application was engaging. Table 13 evaluated the mobile application's functionality and how it links into the system at hand; based on the results, the respondents were satisfied with the functions that lie within the application. Table 14 describes the evaluation of the mobile application's aesthetics, or UI; the consensus ranged from agree to strongly agree, so the application UI was adequate and usable for the system. Lastly, Table 15 evaluates the reliability of the information shown in the application; the respondents agreed that the displayed results were accurate and very reliable. Table 16 shows the Cronbach's alpha result for the reliability of the questionnaire given to the respondents: the survey questions had a reliability coefficient of 0.764, i.e., 76.40% (a computational sketch is given after this section).

Table 18 shows the comparative trial results obtained to gather reliable data from the system prototype and the commercialized devices bought by the proponents. Close margins were observed and are discussed further in this part of the document. Note: P indicates the data generated by the prototype; C indicates the data generated by the commercialized device. The ideal range for pH is 6.5 to 8, greenhouse temperature 26 °C to 29 °C, water temperature 28 °C to 31 °C, and humidity 70%; the trials were done once a day.

In this part of the paper, testing happened every day for three days to garner these results, and the tables display the data gathered on those days. The testing phase covered each parameter needed for this study: the pH level, the greenhouse humidity and temperature, the light, and the water temperature. According to the test results, the acquired data closely resembled the ideal parameter values needed to grow a functioning hydroponic system. The tables above likewise present the results of the functionality, reliability, and usability testing of this study. Alpha tests were done to confirm that the prototype works to the extent needed by the proponents, to establish the expected output of the prototype, and to check the operation of the components used.
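For completeness, a sketch of the Cronbach's alpha computation behind the reported 0.764 coefficient is given below. The formula is the standard one; the score matrix here is invented for illustration.

#include <cstdio>
#include <vector>

// Sample variance of a vector of values.
double sampleVariance(const std::vector<double>& x) {
  double mean = 0.0;
  for (double v : x) mean += v;
  mean /= x.size();
  double ss = 0.0;
  for (double v : x) ss += (v - mean) * (v - mean);
  return ss / (x.size() - 1);
}

// Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / var(totals)),
// where scores[r][i] is respondent r's answer to item i.
double cronbachAlpha(const std::vector<std::vector<double>>& scores) {
  const size_t nResp = scores.size();
  const size_t k = scores[0].size();
  double itemVarSum = 0.0;
  for (size_t i = 0; i < k; ++i) {
    std::vector<double> col(nResp);
    for (size_t r = 0; r < nResp; ++r) col[r] = scores[r][i];
    itemVarSum += sampleVariance(col);
  }
  std::vector<double> totals(nResp, 0.0);
  for (size_t r = 0; r < nResp; ++r)
    for (double v : scores[r]) totals[r] += v;
  return (double)k / (k - 1) * (1.0 - itemVarSum / sampleVariance(totals));
}

int main() {
  // Invented 5-point Likert answers from four respondents on three items.
  std::vector<std::vector<double>> scores = {
    {4, 5, 4}, {3, 4, 4}, {5, 5, 5}, {4, 3, 4},
  };
  std::printf("Cronbach's alpha = %.3f\n", cronbachAlpha(scores));
  return 0;
}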
Tables 10-18 display the evaluation results for the project's outcome in terms of functionality, reliability, and usability, with highly commendable ratings from the selected respondents of the study. Based on the data gathered from the series of tests and evaluations above, good agreement between the system's sensor data and the commercialized device's data was observed, and the system and its mobile application can be concluded to be highly reliable.

CONCLUSIONS AND RECOMMENDATIONS

Nowadays, many people want to grow plants on their own but have no way of doing so due to lack of space and of time for monitoring, since most are out working during the day. To address this concern, the proponents successfully designed and developed an IoT-based controlling and monitoring apparatus on a greenhouse for hydroponic gardens, as well as a controlling and monitoring Android mobile application for the greenhouse system using MIT App Inventor. The researchers constructed an IoT-based greenhouse hydroponic system that can monitor the pH level, light, water and greenhouse temperature, and humidity. An IoT-based greenhouse hydroponic system that manipulates the pH level and lowers the water temperature was also developed successfully and was tested and evaluated using ISO 9126 in terms of its functionality, reliability, and usability. The proponents identified the needs and requirements of different plants, such as lettuce, kale, and basil, when grown hydroponically versus traditionally. With the use of IoT platforms such as ThingSpeak, the convenience of automated systems such as the IoT Smart Greenhouse System is successfully demonstrated in this study. The IoT Smart Greenhouse System for Hydroponic Gardens can be concluded to be an effective system to grow and monitor plants. Without excessive space, soil, or any sort of pesticides, this study has shown that monitoring and growing plants is possible from the application itself.

As recommendations, this system can be improved by future researchers in the following aspects: (1) the use of solar panels, so that the pump does not rely on AC power alone; (2) the wiring system of the prototype should be remodeled; (3) a bigger microcontroller would be advisable to accommodate more sensors and devices for a larger arsenal of data to be gathered; (4) adopt a higher Arduino microcontroller model for this kind of project; (5) updates such as bug fixes for the application are also needed, as well as a note in the application showing the required parameter ranges to educate new users; (6) enclosures for the devices to ensure the safety of the sensors; (7) a longer period of testing, to get data that can further strengthen the reliability of the results; and (8) the ventilation should be controlled automatically when the temperature goes above or below the set values (a control sketch follows below).
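A minimal sketch of recommendation (8), hysteresis-based automatic ventilation around the 26 °C to 29 °C greenhouse range reported earlier, might look like the following; the relay pin and the exact switching band are assumptions.

const int FAN_RELAY = 8;     // assumed relay pin for the vent fan
const float T_HIGH = 29.0;   // turn ventilation on above this, C
const float T_LOW  = 27.0;   // turn it back off below this, C
bool fanOn = false;

// Hysteresis keeps the relay from chattering when the temperature
// hovers around a single set point.
void controlVentilation(float greenhouseTempC) {
  if (!fanOn && greenhouseTempC > T_HIGH) {
    fanOn = true;
  } else if (fanOn && greenhouseTempC < T_LOW) {
    fanOn = false;
  }
  digitalWrite(FAN_RELAY, fanOn ? HIGH : LOW);
}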
Conformational Isomerism Can Limit Antibody Catalysis*

Ligand binding to enzymes and antibodies is often accompanied by protein conformational changes. Although such structural adjustments may be conducive to enzyme catalysis, much less is known about their effect on reactions promoted by engineered catalytic antibodies. Crystallographic and pre-steady state kinetic analyses of antibody 34E4, which efficiently promotes the conversion of benzisoxazoles to salicylonitriles, show that the resting catalyst adopts two interconverting active-site conformations, only one of which is competent to bind substrate. In the predominant isomer, the indole side chain of TrpL91 occupies the binding site and blocks ligand access. Slow conformational isomerization of this residue, on the same time scale as catalytic turnover, creates a deep and narrow binding site that can accommodate substrate and promote proton transfer using GluH50 as a carboxylate base. Although 34E4 is among the best catalysts for the deprotonation of benzisoxazoles, its efficiency appears to be significantly limited by this conformational plasticity of its active site. Future efforts to improve this antibody might profitably focus on stabilizing the active conformation of the catalyst. Analogous strategies may also be relevant to other engineered proteins that are limited by an unfavorable conformational pre-equilibrium.

Tailored antibody catalysts have been generated for a wide variety of reactions using transition state analogs or other appropriately designed template molecules as antigens (1). Although these proteins exhibit many of the properties of authentic enzymes, including rate accelerations, substrate specificity, regioselectivity, and stereoselectivity, their efficiency generally lags behind that of their natural counterparts. Among the many factors that contribute to antibody inefficiency (2), suboptimal conformational properties of the immunoglobulin scaffold have been cited as a potentially significant limitation (3,4). Proteins are innately flexible, undergoing conformational changes over a wide range of time scales and amplitudes.
Such flexibility is believed to be important for enzyme function (5-8). Dynamic structural fluctuations can influence substrate and product binding. They also enable the catalyst to adjust to changes in the substrate as the reaction coordinate is traversed, and they provide a means to position functional groups for efficient electrostatic, nucleophilic, and acid-base catalysis. Conformational changes may even shape the effective barrier of the catalyzed reaction in some cases (9).

Antibodies undergo a similar range of conformational changes as enzymes. Switches between different rotamers of individual side chains, segmental movements of hypervariable loops, and alterations in the relative disposition of the VH and VL domains have all been observed (3,10,11). These structural adjustments increase the effective diversity of the primary immunological repertoire and are important for achieving high affinity and selective molecular recognition (12). However, such conformational changes are difficult to exploit intentionally for antibody catalysis, given the indirect nature of the immunological selection process, which optimizes binding to an imperfect transition state mimic rather than catalytic activity. In fact, affinity maturation reduces conformational flexibility in some antibodies and also increases specificity (13-16). Comparisons of germ line and mature antibodies catalyzing an oxy-Cope rearrangement indicate that such rigidification can be deleterious to catalytic efficiency (17). Nevertheless, investigations of a family of esterolytic antibodies (18) provide evidence that conformational changes can be beneficial in some instances and contribute to higher rate accelerations. In other cases, structural dynamics may influence substrate binding or product release. For example, crystallographic snapshots of the complete reaction cycle of antibody cocaine degradation visualized significant conformational changes in the active site along the reaction coordinate (19). Although substrate and products were bound in partially open conformations, the antibody active site engulfed the transition state analog more tightly, thus enabling transition state stabilization through hydrogen bonding (19).

In this study, crystallographic and kinetic approaches were employed to characterize structural changes in a catalytic antibody promoting the conversion of benzisoxazoles to salicylonitriles (Fig. 1, 1 → 3). This reaction, known as the Kemp elimination, is a valuable model system for studying proton transfer from carbon (20-22). Antibody 34E4 was generated against the cationic 2-aminobenzimidazolium hapten 4 and catalyzes this transformation with high rates (k_cat = 0.7 s^-1, k_cat/K_m = 6 × 10^3 M^-1 s^-1) and multiple turnovers (>10^3) (23).

* This work was supported, in whole or in part, by National Institutes of Health Grant GM38273 (to I. A. W.). This work was also supported by a Skaggs predoctoral fellowship and a Jairo H. Arévalo fellowship from The Scripps Research Institute graduate program (to E. W. D.) and the ETH Zürich (to D. H.). This is publication 18451-MB from the Scripps Research Institute. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
Combined modeling (24), mutagenesis (25), and structural (26) work implicate the carboxylate group of the heavy-chain glutamate 50 (GluH50, Kabat numbering) as the catalytic base. These studies further suggest that high catalytic efficiency in this system derives, at least in part, from successful exploitation of extensive hydrogen bonding, π-stacking, and van der Waals interactions to align base and substrate.

The possibility that conformational changes might be important in 34E4 is suggested by sequence comparisons with the recently characterized, noncatalytic antibody SPE7 (27,28). This antibody was raised against a 2,4-dinitrophenyl hapten, but is a highly dynamic receptor, which binds multiple ligands via different active-site conformations (27). The variable light chains of 34E4 and SPE7 are 93% identical, and the backbone atoms in their respective framework regions have a root mean square deviation of only 0.4 Å. Although their variable heavy chain domains do not appear to be closely related (52% identity, 1.1 Å root mean square deviation), key residues lining the binding pocket are also conserved. Indeed, consistent with these similarities, we find that 34E4 is subject to conformational changes that adversely affect catalytic efficiency. Specifically, the apo antibody exists predominantly in an unreactive form that must undergo slow and significant structural rearrangement to accommodate its substrate. Comparisons with other structurally characterized catalytic antibodies suggest that this behavior may be quite general. By stabilizing the active form of the catalyst, it may be possible to enhance efficacy in these systems significantly.

34E4 Fab Preparation, Crystallization, and Data Collection-The apo 34E4 Fab was produced and purified as a murine-human chimera, as described previously (25). The Fab, concentrated to 15 mg/ml, was crystallized by the sitting drop vapor diffusion method at 22.5 °C. The Fab crystallized from 30% MPEG 2000, 0.1 M sodium acetate (pH 4.3), 0.2 M (NH4)2SO4 in orthorhombic space group P2₁2₁2₁ (crystal form A) and from 32% PEG 2000, 0.2 M CdCl2, 0.1 M Bis-Tris propane (pH 7.0), in triclinic space group P1 (crystal form B). For data collection, the crystals were flash-cooled to 100 K using 25% glycerol as cryoprotectant. Data were collected at ALS Berkeley and SSRL Stanford. Data processing and scaling were performed in HKL2000 (29). The data processing statistics are summarized in Table 1.

Structure Determination and Refinement-The structures of unliganded 34E4 Fab were determined by molecular replacement using the coordinates of the previously determined hapten complex (26) with the program Phaser (30). The structures were refined by alternating cycles of model building with the program O (31) and refinement with Refmac5 (32). During refinement, tight noncrystallographic symmetry restraints were applied to all main-chain atoms of the Fab molecules in the asymmetric unit, except for some loop regions. The final refinement statistics are shown in Table 1. The quality of the structures was constantly monitored using the programs MolProbity (33), WHAT IF (34), and PROCHECK (35).

Fluorescence Titration-Thermodynamic dissociation constants (K_d) for the ligand·Fab complex were determined as described previously (36). A titration curve was generated by stepwise addition of ligand (0-3 μM) to a dilute solution of the chimeric Fab fragment (50 nM) and subsequent measurement of the fluorescence.
The excitation and emission wavelengths were 290 and 340 nm, respectively, and the corresponding band pass was either 8 or 16 nm. The high voltage of the detector was set to 800-900 V. These measurements were carried out in 20 mM Tris-HCl (pH 7.0) and 100 mM NaCl at 20.0 ± 0.1 °C on an Aminco-Bowman Series 2 luminescence spectrometer (SLM Aminco, Urbana, IL). From the measurements of the observed antibody fluorescence (F) at various concentrations of the ligand (L_T), the dissociation constant (K_d) was calculated by nonlinear least squares fitting of the data to Equation 1,

$$F = F_E + (F_{EL} - F_E)\,\frac{(E_T + L_T + K_d) - \sqrt{(E_T + L_T + K_d)^2 - 4\,E_T L_T}}{2\,E_T} \qquad \text{(Eq. 1)}$$

where E_T is the total Fab concentration, F_E is the observed fluorescence intensity without ligand, and F_EL is the fluorescence intensity of the Fab·ligand complex at infinite ligand concentration.

Stopped-flow Methods-Kinetic measurements were performed on an SX.18MV stopped-flow fluorescence instrument (Applied Photophysics, Leatherhead, UK). The Fab fragment (25-50 nM) was allowed to react with ligand that was in at least 10-fold excess at equilibrium. Reactions were performed in 20 mM Tris-HCl (pH 7.0) and 100 mM NaCl at 20.0 ± 0.1 °C. Samples were excited at 290 nm, and fluorescence was monitored through a 320 nm cutoff filter. Data points from 19 to 35 reactions were averaged to improve the signal-to-noise ratio. Averaged data were fit to a single exponential or, where systematic deviations in the residuals from the fitted curve were observed, to a double exponential, using Origin 6.1 (Origin Lab Corp., Northampton, MA). In particular, the slow phase of hapten binding (Fig. 5B) is in excellent agreement with a double exponential function, as confirmed by decreased χ² and increased R² values with respect to a single exponential fit. The extracted rate constants, designated as k_fast, k_slow, and k_slowest (k_obs = 1/τ, where τ is the relaxation time), are functions of a combination of terms, including ligand concentration and microscopic rate constants for individual kinetic steps. Values of k_fast were fitted to a standard bimolecular association model: 1/τ₁ = k_fast = k₋₂ + k₂[L], where [L] is the ligand concentration. Values of k_slow were fitted to a pre-equilibrium model: 1/τ₂ = k_slow = k₁ + k₋₁·K_s/([L] + K_s), where K_s = k₋₂/k₂ (37).

(Figure 1 legend: The base abstracts the C-3 proton from the substrate (1) and induces its decomposition. Antibody 34E4 was raised against hapten 4 and efficiently catalyzes the Kemp elimination. Compound 5 is a hapten derivative, which was used for measuring the hapten binding kinetics, and compound 6 is a noncleavable substrate analog, which served as a model compound to determine the substrate binding kinetics.)

RESULTS

Conformational Isomerism in 34E4-The crystal structure of the unliganded Fab fragment of antibody 34E4 was determined from an orthorhombic crystal form at 2.6 Å resolution and from a related triclinic form at 2.6 Å resolution by molecular replacement (Table 1). Both crystal forms contain a large number of noncrystallographic symmetry-related Fab molecules (four and eight, respectively) in the crystal asymmetric unit, which allows structural differences due to variable crystal packing environments to be distinguished from conformational changes associated with hapten binding. A detailed quantitative comparison of the free and hapten-bound (26) antibody shows that ligand binding does not shift the relative disposition of the VL and VH domains or cause any major movement of individual CDR loops.
The most striking structural difference between liganded and unliganded 34E4 is the orientation of the TrpL91 indole ring (Figs. 2 and 3). In 11 copies of the 12 unliganded Fabs, the side chain of TrpL91 predominantly assumes the m95° rotamer, with χ1 of −60° and χ2 of +95° (38). This rotamer represents the most common side-chain conformation of tryptophan in a data base of 240 high quality protein structures at 1.7 Å resolution or better (38). In only one of the 12 unliganded Fabs does TrpL91 adopt the less common t-105° rotamer, with χ1 of 180° and χ2 of −105° (38). This t-105° rotamer is the only one observed for TrpL91 in the previously determined hapten complex (26). The switch between the m95° and t-105° rotamers substantially remodels the binding pocket and converts it from a shallow indentation on the surface of the protein to a deep cavity that can accommodate ligand (Fig. 4, A and B). Because the electron density of the TrpL91 side chain of some apo Fabs was somewhat weaker than that of the surrounding residues, indicating greater mobility and/or multiple conformations, final placement of the indole side chain into the electron density was guided by a difference Fourier omit map, where the TrpL91 side chain was truncated to the Cβ atom. The resulting model then reflects the most prevalent conformation for this residue. Nearby HisH100B is the only other active-site residue to undergo significant movement. However, the rearrangement of its side chain is less pronounced than that of TrpL91.

(Table 1 footnotes: (e) R_free is calculated as for R_cryst, but from 5% of the data that was not used for refinement. (f) r.m.s.d., root mean square deviation. (g) ThrL51 and SerL93 are the only residues in a disallowed region, as observed in the hapten complex structures. ThrL51 is in a well defined γ-turn, as in almost all other antibodies (46).)

Triphasic Kinetics of Hapten Binding-Fab 34E4 binds hapten derivative 5 (Fig. 1) with high affinity (K_d = 1.5 nM) (25). Ligand association is accompanied by changes in intrinsic Fab fluorescence that are resolvable by stopped-flow techniques. Typical kinetic traces are shown in Fig. 5. The fluorescence transients can be roughly divided into three phases, each of which is described by a distinct exponential function. The fast phase of the reaction appears to be complete within ~0.1 s with an observed rate constant, k_fast, of 54 ± 4 s^-1 (Fig. 5A). Attempts to fit the subsequent slower process to a single exponential resulted in systematic deviations in the residuals, whereas a double exponential function gave an excellent fit to the data (Fig. 5B), with rate constants k_slow = 0.79 ± 0.03 s^-1 and k_slowest = 0.130 ± 0.005 s^-1 for the two additional phases. The observation of triphasic transients is consistent with two possible mechanisms. In one (Fig. 6A), two conformations of the antibody, Fab and Fab*, with different reactivity toward the ligand, are in equilibrium with each other and with the respective ligand-bound complexes (12). Induced fit (Fig. 6A, clockwise from top left) and pre-equilibrium (counterclockwise from top left) mechanisms constitute the two limiting cases of this cyclic scheme. In an alternative three-step linear mechanism (Fig. 6B), only one antibody conformer (Fab*) is capable of ligand binding. Once the pocket is occupied, the initially formed Fab*·L complex subsequently undergoes an additional isomerization to form Fab**·L.
In this scheme, the central binding step in conjunction with either the initial or final equilibria alone would correspond to the limiting pre-equilibrium and induced fit mechanisms, respectively. In principle, an induced fit mechanism can be distinguished from a pre-equilibrium mechanism from the dependence of the slow phase on ligand concentration (37). However, because the change in k_slow is most pronounced at ligand concentrations below the dissociation constant of the Fab·ligand complex, and negligible at saturating ligand concentrations (12), it was not possible to resolve this issue with the tight binding hapten 5.

Biphasic Kinetics of Substrate Analog Binding-Benzotriazole 6 (Fig. 1) is a substrate analog that binds to the 34E4 Fab fragment with lower affinity (K_d = 643 ± 26 nM) than the hapten, as judged by fluorescence titration. The weaker affinity of this ligand was exploited to examine the concentration dependence of the rate constants associated with binding. For complex formation between 34E4 and 6, only the fast (~0.1 s) and slow (~5 s) phases were observed (Fig. 7A). At 5 μM, k_fast and k_slow were 69.0 ± 0.5 and 0.68 ± 0.03 s^-1, respectively, in good agreement with the k_fast and k_slow values obtained with 5. Although a very slow phase with low signal amplitude cannot be definitively ruled out, the available data support a simpler mechanism, such as an induced fit or pre-equilibrium pathway, for association of the substrate analog.

To distinguish between the induced fit and pre-equilibrium mechanisms, the influence of ligand concentration on the fast and slow phases was determined. The fast phase of the reaction was linearly dependent on the concentration of the substrate analog (Fig. 7B), consistent with a simple bimolecular association step for ligand binding to the antibody with a second-order association rate constant (k₂) of (1.23 ± 0.01) × 10^7 M^-1 s^-1 and a dissociation rate constant (k₋₂) of 6.96 ± 0.12 s^-1 (39). The resulting affinity constant (K₂ = k₂/k₋₂) for the Fab·6 complex is (1.77 ± 0.01) × 10^6 M^-1. By contrast, the rate constant for the slow phase, k_slow, decreased with increasing concentration of the substrate analog (Fig. 7C), which is consistent with a pre-equilibrium mechanism (37). The slow phase, when fitted to a pre-equilibrium model, gave values of 0.50 ± 0.02 s^-1 for the forward rate constant (k₁) and 1.55 ± 0.08 s^-1 for the reverse rate constant (k₋₁). Thus, the Fab apparently exists as two conformational isomers in solution in an equilibrium ratio of ~3:1, which favors the nonbinding form. The overall association constant, defined as K_a = K₂/(1 + K₁⁻¹) (39), is (4.3 ± 0.5) × 10^5 M^-1, which corresponds to a dissociation constant (K_d = 1/K_a) of 2.3 ± 0.3 μM. An increase in k_slow at very high ligand concentrations relative to K_d (data not shown) suggests that the "inactive" Fab conformation may have weak affinity for the ligand or that ligand may induce another conformational change (12). This finding may account for the small difference between the calculated dissociation constant and the equilibrium value determined by fluorescence titration.

DISCUSSION

Ligand binding to 34E4 does not conform precisely to standard "lock and key" or "induced fit" mechanisms. Instead, the antibody exists as a mixture of "closed" and "open" forms in its resting state, only the latter of which is competent to bind ligand.
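As a consistency check on the constants reported above, substituting the fitted rate constants into the stated relation reproduces the quoted overall affinity:

$$K_1 = \frac{k_1}{k_{-1}} = \frac{0.50}{1.55} \approx 0.32, \qquad 1 + K_1^{-1} \approx 4.1,$$

$$K_a = \frac{K_2}{1 + K_1^{-1}} = \frac{1.77 \times 10^{6}\ \mathrm{M^{-1}}}{4.1} \approx 4.3 \times 10^{5}\ \mathrm{M^{-1}}, \qquad K_d = 1/K_a \approx 2.3\ \mu\mathrm{M},$$

matching the reported values within rounding.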
In the closed state, which predominates in the absence of ligand, the bulky aromatic side chain of TrpL91 occupies the binding pocket, thereby blocking substrate access. When the indole moves out of the binding site, a deep pocket is created that can accommodate substrate and, hence, position it appropriately for rapid proton abstraction by GluH50. The initial burst phase detected by stopped flow, which is completed in about 50 ms, can be attributed to ligand binding to the small fraction of antibody present in its open conformation at equilibrium. The change in fluorescence upon complex formation during this phase (an increase in the case of cationic hapten derivative 5 and a large decrease with the neutral substrate analog 6) is consistent with stacking of the ligand against TrpL91 in the resulting complex. In the subsequent slow phase, which proceeds over several seconds, association of the ligand depends on re-equilibration of the isomeric antibody conformers. The structural data argue against the latter process being induced by ligand, because binding to the closed form of the antibody would hinder the TrpL91 rotameric switch. In contrast, the very slow phase, which is apparent only with the original hapten, probably reflects an induced fit adjustment of the protein around the bound ligand to form the high-affinity complex.

The binding properties of 34E4 resemble those of the noncatalytic antibody SPE7 (27,28), which exploits multiple conformations to permit binding to several ligands with micromolar to nanomolar affinity. Consistent with their nearly identical light chains and similarly configured active sites, both antibodies exhibit triphasic kinetics for binding their cognate haptens and simpler biphasic kinetics with lower affinity ligands. The relative magnitudes of the derived microscopic rate constants are similar, although the rates of the pre-equilibrium and the very slow induced fit steps are roughly 10-fold slower in 34E4 than SPE7. In structural terms, the conformational changes in 34E4 are less dramatic than in SPE7. However, in both antibodies, the switch between the t-105° and m95° rotamers of structurally analogous tryptophans controls access of the aromatic haptens to the active site. Moreover, once bound, the planar ligands are sandwiched between this indole ring (TrpL91 or TrpL93) and a tyrosine (TyrH100D in 34E4 or TyrH105 in SPE7).

What are the consequences of a pre-equilibrium mechanism of 34E4 for catalytic efficiency? In the absence of ligand, the inactive form of the antibody predominates and diminishes the concentration of functional active sites. As a consequence, catalytic efficiency (k_cat/K_m) will be reduced by a factor of at least 3 or 4. Furthermore, conversion of the inactive into the active conformer is relatively slow. Given the similarity of the rate constant for this process (0.5 s^-1) and the k_cat value for proton transfer for the Fab (0.45 s^-1) (25), conformational isomerism may (partially) limit turnover of the catalyst. Whether conformational dynamics have a direct influence on the actual proton transfer step is uncertain, but the very slow induced fit adjustment of the binding site to hapten derivative 5 is suggestive. The complementarity between the protein and the substrate would be similarly expected to increase as the proton is transferred to GluH50 in the transition state, which could lead to a tightening of the structure.
Unliganded and liganded forms of 18 other catalytic antibodies have been characterized structurally and deposited in the Protein Data Bank (40). Conformational changes are evident in nine. These recapitulate the structural adjustments seen in many noncatalytic antibodies (3,10,11) and range from rotations of single side chains to rigid-body movements of entire CDR loops. In three catalysts, an aromatic side chain flips into the binding pocket as in 34E4, occupying or partly overlapping with the substrate-binding site. In antibody 4C6, which catalyzes a cationic cyclization reaction, the TrpL89 indole regulates ligand binding in this way (41). In antibodies CNJ206 (42,43) and 5C8 (44), which promote an esterolytic reaction and a disfavored cyclization of trans-epoxy alcohols, respectively, the mobile residue is a tyrosine. Although these catalysts have not been characterized by pre-steady state kinetics, they would also be expected to display a pre-equilibrium mechanism like 34E4.

Achieving enzyme-like efficiency in an immunoglobulin scaffold still remains a formidable challenge. General problems associated with imperfect hapten design, indirect selection, and limited screening procedures have been recognized (2). Here we show that efficacy can also be compromised by an unfavorable conformational equilibrium that transforms the active form of the catalyst into an inactive form that cannot bind substrate. As aromatic moieties are prevalent in haptens used to elicit high-affinity antibodies, such structural rearrangements may be inevitable (45). With respect to catalysis by antibody 34E4, stabilization of the ligand-competent form by active-site redesign to fix TrpL91 in place may augment its catalytic power. More generally, protein engineering to attenuate the problem of adverse structural rearrangements promises to yield better antibody catalysts, as well as other protein catalysts.
REVEILLE Transcription Factors Contribute to the Nighttime Accumulation of Anthocyanins in 'Red Zaosu' (Pyrus bretschneideri Rehd.) Pear Fruit Skin

Anthocyanin biosynthesis exhibits a rhythmic oscillation pattern in some plants. To investigate the correlation between the oscillatory regulatory network and anthocyanin biosynthesis in pear, anthocyanin accumulation and the expression patterns of anthocyanin late biosynthetic genes (ALBGs) were investigated in the fruit skin of 'Red Zaosu' (Pyrus bretschneideri Rehd.). Anthocyanin accumulated mainly during the night over three continuous days in the fruit skin, and the ALBGs' expression patterns in 'Red Zaosu' fruit skin were oscillatory. However, the expression levels of typical anthocyanin-related transcription factors did not follow this pattern. Here, we found that the expression patterns of four PbREVEILLEs (PbRVEs), members of a class of atypical anthocyanin-regulating MYBs, were consistent with those of the ALBGs in 'Red Zaosu' fruit skin over three continuous days. Additionally, transient expression assays indicated that the four PbRVEs promoted anthocyanin biosynthesis by regulating the expression of the anthocyanin biosynthetic genes encoding dihydroflavonol-4-reductase (DFR) and anthocyanidin synthase (ANS) in red pear fruit skin, which was verified using a dual-luciferase reporter assay. Moreover, a yeast one-hybrid assay indicated that PbRVE1a, 1b and 7 directly bound to the PbDFR and PbANS promoters. Thus, PbRVEs promote anthocyanin accumulation at night by up-regulating the expression levels of PbDFR and PbANS in 'Red Zaosu' fruit skin.

Introduction

Anthocyanins are a group of water-soluble flavonoid metabolites that exist widely in plants [1]. Anthocyanins play various roles in plant growth and development [2-5]. In plants, the antioxidant capacities of anthocyanins rely on the extent of B-ring hydroxylation and the type and degree of acylation and glycosylation [2-4]. Anthocyanins play important roles in promoting plant reproduction by presenting bright colors to pollinators and seed spreaders [2,5]. Additionally, anthocyanins have human health-related benefits [6-9]. Therefore, investigating anthocyanin accumulation in fruit is worthwhile.

In plants, the circadian system coordinates physiology and metabolism with the most appropriate or favorable time of day or season [23]. Plants can save energy and resources by regulating reaction times; therefore, the circadian system is crucial to the health and survival of plants. The anthocyanin biosynthetic pattern shows diurnal oscillations in some plants. In A. thaliana, the expression levels of phenylalanine ammonia lyase, chalcone synthase, chalcone isomerase, and DFR are regulated by the circadian clock [24]. MYBL2 and MYBD are regulated by circadian rhythms and involved in the anthocyanin biosynthetic pathway of A. thaliana [25,26]. Single-MYB TFs named REVEILLEs (LHY-CCA1-LIKE) act as important regulators of the circadian clockwork [27]. Moreover, in A. thaliana, an atypical MYB (single-MYB TF) named AtRVE8 regulates anthocyanin biosynthesis by binding directly to the promoters of anthocyanin structural genes, and AtRVE8 also regulates the expression levels of genes in response to diurnal oscillations [27]. However, the rhythmic regulation of anthocyanin biosynthesis remains largely unknown in most fruit. In this study, we isolated four RVEs of 'Red Zaosu' pear (Pyrus bretschneideri Rehd.)
because of the linkage between their expression patterns and nighttime increases in anthocyanin biosynthesis. Moreover, we found that the expression levels of the four RVEs exhibited rhythmic oscillation patterns in 'Red Zaosu' fruit skin. We then investigated the functions of the PbRVEs in anthocyanin accumulation in pear fruit skin. This study confirmed that PbRVEs promote anthocyanin accumulation by up-regulating the expression levels of PbDFR and PbANS in pear fruit skin.

The Anthocyanin Content Oscillated Diurnally and Mainly Increased Overnight in 'Red Zaosu' Fruit Skin

To determine whether anthocyanin accumulates during the daytime or nighttime in red-skinned pear fruit, the anthocyanin content of 'Red Zaosu' fruit skin was measured during the daytime (from sunrise to sunset). Moreover, to accurately observe anthocyanin accumulation in pear fruit skin over a short period, color-faded bagged fruit were used because of their low background level of anthocyanin. Color-faded fruit of 'Red Zaosu' were exposed to sunlight to re-accumulate anthocyanin over three continuous days. The anthocyanin content in 'Red Zaosu' fruit skin rhythmically increased after sunrise and then decreased from noon to sunset (Figure 1a). However, the anthocyanin content between sunrise and sunset was not significantly different during the course of a day. This phenomenon was also observed in three other red pear varieties (Supplementary Figure S1a). However, a significant increase in the anthocyanin content of 'Red Zaosu' fruit skin was detected from sunset to the next sunrise over three continuous days (Figure 1a). Thus, the anthocyanin content in the skin of red pear fruit mainly accumulated during the night.

(Figure 1 legend, in part: gene accessions are listed in Supplementary Table S1. HAS, hours after sunrise of day 1. Error bars represent the standard errors (SEs) of the means (n = 3). Data in (a) were analyzed using a one-way analysis of variance (p < 0.05); significant differences are indicated by different lowercase letters.)

We further investigated the expression patterns of the anthocyanin late biosynthetic genes (ALBGs), including PbANS, PbDFR and PbUFGT, and a typical anthocyanin transporter gene, PbGSTF12, in the fruit skin of 'Red Zaosu' over three continuous days. The expression levels of the ALBGs in the fruit skin of 'Red Zaosu' rhythmically decreased from sunrise to sunset and then increased until the next sunrise over the three continuous days (Figure 1b). A significant nighttime increase in expression was observed for each of the ALBGs, but not for PbGSTF12, over the three continuous days (Figure 1b). Thus, the accumulation of anthocyanin in 'Red Zaosu' fruit skin occurred mainly during the night rather than during the day.

The Expression Patterns of Candidate REVEILLE (RVE) TFs Correlated with the Nighttime Increase in the Anthocyanin Level in 'Red Zaosu' Fruit Skin

To identify candidate regulators of the nighttime increases in anthocyanin in 'Red Zaosu' pear fruit skin, the expression levels of typical anthocyanin-related TFs, including PbMYB9, PbMYB10, PbMYB10b, PbbHLH3, PbbHLH33a and PbbHLH33b, were first investigated in the fruit skin of 'Red Zaosu' over three continuous days (Figure 2). Surprisingly, none of these TFs showed an expression pattern similar to that of the ALBGs (Figure 2). Therefore, we further focused on the RVEs because of the linkage between their expression patterns and anthocyanin biosynthesis [27]. Seven selected pear RVEs were isolated from the Chinese pear genome [22] (http://peargenome.njau.edu.cn/, accessed March 1, 2018).
To analyze the relationship between the PbRVE and AtRVE proteins, a phylogenetic tree was constructed using the Neighbor-Joining method (Figure 3a). The PbRVE proteins were classified into two subgroups: the PbRVE1s and PbRVE7 clustered into type I, while the PbRVE3s, PbRVE6 and PbRVE8 clustered into type II (Figure 3a). To identify which PbRVEs participate in anthocyanin accumulation during the night, we analyzed the expression patterns of the PbRVEs in the skin of 'Red Zaosu' fruit at sunrise and sunset over the three continuous days. The expression levels of PbRVE1a, 1b, 7 and 8, but not PbRVE3a, 3b and 6, significantly increased during the nighttime in the skin of 'Red Zaosu' fruit (Figure 3b). Therefore, the expression levels of PbRVE1a, 1b, 7 and 8 in the skin of 'Red Zaosu' fruit were further investigated over three continuous days. PbRVE1a, 1b, 7 and 8 expression levels peaked at dawn and then decreased until sunset during each day (Figure 3c). Furthermore, this expression pattern was also found in the skins of other red pear cultivars (Supplementary Figure S1b). Moreover, at sunrise and sunset of each of the three continuous days, significant correlations were observed between PbRVE and ALBG expression levels in 'Red Zaosu' fruit skin, while the expression levels of PbMYB9, PbMYB10, PbMYB10b and PbbHLH33a were only slightly correlated with those of PbANS, PbDFR and PbUFGT (Table 1). A multiple alignment showed that all the candidate PbRVEs and AtRVEs contain the conserved SHAQK[Y/F]F motif in the DNA-binding domains of their N-terminal regions (Figure 3d). Thus, PbRVE1a, 1b, 7 and 8 were selected as candidate genes because of the high correlations between their expression and that of the ALBGs in pear fruit skin.

(Table 1 legend: Pairwise correlation coefficients between the expression levels of anthocyanin-related TFs and those of ALBGs in the fruit skin of 'Red Zaosu' pear over three continuous days. The NCBI accessions of the anthocyanin-related genes are listed in Supplementary Table S1. Significant correlation coefficients are indicated in bold. The data were analyzed using SPSS 20. *: correlation significant at the 0.05 level (p < 0.05, two-tailed); **: correlation significant at the 0.01 level (p < 0.01, two-tailed).)

Overexpression of PbRVEs in 'Zaosu' Pear Fruit Promoted Anthocyanin Accumulation

To investigate the functions of the PbRVEs in anthocyanin regulation, these candidates were transiently overexpressed via Agrobacterium infiltration of the skin of 'Zaosu' fruitlets. The validity of the 'Zaosu' fruitlet infection was confirmed by monitoring the β-glucuronidase gene (GUS) signal (Supplementary Figure S2). The transient overexpression of PbRVE1a, 1b, 7 and 8 independently in 'Zaosu' fruitlet skins increased anthocyanin accumulation (Figure 4). However, the promotive efficiency among these PbRVEs varied (Figure 4a,b). The overexpression of PbRVE1b resulted in intense pigmentation of the pear fruitlet skins (4.25 times darker than controls), whereas lighter pigmentation was observed when PbRVE1a, 7, and 8 were independently overexpressed (from 2.10 to 2.84 times darker than controls) (Figure 4c). Additionally, virus-induced gene silencing (VIGS) of PbRVE1a, 1b, 7 and 8 independently in 'Palacer' (P. communis L.) fruitlet skins decreased anthocyanin accumulation (Supplementary Figure S3).
The expression levels of PbDFR and PbANS significantly increased in PbRVE-overexpression (OE) 'Zaosu' fruitlet skins, especially PbRVE1b-OE skins, but the expression level of PbUFGT was not affected (Figure 4c). Thus, PbRVEs promote the expression of PbDFR and PbANS to increase anthocyanin accumulation in 'Zaosu' pear fruit.

To determine whether the PbRVEs regulate PbDFR, PbANS and PbUFGT directly, yeast one-hybrid (Y1H) tests were conducted. PbRVE1a, 1b, and 7 bound directly to the PbDFR and PbANS promoters (Figure 5a). However, direct interactions between the PbRVEs and PbUFGT were not detected using the Y1H test (Figure 5a). To determine the effects of the PbRVEs on PbDFR, PbANS and PbUFGT, the promoter regions of PbDFR, PbANS and PbUFGT were used in a dual-luciferase assay system in Nicotiana benthamiana leaves. Infiltration with PbRVEs activated the promoters of PbDFR, PbANS and PbUFGT, and PbRVE1b showed a strong ability to activate all three promoters (Figure 5b). Thus, the PbRVEs appear to directly activate the promoters of PbDFR, PbANS and PbUFGT, resulting in higher anthocyanin accumulation.

PbRVE1a, 1b, 7 and 8 Expression Levels Correlated with Anthocyanin Accumulation during the Nighttime in 'Red Zaosu' Pear Fruit Skin

Typical anthocyanin biosynthesis-regulating TFs, such as MYB10 and HY5, play important roles in the anthocyanin biosynthetic pathways of fruit [1,21,28]. In A. thaliana, HY5 promotes anthocyanin biosynthesis by binding directly to the promoter regions of ALBGs, such as DFR, LDOX and UF3GT [1]. MYB10 positively activates DFR in the anthocyanin biosynthesis of apple [28,29]. MYB10 and MYB10b have positive functions in regulating anthocyanin biosynthesis and accumulation in pear [20,30,31]. However, daily fluctuations in expression were not observed for the typical anthocyanin-related TFs involved in anthocyanin biosynthesis (Figure 2). Therefore, these typical anthocyanin-related TFs are not the main elements active in the anthocyanin biosynthesis pathway during the nighttime in 'Red Zaosu' fruit skin. Consequently, in this study, we investigated the TFs involved in the overnight accumulation of anthocyanins in 'Red Zaosu' fruit skin.

The single MYB-like TF AtRVE8 has been identified as an activator of the anthocyanin biosynthetic pathway in A. thaliana [27]. Based on the phylogenetic tree and the expression pattern analysis of the PbRVEs and AtRVEs, PbRVE1a, 1b, 7 and 8 were selected for further investigation (Figure 3a,b). The expression patterns of PbRVE1a, 1b, 7 and 8 showed diurnal oscillations and increased during the nighttime in 'Red Zaosu' fruit (Figure 3c). In other red pears, anthocyanin did not accumulate in the daytime (Supplementary Figure S1a). Moreover, the expression patterns of PbRVE1a, 1b, 7 and 8 peaked near dawn in red pear fruit skin (Figure 3c, Supplementary Figure S1b). This result was consistent with the occurrence of anthocyanin accumulation during the nighttime in 'Red Zaosu' pear fruit skin (Figure 1a). The data indicate that PbRVE1a, 1b, 7, and 8 are potential TFs involved in the nighttime increase in anthocyanin accumulation in 'Red Zaosu' pear fruit.

PbRVEs Promoted Anthocyanin Accumulation by Up-Regulating the Expression Levels of PbDFR and PbANS in Pear Fruit Skin

ALBGs (such as DFR, ANS and UFGT) are involved in the anthocyanin biosynthetic pathways of fruits [13].
The expression levels of anthocyanin-related structural genes exhibit diurnal oscillation patterns and appear to be regulated by the circadian clock in A. thaliana [24]. Additionally, in A. thaliana, the expression levels of anthocyanin-related genes appear to change during light/dark cycles [27]. AtRVE8 up-regulates the expression of anthocyanin biosynthetic genes in RVE8-OE A. thaliana plants [27]. In this study, the expression levels of PbDFR, PbANS and PbUFGT exhibited diurnal oscillations and increased during the night in 'Red Zaosu' pear fruit skin (Figure 1b). The transient overexpression assay showed that the PbRVEs had different abilities to up-regulate the expression levels of the structural genes (Figure 4). According to the transient overexpression assay, PbRVE1b had a stronger ability than the other RVEs to increase anthocyanin biosynthesis in pear fruitlet skins (Figure 4a,b). Furthermore, transient VIGS assays indicated that anthocyanin did not accumulate when PbRVE1b was silenced in 'Palacer' pear fruitlet skin (Supplementary Figure S3). In A. thaliana, RVE8 directly binds and regulates anthocyanin structural gene promoters in response to the diurnal oscillation in anthocyanin accumulation [27,32]. However, the Y1H assay verified that PbRVE1a, 1b and 7, but not PbRVE8, bound directly to the promoters of PbDFR and PbANS (Figure 5a). The binding of PbRVE8 to the promoters of PbDFR and PbANS in pear may be precluded by, or require, other proteins [27]. In this study, the correlation analysis indicated that the expression levels of the PbRVEs were significantly positively correlated with those of PbDFR, PbANS and PbUFGT (Table 1). Thus, we inferred that PbDFR and PbANS are direct downstream targets of PbRVE1a, 1b and 7 in 'Red Zaosu' pear. Using the dual-luciferase assay, we determined that the expression of PbRVE1b increases the activities of the PbDFR, PbANS and PbUFGT promoters (Figure 5b). However, PbRVE1a and PbRVE1b did not bind directly to the PbUFGT promoter (Figure 5a). This result was consistent with the transient overexpression of the PbRVEs (Figure 4c). We speculated that PbRVE1a and PbRVE1b do not directly act on the PbUFGT promoter in pear; in N. benthamiana leaves, RVE1a and RVE1b may interact with other factors to activate the UFGT promoter [33-35]. Thus, PbRVE1a, 1b and 7 increase anthocyanin accumulation by directly binding and activating PbDFR and PbANS in pear fruit.

Plant Material and Treatments

The 'Red Zaosu' pear (P. bretschneideri Rehd.) is a bud sport of 'Zaosu' pear and has characteristically red fruit and leaves. The regulatory mechanism of anthocyanin biosynthesis in 'Red Zaosu' has been studied [20,36]; therefore, we chose 'Red Zaosu' to investigate the diurnal accumulation of anthocyanin. The fruit of 'Red Zaosu' was collected from a commercial orchard in Mei County, Shaanxi Province, China, in 2018. The fruit of 'Red Zaosu' and 'Palacer' (P. communis L.) were selected at approximately 40 d after flowering and bagged for 30 d until the red pigments had totally faded. Then, the fruit of 'Red Zaosu' was exposed to daylight for three continuous days. The experiment was conducted on 12-14 June 2018. Additionally, the fruit of 'Red Zaosu' were harvested at 0, 3, 6, 9, 12, 24, 27, 30, 33, 36, 48, 50, 54, 57 and 60 hours after sunrise of day 1 (HAS). The faded 'Palacer' fruitlets were used for the PbRVE virus-induced gene silencing (VIGS) assay.
The skins of these harvested fruit were frozen in liquid nitrogen and stored at −80 °C for the subsequent measurement of the anthocyanin content and RNA extraction. For the dual-luciferase assay infiltration, N. benthamiana seedlings were grown in a light incubator (16-h light/8-h dark) at 22 °C.

Anthocyanin Content Measurements

The pH differential method was used to measure the total anthocyanin contents of red-skinned pear fruitlets [37]. The extraction of total anthocyanins was performed using a previously reported method, with slight modifications [38,39]. Approximately 0.2 g samples of fruit skin were powdered in liquid nitrogen and mixed with PVP-K30 (Sigma, St. Louis, MO, USA), and then 1.5 mL of 1% HCl-methanol was added to the mixed sample. After centrifugation at 4 °C and 12,000× g for 5 min, 200-µL aliquots of the supernatant were transferred separately to two clean tubes for dilution. One was diluted with 400 µL of 0.025 M potassium chloride buffer (pH 1.0), and the other with 400 µL of 0.4 M sodium acetate buffer (pH 4.5). These solutions were kept for 15 min in the dark at room temperature before the absorbance values were measured at 520 nm and 700 nm using a microplate spectrophotometer (Multiskan GO; Thermo Scientific, Waltham, MA, USA).

Isolation of RVE Genes and Their Phylogenetic Analysis

The sequences of selected pear RVEs were isolated from pear databases [22] (http://peargenome.njau.edu.cn/, accessed on 1 March 2018). The RVEs from pear and A. thaliana were aligned using ClustalW (MEGA 7.0, The Biodesign Institute, Arizona State University, AZ, USA) [20]. The phylogenetic analysis was performed with the Minimum-Evolution method and the JTT model using the MEGA 7.0 program [20]. The GenBank accession numbers for the functionally labelled RVEs are listed in Supplementary Table S1. The complete coding DNA sequences (CDSs) of candidate RVEs were cloned from 'Red Zaosu' cDNA using PrimeSTAR Max Premix (TaKaRa, Dalian, China) and gene-specific primers (Supplementary Table S2).

RNA Isolation and Expression Analysis Using Quantitative Real-Time PCR (qRT-PCR)

The total RNA of skins was extracted using the RNAprep Pure Plant Kit (Tiangen, Beijing, China). The RNA concentration and quality were checked by UV spectrophotometry and on a 0.8% agarose gel. In total, 1 µg of total RNA was reverse-transcribed to cDNA using the PrimeScript RT reagent kit with gDNA Eraser (TaKaRa, Dalian, China). The primers used for qRT-PCR were designed with Oligo 7 software (Molecular Biology Insights, Inc., Colorado Springs, CO, USA) and synthesized by AuGCT Biotechnology Synthesis Lab (Beijing, China). The qRT-PCR was performed on an Applied Biosystems StepOnePlus™ Real-Time PCR System (Applied Biosystems, Waltham, MA, USA) with TB Green Premix Ex Taq II (Tli RNaseH Plus; TaKaRa, Dalian, China) according to the manufacturer's instructions. Data were analyzed using the 2^(−ΔΔCT) method. All the qRT-PCR reactions were replicated three times for each biological repeat. The primers for actin, anthocyanin biosynthetic genes and candidate RVEs are listed in Supplementary Table S2.

Transient Expression Assay in Pear Fruitlet Skins

The complete CDSs of RVE TFs were cloned into the multiple cloning site (MCS) (BamHI-HindIII) of the pGreenII 0029 62-SK binary vector to form PbRVE-OE plasmids (PbRVE1a-OE, PbRVE1b-OE, PbRVE7-OE and PbRVE8-OE) [40].
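The 2^(−ΔΔCT) calculation named above is mechanical enough that a short sketch may help; the Ct values and gene names below are hypothetical, not data from this study.

```python
import numpy as np

# Hypothetical Ct values (triplicate qRT-PCR wells); illustrative only.
ct_target_treat = np.array([22.1, 22.3, 22.0])   # candidate gene, treated skin
ct_actin_treat  = np.array([18.0, 18.2, 18.1])   # actin reference, treated
ct_target_ctrl  = np.array([25.4, 25.2, 25.5])   # candidate gene, control skin
ct_actin_ctrl   = np.array([18.1, 18.0, 18.2])   # actin reference, control

# Delta-Ct: normalize each sample to its reference gene.
dct_treat = ct_target_treat.mean() - ct_actin_treat.mean()
dct_ctrl  = ct_target_ctrl.mean()  - ct_actin_ctrl.mean()

# Delta-delta-Ct and relative expression (fold change).
ddct = dct_treat - dct_ctrl
fold_change = 2.0 ** (-ddct)
print(f"ddCt = {ddct:.2f}, fold change = {fold_change:.1f}")
```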
The complete GUS CDS from the pBI121-GUS plasmid was cloned into the MCS of the pGreenII 0029 62-SK binary vector to form the pGreenII 0029 62-SK-GUS plasmid (described in Supplementary Figure S4a). The 400-600-bp fragments of the C-termini of the RVE TFs were inserted into the MCS (BamHI-XhoI) of pTRV2 to form the PbRVE VIGS vectors (PbRVE1a-TRV, PbRVE1b-TRV, PbRVE7-TRV and PbRVE8-TRV, described in Supplementary Figure S4b). The primers used for amplifying the sequences are described in Supplementary Table S2. Agrobacterium tumefaciens strain EHA105 cultures, each containing one of the constructed plasmids, were grown at 28 °C on Luria-Bertani (LB) solid medium supplemented with 50 mg/L kanamycin and 25 mg/L rifampicin. After incubating for 48 h, the A. tumefaciens was resuspended in infiltration buffer (10 mM MgCl2, 10 mM MES and 200 µM acetosyringone) and shaken for 3-4 h (to a final OD600 of 0.8) at room temperature before being injected into pear fruitlet skins. The pear injection process was as described in Spolaore et al. [41] and the injection volume was as described in Zhai et al. [20]. The negative controls were infiltrated with A. tumefaciens containing pGreenII 0029 62-SK or the pTRV2 empty vector. The treated fruitlets were harvested at 5 d after injection. For GUS staining, the plant materials were stained with 5-bromo-4-chloro-3-indolyl glucuronide at 37 °C for 12 h as described previously [42].

Y1H Assay

The Y1H assays were performed following the manufacturer's instructions for the Matchmaker Gold Yeast One-Hybrid System Kit (Clontech, Mountain View, CA, USA). We independently ligated ~800-bp fragments of the PbANS and PbDFR promoters into pAbAi to construct the pAbAi-baits. Additionally, the complete CDSs of the PbRVEs were separately inserted into the pGADT7 vector to construct the prey-AD vectors. The pAbAi-bait vectors were linearized and transformed into Y1HGold separately. The colonies were selected on selective agar plates lacking uracil. The correct integration of plasmids into the genome of the Y1HGold yeast was confirmed using a colony PCR analysis (Matchmaker Insert Check PCR Mix 1; Clontech, Mountain View, CA, USA). After determining the minimal inhibitory concentration of Aureobasidin A (AbA) for the bait-reporter yeast strains, the AD-prey vectors were transformed into the bait yeast strains and selected on synthetic dextrose (SD)/−Leu/AbA plates. All the transformations and screenings were performed three times.

Dual-Luciferase Assay

Each of these recombinant plasmids and the pSoup helper plasmid were transferred individually into A. tumefaciens strain EHA105 [40]. The A. tumefaciens cells containing the PbRVE-SKs were separately mixed with proPbDFR-LUC or proPbANS-LUC at a 1:1 ratio before being infiltrated into 4-week-old N. benthamiana leaves. After injection, the plants were grown for 3 d in a light incubator (16-h light/8-h dark) at 22 °C, and then the treated leaves were collected in 1× phosphate-buffered solution for the dual-luciferase assay. The firefly luciferase (Luc) to Renilla luciferase (Ren) activity ratios were analyzed using a Dual-Luciferase® Reporter Assay System (Promega, Madison, WI, USA) on a Tecan Infinite M200 Pro full-wavelength multifunctional plate reader (TECAN, Männedorf, Switzerland). Three independent experiments were carried out with at least five biological replicates per experiment.
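The Luc/Ren normalization and the replicate statistics behind such promoter-activity comparisons are simple to reproduce; the sketch below uses made-up luminescence readings (all values and labels are assumptions, not data from this study).

```python
import numpy as np
from scipy import stats

# Hypothetical raw luminescence readings, five biological replicates each;
# "sk" = empty-vector control, "1b" = a PbRVE1b infiltration.
luc_sk = np.array([2.1e5, 2.0e5, 2.3e5, 2.2e5, 2.1e5])  # firefly, control
ren_sk = np.array([1.9e5, 2.0e5, 2.1e5, 2.0e5, 1.9e5])  # Renilla, control
luc_1b = np.array([8.2e5, 7.9e5, 8.6e5, 8.0e5, 8.4e5])  # firefly, PbRVE1b
ren_1b = np.array([2.0e5, 2.1e5, 2.2e5, 2.0e5, 2.1e5])  # Renilla, PbRVE1b

# Per-replicate Luc/Ren ratios normalize out infiltration efficiency.
r_sk, r_1b = luc_sk / ren_sk, luc_1b / ren_1b
se = lambda x: x.std(ddof=1) / np.sqrt(x.size)   # standard error of the mean
print(f"62-SK:   {r_sk.mean():.2f} +/- {se(r_sk):.2f} (mean +/- SE)")
print(f"PbRVE1b: {r_1b.mean():.2f} +/- {se(r_1b):.2f} (mean +/- SE)")

# Student's t-test, as used for significance in the Statistical Analysis.
t, p = stats.ttest_ind(r_1b, r_sk)
print(f"t = {t:.1f}, p = {p:.2e}")
```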
Statistical Analysis

All the experimental data were statistically processed using GraphPad Prism 6 software (GraphPad Software Inc., San Diego, CA, USA) and are shown as means ± standard errors (SEs). Significant differences were assessed using one-way analysis of variance and Student's t-tests.

Supplementary Materials: Supplementary materials can be found at http://www.mdpi.com/1422-0067/21/5/1634/s1. Figure S1: The anthocyanin content of red pear fruit and expression patterns of candidate PbRVEs in red pear fruit; Figure S2: The GUS-stained 'Zaosu' fruitlet skins infiltrated with pGreenII 62-SK-GUS; Figure S3: Functional analysis of the PbRVEs using VIGS in pear fruitlet skins; Figure S4: Construction of the recombinant plasmids; Table S1: The accession numbers of the genes used in this study; Table S2: List of primers used in this study; Table S3: The actual transcript abundance data of PbRVEs.

Conflicts of Interest: The authors declare no conflict of interest. The funder L.X. had roles in conceptualization, validation, project administration, resources, supervision and the decision to publish the results. The funder Z.W. had roles in methodology, validation, writing-review and editing, project administration and resources.
2020-03-04T14:04:34.822Z
2020-02-27T00:00:00.000
{ "year": 2020, "sha1": "b4730dd25310d9e58285633240813b3a52ea1de8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijms21051634", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "34dbfa2c81b52607d8edafc11bf6855a42f72112", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
118991445
pes2o/s2orc
v3-fos-license
Euclidean formulation of general relativity

A variational principle is applied to 4D Euclidean space provided with a tensor refractive index, defining what can be seen as 4-dimensional optics (4DO). The geometry of such space is analysed, making no physical assumptions of any kind. However, by assigning geometric entities to physical quantities the paper allows physical predictions to be made. A mechanism is proposed for translation between 4DO and GR, which involves the null subspace of 5D space with signature $(-++++)$. A tensor equation relating the refractive index to sources is established geometrically and the sources tensor is shown to have a close relationship to the stress tensor of GR. This equation is solved for the special case of zero sources but the solution that is found is only applicable to Newton mechanics and is inadequate for such predictions as light bending and perihelion advance. It is then argued that testing gravity in the physical world involves the use of a test charge which is itself a source. Solving the new equation, with consideration of the test particle's inertial mass, produces an exponential refractive index where the Newtonian potential appears in the exponent and provides accurate predictions. Resorting to hyperspherical coordinates it becomes possible to show that the Universe's expansion has a purely geometric explanation without appeal to dark matter.

Introduction

According to general consensus any physics theory is based on a set of principles upon which predictions are made using established mathematical derivations; the validity of such a theory depends on agreement between predictions and observed physical reality. In that sense this paper does not formulate a physical theory because it does not presume any physical principles; for instance it does not assume speed of light constancy or equivalence between frame acceleration and gravity. This is a paper about geometry. All along the paper, on several occasions, a parallel is made with the physical world by assigning a physical meaning to geometric entities, and this allows predictions to be made. However the validity of the derivations and the overall consistency of the exposition are independent of prediction correctness. The only postulates in this paper are of a geometrical nature and can be condensed in the definition of the space we are going to work with: 4-dimensional space with Euclidean signature (+ + ++). For the sole purpose of making transitions to spacetime we will also consider the null subspace of the 5-dimensional space with signature (− + + + +). This choice of space does not imply any assumption about its physical meaning up to the point where geometric entities like coordinates and geodesics start being assigned to physical quantities like distances and trajectories. Some of those assignments will be made very early in the exposition and will be kept consistently until the end in order to allow the reader some assessment of the proposed geometric model as a tool for the prediction of physical phenomena. Mapping between geometry and physics is facilitated if one chooses to work always with non-dimensional quantities; this is easily done with a suitable choice for the standards of the fundamental units. In this work all problems of dimensional homogeneity are avoided through the use of normalising factors for all units, listed in Table 1, defined with recourse to the fundamental constants: Planck constant, gravitational constant, speed of light and proton charge.
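Table 1 itself is not reproduced in this extract, but the natural normalising factors built from these four constants are the Planck-like units; the sketch below computes them, with the caveat that identifying Table 1 with exactly these values is an assumption of this sketch.

```python
import math

# Physical constants (SI, CODATA values).
hbar = 1.054571817e-34   # reduced Planck constant, J s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s
e    = 1.602176634e-19   # proton charge, C

# Planck-like normalising factors that make hbar = G = c = 1
# (with the proton charge e as the unit of charge).
L0 = math.sqrt(hbar * G / c**3)   # length, ~1.6e-35 m
T0 = math.sqrt(hbar * G / c**5)   # time,   ~5.4e-44 s
M0 = math.sqrt(hbar * c / G)      # mass,   ~2.2e-8 kg
print(f"L0 = {L0:.3e} m, T0 = {T0:.3e} s, M0 = {M0:.3e} kg, Q0 = {e:.3e} C")
```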
This normalisation defines a system of non-dimensional units with important consequences, namely: 1) all the fundamental constants, $\hbar$, $G$, $c$, $e$, become unity; 2) a particle's Compton frequency, defined by $\nu = mc^2/\hbar$, becomes equal to the particle's mass; 3) the frequent term $GM/(c^2 r)$ is simplified to $M/r$. The particular space we chose to work with can have an amazing structure, providing countless parallels to the physical world; this paper is just a limited introductory look at such structure and parallels. The exposition makes full use of an extraordinary and little known mathematical tool called geometric algebra (GA), a.k.a. Clifford algebra, which received an important thrust with the introduction of geometric calculus by David Hestenes [1]. A good introduction to GA can be found in Gull et al. [2] and the following paragraphs use basically the notation and conventions therein. A complete course on physical applications of GA can be downloaded from the internet [3], with a more comprehensive version published recently in book form [4], while an accessible presentation of mechanics in GA formalism is provided by Hestenes [5].

Introduction to geometric algebra

We will use Greek characters for the indices that span 1 to 4 and Latin characters for those that exclude the value 4; in rare cases we will have to use indices spanning 0 to 3 and these will be denoted by Greek characters with an over bar. The geometric algebra $G_{4,1}$ of the hyperbolic 5-dimensional space we want to consider is generated by the frame of orthonormal vectors $\{i, \sigma_\mu\}$, $\mu = 1 \ldots 4$, verifying the relations

$$i^2 = -1, \qquad i \cdot \sigma_\mu = 0, \qquad \sigma_\mu \cdot \sigma_\nu = \delta_{\mu\nu}. \quad (1)$$

We will simplify the notation for basis vector products using multiple indices, i.e. $\sigma_\mu \sigma_\nu \equiv \sigma_{\mu\nu}$. The algebra is 32-dimensional and is spanned by the basis

$$1 \ \text{(1 scalar)}; \quad \{i, \sigma_\mu\} \ \text{(5 vectors)}; \quad \{i\sigma_\mu, \sigma_{\mu\nu}\} \ \text{(10 bivectors)}; \quad \{i\sigma_{\mu\nu}, \sigma_{\mu\nu\lambda}\} \ \text{(10 trivectors)}; \quad \{iI, \sigma_\mu I\} \ \text{(5 tetravectors)}; \quad I \ \text{(1 pentavector)}, \quad (2)$$

where $I \equiv i\sigma_1\sigma_2\sigma_3\sigma_4$ is also called the pseudoscalar unit. Several elements of this basis square to unity,

$$1^2 = (\sigma_\mu)^2 = (i\sigma_\mu)^2 = (i\sigma_{\mu\nu})^2 = (iI)^2 = 1, \quad (3)$$

and the remaining ones square to $-1$,

$$i^2 = (\sigma_{\mu\nu})^2 = (\sigma_{\mu\nu\lambda})^2 = (\sigma_\mu I)^2 = I^2 = -1. \quad (4)$$

Note that the symbol $i$ is used here to represent a vector with norm $-1$ and must not be confused with the scalar imaginary, which we don't usually need. The geometric product of any two vectors $a = a^0 i + a^\mu \sigma_\mu$ and $b = b^0 i + b^\nu \sigma_\nu$ can be decomposed into a symmetric part, a scalar called the inner product, and an anti-symmetric part, a bivector called the exterior product,

$$ab = a \cdot b + a \wedge b. \quad (5)$$

Reversing the definition one can write the internal and exterior products as

$$a \cdot b = \tfrac{1}{2}(ab + ba), \qquad a \wedge b = \tfrac{1}{2}(ab - ba). \quad (6)$$

When a vector is operated with a multivector the inner product reduces the grade of each element by one unit and the outer product increases the grade by one. There are two exceptions: when operated with a scalar the inner product does not produce grade $-1$ but grade 1 instead, and the outer product with a pseudoscalar is disallowed.

Displacement and velocity

Any displacement in the 5-dimensional hyperbolic space can be defined by the displacement vector

$$ds = i\,dx^0 + \sigma_\mu dx^\mu, \quad (7)$$

and the null space condition implies that $ds$ has zero length,

$$ds^2 = ds \cdot ds = 0, \quad (8)$$

which is easily seen to be equivalent to either of the relations

$$(dx^0)^2 = \sum_\mu (dx^\mu)^2, \qquad (dx^4)^2 = (dx^0)^2 - \sum_j (dx^j)^2. \quad (9)$$

These equations define the metrics of two alternative spaces, one Euclidean and the other Minkowskian, both equivalent to the null 5-dimensional subspace. A path on null space does not have any affine parameter but we can use Eqs. (9) to express 4 coordinates in terms of the fifth one.
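For readers who want to check relations such as Eqs. (1)-(4) and (8) numerically, the open-source Python package `clifford` can instantiate this algebra; the sketch below assumes that package and its convention that `Cl(4, 1)` lists the four positive-signature basis vectors first.

```python
from clifford import Cl

# Algebra with four basis vectors squaring to +1 (sigma_1..sigma_4) and one
# squaring to -1 (the vector called i in the text), i.e. signature (++++-).
layout, blades = Cl(4, 1)
s1, s2, s3, s4, i = (blades[k] for k in ('e1', 'e2', 'e3', 'e4', 'e5'))

print(i * i, s1 * s1)            # -> -1 and 1, the relations of Eq. (1)

# A null displacement: ds = i*dx0 + sigma_mu*dx^mu with
# (dx0)^2 = sum (dx^mu)^2; a 3-4-5 split makes ds null.
dx0, dx = 5.0, (3.0, 0.0, 0.0, 4.0)
ds = i * dx0 + s1 * dx[0] + s2 * dx[1] + s3 * dx[2] + s4 * dx[3]
print(ds * ds)                   # -> 0, the null condition of Eq. (8)
```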
We will frequently use the letter $t$ to refer to coordinate $x^0$ and the letter $\tau$ for coordinate $x^4$; total derivatives with respect to $t$ will be denoted by an over dot, while total derivatives with respect to $\tau$ will be denoted by a "check", as in $\check{F}$. Dividing both members of Eq. (7) by $dt$ we get

$$\dot{s} = i + \sigma_\mu \dot{x}^\mu = i + v. \quad (10)$$

This is the definition of the velocity vector $v = \sigma_\mu \dot{x}^\mu$; it is important to stress again that the velocity vector defined here is a geometrical entity which bears for the moment no relation to physical velocity, be it relativistic or not. The velocity has unit norm because $\dot{s}^2 = 0$; evaluation of $v \cdot v$ yields the relation

$$v \cdot v = \sum_\mu (\dot{x}^\mu)^2 = 1. \quad (11)$$

The velocity vector can be obtained by a suitable rotation of any of the $\sigma_\mu$ frame vectors; in particular it can always be expressed as a rotation of the $\sigma_4$ vector. At this point we are going to make a small detour for the first parallel with physics. In the previous equation we write $\dot{x}^4$ as $\dot{\tau}$ and rewrite with $\dot{\tau}^2$ in the first member,

$$\dot{\tau}^2 = 1 - \sum_j (\dot{x}^j)^2. \quad (12)$$

The relation above is well known in special relativity, see for instance Martin [6]; see also Almeida [7], Montanus [8] for parallels between special relativity and its Euclidean space counterpart. We note that the operation performed between Eqs. (11) and (12) is a perfectly legitimate algebraic operation since all the elements involved are pure numbers. Obviously we could also divide both members of Eq. (7) by $d\tau$, which is then associated with relativistic proper time. Squaring the second member and noting that it must be null we obtain $(\check{x}^0)^2 - \sum_j (\check{x}^j)^2 = 1$. This means that we can relate the vector $i\check{x}^0 + \sigma_j \check{x}^j$ to the relativistic 4-velocity, although the norm of this vector is symmetric to what is usual in SR. The relativistic 4-velocity is more conveniently assigned to the 5D bivector $i\sigma_4 \check{x}^0 + \sigma_{j4} \check{x}^j$, which has the necessary properties. The method we have used to make the transition between 4D Euclidean space and hyperbolic spacetime involved the transformation of a 5D vector into scalar plus bivector through the product with $\sigma_4$; this method will later be extended to curved spaces. Equation (10) applies to flat space but can be generalised for curved space; we do this in two steps. First of all we can include a scale factor, $v = n\sigma_\mu \dot{x}^\mu$, which can change from point to point. In this way we are introducing the 4-dimensional analogue of a refractive index, which can be seen as a generalisation of the 3-dimensional definition of the refractive index of an optical medium: the quotient between the speed of light in vacuum and the speed of light in that medium. The scale factor $n$ used here relates the norm of the vector $\sigma_\mu \dot{x}^\mu$ to unity and so it deserves the designation of 4-dimensional refractive index; we will drop the "4-dimensional" qualification because any confusion with the 3-dimensional case can always be resolved easily. The material presented in this paper is, in many respects, a logical generalisation of optics to 4-dimensional space; so, even if the paper is only about geometry, it becomes natural to designate this study as 4-dimensional optics (4DO). Full generalisation of Eq. (10) implies the consideration of a tensor refractive index, similar to the non-isotropic refractive index of optical media; the velocity is then generally defined by

$$v = n^\mu{}_\nu \dot{x}^\nu \sigma_\mu. \quad (15)$$
The same expression can be used with any orthonormal frame, including for instance spherical coordinates, but for the moment we will restrict our attention to those cases where the frame does not rotate in a displacement; this poses no restriction on the problems to be addressed but is obviously inconvenient when symmetries are involved. Equation (15) can be written with the velocity in the form $v = g_\nu \dot{x}^\nu$ if we define the refractive index vectors

$$g_\nu = n^\mu{}_\nu \sigma_\mu. \quad (16)$$

The set of four $g_\mu$ vectors will be designated the refractive index frame. Obviously the velocity is still a unitary vector, and we can express this fact by evaluating its internal product with itself, noting that the displacement retains zero norm,

$$v \cdot v = n^\alpha{}_\mu \dot{x}^\mu\, n^\beta{}_\nu \dot{x}^\nu\, \delta_{\alpha\beta} = 1.$$

Using Eq. (16) we can rewrite the equation above as $g_\mu \cdot g_\nu\, \dot{x}^\mu \dot{x}^\nu = 1$ and, denoting by $g_{\mu\nu}$ the scalar $g_\mu \cdot g_\nu$, the equation becomes

$$g_{\mu\nu}\, \dot{x}^\mu \dot{x}^\nu = 1.$$

The generalised form of the displacement vector arises from multiplying Eq. (15) by $dt$ and using the definition (16),

$$ds = i\,dt + g_\mu dx^\mu. \quad (19)$$

This can be put in the form of a space metric by dotting it with itself and noting that the first member vanishes,

$$dt^2 = g_{\mu\nu}\, dx^\mu dx^\nu. \quad (20)$$

Notice that the coordinates are still referred to the fixed frame vectors $\sigma_\mu$ and not to the refractive index vectors $g_\mu$. In GR there is no such distinction between two frames, but Montanus [8] clearly separates the frame from the tensor $g_{\mu\nu}$. We are going to need the reciprocal frame [4] $\{-i, g^\mu\}$ such that

$$g^\mu \cdot g_\nu = \delta^\mu{}_\nu. \quad (21)$$

From the definition it becomes obvious that $g^\mu g_\nu = g^\mu \cdot g_\nu + g^\mu \wedge g_\nu$ is, for $\mu \neq \nu$, a pure bivector and so $g^\mu g_\nu = -g_\nu g^\mu$. We now multiply Eq. (19) on the right and on the left by $g^4$, simultaneously replacing $x^4$ by $\tau$, to obtain

$$ds\, g^4 = i g^4 dt + g_j g^4 dx^j + d\tau, \qquad g^4 ds = g^4 i\, dt + g^4 g_j dx^j + d\tau.$$

When the internal product is performed between the two equations, member to member, the first member vanishes and the second member produces the result of Eq. (23). If the various $g_\mu$ are functions only of the $x^j$, that equation is equivalent to a metric definition in general relativity. We will examine the special case when $g_\mu = n_\mu \sigma_\mu$; replacing in Eq. (23),

$$d\tau^2 = \frac{1}{(n_4)^2}\Big(dt^2 - \sum_j (n_j)^2 (dx^j)^2\Big). \quad (24)$$

This equation covers a large number of situations in general relativity, including the very important Schwarzschild metric, as was shown in Almeida [9] and will be discussed below. Notice that Eq. (20) has more information than Eq. (23) because the structure of $g_4$ is kept in the former, through the coefficients $g_{\mu 4}$, but is mostly lost in the $g_{44}$ coefficient of the latter.

The sources of space curvature

Equations (20) and (23) define two alternative 4-dimensional spaces; in the former, 4DO, $t$ is an affine parameter, while in the latter, GR, it is $\tau$ that takes such a role. The geodesics of one space can be mapped one to one with those of the other and we can choose to work in the space that best suits us. The geodesics of 4DO space can be found by consideration of the Lagrangian

$$L = \frac{g_{\mu\nu}\, \dot{x}^\mu \dot{x}^\nu}{2}.$$

The justification for this choice of Lagrangian can be found in several reference books, see for instance Martin [6]. From the Lagrangian one defines immediately the conjugate momenta

$$v_\mu = \frac{\partial L}{\partial \dot{x}^\mu} = g_{\mu\nu}\, \dot{x}^\nu.$$

The conjugate momenta are the components of the conjugate momentum vector $v = g^\mu v_\mu$. The conjugate momentum and the velocity are the same vector, but their components are referred to the reciprocal and refractive index frames, respectively. The geodesic equations can now be written in the form of Euler-Lagrange equations, $\dot{v}_\mu = \partial_\mu L$; these equations define those paths that minimise $t$ when displacements are made with velocity given by Eq. (15).
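To make the Euler-Lagrange machinery concrete, the sketch below traces a planar, photon-like geodesic ($\dot{x}^4 = 0$) for an isotropic index of the exponential form found later in the paper, using a Hamiltonian reduction of the geodesic equations with scipy. This is an illustrative two-dimensional reduction with arbitrary parameter values, not the paper's full tensor treatment.

```python
import numpy as np
from scipy.integrate import solve_ivp

M = 0.05                                           # toy "mass" parameter
n = lambda x, y: np.exp(2 * M / np.hypot(x, y))    # isotropic n_r = exp(2M/r)

def rhs(t, s):
    # Hamiltonian form of the Euler-Lagrange equations for L = n^2 |xdot|^2 / 2:
    # xdot = p / n^2,  pdot = |p|^2 grad(n) / n^3.
    x, y, px, py = s
    r = np.hypot(x, y)
    nn = np.exp(2 * M / r)
    gx, gy = -2 * M * x / r**3 * nn, -2 * M * y / r**3 * nn   # grad n
    p2 = px**2 + py**2
    return [px / nn**2, py / nn**2, p2 * gx / nn**3, p2 * gy / nn**3]

# Launch a ray far to the left, moving in +x, with impact parameter b;
# |p| = n at the start so that the unit-norm condition v.v = 1 holds.
b = 2.0
x0, y0 = -50.0, b
sol = solve_ivp(rhs, (0.0, 100.0), [x0, y0, n(x0, y0), 0.0], max_step=0.05)
vx, vy = sol.y[2, -1], sol.y[3, -1]
print("deflection angle ~", np.arctan2(-vy, vx))   # ray bends toward the mass
```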
Considering the parallel already made with general relativity, we can safely say that the geodesics of 4DO spaces have a one to one correspondence to those of GR in the majority of situations. We are going to need geometric calculus, which was introduced by Hestenes and Sobczyk [1] as said earlier; another good reference is provided by Doran and Lasenby [4]. The existence of such references allows us to introduce the vector derivative without further explanation; the reader should search the cited books for full justification of the definition we give below,

$$\nabla = g^\mu \partial_\mu.$$

The vector derivative is a vector and can be operated with any multivector using the established rules; in particular the geometric product of $\nabla$ with a multivector can be decomposed into inner and outer products. When applied to a vector $a$ ($\nabla a = \nabla \cdot a + \nabla \wedge a$), the inner product is the divergence of vector $a$ and the outer product is the exterior derivative, related to the curl although usable in spaces of arbitrary dimension and expressed as a bivector. We also define the Laplacian as the scalar operator $\nabla^2 = \nabla \cdot \nabla$. In this work we do not use the conventions of Riemannian geometry for the affine connection, as was already noted in relation to Eq. (21). For this reason we will also need to distinguish between the curved space derivative defined above and the ordinary flat space derivative

$$\partial = \sigma^\mu \partial_\mu.$$

When using spherical coordinates, for instance, the connection will be involved only in the flat space component of the derivative and we will deal with it by explicitly expressing the frame vector derivatives. Velocity is a vector with very special significance in 4DO space because it is the unitary vector tangent to a geodesic. We therefore attribute high significance to velocity derivatives, since they express the characteristics of the particular space we are considering. When the Laplacian is applied to the velocity vector we obtain a vector,

$$\nabla^2 v = T. \quad (31)$$

Vector $T$ is called the sources vector and can be expanded into sixteen terms whose coefficients form the tensor $T^\mu{}_\nu$, which we call the sources tensor; it is very similar to the stress tensor of GR, although its relation to geometry is different. The sources tensor influences the shape of geodesics, but we shall not examine here how such influence arises, except for very special cases. Before we begin searching for solutions of Eq. (31) we will show that this equation can be decomposed into a set of equations similar to Maxwell's. Consider first the velocity derivative $\nabla v = \nabla \cdot v + \nabla \wedge v$; the result is a multivector with scalar and bivector parts, $G = \nabla v$. Now derive again, $\nabla G = \nabla \cdot G + \nabla \wedge G$; we know that the exterior derivative of $G$ vanishes and the divergence equals the sources vector. Maxwell's equations can be written in a similar form, as was shown in Almeida [10], with the velocity replaced by the vector potential and the multivector $G$ replaced by the Faraday bivector $F$; Doran and Lasenby [4] offer a similar formulation for spacetime. An isotropic space must be characterised by orthogonal refractive index vectors $g_\mu$ whose norm can change with the coordinates but is the same for all vectors. We usually relax this condition by accepting that the three $g_j$ must have equal norm but $g_4$ can be different. The reason for this relaxed isotropy is found in the parallel usually made with physics by assigning dimensions 1 to 3 to physical space. Isotropy in a physical sense need only be concerned with these dimensions and ignores what happens with dimension 4.
We will therefore characterise an isotropic space by the refractive index frame

$$g_j = n_r \sigma_j, \qquad g_4 = n_4 \sigma_4.$$

Indeed we could also accept a non-orthogonal $g_4$ within the relaxed isotropy concept, but we will not do so in this work. We will only investigate spherically symmetric solutions independent of $x^4$; this means that the refractive index components can be expressed as functions of $r$ in spherical coordinates. The flat space vector derivative in spherical coordinates is of course

$$\partial = \sigma_r \partial_r + \frac{\sigma_\theta}{r}\,\partial_\theta + \frac{\sigma_\varphi}{r\sin\theta}\,\partial_\varphi + \sigma_4 \partial_4.$$

The Laplacian is the inner product of $\nabla$ with itself, but the frame derivatives must be considered: $\partial_r \sigma_r = 0$, $\partial_\theta \sigma_r = \sigma_\theta$, $\partial_\varphi \sigma_r = \sin\theta\,\sigma_\varphi$, and so on for the remaining frame vectors. After evaluation the Laplacian becomes the operator of Eq. (35). In the absence of sources we want the sources tensor to vanish, implying that the Laplacian of both $n_r$ and $n_4$ must be zero; considering that they are functions of $r$, we get an equation for $n_r$ with general solution $n_r = b \exp(a/r)$. We can make $b = 1$ because we want the refractive index to be unity at infinity. Using this solution in Eq. (35) and applying the resulting Laplacian to $n_4$, equating to zero, we obtain solutions which impose $n_4 = n_r$, and so the space must be truly isotropic and not relaxed isotropic as we had allowed. The solution we have found for the refractive index components in isotropic space can correctly model Newton dynamics, which led the author to adhere to it for some time [11]. However, if inserted into Eq. (24) this solution produces a GR metric which is verifiably in disagreement with observations; consequently it has purely geometric significance. The inadequacy of the isotropic solution found above for relativistic predictions deserves some thought, so that we can search for solutions guided by the results that are expected to have physical significance. In the physical world we are never in a situation of zero sources, because the shape of space or the existence of a refractive index must always be tested with a test particle. A test particle is an abstraction corresponding to a point mass considered so small as to have no influence on the shape of space. But in reality a test particle is always a source of refractive index and its influence on the shape of space may not be negligible in any circumstances. If this is the case, the solutions for a vanishing sources vector may have only geometric meaning, with no connection to physical reality. The question is then how to include the test particle in Eq. (31) in order to find physically meaningful solutions. Here we will make one ad hoc proposal without further justification, because the author has not yet completed the work that will provide such justification in geometric terms. The second member of Eq. (31) will not be zero; we will instead impose the sources vector of Eq. (39). When $n_r$ is given the exponential form found above, the solution is $n_4 = \sqrt{n_r}$. This can now be entered into Eq. (24) and the coefficients can be expanded in series and compared to Schwarzschild's for the determination of the parameter $a$. The final solution, for a stationary mass $M$, is

$$n_r = e^{2M/r}, \qquad n_4 = e^{M/r}.$$

Equation (39) can be interpreted in physical terms as containing the essence of gravitation. When solved for spherically symmetric solutions, as we have done, its first member provides the definition of a stationary gravitational mass as the factor $M$ appearing in the exponent, and its second member defines inertial mass as $\nabla^2 n_4$.
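A quick symbolic check of the solution just quoted, namely that $n_4 = \sqrt{n_r}$ and that both components reduce to the expected weak-field (Newtonian) forms at first order in $M/r$, can be done with sympy; this sketch verifies only those two algebraic facts, not the full sources equation.

```python
import sympy as sp

M, r = sp.symbols('M r', positive=True)
n_r = sp.exp(2 * M / r)
n_4 = sp.exp(M / r)

# n_4 = sqrt(n_r), as required by the solution discussed above.
print(sp.simplify(n_4 - sp.sqrt(n_r)))   # -> 0

# First-order expansions in M: the exponentials reduce to 1 + 2M/r and
# 1 + M/r, the weak-field behaviour used to fix the parameter a.
print(sp.series(n_r, M, 0, 2))           # 1 + 2*M/r + O(M**2)
print(sp.series(n_4, M, 0, 2))           # 1 + M/r + O(M**2)
```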
Gravitational mass is defined with recourse to some particle which undergoes its influence and is animated with velocity $v$, and inertial mass cannot be defined without some field $n_4$ acting upon it. A complete investigation of the sources tensor elements and their relation to physical quantities has not yet been done. It is believed that the 16 terms of this tensor have strong links with the homologous elements of the stress tensor in GR, but this will have to be verified. In order to replace the angular coordinate $\rho$ with a distance coordinate $r$ we can make $r = \tau\rho$ and derive with respect to time,

$$\dot{r} = \dot{\tau}\rho + \tau\dot{\rho}.$$

Taking $\tau\dot{\rho}$ from this equation and inserting into Eq. (43), assuming that $\sin\rho$ is sufficiently small to be replaced by $\rho$,

$$v = n_4\Big(\sigma_\tau - \frac{r}{\tau}\,\sigma_r\Big)\dot{\tau} + n_r\big(\sigma_r \dot{r} + r\sigma_\theta \dot{\theta} + r\sin\theta\,\sigma_\varphi \dot{\varphi}\big); \quad (45)$$

we have also replaced $\sigma_\rho$ by $\sigma_r$ for consistency with the new coordinates. We have just defined a particularly important set of coordinates, which appears to be especially well adapted to describing the physical Universe, with $\tau$ being interpreted as the Universe's age or its radius; note that time and distance cannot be distinguished in non-dimensional units. When $r\dot{\tau}/\tau$ is small in Eq. (45), the refractive index vectors become orthogonal and we use $n_4$ and $n_r$ in conjunction with Eq. (24) to obtain a GR metric whose coefficients are equivalent to Schwarzschild's in the first terms of their series expansions. When $r\dot{\tau}/\tau$ cannot be neglected, however, the equation can explain the Universe's expansion and the flat rotation curves of galaxies without dark matter intervention. A more complete discussion of this subject can be found in Ref. [9].

Conclusions

Euclidean and Minkowskian 4-spaces can be formally linked through the null subspace of 5-dimensional space with signature (− + + + +). The extension of such formalism to non-flat spaces allows the transition between spaces with both signatures, and the paper discusses some conditions for metric and geodesic translation. For its similarities with optics, the geometry of 4-spaces with Euclidean signature is called 4-dimensional optics (4DO). Using only geometric arguments it is possible to define such concepts as velocity and trajectory in 4DO, which become physical concepts when proper and natural assignments are made. One important point, which is addressed for the first time in the author's work, is the link between the shape of space and the sources of curvature. This is done on geometrical grounds but it is also placed in the context of physics. The equation pertaining to the test of gravity by a test particle is proposed and solved for the spherically symmetric case, providing a solution equivalent to Schwarzschild's as a first approximation. Some mention is made of hyperspherical coordinates, and the reader is referred to previous work linking this geometry to the Universe's expansion in the absence of dark matter.
2019-04-14T03:13:38.372Z
2004-06-07T00:00:00.000
{ "year": 2004, "sha1": "ee4efc5261f491d1d8bdebe916e3a6b6e4e2b518", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ae4dec876fe7d6abe9cc568b18fe64abb6409ad1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250405479
pes2o/s2orc
v3-fos-license
Effects of Nb microalloying on properties of Zr-Al-Fe-Cu glassy alloy

The effects of Nb microalloying on the glass-forming ability, thermal properties and mechanical properties of (Zr0.6032Cu0.2256Fe0.0995Al0.0717)100-xNbx (x = 0, 1, 2, 3, 4) alloys were investigated. The best glass former was obtained for (Zr0.6032Cu0.2256Fe0.0995Al0.0717)97Nb3, which could be fabricated into full glass with a diameter of at least 6 mm. In addition, the origin of the enhanced glass-forming ability (GFA) of the ZrAlFeCu amorphous alloy upon minor additions of Nb, Gd and Hf is discussed from the aspects of clusters and mixing entropy, which may provide a way of understanding the mechanism by which microalloying enhances glass-forming ability, and of choosing minor alloying elements with the aim of enhancing it. It was found that the thermal stability was reduced as the Nb content increased, and the supercooled liquid region shrank accordingly. Nb microalloying decreases the fracture strength; however, moderate Nb microalloying can enhance the room-temperature plastic strain.

Introduction

Owing to their high glass-forming ability (GFA), good biocompatibility, excellent mechanical properties and high corrosion resistance, Zr-based bulk metallic glasses (BMGs) have been chosen as candidates for biomedical applications [1-3]. In recent years, a series of Be-free Zr-based BMGs have been developed, for example in the Zr-Al-Fe, Zr-Al-Fe-Cu, Zr-Al-Ni, Zr-Al-Cu and Zr-Al-Co-Ag systems [4-8]. Among the designed Zr-based glass formers, those containing Ni are blamed for being allergenic and possibly carcinogenic [7]. Besides, considering the cost of manufacturing, glass formers containing noble metals might hinder application. Based on this, and taking into account factors such as manufacturability, cost and mechanical properties, Zr-Al-Fe-Cu BMGs are attractive, and experimental results have shown the feasibility of biomedical device applications, as reported by Jin, Han and co-workers [2,9]. In our previous work, a method combining clusters and mixing entropy was applied to understand glass formation and to design good glass formers [19]. Under the guidance of this method, a novel Ni-free Zr-based glass former, Zr60.32Cu22.56Fe9.95Al7.17, was designed with a critical diameter of up to 5 mm, superior to the well-known composition Zr60Cu25Fe5Al10 BMG under the same laboratory conditions, which shows good prospects for future application as a biomedical material [20]. As mentioned above, Nb microalloying often plays an important role in enhancing GFA and changing properties; however, discussion of the mechanism of Nb microalloying in terms of microstructure is scarce. Based on this, the effects of minor additions of Nb on the GFA, thermal properties and mechanical properties of Zr60.32Cu22.56Fe9.95Al7.17 have been investigated. The reasons for the changes in GFA, thermal properties and mechanical properties are discussed from the aspect of clusters and mixing entropy, with the aim of offering a way of understanding the mechanism of microalloying, which might lay a solid foundation for quickly and accurately choosing beneficial microalloying elements.

Materials and methods

Zr-based alloys with compositions of (Zr0.6032Cu0.2256Fe0.0995Al0.0717)100-xNbx (x = 0, 1, 2, 3, 4) and (Zr0.6032Cu0.2256Fe0.0995Al0.0717)97Gd3 were prepared in a Ti-gettered argon atmosphere. To ensure homogeneity, the master alloys were melted four times.
The purities of the elements were 99.99 wt.% for Al, Fe, Cu and Nb, and 99.95 wt.% for Zr and Gd. Bulk samples with diameters of 2 mm and 6 mm were produced by copper mold suction casting. The ribbons (0.02 mm × 1.2 mm) were fabricated by melt spinning. The structure of the samples was identified by x-ray diffraction (XRD, Philips PW 1050, Cu Kα). The thermodynamic parameters of the glassy rods, such as the glass transition temperatures and the liquidus temperatures, were evaluated using differential scanning calorimetry (DSC, Netzsch DSC 404C) at a heating rate of 0.67 K s−1. The compression properties were tested on samples 4 mm long and 2 mm in diameter with an Instron testing machine at a strain rate of 5.0 × 10−4 s−1. The fracture features of the specimens were observed by scanning electron microscopy (SEM, Supra 35).

Results and discussion

Figure 1 shows the XRD patterns of the cast (Zr0.6032Cu0.2256Fe0.0995Al0.0717)100-xNbx (x = 0, 1, 2, 3, 4) alloys with diameters of 6 mm. Under the same laboratory conditions, the XRD pattern of Zr60.32Cu22.56Fe9.95Al7.17 (x = 0) with a diameter of 5 mm was examined and shown in our previous paper [20]; those results showed that Zr60.32Cu22.56Fe9.95Al7.17 could form a fully amorphous alloy with a diameter of up to 5 mm. As can be concluded from figure 1, the critical diameter of Zr60.32Cu22.56Fe9.95Al7.17 increased from 5 mm to 6 mm with the addition of 3 at% Nb. Although minor alloying is one of the effective ways of enhancing glass-forming ability, its internal mechanism is still unclear from the aspect of clusters, and it is essential to figure out the origin of the high GFA of Zr60.32Cu22.56Fe9.95Al7.17 before understanding the mechanism of enhancing GFA via microalloying. In our previous work, a method combining clusters and mixing entropy was applied to understand glass formation and to design good glass formers [19]. Essentially, this method balances both microstructure and thermodynamics: clusters are treated as the basic units of glass formers, and the coefficients of the clusters are calculated on the premise that the composition has the corresponding largest mixing entropy. Under the guidance of this method, glass formation in the Zr-Al-Fe-Cu system was studied in our previous work [20]. The best glass former, Zr60.32Cu22.56Fe9.95Al7.17, could be expressed as Fe(Fe3+Zr9) + 0.961Al(Zr8+Al2) + 1.511Cu(Cu5+Zr5). These Zr-Fe, Zr-Al and Zr-Cu topological clusters, Fe(Fe3+Zr9), Al(Zr8+Al2) and Cu(Cu5+Zr5), are the basic units of this glass former. The high topological packing in the clusters and the high entropy are the origin of the high GFA of Zr60.32Cu22.56Fe9.95Al7.17. The enthalpies of mixing of the atomic pairs Nb-Zr, Nb-Al, Nb-Fe and Nb-Cu at equi-atomic compositions are, respectively, ΔH(Nb-Zr) = 4 kJ mol−1, ΔH(Nb-Al) = −18 kJ mol−1, ΔH(Nb-Fe) = −16 kJ mol−1 and ΔH(Nb-Cu) = −3 kJ mol−1 [21]. Negative enthalpies of mixing indicate a tendency for the atoms to gather together and form clusters. However, it has been pointed out that, for a microalloying element to enhance the GFA of the base composition, negative enthalpies of mixing with the component elements are a necessary but not sufficient condition; the key to enhancing GFA is introducing topological clusters. As analyzed above, the mixing enthalpies of the Nb-Fe and Nb-Al pairs are negative, so these pairs are likely to gather together to form clusters.
Miracle pointed out that such clusters are similar to the microstructure of competing phases [22]. For the Nb-Fe pair, a novel Fe-centered Fe-Nb7.5Fe4.5 cluster can be obtained, as shown in figure 1. By calculating the radius ratio, the degree of close packing in a cluster can be evaluated (these calculations are sketched in the code after this paragraph). The Goldschmidt radii of Fe and Nb are 0.128 nm and 0.147 nm. For the cluster Fe-Nb7.5Fe4.5, the radius of the center atom Fe is 0.128 nm and the average radius of the atoms in the cluster's shell is 0.1399 nm, so the ratio of the radius of the center atom to the average shell radius is 0.9151. Ideal radius ratios have been calculated according to the coordination number (CN) [23]. The CN of the cluster Fe-Nb7.5Fe4.5 is 12, and the ideal radius ratio for a CN12 cluster is 0.902 [23]; the deviation between the calculated and theoretical values is only 1.45%, so the cluster Fe-Nb7.5Fe4.5 meets the requirement of topological packing. For the Nb-Al binary pair, it has been reported that Nb and Al can form a topologically packed Al-centered Al-Al8Nb4 cluster. These topologically packed clusters increase the degree of atomic packing, which is beneficial for glass formation. Moreover, Nb is adjacent to Zr in the periodic table, so Nb might take the place of Zr in the Zr-based clusters, changing the Zr-Al and Zr-Fe binary clusters into Zr/Nb-Al and Zr/Nb-Fe clusters, which might enhance the degree of topological packing of the clusters. For Zr-based amorphous alloys, besides Nb, rare-earth elements are also important microalloying elements [24]. To further study the mechanism of rare-earth microalloying, 3 at% Gd was similarly added to Zr60.32Cu22.56Fe9.95Al7.17. As shown in figure 1, the critical diameter of Zr60.32Cu22.56Fe9.95Al7.17 increased from 5 mm to at least 6 mm. The enthalpies of mixing of the atomic pairs Gd-Zr, Gd-Al, Gd-Fe and Gd-Cu at equi-atomic compositions are, respectively, ΔH(Gd-Zr) = 9 kJ mol−1, ΔH(Gd-Al) = −39 kJ mol−1, ΔH(Gd-Fe) = −1 kJ mol−1 and ΔH(Gd-Cu) = −22 kJ mol−1 [21]. In our previous study, a CN12 cluster Al-Al6Gd6 was obtained from Al2Gd [19]. Similarly, as shown in figure 2, a CN12 cluster Fe-Fe6Gd6 can be obtained from the phase Fe2Gd. The Goldschmidt radii of Gd, Fe and Al are 0.180 nm, 0.128 nm and 0.143 nm, respectively. Analogously to the calculation for the Nb-Fe cluster, the actual radius ratios of Fe-Fe6Gd6 and Al-Al6Gd6 are 0.831 and 0.885, and the deviations between the calculated and theoretical values are −7.87% and −1.89%; both Fe-Fe6Gd6 and Al-Al6Gd6 can therefore be treated as topologically packed clusters. The new topologically packed clusters introduced by Gd microalloying can increase the degree of atomic packing, thus enhancing the GFA of the base composition. Similarly, Hf microalloying has been proven effective in enhancing the GFA of ZrAlFeCu amorphous alloys [2], and this can also be explained via clusters. The enthalpies of mixing of the atomic pairs Hf-Zr, Hf-Al, Hf-Fe and Hf-Cu at equi-atomic compositions are, respectively, ΔH(Hf-Zr) = 0 kJ mol−1, ΔH(Hf-Al) = −39 kJ mol−1, ΔH(Hf-Fe) = −21 kJ mol−1 and ΔH(Hf-Cu) = −17 kJ mol−1 [21]. The enthalpies of mixing of the Hf-Al, Hf-Fe and Hf-Cu binary pairs are negative, which indicates a tendency to gather together and form clusters. As shown in figure 3, an Archimedean octahedral antiprism CN10 cluster Al-Al2Hf8 is obtained from the phase AlHf2.
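The radius-ratio arithmetic used above, and for the Hf clusters discussed next, is easy to reproduce. The sketch below recomputes the ratios and deviations from the Goldschmidt radii and the ideal CN10/CN12 ratios quoted from Ref. [23]; small last-digit differences from the values in the text can arise from rounding.

```python
# Goldschmidt radii (nm) and ideal center-to-shell ratios for CN10/CN12 [23].
R = {'Fe': 0.128, 'Al': 0.143, 'Nb': 0.147, 'Gd': 0.180, 'Hf': 0.159}
IDEAL = {10: 0.799, 12: 0.902}

def ratio(center, shell):
    """Center-to-shell radius ratio; shell = {element: count in first shell}."""
    cn = sum(shell.values())
    shell_avg = sum(R[el] * k for el, k in shell.items()) / cn
    rr = R[center] / shell_avg
    dev = (rr - IDEAL[cn]) / IDEAL[cn] * 100
    return rr, dev

clusters = {
    'Fe-Nb7.5Fe4.5': ('Fe', {'Nb': 7.5, 'Fe': 4.5}),
    'Fe-Fe6Gd6':     ('Fe', {'Fe': 6, 'Gd': 6}),
    'Al-Al6Gd6':     ('Al', {'Al': 6, 'Gd': 6}),
    'Al-Al2Hf8':     ('Al', {'Al': 2, 'Hf': 8}),
    'Fe-Fe6Hf6':     ('Fe', {'Fe': 6, 'Hf': 6}),
    'Fe-Fe3Hf9':     ('Fe', {'Fe': 3, 'Hf': 9}),
}
for name, (center, shell) in clusters.items():
    rr, dev = ratio(center, shell)
    print(f"{name}: ratio = {rr:.4f}, deviation = {dev:+.2f}%")
```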
Two CN12 clusters, Fe-Fe6Hf6 and Fe-Fe3Hf9, are obtained from the phases Fe2Hf and FeHf2. The Goldschmidt radii of Hf, Fe and Al are 0.159 nm, 0.128 nm and 0.143 nm, respectively. The ideal radius ratios of CN10 and CN12 clusters are 0.799 and 0.902 [23]. For the Al-Al2Hf8, Fe-Fe6Hf6 and Fe-Fe3Hf9 clusters, the actual radius ratios are 0.918, 0.892 and 0.846, and the deviations between the calculated and theoretical values are 14.89%, −1.11% and −6.21%. It can be seen that Fe-Fe6Hf6 and Fe-Fe3Hf9 are topologically packed clusters. For the Hf-Cu binary system, a series of topologically packed clusters, Cu-Cu8Hf4, Cu-Cu7Hf5 and Cu-Cu5Hf5, has been found and used to design good glass formers in the Cu-Hf-Al system [25]. Moreover, Hf and Zr are in the same group of the periodic table and share similar atomic radii and similar physical and chemical properties. Hf might take the place of Zr in the Zr-based clusters, changing the Zr-Al, Zr-Fe and Zr-Cu binary clusters into Zr/Hf-Al, Zr/Hf-Fe and Zr/Hf-Cu clusters, which might enhance the degree of topological packing. It can be seen that Nb, Gd and Hf microalloying can all enhance the GFA of ZrAlFeCu amorphous alloys. On the one hand, from the point of view of clusters, the topological clusters newly introduced by the addition of Nb/Gd/Hf may enhance the degree of atomic packing, which is beneficial for glass formation. On the other hand, from the point of view of entropy, the addition of Nb/Gd/Hf raises the degree of disorder of the system, making the precipitation of crystalline phases more difficult, which is also beneficial for glass formation. It can be inferred that, owing to the topologically packed Al-Al8Nb4, Fe-Nb7.5Fe4.5, Fe-Fe6Gd6, Al-Al6Gd6, Fe-Fe3Hf9 and Fe-Fe6Hf6 clusters, Nb, Gd and Hf microalloying might be effective in enhancing the GFA of amorphous alloys containing Al or Fe. Furthermore, these Hf-based and Gd-based binary topologically packed clusters could help researchers further develop Hf-based and Gd-based glass formers. Figure 4 shows the DSC curves of the as-cast (Zr0.6032Cu0.2256Fe0.0995Al0.0717)100-xNbx (x = 0, 1, 2, 3, 4) metallic glass series, showing their crystallization and melting behaviors. As shown in figure 4, the glass transition temperatures (Tg), crystallization temperatures (Tx), solidus temperatures (Tm) and liquidus temperatures (Tl) of these alloys are marked with arrows, and the thermal parameters are given in table 1. A series of typical thermal parameters, Trg (= Tg/Tl) [26], γ (= Tx/(Tg + Tl)) [27] and γm (= (2Tx − Tg)/Tl) [28], has been proposed to predict glass-forming ability; these parameters are summarized in table 2. The crystallization behaviors of the as-cast (Zr0.6032Cu0.2256Fe0.0995Al0.0717)100-xNbx (x = 0, 1, 2, 3, 4) metallic glasses are shown in figure 4(a); the heating rate is 40 K min−1. It can be seen that, as the temperature increases, all of the curves exhibit one distinct endothermic plateau and two distinct exothermic peaks, corresponding to the glass transition and crystallization processes, respectively, showing typical amorphous crystallization behavior. Combining figure 4(a) and table 1, it can be found that the glass transition temperature (Tg) of this system increases as the Nb content increases.
When the Nb content does not exceed 3 at%, the crystallization temperature (Tx) of this system increases with the Nb content, whereas when the Nb content increases to 4 at% the crystallization temperature decreases slightly. Figure 4(b) shows the melting behaviors of the (Zr0.6032Cu0.2256Fe0.0995Al0.0717)100-xNbx (x = 0, 1, 2, 3, 4) metallic glasses. It can be seen that a minor addition of Nb does not change the melting process of Zr60.32Cu22.56Fe9.95Al7.17. All of the curves show one distinct endothermic peak and one weak endothermic peak, indicating that these alloys have multiple melting phases. It can be seen that, with the addition of Nb, the supercooled liquid region ΔT decreases and the thermal stability deteriorates; however, moderate minor Nb addition can enhance the glass-forming ability. It can be inferred that, in the Zr-Al-Fe-Cu-Nb system, the width of the supercooled liquid region ΔT does not properly describe the glass-forming ability. As stated above, a series of thermal parameters, Trg, γ and γm, has been proposed to predict glass-forming ability; however, combining table 2 and figure 1, none of these parameters represents the glass-forming ability well either. As can be seen in figure 5 and table 2, with the addition of Nb the supercooled liquid region ΔT decreases and the thermal stability deteriorates. As analyzed above, the enthalpies of mixing of the atomic pairs Nb-Zr and Nb-Cu are positive or close to zero, so Nb tends to aggregate. With excessive Nb doping this aggregation is enhanced, which promotes the formation of Nb-rich clusters, increases the possibility of nucleation and precipitation, and decreases the thermal stability of the alloys in the supercooled liquid region, thus narrowing the supercooled liquid region in the DSC curves. The number of short-range-order clusters and the activation energy are closely related [29]. Based on the DSC curves of (Zr0.6032Cu0.2256Fe0.0995Al0.0717)100-xNbx (x = 0, 1, 2, 3, 4) amorphous ribbons at heating rates of 0.167, 0.333, 0.5, 0.667 and 0.833 K s−1, and under the guidance of the Kissinger equation [30], ln(β/Tp²) = −Ep/(RTp) + const, the corresponding Kissinger plots of (Zr0.6032Cu0.2256Fe0.0995Al0.0717)100-xNbx (x = 0, 1, 2, 3, 4) are shown in figure 6, from which the activation energies for crystallization growth, Ep, were obtained. For x = 0, 1, 2, 3 and 4, Ep is 190.5, 218.9, 218.0, 222.2 and 282.6 kJ mol−1, respectively. Compared with the primary composition Zr60.32Cu22.56Fe9.95Al7.17, Nb microalloying therefore increased the activation energy for crystallization growth. Ep stands for the activation energy of the growth of the crystal phases, and one of the steps of crystal growth is breaking down the local structure (clusters) to form ordered crystalline phases. However, the clusters discussed in this paper, obtained according to the Miracle theory, possess special asymmetry and a high degree of topological packing. Firstly, from a kinetic standpoint, the highly packed clusters would increase the viscosity of the molten alloy, which would increase the difficulty of growing crystalline phases.
Secondly, from the point of view of energy, the highly packed clusters would also decrease the thermodynamic free volume, thereby decreasing the energy of the system and leading to a more stable state, which would also increase the difficulty of growing crystalline phases [25]. Thirdly, from the point of view of cluster shape, forming ordered crystalline phases requires breaking down more clusters owing to the special asymmetry of the obtained clusters, which would also increase the difficulty of crystalline phase growth. Nb microalloying might bring more topologically packed clusters and, as analyzed above, these clusters would greatly increase the difficulty of crystal phase growth. Therefore, the activation energy of crystal phase growth, Ep, increases with Nb doping. As shown in figure 7, during the compression process the (Zr0.6032Cu0.2256Fe0.0995Al0.0717)100-xNbx (x = 0, 1, 2, 3) alloys exhibit both elastic and plastic strain. However, with 4 at% Nb addition the compressive deformation behavior displays a ductile-to-brittle transition; the compressive deformation behavior is thus sensitive to composition. The mechanical parameters of the as-cast (Zr0.6032Cu0.2256Fe0.0995Al0.0717)100-xNbx (x = 0, 1, 2, 3, 4) glassy alloy series are shown in table 3, where σy, σm and εf represent the yield stress, maximum compressive strength and fracture strain, respectively. As can be seen in table 3, moderate Nb microalloying can enhance the fracture strain. As can be seen in figure 8(c), for the primary alloy composition Zr60.32Cu22.56Fe9.95Al7.17 there is a small number of parallel and crossed shear bands, mostly oriented in one direction; a certain number of shear bands corresponds to high plasticity. With a further 2 at% Nb doping, however, the number of shear bands increases significantly, and the shear bands of (Zr0.6032Cu0.2256Fe0.0995Al0.0717)98Nb2 run in different directions. As can be seen in figure 9(b), interaction, restraint and bifurcation take place among the shear bands in different directions. Compared with the primary alloy composition Zr60.32Cu22.56Fe9.95Al7.17, the shear bands of (Zr0.6032Cu0.2256Fe0.0995Al0.0717)98Nb2 are denser; these crossed, bifurcated and denser shear bands in different directions correspond to the increase in compressive plasticity. Conversely, with 4 at% Nb doping, no shear bands are observed for (Zr0.6032Cu0.2256Fe0.0995Al0.0717)96Nb4. It can thus be readily seen that the shear bands are closely related to plasticity.
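The Kissinger analysis behind the Ep values reported above reduces to a linear fit of ln(β/Tp²) against 1/Tp. The sketch below shows that arithmetic with made-up peak temperatures (the measured Tp values are not reproduced here), so the resulting Ep is only illustrative.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Heating rates used above (K/s) and hypothetical exothermic peak
# temperatures Tp (K); the Tp values are assumptions, not measurements.
beta = np.array([0.167, 0.333, 0.5, 0.667, 0.833])
Tp   = np.array([770.0, 785.0, 793.0, 800.0, 806.0])

# Kissinger: ln(beta / Tp^2) = -Ep/(R*Tp) + const, i.e. a line in 1/Tp.
y = np.log(beta / Tp**2)
slope, intercept = np.polyfit(1.0 / Tp, y, 1)
Ep = -slope * R / 1000.0     # kJ/mol; ~2.2e2 with these made-up temperatures
print(f"Ep = {Ep:.0f} kJ/mol")
```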
2022-07-10T15:14:59.224Z
2022-07-08T00:00:00.000
{ "year": 2022, "sha1": "4fe0efd0f30d28129e0e0a206bce61369223eba5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2053-1591/ac7fe2", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "2bf8c78c51369406b4a9c6676b948217d3e4ef17", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
102242173
pes2o/s2orc
v3-fos-license
Efficient Production of Acetic Acid from Nipa (Nypa fruticans) Sap by Moorella thermoacetica (f. Clostridium thermoaceticum)

To valorize the underutilized nipa sap, composed mainly of sucrose, glucose and fructose, acetic acid fermentation by Moorella thermoacetica was explored. Given that M. thermoacetica cannot directly metabolize sucrose, we evaluated various catalysts for the hydrolysis of this material. Oxalic acid and invertase exhibited high levels of activity towards the hydrolysis of the sucrose in nipa sap to glucose and fructose. Although these two methods consumed similar levels of energy for the hydrolysis of sucrose, oxalic acid was found to be more cost-effective. Nipa saps hydrolyzed by these two catalysts were also fermented by M. thermoacetica. The results revealed that the two hydrolyzed sap mixtures gave 10.0 g/L of acetic acid from the 10.2 g/L of substrate sugars in nipa sap. Notably, the results showed that the oxalic acid catalyst was itself fermented to acetic acid, which avoided the need to remove the catalyst from the product stream. Taken together, these results show that oxalic acid hydrolysis is superior to enzymatic hydrolysis for the pretreatment of nipa sap. The acetic acid yield achieved in this study corresponds to a conversion efficiency of 98%, which is about 3.6 times higher than that achieved using the traditional methods. The process developed in this study therefore has high potential as a green biorefinery process for the efficient conversion of sucrose-containing nipa sap to bio-derived acetic acid.

INTRODUCTION

Nipa (Nypa fruticans) is a high sugar-yielding palm that can be found along coastal areas, river estuaries and mangrove forests with brackish water environments [1]. By removing the infructescence of this plant, it is possible to collect a sugar-rich sap from its stalk on a daily basis for a minimum of 60 days in the Philippines and up to 340 days in Malaysia [2]. Furthermore, nipa sap production ranges from 0.5 to 2.5 L/palm/day with an average sugar content of 16.4 w/v% [3,4]. According to Tamunaidu et al. [5], the major components of nipa sap are sucrose, glucose and fructose [6], which can be used for different purposes. For instance, freshly tapped nipa sap is consumed as a popular sweet drink in the coastal areas of Southeast Asia [7]. Additionally, some villages in Thailand earn a living from tapping and selling nipa sap for wine and sugar production [7]. In the Philippines, nipa sap is poured into huge earthen jars, where it is kept for up to a month for acetous fermentation to produce vinegar [8]. Despite its high potential as a source of raw materials, the nipa palm is tapped only to a limited extent by local communities for domestic use, with no reported international trade [9]. Nipa is an abundant plant material in Asia, where it is regarded as a non-threatened and underutilized sugar-yielding palm [10]. It is noteworthy that nipa sap was recently assessed as a high-potential feedstock for bioethanol production [11]. In fact, an estimated annual ethanol yield of 3,600-22,400 L/ha/year from nipa sap [12] makes it an attractive raw material for ethanol production compared to sugarcane and corn, which afford yields of 5,300-6,500 and 3,100-3,900 L/ha/year, respectively [13]. Nipa sap could therefore become a promising source of sugars for manufacturing bioproducts.
Acetic acid is one of the most important industrial reagents in the world. It has a broad range of applications, including its use as a building block for the synthesis of monomeric vinyl acetate, ethyl acetate, butyl acetate and acetic anhydride, as well as its use as a solvent for the production of purified terephthalic acid [14,15]. However, acetic acid is mainly produced from petrochemical resources via methanol carbonylation and the liquid-phase oxidation of butane, naphtha and acetaldehyde [16]. In light of dwindling fossil fuel supplies and the push for new, environmentally friendly manufacturing processes based on renewable resources, there has been considerable interest in the production of acetic acid via bio-based routes [15]. Traditional processes for the production of vinegar from nipa sap involve a two-stage fermentation in which alcohol is initially formed by yeast and subsequently converted to acetic acid by aerobic bacteria [17,18]. As shown in equation (1), two moles of carbon are lost during this process in the form of CO2, leading to a low carbon conversion efficiency:

C6H12O6 + 2O2 → 2CH3COOH + 2CO2 + 2H2O (1)

In contrast, among the acetogenic bacteria, Moorella thermoacetica can directly convert glucose and fructose to acetic acid as a single product in a stoichiometric manner [19,20], according to the equation provided below:

C6H12O6 → 3CH3COOH (2)

Notably, this fermentation process does not emit any CO2, and can therefore achieve a higher conversion efficiency than the traditional process described above. Although M. thermoacetica has been investigated extensively for the conversion of model compounds [21-24], it has not been investigated for its conversion of natural sugar sources such as nipa sap, because these contain non-fermentable sucrose [19,21]. With this in mind, the aim of the current study was to evaluate the potential use of nipa sap for the efficient bio-derived production of acetic acid.

Materials

The nipa sap used in this study was collected from Sarawak in Malaysia. For conservation and transportation purposes, the sap was concentrated by heating to give a viscous liquid. Prior to the experiments described in this study, 200 g of concentrated nipa sap was diluted with deionized water to a total volume of 1 L, which gave a sugar concentration similar to that of the original sap. Invertase solution (EC No. 3.2.1.26) with a minimal activity of 4 units/mL was purchased from Wako Pure Chemical Industries Ltd. (Osaka, Japan). Freeze-dried cultures of M. thermoacetica, also known as Clostridium thermoaceticum (ATCC 39073), were obtained from the American Type Culture Collection (Manassas, VA, USA).

Acid Hydrolysis

Three different acids, namely acetic acid, oxalic acid and hydrochloric acid, were evaluated as catalysts for the hydrolysis of sucrose in the pretreatment step. For the hydrolysis reactions conducted with acetic acid, 0-20 g/L of the acid catalyst was added to the nipa sap in a 20 mL vial, and the resulting mixture was heated in an autoclave at 121 °C for various reaction times. The reaction mixtures were then removed from the autoclave and rapidly chilled in a refrigerator to room temperature before being neutralized (pH 7.0 ± 0.1) by the addition of an appropriate buffer solution. The hydrolysis of sucrose in nipa sap was conducted in a similar manner with 1.5-4.5 g/L oxalic acid and 3.0 g/L hydrochloric acid.
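Acid-catalysed sucrose hydrolysis of the kind described above is commonly modelled as pseudo-first-order in sucrose; the sketch below fits such a model to a made-up time course (the rate constant, times and concentrations are assumptions, not measurements from this study).

```python
import numpy as np

# Hypothetical residual sucrose (g/L) over time (min) during acid hydrolysis;
# C(t) = C0 * exp(-k t) for a pseudo-first-order reaction.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
C = np.array([78.4, 47.5, 28.8, 17.5, 10.6])

# Linearize: ln(C/C0) = -k t, then fit the slope.
k = -np.polyfit(t, np.log(C / C[0]), 1)[0]
print(f"k = {k:.3f} 1/min, half-life = {np.log(2) / k:.1f} min")
```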
Enzymatic Hydrolysis

The enzymatic hydrolyses were performed in a metal bath at 25 and 60 °C with a 3.92 v/v% invertase solution. The reactions were subsequently terminated by heating in boiling water for 5 min to destroy the invertase activity.

Acetic Acid Fermentation

The revival of M. thermoacetica from a freeze-dried state and the subsequent preparation of the corresponding inoculum were conducted according to the procedures reported by Nakamura et al. [22]. M. thermoacetica was used to produce acetic acid in a pH-controlled batch fermentation process, which was conducted in a 500 mL DPC-2A fermentation system (Able Corporation, Tokyo, Japan) at 60 °C with a stirring rate of 300 rpm. The nutrients in the broth were similar to those used by Rabemanolontsoa et al. [25] and consisted of the following chemicals: 5 g/L yeast extract, 0.1 g/L cysteine·HCl·H2O, 1 g/L (NH4)2SO4, 0.25 g/L MgSO4·7H2O, 0.04 g/L Fe(NH4)2(SO4)2·6H2O, 0.24 mg/L NiCl2·6H2O, 0.29 mg/L ZnSO4·7H2O, 0.017 mg/L Na2SeO3 and 1 mg/L resazurin. The nutrients were weighed and dissolved in 166 mL of Milli-Q water. The fermentation systems and nutrient solutions were autoclaved at 121 °C for 20 min to allow for their sterilization. An anaerobic environment was then prepared by filling a glove box with N2 gas, and a working volume of 200 mL was created in each of the fermentation systems by pouring 14 mL of hydrolyzed nipa sap, 20 mL of M. thermoacetica inoculum and 166 mL of nutrient solution into each fermentation system. The fermentation process conducted under an atmosphere of N2 was maintained at pH 6.5 ± 0.1 by automatic titration with 2 N NaOH. Samples were collected from the fermentation broth at specific time points through a sampling port with a sterile syringe, and then stored at -31 °C prior to being analyzed by high-performance liquid chromatography (HPLC) on an LC-10A HPLC system (Shimadzu, Kyoto, Japan). Blank samples consisting of Milli-Q water and the catalysts instead of the hydrolyzed nipa sap were also run under the same conditions to correct for the acetic acid produced by the nutrients, inocula and catalysts.

Analyses

The concentrations of the individual sugars and ethanol in the samples were analyzed by HPLC using a Shodex Sugar KS-801 column (Showa Denko, Kanagawa, Japan). The column was eluted with water at a flow rate of 1.0 mL/min with a column temperature of 80 °C. The concentrations of the organic acids present in the samples were determined by HPLC analysis over an Aminex HPX-87H column (Bio-Rad, Hercules, CA, USA), which was eluted with a 5 mM aqueous H2SO4 solution at a flow rate of 0.6 mL/min and a column temperature of 45 °C. The inorganic elements in the nipa sap were analyzed by ion chromatography (IC) and inductively coupled plasma mass spectrometry (ICP-MS) at AU Techno Services Co., Ltd. (Osaka, Japan).

To evaluate the hydrolysis efficiency, the sucrose conversion to glucose and fructose and the glucose and fructose yield were estimated using the following equations:

Sucrose conversion (%) = [sucrose hydrolyzed (g/L) / initial sucrose (g/L)] × 100 (3)

Glucose and fructose yield (%) = [total glucose and fructose produced (g/L) / total theoretical maximum glucose and fructose (g/L)] × 100 (4)

Process Simulation

The process responsible for the production of acetic acid from nipa sap was simulated using version 9.3 of the Pro/II software with a nipa sap feed rate of 750 kg/h, as shown in Figure 1. This simulation was designed as a two-stage process, including (i) the acid or enzymatic hydrolysis of sucrose; and (ii) the fermentation of acetic acid.
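As a worked illustration of equations (3) and (4), the helper functions below compute the two metrics from HPLC concentrations. The theoretical maximum in equation (4) is taken here to include the water added during hydrolysis (1 g of sucrose gives up to ~1.053 g of glucose plus fructose); this interpretation is an assumption, since the paper does not spell out the denominator.

```python
M_SUCROSE = 342.30          # g/mol
M_HEXOSES = 2 * 180.16      # g/mol of glucose + fructose released per sucrose

def sucrose_conversion(c0_sucrose, ct_sucrose):
    """Equation (3): percentage of the initial sucrose hydrolyzed."""
    return (c0_sucrose - ct_sucrose) / c0_sucrose * 100.0

def glucose_fructose_yield(hexoses_produced, c0_sucrose):
    """Equation (4): glucose + fructose produced vs. the theoretical maximum."""
    theoretical_max = c0_sucrose * M_HEXOSES / M_SUCROSE
    return hexoses_produced / theoretical_max * 100.0

# With the sap's initial 78.4 g/L of sucrose (residual and product values illustrative):
print(sucrose_conversion(78.4, 1.6))        # ~98% conversion
print(glucose_fructose_yield(80.9, 78.4))   # ~98% yield
```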
For the acid hydrolysis reactions, nipa sap was mixed with one of the three acids, and the resulting mixture was fed into an autoclave operating at 121 °C for simultaneous hydrolysis and sterilization. For the enzymatic treatment process, the hydrolysis and sterilization treatments were conducted separately because they required different operating temperatures. To save energy, the stream derived from the hydrolyzed nipa sap was used to heat the untreated nipa sap prior to it being sent to the autoclave. After the hydrolysis reaction, the hydrolyzed nipa sap was depressurized to atmospheric pressure and fed together with nutrients and an aqueous NaOH solution into a fermentation system at 60 °C and 101 kPa. The aqueous nutrient and NaOH solutions were preheated by the product from the fermentation system to save energy. The non-random two-liquid (NRTL) model was selected as the best model for the thermodynamic calculations conducted in the current study. Energy losses from the pipes and the individual units were neglected, and the pump efficiency was set at 80%. Furthermore, the counter-flow heat exchangers were configured to operate with a minimum internal temperature approach (ΔTmin) of 10 °C.

Chemical Composition of Nipa Sap

The chemical composition of nipa sap from Malaysia is shown in Table 1. The results in the table show that Malaysian nipa sap is predominantly composed of sucrose (78.4 g/L), glucose (32.3 g/L) and fructose (31.4 g/L). Malaysian nipa sap also contains 1.7 g/L of ethanol. The lactic acid found in this nipa sap accounts for 2.1 g/L of its composition and undoubtedly makes a significant contribution to its acidic pH (4.37). According to Nur Aimi et al. [26], the sugars found in nipa sap can be gradually fermented to various products, such as ethanol, lactic acid and/or acetic acid, by several naturally occurring yeasts and bacteria [11]. However, acetic acid was not detected in this case. The total inorganic content of Malaysian nipa sap was determined to be 6.3 g/L, with Na, K and Cl identified as the major elements. A trace amount of Mn was also found in this sap. This result contrasts with those of several previous studies reported by Tamunaidu et al. [3,11,27], where no heavy metals were detected in the sap or any other parts of the nipa palm, including the frond, shell, husk and leaves.

Several other palm saps have been analyzed to date and found to contain similar quantities of inorganic constituents to those found in nipa sap, such as the Na, Mg, K and P contents in the saps of palmyra, coconut and date palms [28]; the K, Mg, Ca, Na and P contents in the sap of the R. hookeri palm [29]; and the Cl, Ca, Mg, Na, Mn and P contents in the sap of oil palm trunk [30].

The concentrations of Mg and Ca in nipa sap were 0.049 and 0.007 g/L, respectively. These values were similar to those found in palmyra, coconut and date palm saps, which are 0.051, 0.022 and 0.030 g/L for Mg, and 0.011, 0.016 and 0.013 g/L for Ca, respectively [28]. However, nipa sap has uniquely high Na and Cl contents of 0.9 and 2.6 g/L, whereas all of the other palm saps contain only 0.043-0.054 g/L of Na and no Cl. Although the inorganic elements found in sugarcane sap [3] are also present in nipa sap, the latter has much higher Na and Cl contents. These results therefore explain the high tolerance of nipa palm to seawater salts (e.g., NaCl) [31], and highlight nipa palm as the species of choice for seawater agriculture to initiate coastal rehabilitation and restore degraded wetlands [11].
Acid Hydrolysis

Autoclaving at 121 °C is the preferred method for sterilizing nipa sap and preventing contamination during the acetic acid fermentation process. Autoclaving can destroy all organisms and their endospores within 15-20 min [32]. According to Steven et al. [33], the high temperatures used in an autoclave can result in the hydrolysis of sucrose, the extent of which is directly proportional to the hydrogen ion concentration (i.e., it increases with decreasing pH). The combination of sucrose hydrolysis and sterilization by the acid hydrolysis of nipa sap in an autoclave at 121 °C was therefore investigated.

We investigated the addition of an acid catalyst to facilitate the hydrolysis of the sucrose present in nipa sap. We initially evaluated the use of acetic acid as a catalyst for the hydrolysis of sucrose because this acid is the desired product of the anaerobic fermentation of nipa sap by M. thermoacetica. In this way, it was envisaged that the use of acetic acid as a catalyst would avoid the need to add another chemical to the fermentation process and subsequently remove it from the product stream.

Figure 2 shows the sucrose conversions achieved using a variety of different acetic acid concentrations as a function of time. When the hydrolysis was performed without a catalyst (Figure 2a), the conversion of sucrose in nipa sap reached 86% within 120 min. As shown in Table 1, the lactic acid present in nipa sap may account for its acidic pH and could also act as an acid catalyst for the hydrolysis of sucrose. Although the different components of nipa sap could exhibit catalytic activity towards the hydrolysis of sucrose, it was not possible to achieve the complete hydrolysis of sucrose under these conditions after 120 min. The hydrolysis of more than 98% of the sucrose in nipa sap (corresponding to a concentration of 78.4 g/L) required a treatment time of at least 120 or 60 min with 3.0-5.0 or 10-20 g/L acetic acid, respectively. The sucrose conversion improved with increasing acetic acid concentration. However, the use of a high concentration of acetic acid could inhibit the downstream fermentation of nipa sap by M. thermoacetica [34], and the use of a prolonged hydrolysis treatment would require much more energy for autoclaving. Hence, the use of a stronger organic acid was explored to improve the efficiency of this process.

Oxalic acid is one of the strongest of all the known organic acids, with pKa values of 1.27 and 4.28 [35]. Based on its strong acidity, oxalic acid has recently been suggested as an alternative to mineral acids such as sulfuric acid and hydrochloric acid for the hydrolysis of lignocellulosic biomass because it shows a higher catalytic efficiency [36,37]. With this in mind, we investigated the use of oxalic acid as a catalyst for this process.

Figure 2b shows the sucrose conversions achieved for the hydrolysis of nipa sap using various oxalic acid concentrations. The results revealed that 3.0 g/L of oxalic acid could hydrolyze 98% of the sucrose in nipa sap within 20 min, corresponding to the time required for the complete sterilization of nipa sap. These hydrolysis conditions are therefore very practical. The results also showed that the speed of the sucrose conversion increased with increasing oxalic acid concentration, which can be explained in terms of the associated decrease in the pH of the nipa sap, as shown in Table 2.
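Because sucrose inversion under these conditions is acid-catalyzed, the apparent rate constant scales roughly with the hydronium ion concentration, so even a modest pH drop translates into a large rate gain. The toy calculation below illustrates this scaling with assumed pH values; the measured values are those reported in Table 2.

```python
# Rough rate scaling for acid-catalyzed sucrose inversion: k ~ [H3O+].
# Both pH values here are illustrative assumptions, not the Table 2 data.
ph_no_catalyst = 4.4   # close to the native pH of the sap
ph_oxalic = 2.0        # assumed pH after adding ~3 g/L oxalic acid

h_ratio = 10.0 ** (ph_no_catalyst - ph_oxalic)
print(f"expected rate acceleration: ~{h_ratio:.0f}x")  # ~250x for a 2.4-unit pH drop
```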
Based on these results, oxalic acid was selected as the optimum catalyst for the hydrolysis of sucrose because it allowed for a considerable decrease in the treatment time. We also investigated the hydrolysis of the sucrose in nipa sap with hydrochloric acid, a strong mineral acid, and compared the results with those of acetic acid and oxalic acid. As shown in Table 2, hydrochloric acid showed a higher level of activity towards the hydrolysis of sucrose compared with oxalic acid and acetic acid. However, the glucose and fructose yields (92%) achieved following the treatment of nipa sap with 3.0 g/L of hydrochloric acid for 10 min were inferior to the sucrose conversion (98%), which indicated that hydrochloric acid could be triggering the decomposition of the monosugars produced during this process. Hydrochloric acid is also well known to be highly corrosive [38], and would therefore need to be neutralized or removed from the mixture prior to the acetic acid fermentation process. Based on these disadvantages, we concluded that hydrochloric acid was unsuitable for the hydrolysis of sucrose compared with oxalic acid.

To clarify the kinetics of the acid hydrolysis reaction in the autoclave at 121 °C, it was assumed that the reaction followed first-order kinetics as a function of the initial (C0) and remaining (Ct) sucrose concentrations over time (t):

-ln(Ct/C0) = kt (5)

where k is the apparent first-order rate constant. Figure 3 shows the effects of various acetic acid and oxalic acid concentrations on the reaction kinetics for the hydrolysis of sucrose. When the hydrolysis was conducted without a catalyst (0 g/L), the reaction showed good linearity and therefore appeared to obey first-order kinetics. At higher catalyst concentrations, the hydrolysis reactions showed reasonable linearity for the first few minutes, but not for longer treatment times (e.g., 60 min for 3.0 g/L of acetic acid and 10 min for 3.0 g/L of oxalic acid). The lack of linearity in these cases can be explained in terms of the side reactions occurring under the high-temperature conditions, as presented earlier. According to Khajavi et al. [39], the hydrolysis reactions of monosugars at high temperatures do not obey first-order kinetics because of the acceleration in the rates of the hydrolysis and decomposition reactions with decreasing pH during the hydrolysis. In the case of the hydrolysis of nipa sap without a catalyst, the hydrolysis of sucrose followed first-order kinetics, most likely as a result of the relatively slow reaction rate.

Enzymatic Hydrolysis

To compare the acid hydrolysis with an enzymatic hydrolysis, we investigated the pretreatment of nipa sap with invertase. As shown in Figure 2c, invertase proved to be an effective catalyst for the hydrolysis of sucrose even at room temperature, with 3.92 v/v% invertase hydrolyzing 98% of the sucrose within 30 min at 25 °C and within only 5 min at 60 °C. Either of these temperatures could therefore be used for this process. In addition, the optimum temperature for the fermentation of acetic acid by M. thermoacetica is 60 °C [19]. This result therefore indicated that it could be possible to perform the enzymatic hydrolysis together with the fermentation of acetic acid at 60 °C.
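For readers who want to reproduce the analysis behind equation (5) and Figure 3, the apparent rate constant k can be estimated by a least-squares fit of -ln(Ct/C0) against time with the intercept fixed at the origin. The data points below are made up for illustration and are not read off Figure 3.

```python
import numpy as np

# Illustrative Ct/C0 data for sucrose (NOT the measured values from Figure 3).
t = np.array([0.0, 5.0, 10.0, 20.0])          # treatment time, min
c_ratio = np.array([1.00, 0.45, 0.20, 0.04])  # remaining/initial sucrose

# Equation (5): -ln(Ct/C0) = k*t; least-squares slope through the origin.
y = -np.log(c_ratio)
k = np.sum(y * t) / np.sum(t * t)

print(f"k = {k:.3f} 1/min, half-life = {np.log(2) / k:.1f} min")
```

A systematic upward curvature of -ln(Ct/C0) at long times, as seen at the higher catalyst loadings, signals the departure from first-order behavior discussed above.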
Comparison of the Different Catalysts

Table 2 shows the different catalysts used for the hydrolysis of the sucrose in nipa sap to glucose and fructose, together with the resulting pH values. The glucose and fructose yields increased during the initial phase of the hydrolysis, but decreased with prolonged treatment periods and higher catalyst concentrations. For instance, the glucose and fructose yields for the hydrolysis of nipa sap with 3.0 g/L of oxalic acid decreased from 98 to 94% when the hydrolysis treatment time was extended from 10 to 60 min. Furthermore, increasing the concentration of oxalic acid from 3.0 to 4.5 g/L led to a reduction in the yields of glucose and fructose to 91% after 60 min. For an ideal hydrolysis reaction, the glucose and fructose yields should be equal to the sucrose conversion. However, prolonging the treatment time in the autoclave led to a slight decrease in the total sugar content of the nipa sap. Previous studies involving superheated water have shown that monosaccharides can decompose to give other compounds, such as 5-hydroxymethylfurfural (5-HMF), furfural and organic acids (e.g., formic acid, levulinic acid, acetic acid and lactic acid) [39-41], or isomerize to give different monosugars [42]. According to Nakamura et al. [22], organic compounds such as 5-HMF, furfural, lactic acid and formic acid can be fermented by M. thermoacetica to acetic acid. Consequently, even if these decomposition products were present in the nipa sap hydrolyzate, they would be fermented to acetic acid by M. thermoacetica.

The results presented above for the acid-catalyzed hydrolysis of the sucrose in nipa sap revealed that oxalic acid was more effective than hydrochloric acid because the latter led to the rapid decomposition of the monosugar products. Furthermore, acetic acid was found to be too weak to achieve the rapid and complete hydrolysis of sucrose. It is noteworthy that the pretreatment of nipa sap with oxalic acid has been reported to be less toxic to subsequent biological steps than acetic or sulfuric acids, and does not produce poisonous odors [35]. The optimal conditions for the acid-catalyzed hydrolysis of the sucrose in nipa sap were determined to be 3.0 g/L of oxalic acid for 20 min, which gave 98% yields of glucose and fructose. In the case of invertase, the glucose and fructose yields were equivalent to the sucrose conversion because of the relatively low temperatures required for the hydrolysis (25 and 60 °C).

These results revealed that the hydrolytic activities of oxalic acid and invertase were similarly good. Table 3 provides a comparison of oxalic acid and invertase in terms of their price and energy consumption characteristics for the hydrolysis of nipa sap.
Given that the process configurations for the two different types of hydrolysis reaction were different, as shown in Figure 1, the different units of these processes would have different energy demands. In the case of the acid-catalyzed hydrolysis, the nipa sap was simultaneously sterilized and hydrolyzed in the same unit. In this way, the heat generated during the exothermic hydrolysis of sucrose led to an increase in the temperature of the nipa sap inside the autoclave. Consequently, the autoclave consumed heat energy at a rate of 94.3 MJ/h. However, in the case of the enzymatic hydrolysis, the hydrolysis and sterilization steps were conducted in different units. The hydrolysis step therefore required cooling energy to remove the excess heat formed by the reaction, whereas the sterilization required much greater heat energy than that required for the acid-catalyzed hydrolysis because the heat of the hydrolysis reaction was not recovered.

The results presented in Table 3 show that the hydrolysis and sterilization steps consumed energy at rates of -1.9 and 100.6 MJ/h, respectively, for the enzymatic hydrolysis. The negative value in this case indicates that heat removal was required. The oxalic acid- and invertase-catalyzed hydrolysis steps consumed 94.4 and 100.7 MJ/h of energy, respectively. Consideration of the whole process, including the fermentation step after the acid- and enzyme-catalyzed hydrolysis reactions, revealed that their total energy consumptions were 378.7 and 383.7 MJ/h, respectively. There were therefore no major differences in the energy consumptions of these two hydrolysis methods. However, as shown in Table 3, invertase is more expensive than oxalic acid (approximately 775-fold more expensive). Based on these considerations, the oxalic acid-catalyzed hydrolysis of nipa sap represents the more economically efficient of these two methods. The effects of these two catalysts on the subsequent acetic acid fermentation process are described in the next section.

Acetic Acid Fermentation of Hydrolyzed Nipa Sap by M. thermoacetica

Nipa sap was hydrolyzed in the presence of 3.0 g/L of oxalic acid at 121 °C for 20 min or 3.92 v/v% invertase at 25 °C for 30 min. The resulting hydrolyzed mixtures were used as substrates for acetic acid fermentation with M. thermoacetica. Like all known acetogens, the activity of M. thermoacetica may be inhibited by high concentrations of acetate ions or protons [45,46]. With this in mind, the total organic concentration of the hydrolyzed nipa sap used in the acetic acid fermentation broth in the current study was decreased to approximately 10 g/L.

Figure 4a shows the batch fermentation profiles for the consumption of glucose and fructose, as well as the production of acetic acid, for the nipa sap hydrolyzed by oxalic acid. The results revealed that the substrates had been completely fermented after 72 h. However, the acetic acid concentration increased only slightly during the first 24 h, before rapidly increasing to 6.11 g/L over the next 24 h, with the highest rate of acetic acid production reaching 0.24 g/L/h. After this point, the production of acetic acid occurred at a much slower rate of 0.02 g/L/h and reached its highest concentration of 10.03 g/L at 217 h. It is noteworthy that the lactic acid and ethanol found in nipa sap may be fermented to acetic acid by M.
thermoacetica [47], suggesting that these compounds could also be making a small contribution to the final acetic acid concentration. Furthermore, oxalic acid was not detected in the fermentation broth after 72 h, which indicated that this material was a substrate for M. thermoacetica. According to Daniel et al. [23], oxalic acid not only supports the growth of M. thermoacetica cells in the absence of supplemental CO2, but is also converted to acetic acid by the bacterium according to the following reaction:

4(COOH)2 → CH3COOH + 6CO2 + 2H2O

This equation shows that 4 moles of oxalic acid can be converted to 1 mole of acetic acid. A theoretical maximum of 0.04 g/L of acetic acid could therefore be produced from a fermentation broth containing 0.21 g/L of oxalic acid.

The acetic acid yield achieved following the hydrolysis of nipa sap with invertase (Figure 4b) was similar to the yield achieved using oxalic acid as the hydrolysis catalyst. The results revealed that all of the glucose and fructose had been completely consumed within 71 h. It is noteworthy, however, that a lag period was observed in the production of acetic acid for the first 23 h, after which the acetic acid concentration increased considerably to 9.88 g/L over 96 h, with an average acetic acid production rate of 0.10 g/L/h. The maximum acetic acid concentration obtained from the 10.22 g/L of organic compounds in the original nipa sap (5.49 g/L sucrose, 2.26 g/L glucose, 2.20 g/L fructose, 0.15 g/L lactic acid and 0.12 g/L ethanol) was 10.05 g/L at 216 h.

Neither of the two catalysts evaluated in the hydrolysis step showed any inhibitory effects towards the subsequent fermentation step with M. thermoacetica, and both gave similar acetic acid yields of approximately 10 g/L. Research towards the development of a fed-batch fermentation process based on these results is currently underway in our laboratory in an attempt to improve the product concentration.

A comparison of the conversion efficiencies achieved with oxalic acid and invertase for the production of acetic acid revealed that they both gave similarly high values (98%), although the fermentation was slightly faster for the nipa sap hydrolyzed with invertase. It is important to mention that the traditional vinegar production processes used in the Philippines typically produce as little as 4.5-5.5% acetic acid from 15-22% nipa sap because of the CO2 generated during the fermentation process [8]. These traditional processes are much less efficient than the process developed in this study because they only deliver an average conversion efficiency of 27%. The conversion efficiency of the new process described in this study, at 98%, is therefore 3.6 times greater than that of the traditional methods. The hydrolytic pretreatment of nipa sap followed by its anaerobic fermentation with M. thermoacetica therefore represents a considerable improvement in the production of acetic acid from nipa sap compared with traditional methods.

The acetic acid generated using our new method could be used directly as a vinegar, solvent or preservative in the food industry [15,16,48]. This material could also be used as a key intermediate for the production of other value-added products, such as renewable fine chemicals, pharmaceutical products, plastics, synthetic fibers [16], deicers [49], ethanol fuel via its hydrogenation over a metal catalyst [50-52] and microbial oils for biodiesel production [53].
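The headline efficiency figures quoted above follow directly from the reported concentrations; the arithmetic below simply retraces them (the traditional-process efficiency of 27% is the average cited from reference [8]).

```python
acetic_out = 10.05     # g/L acetic acid at 216 h
substrate_in = 10.22   # g/L sugars, lactic acid and ethanol in the diluted sap

efficiency = acetic_out / substrate_in * 100.0
print(f"conversion efficiency: {efficiency:.0f}%")        # ~98%

traditional = 27.0     # average efficiency of the traditional vinegar process, %
print(f"improvement over traditional: {efficiency / traditional:.1f}x")  # ~3.6x
```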
CONCLUSIONS

In this study, we have investigated the hydrolysis of nipa sap and its fermentability to acetic acid. The results revealed that acetic acid acted as an extremely weak catalyst for the hydrolysis step, whereas hydrochloric acid acted as an extremely strong acid catalyst. Oxalic acid and invertase both showed good hydrolytic activities and similar energy requirements for the hydrolysis step, but oxalic acid was determined to be advantageous in terms of its price. Furthermore, oxalic acid may be fermented by M. thermoacetica, thereby avoiding the need to separate the catalyst after the fermentation process. The solution generated following the hydrolysis of nipa sap with oxalic acid or invertase was efficiently fermented to acetic acid by M. thermoacetica. Compared with traditional methods for the production of vinegar, much higher conversion efficiencies to acetic acid were obtained from nipa sap using the conditions developed in this study. The findings of this study demonstrate that nipa sap and several other sucrose-containing biomass materials can be readily used for the production of acetic acid by acid- or enzyme-catalyzed hydrolysis, followed by fermentation with M. thermoacetica, without releasing any CO2. It is therefore envisaged that this process will not only enhance the economic value of nipa palm in rural coastal communities, but also mitigate the environmental burdens associated with this material.

Figure 3: Relationship between the Ct/C0 ratio of sucrose and the treatment time for the hydrolysis of sucrose in nipa sap by (a) various acetic acid concentrations (0, 3.0, 10 and 20 g/L), and (b) various oxalic acid concentrations (0, 1.5 and 3.0 g/L).

Figure 4: Batch fermentation profiles for the production of acetic acid by M. thermoacetica from nipa sap following the hydrolysis of the sap with (a) 3 g/L of oxalic acid at 121 °C or (b) 3.92 v/v% invertase at 25 °C.
Similar risk of cancer in patients younger than 55 years with or without a total hip arthroplasty (THA): a population-based cohort study on 18,771 exposed to THA and 87,683 controls

Background and purpose - Concerns related to a potentially increased risk of cancer after total hip arthroplasty (THA) have frequently surfaced, especially since the novel EU medical device regulation classified cobalt as carcinogenic. We assessed the risk of cancer after THA in a nationwide cohort of patients younger than 55 years at surgery.

Patients and methods - In this population-based longitudinal cohort study, 18,771 individuals exposed to THA were identified in the Swedish Hip Arthroplasty Registry (SHAR) and compared with 87,683 unexposed individuals who were matched by age, sex, and residence. Diagnoses, socioeconomic background, and dates of death were obtained from the Swedish Cancer Register, the National Patient Register, and Statistics Sweden. The primary outcome was the adjusted risk of any cancer after the first THA; secondary outcomes were specific cancer forms.

Results - We found no enhanced adjusted risk of developing any cancer, either in exposed females compared with unexposed females (hazard ratio [HR] 1.1, 95% confidence interval [CI] 0.95-1.2) or in exposed males (HR 1.1, CI 0.99-1.2). When analysing specific cancers, increased adjusted risks were found for thyroid and pancreas cancer in exposed females, and for cancer of the stomach, skin melanoma, and prostate cancer in exposed males.

Interpretation - This study indicates that there is no statistically significant increased overall risk of cancer in young THA-exposed patients. The potentially slightly enhanced risk for specific cancers may be due to residual confounding resulting from risk factors not accounted for, and merits further investigation.
Although THA surgery is generally considered both a safe and an efficient intervention, concerns about a potentially increased risk of cancer have been raised repeatedly (1-3). THA implants release cobalt, chromium, and nickel ions or nanoparticles, which may have carcinogenic effects, and the Medical Device Regulation issued by the European Union classifies cobalt as a carcinogenic substance (4-6). Low concentrations of these metals are found after both cemented and cementless THA with conventional metal-on-polyethylene bearings, and even higher concentrations can be measured in the tissues and blood of patients with large metal-on-metal bearings (7,8). Metal ions may cause chromosomal aberrations in both the peripheral blood and bone marrow of patients with THA (9,10). Additionally, cemented THA exposes patients to the potentially toxic polymer polymethyl-methacrylate, with various additives, constituting the bone cement used to fix THA implants to bone. Observational studies show small increases in the risk of developing hematological or lymphatic malignancies (1,11,12) and an enhanced incidence of solid tumors in the prostate or skin after THA or knee arthroplasty (13-17), but these findings are contradicted by others (14,17-19). Most of the cited studies base their findings on "average" THA populations with a mean age between 60 and 70 years at the time of surgery, but none specifically addresses the cancer risk in younger THA patients, who are exposed to their implants and potentially toxic derivatives for much longer periods of time than elderly patients. We therefore explored the risk of cancer in patients younger than 55 years at the time of primary THA in a population-based study comparing a THA-exposed cohort with an age-, sex-, and residency-matched, unexposed cohort, with adjustment for the confounders comorbidity and socioeconomic background.

Study design and study population

Swedish Hip Arthroplasty Register

The Swedish Hip Arthroplasty Register was established in 1979, but only from 1992 were implant data linked to the patient's individual identification number (PIN). Participants who received at least 1 THA at an age below 55 years between 1992 and 2012 were identified through their PIN in the Swedish Hip Arthroplasty Register. Patients with a THA were grouped by bearing type into those with conventional and those with metal-on-metal bearings (classical resurfacings or stemmed, large metal head THA included). The conventional bearing types included metal-on-polyethylene, which, in accordance with Swedish practice, represented the overwhelming majority of conventional bearings, followed by some ceramic-on-polyethylene and ceramic-on-ceramic bearings. The cohort of THA patients with conventional bearing THA was secondarily subdivided by fixation type into cemented, uncemented, and hybrid fixations, with the latter category comprising both classical and inverse hybrids.

Statistics Sweden

Statistics Sweden provided the control cohort for the individuals exposed to THA. Each was matched by age, sex, and place of residence to 5 unexposed individuals from the general population, as sketched below. Unexposed individuals had to be alive at the date of the first THA surgery of their respective case, the index date. The matching variables (age, sex, and region of residence) were considered appropriate to ensure an equal distribution among the exposed and unexposed individuals (20).
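The 1:5 matching design just described can be illustrated in code. This is only a sketch, not the authors' actual procedure; the column names (pin, birth_year, sex, region, index_date, death_date) are hypothetical, and a real register linkage would also handle details such as not reusing controls across cases.

```python
import pandas as pd

def match_controls(exposed: pd.DataFrame, population: pd.DataFrame,
                   k: int = 5, seed: int = 1) -> pd.DataFrame:
    """For each exposed case, draw k controls with the same birth year, sex
    and region of residence who are alive on the case's index date."""
    matched = []
    for _, case in exposed.iterrows():
        pool = population[
            (population["birth_year"] == case["birth_year"])
            & (population["sex"] == case["sex"])
            & (population["region"] == case["region"])
            & (population["death_date"].isna()
               | (population["death_date"] > case["index_date"]))
        ]
        controls = pool.sample(n=min(k, len(pool)), random_state=seed)
        matched.append(controls.assign(case_pin=case["pin"],
                                       index_date=case["index_date"]))
    return pd.concat(matched, ignore_index=True)
```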
Region of residence at the time of surgery was matched to ensure that environmental factors that could be associated with cancer incidence did not confound the estimates. Age-matching was performed by year of birth; thus some unexposed individuals had died prior to the index date of their respective case and were excluded (Figure 1). Individuals not exposed to a THA at the index date could undergo THA surgery at a later time point, and such individuals (0.3% of the unexposed cohort) were censored at the date of their first THA surgery. Statistics Sweden also provided information on personal incomes, subdivided into 4 categories along quartiles, and on levels of education for the entire cohort. Level of education was separated into 4 categories: a base category including either no school education, less than 9 years of school, or an unknown level of education, followed by the 3 categories of a minimum of 9 years of school education, high school education, or university education.

Swedish Cancer Registry

We obtained cancer diagnoses, disregarding non-melanoma skin cancers and carcinoma in situ, for all participants from the Swedish Cancer Registry, with cancers defined along established main categories (Table 1, see Supplementary data). Participants who had a cancer diagnosis prior to the index date were excluded from the analyses. Registration of a first cancer diagnosis after the index date defined the occurrence of cancer. The time between the index date and the date at which this cancer diagnosis was registered defined the time to onset of the first cancer. When participants suffered from several cancers, we used the time to the occurrence of the first cancer to calculate the risk of developing "any" cancer. Subsequent new cancer forms occurring after the first cancer were used only for the estimation of the risk of developing that specific cancer form.

Swedish Population Register

The Swedish Population Register provided information on age, sex, death, and emigration for all participants. Comorbidities were assessed by collecting diagnosis codes for all participants from the Swedish National Patient Registry (ICD versions 9 and 10) and calculating the Charlson comorbidity index modified by Quan (21). The Charlson comorbidity index was categorized into 3 levels: 0, absence of comorbidities; 1-2, mild comorbidities; and > 2, severe comorbidities.

Observation time

Follow-up started on the index date and ended on the day of death, emigration, censoring, or December 31, 2012, whichever came first. The registers used as sources of our data have previously been described and validated (22-24).

Characteristics of study population

Linking register information on the selected individuals based on their individual PIN enabled us to select a final study population consisting of 18,771 individuals exposed to a THA, with a median follow-up time of 7.9 years, and 87,683 matched, unexposed individuals, with a median follow-up time of 8.1 years. Mean age was 47 years (SD 7; Table 2, see Supplementary data). The exposed individuals were considerably more comorbid than the unexposed, with 87% of exposed individuals having a Charlson comorbidity index of 0, as compared with 98% among the unexposed. The level of education was lower among the exposed, while income distributions were similar in both groups (Table 2, see Supplementary data). Among the exposed, conventional bearings were used in the vast majority, whereas metal-on-metal devices were inserted in 8% of all exposed individuals.
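The three-level grouping of the Quan-modified Charlson index referred to above is simple enough to state as code; a minimal sketch:

```python
def charlson_category(score: int) -> str:
    """Map a Charlson comorbidity index score to the study's three levels."""
    if score == 0:
        return "none"    # absence of comorbidities
    if score <= 2:
        return "mild"    # score 1-2
    return "severe"      # score > 2

assert [charlson_category(s) for s in (0, 1, 2, 3)] == ["none", "mild", "mild", "severe"]
```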
The most common type of implant fixation among conventional bearing THAs was cementless, followed by entirely cemented and hybrid or inverse hybrid fixation (Table 3, see Supplementary data). The most common diagnosis underlying THA surgery among the exposed was primary OA (71%), followed by sequelae after pediatric hip disorders (12%; Table 3, see Supplementary data).

Statistics

Continuous data were described using means, medians, minima, maxima, and standard deviations, as appropriate, and differences between observed and expected counts of categorical data were investigated by the Chi-square test. Cumulative cancer incidences were defined as the number of incident cancers per 100,000 person-years. Cox multivariable regression models were fitted to calculate hazard ratios (HR) with 95% confidence intervals (CI), either unadjusted or adjusted for the matching variables age, sex, and region of residence, and for the confounders Charlson comorbidity index, personal income, and level of education. We also adjusted for the matching variables (age groups, sex, and region of residence at the time of surgery) to avoid bias in the presence of the additional confounders (25,26). Because we have a large population with access to the Swedish Population Register, the number of matching variables is not a limitation in finding appropriate controls, which otherwise can be a disadvantage of matching (25). The assumption of proportionality of hazards was investigated by plotting unadjusted cumulative incidence curves for each specific cancer and for each covariate for exposed and unexposed individuals, and by calculating Schoenfeld residuals. A major deviation from the assumption of proportionality was found for the covariate sex; therefore all further analyses were stratified by this variable. The matching variable "place of residence" contained > 300 levels and was therefore included in the analyses as a stratum variable. Within the exposed cohort, 5,567 individuals received a second, contralateral THA at a later stage. These individuals were analyzed without consideration for subject dependency (27,28). The primary outcome was defined as the occurrence of any cancer after THA surgery. For the secondary outcomes, the adjusted risk of cancer was stratified by the diagnoses underlying surgery or by the type of bearing in separate analyses. In order to analyze the adjusted risk of cancer in the entire population not stratified by sex, we performed an additional sensitivity analysis for females and males grouped together, but with "sex" as a stratum variable in order not to violate the assumption of proportional hazards. The level of significance was set at p < 0.05 in all analyses. R software (R version 3.6.3 (2020-02-29); R Foundation for Statistical Computing, Vienna, Austria) was used.

Sensitivity analyses

The question of whether exposure to large metal-on-metal bearings confers an increased cancer risk has been debated, and we therefore undertook a sensitivity analysis excluding individuals with metal-on-metal devices. We estimated the adjusted risk of developing cancer for the individuals exposed to THA compared with their matched unexposed individuals after excluding all individuals who were exposed to large metal-on-metal bearings, together with their respective unexposed cohort.
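The study fitted the stratified Cox models in R (rms::cph). For illustration, an equivalent model can be sketched with the Python lifelines package; the data frame and its column names here are hypothetical, and this is not the authors' analysis code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis file: one row per individual with follow-up time in
# years, an event indicator (1 = incident cancer), the exposure flag and the
# confounders, plus the stratum variables sex and region.
df = pd.read_csv("cohort.csv")

cph = CoxPHFitter()
cph.fit(
    df,
    duration_col="time",
    event_col="event",
    formula="exposed + age_group + charlson + income + education",
    strata=["sex", "region"],   # region (>300 levels) and sex enter as strata
)
cph.print_summary()             # hazard ratios with 95% confidence intervals
cph.check_assumptions(df)       # Schoenfeld-residual-based proportionality check
```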
The exposure to THA might not confer an immediately enhanced risk of cancer, and we therefore performed an additional sensitivity analysis investigating the risk of any cancer divided into different time periods after the index date (0-1, 1-5, 5-10, > 10 years). A final sensitivity analysis was performed on the entire population of males and females grouped together, using sex as a stratum variable (command "strat" in function "cph" in the R package "rms") in order not to violate the assumption of proportionality.

Ethics, registration, data sharing plan, funding, and potential conflicts of interest

Ethical approval for this study was obtained from the Regional Ethical Review Board in Gothenburg (2013: 360-13). In Sweden, no individual written consent is required for the collection of data in the registries mentioned above, but in accordance with the Swedish Patient Data Law of 2009 and the Personal Data Act of 1998, everyone has the option to have collected data erased at any time. We are restricted in sharing the underlying dataset, as the study was approved on the grounds of ensuring the confidentiality of sensitive patient data, owing to national regulations. However, data can be obtained from the register authorities upon reasonable request. This study was supported by a grant from the Swedish Research Council (VR 2018-02612) to NPH. The funding source had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication. All authors declared no potential conflicts of interest.

Results

Cancer after THA

5,227 incident cases of cancer occurred in the entire study population. Breast cancer was by far the most common cancer among females, as was prostate cancer among males, followed by colorectal cancer, cancers of the lung, and skin melanoma. The cumulative, unadjusted cancer incidence among exposed females was 625 incident cases per 100,000 person-years, as compared with 579 incident cases per 100,000 person-years among unexposed females, amounting to an HR of 1.1 (CI 0.95-1.2; Table 4, see Supplementary data) adjusted for age, sex, region, Charlson comorbidity index, education, and income. In exposed males, 584 incident cases of cancer per 100,000 person-years occurred, whereas the corresponding number for unexposed males was 517. After adjustment for the same confounders, this resulted in an HR of 1.1 (CI 0.99-1.2) for exposed males. When investigating specific cancer forms, exposure to THA in females was associated with a statistically significantly increased adjusted risk of developing 2 specific cancer forms, pancreatic (HR 2.4; CI 1.4-4.3) and thyroid (HR 2.4; CI 1.1-5.1) cancer (Figure 2, Table 4, see Supplementary data). The adjusted risk of developing leukemia was slightly elevated in females, whereas their adjusted risk of developing myeloma was attenuated, but none of these risk modifications was statistically significant. Exposed males had a statistically significantly increased adjusted risk of cancer of the stomach (HR 1.9; CI 1.0-3.6), skin melanoma (HR 1.6; CI 1.1-2.3), and prostate cancer (HR 1.2; CI 1.0-1.4; Figure 2, Table 5, see Supplementary data). The adjusted risk of developing leukemia or myeloma was slightly increased in exposed males, but none of these risk modifications was statistically significant.
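As a quick cross-check of the crude incidence figures above, the unadjusted rate ratio for females follows directly from the two reported rates:

```python
rate_exposed = 625.0    # incident cancers per 100,000 person-years, exposed females
rate_unexposed = 579.0  # per 100,000 person-years, unexposed females

print(f"crude rate ratio: {rate_exposed / rate_unexposed:.2f}")  # ~1.08, in line with HR 1.1
```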
Different indications for THA are associated with different comorbidity patterns that could have an impact on the risk of developing cancer, and we therefore investigated the risk of cancer stratified by the diagnoses underlying surgery. Females exposed to a THA due to avascular necrosis of the femoral head had an increased adjusted risk of developing any cancer (HR 1.8, CI 1.1-2.9). Males also had an increased adjusted risk with the same underlying hip disease, but this finding was not statistically significant (adjusted HR 1.3, CI 0.8-2.0). Males who received their THA due to a fracture of the femoral neck had an increased adjusted risk of cancer (HR 1.9, CI 1.1-3.2). In females who received THA due to fracture, the adjusted risk was not statistically significantly elevated (HR 1.2, CI 0.7-2.0). Patients who received their THA due to primary osteoarthritis, secondary osteoarthritis, sequelae after pediatric hip diseases, or inflammatory joint disorders had somewhat elevated adjusted risks of developing cancer, but none of these was statistically significant (Table 6, see Supplementary data). In a similar pattern, we analyzed whether the type of THA fixation was associated with an increased cancer risk. This was the case in males who were exposed to cemented THA, in whom a slightly elevated adjusted risk of cancer (HR 1.2, CI 1.0-1.4) was found (Table 7, see Supplementary data).

Sensitivity analyses

In order to exclude a subgroup with a potentially enhanced cancer risk, all individuals exposed to large metal-on-metal bearings were excluded in a sensitivity analysis. The remaining females exposed to THA with conventional bearings (n = 8,784) had an adjusted risk of developing any cancer of 1.1 (CI 0.95-1.2), and the adjusted HR in males exposed to THA with conventional bearings was 1.1 (CI 0.99-1.2). As the development of cancer after exposure to any carcinogenic agent may be expected to occur only after a certain period of exposure, we performed an additional sensitivity analysis by dividing the observation period after the index date into 4 time periods, estimating the risk of any cancer for each time period. This analysis revealed a slightly increased cancer risk only for males during the time period of 1 to 5 years after the exposure to THA (HR 1.2, CI 1.0-1.5; Table 8, see Supplementary data). We finally performed a sensitivity analysis for the entire study population of females and males grouped together, but with the use of a stratum variable. In these analyses, we found no overall increased adjusted risk of cancer among the exposed individuals.

Principal findings

Performing THA in patients younger than 55 years at the time of index surgery was not associated with an overall increased risk of cancer. We did, however, observe minor risk increases not quite reaching the threshold of statistical significance in males and in certain subgroups of exposed individuals. Exposed females had no overall increased risk of developing any cancer, but their risk of developing specific cancer types, i.e., pancreatic and thyroid cancer, was slightly and statistically significantly increased. Similarly, exposed males had no statistically significant increase in the adjusted risk of developing any cancer, but when we analyzed specific cancer types in exposed males, stomach cancer, melanoma, and prostate cancer showed statistically significant risk increases.
Large metal-on-metal bearings conferred no increased risk of overall cancer in this young population, but the investigated subgroup was small and the estimation uncertainty large.

Inherent strengths and limitations

The main strengths of this study are its nationwide matched cohort design with a large number of individuals and, thus, a large number of incident cancers. Our access to comorbidities and socioeconomic data offers additional strength because we were able to account for these potentially important confounders. Unfortunately, we lack information on body mass index (BMI), alcohol consumption, and smoking. We cannot rule out incompleteness and misclassifications in the underlying register data, but both the Swedish Hip Arthroplasty Registry and the Swedish Cancer Registry are well-validated registers (22-24). A longer follow-up than a median of 8 years would have been desirable, especially in this young cohort exposed to THA. However, individual patient data from the SHAR were available only from 1992, and younger patients were historically treated reluctantly with THA because of the uncertain long-term revision rates. This, together with the fact that the follow-up ended in 2012, limits our observation times. Even though ethical approval for the study was obtained in 2013, the administrative burden of coordinating the data merge of the different registers while ensuring data protection and the confidentiality of sensitive patient data was immense. This resulted in a delay of 6 years in the delivery of the final database. Bearing this in mind makes it even more important to report potential adverse effects, such as the risk of cancer, in a younger cohort in order to protect future patients from these risks. The sample size of the entire exposed population seemed appropriate for the estimation of cancer risk, which is reflected by the narrow confidence intervals of the results. The sample sizes after stratification by type of bearing or by the underlying diagnoses for surgery are much smaller, with a corresponding decrease in precision and an increased risk of type-II errors. Nonetheless, following epidemiological methodology, we believed it important to explore cancer risks in selected subgroups of patients by performing such stratified analyses, always being aware of the limitations conferred by reduced sample sizes. Methodologically, an observational study such as ours is open to many levels of confounding, and, also related to study design, we considered our study exploratory and performed no formal multiplicity adjustments. These issues must be remembered when interpreting risk estimates that may be inflated by residual confounding and muddled by type-I errors. In accordance with recommendations on studies with a matched cohort, we adjusted for the matching variables (25,26). However, as this approach has been debated, we performed a sensitivity analysis with adjustment only for the confounders socioeconomic background and comorbidity, without adjusting for the matching variables. This did not notably alter the risk estimates obtained. It has been suggested that "fitter" individuals are selected for THA, leading to selection bias, which might attenuate cancer incidence among exposed individuals compared with the background population (29,30). Contradicting this assumption, our analysis of comorbidities in exposed and unexposed individuals rather indicates the opposite, with more comorbidities present among young THA patients than among the general population.
This finding is consistent with previous descriptions of cardiovascular and endocrine disorders being more common in patients with Legg-Calvé-Perthes' disease or slipped capital femoral epiphysis, 2 of the more frequent pediatric hip disorders (31-33). Thus, our estimated cancer risk would, if anything, be inflated, which is consistent with a "worst-case" scenario. Detection bias could amplify the chances of the exposed cohort being diagnosed with cancer due to repeated contacts with healthcare related to the underlying hip disease. However, the time spans from index surgery to the detection of various cancers were mostly between 5 and 10 years (data not shown), a time at which regular follow-up after THA has long ceased. We thus believe detection bias to be of minor importance. Another caveat is the issue of causality. The increased risk for some specific cancer forms might not be due to the THA procedure itself but rather to the diagnoses underlying THA surgery or to shared risk factors, such as obesity, which confers an increased risk of both cancer (34) and osteoarthritis (35).

Strengths and limitations in relation to other studies

Our study included only individuals younger than 55, who have not been specifically addressed in other studies exploring the risk of cancer after THA. Our age selection was based on published research in the field, where 55 years seems broadly accepted as defining the upper age limit of "young" arthroplasty patients (36-39), but we are fully aware that this dichotomization is arbitrary, and other age limits would render different results. The hitherto largest observational study on cancer after THA includes 403,881 individuals, but the median age in the 3 subgroups of patients investigated in that study, divided by different types of THA, ranges from 55 to 70 years (40). The second largest cohort of 126,276 THA includes osteoarthritis patients with a mean age of 71 years (17), and the mean age in other studies on populations of similar sizes to that presented in our study is centered on 68-70 years (12,18,41-43). Although even longer observation times would be desirable when investigating cancer risks in young individuals, a strength of our study is its median observation time of around 8 years, which is longer than in most other studies on the topic of cancer after THA. Only 2 other studies report similar observation times (12,43), and only 1 has a longer observation time of 14 years (17). Our study design, with individuals exposed to a THA at the beginning of the study compared with unexposed individuals who may be exposed to a THA at a later time point, resembles a prospective cohort study. All other studies on cancer after THA, except our previous study (17), either have no comparison group or compare with individuals among whom no one was ever exposed to a THA, a scenario that is highly improbable in real life. Some previous studies compare cancer incidences in cohorts exposed to THA with the background population using a standardized incidence ratio (12,16,41), a measure that has its strengths but is also open to selection bias, as patients scheduled for THA deviate from the general population in terms of comorbidity, mortality, and socioeconomic factors (29,44). Several previous studies on the topic of cancer after THA had no access to the important confounders of comorbidity and socioeconomic background, both of which are associated with the overall risk of cancer and the development of specific cancer forms (45,46).
Due to the restrictive use of large metal-on-metal bearings in Sweden, the subgroup of individuals exposed to this type of bearing is small in our study, whereas other investigators have studied much larger cohorts with such devices (18,43,47). We concede that our cohort contains some individuals with small-diameter metal-on-metal bearings. Such devices, for instance the "Metasul" bearing, became popular around the turn of the century, but they generate much lower concentrations of cobalt, chromium, and nickel than large-diameter metal-on-metal bearings (6,48), and their use was restricted to very limited numbers of patients (49). We cannot reconstruct their exact number in our database, but based on a detailed analysis of the cup types used in our cohort we estimate that less than 3% of our exposed individuals received such small-diameter metal-on-metal bearings, and we therefore believe this parameter has only limited influence.

Accord and discord with other studies

We find an approximately equal risk of cancer in younger patients exposed to THA when compared with a sample from a comparable population, which is in agreement with previous studies on older populations (17,18,40). The increased melanoma risk in exposed males confirms data from a meta-analysis in which a 1.4-fold risk of developing melanoma was described for arthroplasty patients, though of higher mean age (14,16). The increased risk of prostate cancer in exposed males agrees with previous studies that describe risk increases of similar magnitudes (12,15). These findings, such as the increased risk of thyroid cancer in females, may reflect that the exposed individuals are on average more likely to seek healthcare, a diagnostic bias not fully adjusted for by introducing socioeconomic status and educational level into the analysis. To our knowledge, our findings of increased risks of developing cancer of the stomach in exposed males and pancreatic cancer in exposed females are novel, but, given the large number of risk estimates and the limited study population, these findings could represent type-I errors. Nonetheless, these observations may be linked to the exposure of younger individuals to potentially carcinogenic derivatives of THA, but further studies on young THA patients need to replicate these findings before conclusions can be drawn.

Conclusions

Some previous reports indicate an increased risk of cancer after THA, but this has been contradicted by others. None of the previous studies on the risk of cancer after THA specifically investigated younger cohorts, although younger patients are exposed to their implants for longer periods of time. In this first large-scale investigation of the risk of cancer after THA in young patients, we find a similar overall risk of cancer compared with individuals without THA exposure after adjustment for comorbidities and socioeconomic background. Within the limitations of our study, we believe that THA surgery can be considered a safe procedure regarding early cancer risk even in younger patients, a reassuring finding that can be communicated during preoperative counselling. Whether the risk of cancer is enhanced after longer exposure to THA remains to be investigated further. NPH and JK designed the study and obtained ethical approval and access to the data. NE constructed the database, and YDH and NPH conducted the data analysis with the help of Eva Freyhult. YDH and NPH drafted the initial version of the manuscript.
All authors contributed to the interpretation of the data and critically revised the manuscript. All authors had full access to the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. YDH is the guarantor. The corresponding author attests that all listed authors meet the authorship criteria.

Table 5. Risk of any cancer and specific cancers in exposed compared with unexposed males, adjusted for age, region of residence, comorbidities, and socioeconomic background. HR (CI) = hazard ratio (95% confidence interval).

Table 7. Risk of developing any cancer, separated by the type of total hip arthroplasty (THA), stratified by sex, adjusted for age, region of residence, comorbidities, and socioeconomic background. HR (CI) = hazard ratio (95% confidence interval).
A Framework for a Competency Based Medical Curriculum in Saudi Arabia

Background: We recently adopted a competency based curriculum based on the CanMEDs model. This shift required the cross-mapping of all key CanMEDs competencies with the competencies for higher education in Saudi Arabia as per the Saudi National Commission for Academic Accreditation & Assessment (NCAAA) guidelines. Objectives: To formulate competencies for our curriculum and to create a framework aligned with NCAAA, CanMEDs and Saudi Meds. Methods: After finalization of program outcomes, the program goals were cross-mapped with CanMEDs and Saudi Meds competencies, and then the CanMEDs competencies were reverse mapped with our outcomes. Finally, benchmarking of outcomes with the programs of the Universities of Manitoba and Toronto was done. Results: We were able to cross-map and match major outcomes of our program with both the CanMEDs and the Saudi Meds frameworks, ensuring that the outcomes are in line with NCAAA, CanMEDs and Saudi Meds. Also, our program objectives were benchmarked with two of the Canadian medical schools. Conclusion: We propose that our framework can be a model for other universities in Saudi Arabia to consider when shifting to a competency based curriculum.

INTRODUCTION

Our College of Medicine under King Faisal University, Hofuf, Saudi Arabia, has recently adopted a competency based, problem-based curriculum from the University of Groningen, Netherlands, which in turn has based its basic framework on the CanMEDs model. The shift from the traditional teaching model to the new competency based model required the cross-mapping of all key CanMEDs competencies with the competencies for higher education in Saudi Arabia as per the National Commission for Academic Accreditation & Assessment (NCAAA) guidelines (1). This is essential as per the regulations regarding higher education in the Kingdom of Saudi Arabia (KSA).

The last few years have seen an effort to create a uniform competency and outcome pattern for undergraduate medical education in Saudi Arabia. Two recent key events in this effort were the draft "Learning outcomes for bachelor degree programs in medicine" developed by the NCAAA in 2010 (1), and the national competency framework, Saudi Meds, developed as a joint effort by five medical schools in KSA (2). A meeting of deans of Saudi medical colleges held in November 2012 gave final endorsement to Saudi Meds as a prerequisite for all new medical colleges framing their curricula. It was decided in this meeting that Saudi Meds will henceforth be the primary tool for cross-mapping all medical curricula in Saudi Arabia.

Our college started in 2001 with a traditional discipline based Flexnerian curriculum. The college started the process of updating its curriculum after the graduation of the first batch in 2007. Apart from improving the educational methodology and reforming the educational environment of the traditional curriculum, the college searched for a newer curriculum that would cover all the recent challenges. The college studied all the relevant innovative curriculum modification trials in Saudi Arabia (e.g., Al Qassim University, Al Imam University and King Saud bin Abdul Aziz University). The college also studied the Arabian Gulf University (Bahrain) curriculum. After that, the college did a further needs assessment and explored various international curricula.
After deliberations and discussions, the college endorsed the medical curriculum of the University of Groningen, Netherlands as its adopted curriculum. A long term plan of cooperation was accepted, the adopted curriculum took the name "Groningen Medical Curriculum Adoption 2012" (GMCA 2012), and the first batch started in September 2012. Our initial challenge was to ensure that we create a competency framework well aligned with the NCAAA, CanMEDs and Saudi Meds frameworks. This manuscript attempts to correlate the competencies under NCAAA, CanMEDs, Saudi Meds and GMCA 2012. We hope that our work will act as a model for the many other Saudi medical schools that are in the process of shifting from a traditional curriculum to a competency based curriculum.

METHOD

Settings: This activity was conducted at Al Ahsa Medical College, King Faisal University, Saudi Arabia. At the very outset it was clear that the broad competencies of our curriculum would be aligned with the CanMEDs competencies, as the curriculum of the University of Groningen is modified based on the CanMEDs framework. After a detailed literature review and a review of competency frameworks of other educational bodies and universities, a rough draft was prepared as per the NCAAA format. This was further refined during the period March-September 2012. The entire process was carried out meticulously by senior faculty under the guidance of the dean of the college of medicine and with support from the higher university administration.

The process: During an international conference of medical education held at Riyadh in 2012 (SIMEC 2012), many themes of innovative curricula were discussed and many workshops were implemented. One of these workshops was on cross-mapping of ACGME competencies into medical curricula. Two college members attended this workshop to gain a better understanding of the cross-mapping procedure. The session on cross-mapping focused mainly on the ACGME competencies, with stress on the following points: understanding the six ACGME competencies in an international context, applying a process for integrating these competencies at the undergraduate level, selecting appropriate measurement methods for assessing two of the six competencies, and applying the concepts in the participants' own settings. The experience gained from this session was very useful in formulating our competencies and cross-mapping with other frameworks. Opinions were also taken from other internal as well as external experts. The main frameworks studied included the CanMEDs framework (3), ACGME (Accreditation Council for Graduate Medical Education), USA, 2001 (4), General Medical Council 2009 (5), Saudi Meds (2), the framework for undergraduate medical education in the Netherlands (2009) (6), AAMC (Association of American Medical Colleges) (1998) (7), and the competency frameworks used by the universities of British Columbia, Manitoba and Toronto (8,9,10).
Instruments

The following instruments were used: the CanMEDs competency framework was tabulated as a standard sheet for comparison; the NCAAA guidelines for writing program specifications were utilized; the RUG-G2010 'attainment of competencies' document was explored to use its methodology; and a master table was prepared by the authors, in which comparison and cross-mapping were done using the instruments listed above.

Program goals were cross-mapped with CanMEDs and University of British Columbia key competencies (exit competency matching with program outcomes, UBC 2011), program outcomes were cross-mapped with CanMEDs enabling competencies and Saudi Meds competencies, and then the CanMEDs competencies were reverse mapped with our program outcomes. This was followed by benchmarking of our program outcomes with two other Canadian universities, namely the universities of Manitoba (Christodolou 2012) and Toronto (Faculty of Medicine, University of Toronto 2011/2012). This was part of the requirement for national and international accreditation.

Table 1 shows the detailed mapping of Key Competencies (Program Goals) in our medical program GMCA 2012 under each major CanMEDs competency: Medical Expert (ME), Communicator (COM), Collaborator (COL), Manager (MGR), Health Advocate (ADV), Scholar (SCH) and Professional (PRF). Table 2 shows the cross-mapping of our curriculum goals and outcomes under seven broad roles (in the NCAAA format, in three domains) to the CanMEDs key and enabling competencies and the UBC exit competencies.

RESULTS

We were able to cross-map and match most major outcomes of our program with both the CanMEDs and the Saudi Meds frameworks, ensuring that our program outcomes are in line with the NCAAA, CanMEDs and Saudi Meds frameworks.

DISCUSSION

Standardization of medical education frameworks is essential in this age of globalization. Like many countries over the world, Saudi Arabia is in the process of developing a standard framework for medical education. Many countries across the globe have adopted the CanMEDs framework with or without modifications. The curriculum we have adopted from the University of Groningen has also based its competencies on the CanMEDs model with some modification (6,11). In Saudi Arabia, however, as per present regulations, the program outcomes and competencies have to be expressed in the NCAAA pattern. Hence we wanted to ensure that our curriculum is compatible with both the NCAAA and the CanMEDs framework. In the process we also attempted to compare and match our outcomes with another proposed framework for medical education developed in Saudi Arabia, namely the Saudi Meds framework. To the best knowledge of the authors, this is the first publication in Saudi Arabia and in the Arabian Gulf region showing a medical curriculum's competencies mapped against the CanMEDs, Saudi Meds and NCAAA competencies.

Most major educational bodies have formulated competency frameworks for medical education. These include the CanMEDs framework (3), with its key and enabling competencies: a total of 28 key and 126 enabling competencies. The ACGME (4) defined six main competencies for a medical graduate: patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice. All these broad areas are further divided into 28 outcomes. The General Medical Council (5) defines three major competencies for a medical graduate: doctor as a scholar and scientist, doctor as a practitioner, and doctor as a professional.
A further 16 competencies and 106 detailed objectives are derived from these three broad roles. Saudi Meds (2) provides a framework that reflects the principles of professional medical practice in Saudi Arabia. It has culminated from a long-standing national need to define the competencies of medical graduates in Saudi Arabia and in the Arabian Gulf area in general (12,13). It includes the general competencies expected of medical graduates and the essential learning outcomes for undergraduate medical education. Saudi Meds was not meant to be a unified national curriculum, just a framework on which individual universities can build, thereby guaranteeing equivalent standards without compromising autonomy. Saudi Meds covers seven broad domains centered on 'patient care and social accountability'. The seven broad domains are 'Approach to daily practice', 'Dr and patient', 'Dr and community', 'Communication skills', 'Professionalism', 'Dr and information technology' and 'Dr and research'. The finalization of these broad domains was the main agenda of the first phase of the project; further phases will cover defining specific outcomes and competencies in each domain and developing a structured program to ensure that graduates achieve the specified outcomes by the end of their internship.

It was observed that the CanMEDs competencies were originally intended for postgraduate medical specialties and that less consideration was given to basic medical foundation skills and knowledge in the CanMEDs framework. Internationally recognized medical schools in Canada therefore modified these frameworks for undergraduate programs by adding more foundational knowledge and skills to their curricula as an adaptive addition to the CanMEDs model. We attempted to do the same for our curriculum. We adapted much of the information from the RUG medical curriculum objectives while giving due consideration to the Saudi Meds framework and NCAAA guidelines.

We chose three universities for benchmarking our program outcomes, namely the universities of British Columbia, Toronto and Manitoba (8,9,10). The universities of Toronto and Manitoba had adapted the CanMEDs competencies to be more applicable at the level of a medical graduate rather than a medical postgraduate. They especially stressed including more foundation skills in all the roles. Similarly, the University of British Columbia has adapted the basic CanMEDs framework to develop a detailed list of 'exit outcomes', which are more in line with our requirements as per the NCAAA stipulations. While most frameworks have similar broad roles, the specific outcomes and competencies tend to differ. It is also obvious that to completely match competencies across frameworks is virtually impossible, especially considering that local cultural contexts need to be given due importance while formulating these outcomes and competencies.

The new curriculum we are adopting is from a Dutch university which has its competencies based on the CanMEDs framework, with modification as per the Dutch undergraduate medical education framework. Since the broad roles of the CanMEDs framework suited our needs, we decided to broadly formulate our competency framework on them: medical expert, communicator, collaborator, manager, health advocate, scholar and professional. However, for the enabling competencies we felt that modifications were required as per the NCAAA stipulations.
We considered this, the cultural context, and other requirements unique to Saudi Arabia when writing the program outcomes under each broad CanMEDs role, falling into three broad domains: cognitive (knowledge and understanding), affective (including attitude, communication and interpersonal skills) and psychomotor. Like the Saudi Meds framework, our outcomes also meet the main criteria for a competency framework as described by Harden (14). The competencies describe the competencies of a doctor in general and also take into account the local cultural context of Saudi Arabia. The framework also provides an integrated picture and is relatively simple and clear.

CONCLUSION

After the cross-mapping process we were able to establish that our program outcomes are in line with the NCAAA, CanMEDs and Saudi Meds frameworks. We propose that our framework can be a model for other universities in Saudi Arabia to consider when shifting from a traditional to a competency based curriculum. We added more objectives to cover foundational medical knowledge and skills in the domain of the medical expert competency; these were cross-mapped with Saudi Meds and NCAAA and benchmarked against the Toronto and Manitoba Canadian schools of medicine.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the article. The research was not funded, has not been published before, and has not been presented elsewhere.
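As a closing illustration of the cross-mapping exercise described in this paper, a fragment of such a master table could be encoded as a simple mapping data structure, as in the minimal Python sketch below. The outcome codes and the mapped entries are invented placeholders for illustration only; they are not the actual GMCA 2012 outcomes.

# Hypothetical fragment of a cross-mapping master table: each (invented) program
# outcome code is mapped to a CanMEDs role and a Saudi Meds domain.
CROSS_MAP = {
    "GMCA-1.1": {"canmeds_role": "Medical Expert (ME)", "saudi_meds": "Approach to daily practice"},
    "GMCA-2.3": {"canmeds_role": "Communicator (COM)",  "saudi_meds": "Communication skills"},
    "GMCA-5.2": {"canmeds_role": "Scholar (SCH)",       "saudi_meds": "Dr and research"},
}

def outcomes_for_role(role_code: str) -> list:
    """Reverse mapping: all program outcomes falling under one CanMEDs role."""
    return [k for k, v in CROSS_MAP.items() if role_code in v["canmeds_role"]]

print(outcomes_for_role("COM"))  # -> ['GMCA-2.3']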
Losing the IR: a Holographic Framework for Area Theorems

Gravitational area laws are expected to arise as a result of ignorance of "UV gravitational data". In AdS/CFT, the UV/IR correspondence suggests that this data is dual to infrared physics in the CFT. Motivated by these heuristic expectations, we define a precise framework for explaining bulk area laws (in any dimension) by discarding IR CFT data. In (1+1) boundary dimensions, our prescribed mechanism shows explicitly that the boundary dual to these area laws is strong subadditivity of von Neumann entropy. Moreover, such area laws may be of arbitrary (and mixed) signature; thus our framework gives the first entropic explanation of mixed signature area laws (as well as area laws for certain dynamical causal horizons). In general dimension, the framework is easily modified to include bulk quantum corrections, thus giving rise to an infinite family of bulk generalized second laws.

Introduction

The thermodynamic properties of macroscopic systems, described by effective IR theories, are typically emergent from some underlying statistical mechanical description of more fundamental UV degrees of freedom. For this reason, the thermodynamics of gravitational systems [1-11] can provide tantalizing insights into the UV completion of gravity. In fact, our own familiar classical spacetime itself may be emergent via the same coarse-graining mechanism of UV degrees of freedom that also results in gravitational thermodynamics (see [12] for a review). A thorough understanding of this process would be an invaluable asset in the quest towards an understanding of nonperturbative quantum gravity.

Of the various thermodynamic relations of gravitating systems, the correspondence between area and entropy is the most well-understood. The Bekenstein-Hawking entropy of a surface σ,

    S_BH[σ] = Area[σ] / (4 G_N ħ),    (1.1)

was initially studied in the context of the black hole event horizon [3]; however, it has since become clear that the relationship between S_BH[σ] and some coarse-graining associated to σ is applicable far more generally than for event horizons, in keeping with expectations about the holographic nature of gravity [13-23]. This leads to a general expectation that area monotonicity theorems in General Relativity [1,24-26] are manifestations of the Second Law of Gravitational Thermodynamics. Understanding this connection precisely in the context of a particular quantum theory of gravity thus requires an appropriate notion of what it means to "coarse-grain" over gravitational degrees of freedom, and what constitutes an appropriate measure of the data lost in such a coarse-graining. Our purpose in this work is to provide such a definition.

We work in the context of the AdS/CFT correspondence [27-29], where it is widely believed that a consistent quantum theory of gravity is defined by the boundary CFT. By relying on certain key properties of the holographic dictionary relating the boundary and the bulk, namely the UV/IR correspondence [30,31], quantum error correction [32], and subregion/subregion duality [33-38], we motivate a coarse-graining framework in the boundary theory which in general gives rise to a large class of gravitational bulk area laws (of arbitrary and sometimes mixed signature). Moreover, in (2+1) bulk dimensions, these area laws are an immediate consequence of the strong subadditivity (SSA) of von Neumann entropy in the boundary theory: their entropic significance is manifest.
To motivate our framework, let us begin with our key question: how should we think of coarse-graining in quantum gravity? In the context of AdS/CFT, there are two existing approaches: Wilson-like holographic RG [30,31,39-46] and the Jaynes maximization of entropy subject to constraints [47,48]. The former is very well understood, having been developed only shortly after the advent of AdS/CFT itself; moreover, it is very precise, as it can be defined purely in the (non-gravitational) boundary field theory. However, it is primarily tasked with understanding the structure of the RG flow of the boundary theory: given a holographic (deformed) CFT, holographic RG constructs a bulk geometrization of the RG flow of the boundary theory. The portion of this bulk geometry inside of some "radial cutoff" is the dual to an effective low-energy theory in the boundary. It is immediately clear that this is precisely the opposite of what we should want from a gravitational perspective: if gravitational area laws are to arise from coarse-graining away "quantum gravitational degrees of freedom", roughly speaking we must coarse-grain away the interior of the bulk, not the asymptotic region.

On the other hand, the Jaynes-like maximization of entropy subject to constraints initially appears much more promising. This approach defines a coarse-grained entropy of a state as

    S^(coarse) = max_{ρ ∈ H} S_vN[ρ],    (1.2)

where H is a subspace of the CFT Hilbert space consisting of density matrices ρ that all satisfy some constraints, and S_vN[ρ] = −Tr(ρ ln ρ) is the usual von Neumann entropy of ρ. Clearly, the coarse-grained entropy is expected to increase under a reduction in the number of constraints (and thus an increase in the size of H). This observation was recently used by [23] to give an entropic explanation of an area law: in AdS/CFT, the area law for spacelike holographic screens [19,26,49] is a thermodynamic second law of a coarse-grained entropy as defined in (1.2), with H the subspace of states on which the exterior of a surface is fixed to some specified geometry (but the interior is arbitrary up to constraints).

However, this approach raises a philosophical dilemma: if classical spacetime is not fundamental to a quantum theory of gravity, then why should we expect that general area laws should arise from coarse-graining over spacetime regions (such as the interiors of holographic screens)? Indeed, we should expect that more fundamentally, we must coarse-grain over information.

We are thus presented with a puzzle: on the one hand, the Wilsonian RG notion of coarse-graining can be phrased in terms of fundamental degrees of freedom, but coarse-grains over the wrong kind of bulk data (i.e. the near-boundary rather than the deep bulk). On the other hand, the Jaynes prescription for spacelike holographic screens coarse-grains over the bulk interior as desired, but relies on specifying spacetime regions, which cannot in general correspond to precise quantum gravitational degrees of freedom. The framework that we present here is constructed by drawing only the best features from each of these two approaches: like the Wilsonian RG approach, we will phrase the framework in terms of fundamental degrees of freedom, but like the Jaynes approach of [23], we will make sure to coarse-grain over the bulk interior, not the exterior. We now outline the key ingredients of the framework, which will be discussed in detail in Section 2.
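To make the maximization in (1.2) concrete, here is a minimal numerical sketch (our illustration, not taken from [47,48]): for a single qubit with the constraint that the expectation value of Z is fixed, a direct scan over the allowed Bloch vectors confirms that the entropy-maximizing state is the dephased diagonal state.

import numpy as np

def vn_entropy(rho):
    """von Neumann entropy S = -Tr(rho ln rho), from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

m = 0.4  # constraint: <Z> fixed to m
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

# Bloch-sphere states with z-component fixed to m; scan the x-component
# over its allowed range (positivity requires x^2 + m^2 <= 1).
best = max(
    (vn_entropy(0.5 * (I + x * X + m * Z)), x)
    for x in np.linspace(0.0, np.sqrt(1 - m**2) - 1e-6, 200)
)
print("max entropy %.4f attained at x = %.4f" % best)  # maximum sits at x = 0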
First, since the boundary theory defines the bulk quantum gravity theory, the fundamental quantum gravity degrees of freedom are just the boundary field theoretic degrees of freedom. Thus in order to phrase the coarse-graining prescription purely in terms of fundamental degrees of freedom, we formulate it entirely in the boundary theory.

Figure 1. Access to only the reduced states {ρ_{R_λ}} of some family of regions {R_λ} on the boundary removes IR data such as long-distance correlators and entanglement on the boundary.

Which boundary data do we want to coarse-grain over? Here the intuition comes from the UV/IR correspondence: the removal of UV gravitational degrees of freedom should correspond to the removal of infrared degrees of freedom of the boundary theory. Subregion/subregion duality gives us a clue as to how to accomplish this: since the reduced state ρ_R associated to some boundary region R is dual to the entanglement wedge W_E[R] in the bulk, restricting ourselves to access only the reduced density matrices ρ_λ ≡ ρ_{R_λ} of some family of regions F = {R_λ} (to be defined precisely below) ensures that we lose all information about the "deep bulk". From the boundary perspective, in coarse-graining from a full state ρ to the set of reduced density matrices {ρ_λ}, we lose the IR information about long-range correlators and entanglement, as desired; see Figure 1. In some sense this can be thought of as a highly non-standard Wilsonian RG: since the Callan-Symanzik equation relates RG scale to the separation of points in n-point functions, our procedure removes knowledge of n-point functions at low energy scales but keeps knowledge of arbitrarily high-energy ones.

How do area laws arise from this prescription? Here the understanding that AdS/CFT is a quantum error correcting code [32] provides insight. Coarse-graining from ρ to the {ρ_λ} introduces errors that cannot be corrected: some bulk regions become inaccessible. If each ρ_λ is reduced to some ρ̃_λ with R̃_λ ⊂ R_λ, we introduce further errors; that is, fewer messages can be decoded. These errors are irreversible, and thus the process of continuously coarse-graining to smaller and smaller R̃_λ is a continuously irreversible process. But such irreversible processes often result in some monotonicity property; indeed, we will see that the bulk manifestation of this monotonicity is an area theorem.

As we show in Section 3, in the special case of an (arbitrary) three-dimensional bulk this connection can be made completely precise. In the boundary theory, SSA implies that the so-called differential entropy of the family of regions {R_λ} is non-decreasing as the R_λ are shrunk. Regardless of the physical interpretation of differential entropy (which is still lacking), its monotonicity therefore simply is SSA. But by the hole-ographic prescription of [52-54], differential entropy calculates the area of certain bulk surfaces constructed from the entanglement wedges W_E[R_λ]; thus SSA (in the guise of monotonicity of differential entropy) manifests in the bulk as area increase theorems. Moreover, these area laws may be spacelike, null, or of mixed signature, yielding the first statistical entropic explanation for mixed-signature area laws and nonstationary causal horizons. It is particularly curious that our mixed-signature area laws satisfy the same geometric properties as those of [49]: they are constrained to flow outwards and towards the past (and the time reverse).
This suggests that both area laws are a result of the same underlying mechanism.

The case of general dimension is discussed in Section 4; though not as precise as in three bulk dimensions, we nevertheless find that for an appropriate choice of {R_λ} and {R̃_λ} with R̃_λ ⊂ R_λ, there exist bulk surfaces σ and σ̃, constructed from the entanglement wedges W_E[R_λ] and W_E[R̃_λ] respectively, whose areas obey

    Area[σ̃] ≥ Area[σ].    (1.3)

Moreover, this area law is robust under perturbative quantum corrections: as we show in Section 5, in such a regime this area law becomes a Generalized Second Law (GSL),

    S_gen[σ̃] ≥ S_gen[σ],    (1.4)

where for the uninitiated reader, the generalized entropy S_gen[σ] will be introduced in greater detail in Section 5. This natural extension under quantum corrections is strong evidence that the area laws obtained via our coarse-graining procedure are not accidental artifacts of the classical limit, but do indeed arise from some fundamental quantum gravitational features. The present paper only scratches the surface of what can be done with our new coarse-graining framework; in Section 6 we conclude with a discussion of a number of future directions to pursue.

Note: The recent paper [55] may prima facie appear to have some similarity with our results in Section 4. This similarity is superficial, and in fact the motivation presented here is completely opposite to that of [55]. Here we are interested in coarse-graining away UV gravitational degrees of freedom, which we heuristically interpret as the "interior" of the bulk; in [55], on the other hand, the focus is on the more conventional coarse-graining away of UV boundary field theory degrees of freedom, which correspond to the asymptotic region of the bulk (although a precise coarse-graining procedure is not prescribed in [55]). Because of these complementary interpretations, the area laws obtained in [55] happen to coincide with ours in special cases, but only ours admit a precise entropic interpretation (in three bulk dimensions).

Preliminaries

By a QFT, we will always mean a relativistic unitary quantum field theory. We denote the bulk spacetime manifold by M and its boundary (on which the boundary theory lives) by ∂M. We assume that the bulk has a good causal structure (e.g. AdS hyperbolicity [56]) for interpretational purposes, although our results are valid without this assumption. We use R to denote globally hyperbolic regions of ∂M, which we sometimes call causal diamonds for simplicity. The maximal development of a region Σ is denoted by D[Σ]; we leave it clear from context whether this development is done in M or ∂M. Overlines (e.g. R̄) will denote the complement of regions, while overlines with left subscripts will denote spatial complements: that is, ₛR̄ denotes the set of all points spacelike-separated from all points in R. As with D[Σ], we will make clear from context whether these complements are taken in M or ∂M. We only assume the Null Convergence Condition to guarantee extremal wedge nesting [57,58]. Other conventions are as in [59].

The von Neumann entropy of a globally hyperbolic region R ⊂ ∂M in a state ρ is

    S[R] = −Tr(ρ_R ln ρ_R),    (1.5)

where ρ_R = Tr_{ₛR̄} ρ is the reduced density matrix on R. By the HRT proposal [20,21], this entropy can be computed in a holographic state at leading order in 1/N (equivalently, in G_N) as

    S[R] = Area[X_R] / (4 G_N ħ),    (1.6)

where X_R is the (bulk) minimal area extremal surface homologous to a Cauchy slice Σ_R of R. The homology constraint by definition requires the existence of a (highly nonunique) achronal hypersurface H_R whose boundary is ∂H_R = X_R ∪ Σ_R.
The entanglement wedge is then defined as the AdS domain of dependence of this hypersurface:

    W_E[R] ≡ D[H_R].    (1.7)

The Coarse-Graining Prescription

Our goal is now to define the coarse-graining prescription motivated in Section 1. This procedure should be defined in the boundary theory, and it should behave in such a way that as we coarse-grain over more CFT data, we recover progressively less of the deep bulk interior. The holographic intuition comes from subregion/subregion duality and quantum error correction, but for purposes of generality, we define the procedure without reference to a holographic dual.

Figure 2. Examples of families of regions {R_λ} that we exclude from our coarse-graining: none of the R_λ should be a subset of any of the others and should not lie entirely in the future or past of any of the others.

Consider, then, a QFT on a d-dimensional, maximally extended spacetime manifold ∂M. As discussed in Section 1, we will coarse-grain away information by introducing a continuous family of globally hyperbolic regions {R_λ} (parametrized by some set of parameters λ) and then restricting a state ρ on ∂M to the set of reduced states {ρ_λ}. Of course, in principle we may perform such a procedure for any set of regions {R_λ}, but in order to sensibly think of our procedure as coarse-graining away "independent" data, we would like to exclude situations like those shown in Figure 2, where some of the R_λ lie in the interior or in the future of others. It is easy to require that none of the R_λ be contained in the future of any other; this can be accomplished by requiring that the union of the R_λ be bounded by two Cauchy slices Σ± and that the boundary of each R_λ intersects both, as shown in Figure 3. More specifically, we would like to impose some notion of the R_λ being "spacelike-separated" from one another. However, since the R_λ are a continuous family, they cannot be disjoint. What, then, can we mean?

To develop some intuition, consider the case where the R_λ can be written as R_λ = D[I_λ], where I_λ ⊂ Σ are regions on some acausal slice Σ; see Figure 4(a). If none of the I_λ are contained in any others, the resulting family {R_λ} is one intuitively appropriate to our coarse-graining procedure. In fact, we will allow even more general families, motivated as follows (readers who are willing to accept the definition without motivation may wish to skip ahead).

Figure 4 (partial caption): such regions may be said to be spacelike-separated because any deviation vector η^a from one I_λ to an infinitesimally displaced one is everywhere spacelike. (c): even when the I_λ no longer all lie on Σ, we may still think of them as "spacelike-separated", and thus appropriate for our coarse-graining, if all η^a are on average spacelike.

First, note that since each R_λ is globally hyperbolic, it must admit at least one Cauchy slice Σ_λ; the boundary ∂Σ_λ must in fact be independent of the choice of Cauchy slice, so we will use the notation ∂Σ_λ without ever explicitly invoking a choice of Σ_λ. Now, note that for each ∂Σ_λ, an arbitrary variation of λ defines a deviation vector field η^a_λ normal to ∂Σ_λ (roughly speaking, η^a_λ encodes the "infinitesimal deviation" from ∂Σ_λ to ∂Σ_{λ+dλ}). In general the family {R_λ} will be (d−1)-dimensional, and so we may introduce (d−1) linearly independent deviation vector fields on each ∂Σ_λ; here we will simply treat one at a time.
For the family {R_λ} constructed from the regions I_λ as above, we have ∂Σ_λ = ∂I_λ, and thus the acausality of Σ implies the acausality of η^a_λ. In other words, η²_λ ≥ 0, with equality only if η^a_λ vanishes. If we further require that no two regions in the family {R_λ} coincide, then η^a_λ cannot be everywhere-vanishing. Thus we conclude that η²_λ must be strictly positive when integrated on ∂Σ_λ (with respect to the natural volume form): ∫_{∂Σ_λ} η²_λ > 0 for any η^a_λ. This implies that for sufficiently small deformations of the I_λ which move them away from a common hypersurface Σ, the relation ∫_{∂Σ_λ} η²_λ > 0 still holds for any η^a_λ, as shown in Figure 4(b). In other words, we may still say that the deviation vectors η^a_λ are "on average" spacelike. We will take this property, which can be invoked on any family {R_λ}, as the defining feature of what we mean by a continuous family of "spacelike-separated" regions. We thus define:

Definition 1. Let F be a (d−1)-parameter continuous family of connected causal diamonds {R_λ} in ∂M, parametrized schematically by a set of parameters λ. Define ∂Σ_λ for each R_λ as above. We will call F a coarse-graining family if the following are true:
• Each ∂Σ_λ is everywhere spacelike;
• ∂(∪_λ R_λ) consists of two Cauchy slices Σ± of ∂M with ∂R_λ ∩ Σ± ≠ ∅ for all λ;
• For any deviation vector field η^a_λ normal to ∂Σ_λ, ∫_{∂Σ_λ} η²_λ > 0 (with the integral taken with respect to the natural volume element on ∂Σ_λ).

Next, recall the definition of the reduced density matrix:

    ρ_λ ≡ Tr_{ₛR̄_λ} ρ,

where ρ is the state on ∂M. We would now like to restrict access to data (observables) which are computable from the state restricted to the causal diamonds in the coarse-graining family F; that is, we would like to discard data that cannot be recovered from any of the ρ_λ. To do this, we declare two states ρ and ρ̃ to be equivalent under IR coarse-grainings associated to the family F ("F-equivalent" for short) if ρ_λ = ρ̃_λ for all λ. The coarse-grained state ρ_F of ρ is defined as the equivalence class of ρ under this equivalence, or correspondingly as the set {ρ_λ} of reduced density matrices on F.

Footnote 5: To be completely explicit, let {λ_i} be the d−1 parameters parametrizing the family {R_λ}. Taking ∂Σ_λ to be spacelike, we may define the d−1 deviation vector fields η^a_{λ_i} ≡ P^a_b (∂_{λ_i})^b, where P^a_b is the projector normal to ∂Σ_λ. We schematically use η^a_λ to refer to any one such deviation vector field.

Footnote 6: Strictly speaking this continuity requirement implies that a coarse-graining family can only exist if ∂M is connected. The generalization to disconnected ∂M can be performed by introducing a coarse-graining family on each connected component of ∂M.

Before discussing the ramifications of this definition, it is worth asking: is this in fact a coarse-graining? That is, can two distinct states yield the same coarse-grained state ρ_F? It is easy to see that in discrete physical systems this is so: in the case of e.g. a spin chain, we can take λ to index spin sites and each R_λ to be a collection of adjacent spins; then it is easy to find examples of states which are different on the whole spin chain but whose density matrices agree when reduced to any of the R_λ (see the sketch below). Similarly, at any finite order in 1/N in AdS/CFT, this procedure is also clearly a coarse-graining: for instance, at leading order in 1/N, we may consider two states whose dual bulk geometries on some Cauchy slice Σ agree near the boundary but differ deeper in the bulk.
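The spin-chain statement can be checked directly; the following minimal sketch (our illustration) verifies that the two orthogonal states (|0...0⟩ ± |1...1⟩)/√2 on six spins have identical reduced density matrices on every window of three adjacent sites, and hence define the same coarse-grained state ρ_F for such a family.

import numpy as np

n = 6  # number of spins
dim = 2**n
up = np.zeros(dim); up[0] = 1.0            # |000000>
down = np.zeros(dim); down[-1] = 1.0       # |111111>
ghz_plus  = (up + down) / np.sqrt(2)
ghz_minus = (up - down) / np.sqrt(2)

def reduced(state, keep):
    """Reduced density matrix of a pure state on the sites listed in `keep`."""
    psi = state.reshape([2] * n)
    traced = [i for i in range(n) if i not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

# identical reductions on every window of 3 adjacent sites
for start in range(n - 2):
    keep = list(range(start, start + 3))
    assert np.allclose(reduced(ghz_plus, keep), reduced(ghz_minus, keep))
print("all 3-site reduced density matrices agree, yet <ghz+|ghz-> =",
      np.vdot(ghz_plus, ghz_minus))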
In the bulk-geometry example just described, if the R_λ are chosen sufficiently small, they can only sample the geometry in the asymptotic region, and thus the reduced states must agree. Perturbative corrections in 1/N (adding quantum bulk fields, gravitational dressing, etc.) proceed similarly. We do not know if non-perturbative states of continuum QFTs also admit nontrivial equivalence classes. But since we are working perturbatively in 1/N anyway, it is sufficient for our purposes that our equivalence relation be nontrivial in AdS/CFT only to finite order in 1/N (that is, we may replace the condition ρ_λ = ρ̃_λ with equality only to finite order in 1/N). Thus for our present purposes, the map from ρ to ρ_F really does provide a coarse-graining operation.

This map has a clear interpretation as discarding IR data; for instance, a long-distance two-point correlation function ⟨O(x_1)O(x_2)⟩ of a local operator is inaccessible to ρ_F whenever the R_λ are sufficiently small that no single one of them contains both x_1 and x_2. By comparison, for arbitrarily close x_1 and x_2, any coarse-graining family F covering them will retain knowledge of ⟨O(x_1)O(x_2)⟩. Put differently, for any smeared-out observable O, there will be some coarse-grained ρ_F with no knowledge of O; however, one-point functions of local operators can always be computed no matter what the family F is. In what some readers may see as an abuse of nomenclature, we shall thus refer to the map from ρ to ρ_F as an IR coarse-graining.

Recall that our goal is not just to define a coarse-graining, but also to compare coarser and finer data sets. To do this, we need to be able to compare coarse-graining families:

Definition 2. Let F = {R_λ} and F̃ = {R̃_λ} be coarse-graining families with the same parameter space. We say that F̃ is IR coarser than F if R̃_λ ⊆ R_λ for all λ.

Footnote 7: Note the importance of the family F only sampling some time strip of the boundary. In the present example, if two states agree on a bulk Cauchy slice outside of some fiducial radial cutoff r* but differ inside of r*, time evolution would eventually cause their geometries to differ all the way to the boundary once causal signals from within r* propagate out. The coarse-grained states are then guaranteed to agree only if the size of the time strip containing the family F is smaller than this propagation time.

Figure 5. A coarse-graining family F (light gray, solid lines) and an IR coarser family F̃ (dark gray, dashed lines). Note that each causal diamond R̃_λ lies inside (or perhaps may coincide with) R_λ, so the reduced states ρ̃_λ can access less data than the states ρ_λ.

An example of such families is shown in Figure 5. In fact, ultimately we are really interested in a continuous notion of coarse-graining:

Definition 3. Let F(r) be a one-parameter family of coarse-graining families with F(0) = F, such that F(r′) is IR coarser than F(r) whenever r′ > r. Then F(r) is IR coarser than F for any r > 0; we call F(r) a continuous IR coarse-graining.

With the coarse-graining prescription now defined, let us analyze its bulk interpretation when the QFT is holographic. Consider states of the CFT that describe a (semi)classical bulk dual (M, g_ab). Subregion/subregion duality asserts that there is an isomorphism between the algebra of operators in the entanglement wedge W_E[R] and the algebra A[R] in the boundary causal diamond R. Here we will invoke subregion/subregion duality in a strong form, where we assume that all fields, including the metric, in W_E[R] are fixed by A[R] (perturbatively in N, and more than a Planck length away from the boundary of W_E[R]).
This statement is known to be true at the level of quantum fields on a fixed background spacetime, as proven in [32] and expanded upon in [37,38]: in this regime, A[R] and the reduced density matrix ρ_R can compute any observable in W_E[R] and cannot compute any observable in W_E[ₛR̄]. For readers skeptical of the strong version of subregion/subregion duality that we assume here, we note that our results can be restricted to work within this weaker version known to be true.

Figure 6 (partial caption): the regions ∪_λ W_E[R_λ] and ∩_λ W_E[ₛR̄_λ], which are respectively completely specified and unconstrained by a coarse-grained state ρ_F. We also show one of the causal diamonds in the coarse-graining family F along with its corresponding extremal surface X_R.

As a result of subregion/subregion duality, the bulk duals (if they exist) of two F-equivalent states must agree in the region ∪_λ W_E[R_λ] defined by the union of the entanglement wedges of the coarse-graining family F, while F-equivalence implies nothing (perhaps up to constraints) about the region ∩_λ W_E[ₛR̄_λ]; see Figure 6 for sketches of these regions. Note in particular that the latter region is a "deep bulk" region: thus as desired, the coarse-graining from ρ to ρ_F removes data in the deep bulk. Indeed, we may interpret this feature by borrowing intuition from the quantum secret sharing properties of AdS/CFT [32]: if we consider some bulk field φ(x) a message to be decoded and {R_λ} the qubits available, then having access to any of the individual R_λ may not be sufficient to decode φ(x), while having access to sufficiently large unions R_{λ_1} ∪ ··· ∪ R_{λ_n} will indeed be sufficient. This is precisely the intuition on which we rely: as we move along a continuous IR coarse-graining F(r) to larger r, we recover progressively less of the bulk. We may think of a continuous IR coarse-graining as a quantum error correcting code which is becoming progressively weaker.

Let us now illustrate the bulk implementation and interpretation of the coarse-graining prescription described in Section 2. In this section we will focus on the case of a two-dimensional boundary theory with a three-dimensional bulk dual, where the connection to monotonicity properties and area laws can be made most precise and explicit. In fact, the precise results that we present here also hold in higher-dimensional setups with sufficient symmetry to be essentially three-dimensional; the generalization to generic higher-dimensional spacetimes will be presented in Section 4.

Monotonicity from Strong Subadditivity

The coarse-graining prescription presented in Section 2 was designed to discard IR CFT data: indeed, it rendered long-range correlators inaccessible and removed long-range entanglement. Typically, under coarse-graining operations it is often useful to identify a number that roughly measures "how much" information is being made inaccessible. For example, in going from a full state ρ to a particular reduced state ρ_R associated to a region R, the entanglement entropy provides such a measure of information "loss". Indeed, entropic inequalities often play a role in quantifying coarse-graining: for instance, SSA of von Neumann entropy, which states that the von Neumann entropies of any regions A, B, and C must obey

    S[AB] + S[BC] ≥ S[B] + S[ABC],    (3.1)

is interpreted as a statement of the irreversibility of removing subsystems. This is easily seen by rewriting (3.1) as

    S[ABC] − S[BC] ≤ S[AB] − S[B]:

the entropy cost of adjoining A is only lowered by first discarding C.
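As a quick numerical sanity check of (3.1) (our illustration), one can sample random three-qubit mixed states, obtained by tracing out one qubit of a random four-qubit pure state, and verify the inequality directly:

import numpy as np

rng = np.random.default_rng(1)

def vn_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

def reduced(rho, keep, n):
    """Partial trace of an n-qubit density matrix onto the qubits in `keep`."""
    rho = rho.reshape([2] * (2 * n))
    traced = [i for i in range(n) if i not in keep]
    for count, i in enumerate(sorted(traced)):
        ax = i - count                       # axes shift as qubits are traced out
        rho = np.trace(rho, axis1=ax, axis2=ax + (n - count))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

for _ in range(100):
    # random mixed state on qubits A,B,C from a random pure state on A,B,C,D
    psi = rng.normal(size=16) + 1j * rng.normal(size=16)
    psi /= np.linalg.norm(psi)
    rho_abc = reduced(np.outer(psi, psi.conj()), [0, 1, 2], 4)
    S = lambda keep: vn_entropy(reduced(rho_abc, keep, 3))
    ssa = S([0, 1]) + S([1, 2]) - S([1]) - S([0, 1, 2])
    assert ssa > -1e-9  # strong subadditivity, up to roundoff
print("SSA held in all samples")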
This is easily seen by rewriting In our case, we wish to find an object constructed from a state ρ and a coarsegraining family F = {R λ } which can be interpreted as the amount of information lost in coarse-graining from ρ to ρ F . Fortunately, for (1 + 1)-dimensional field theories, a candidate already exists: the differential entropy [52], which we define first for a discretized family of regions as Definition 4. In any (1 + 1)-dimensional QFT on a spacetime with compact spatial slices, let {R i } with i = 1, . . . , n be a discrete family of causal diamonds such that R i ∩ R i+1 = ∅. Then in any state ρ, the discrete differential entropy of {R i } is Figure 7. A sketch of the discrete family of regions {R i } that defines the discretized differential entropy. In the case that {R i } is a discretized coarse-graining family, the R i are defined from the curves γ L , γ R , which must always be spacelike and such that γ L and γ R point into and out of R λ , respectively. where it is understood that R n+1 = R 1 . An illustration of the regions used to construct the discrete differential entropy is shown in Figure 7. Also note that although differential entropy is computed from entanglement entropy, which is UV-divergent, these singularities cancel out in the differences in (3.2), so the differential entropy is in fact UV-finite. In [52] it was suggested via holographic arguments that S (n) diff is a measure of the ignorance of a family of observers confined to make measurements only in the causal diamonds R i . More precisely, [60] showed that, at least in certain contexts, differential entropy can be interpreted in an information theoretic sense as the optimal cost of sending a state between two observers under a constrained merging protocol, with the constraint that the observers involved may only act on one of the R α at a time. Independently of the general applicability of these interpretations, however, we claim that when the R i are obtained from a coarse-graining family, SSA ensures that the differential entropy obeys a monotonicity property under progressive IR coarsening. To establish this result, let us first set up some convenient notation. In (1 + 1) dimensions, a coarse-graining family F = {R λ } is parametrized by a single parameter λ. We will say that Next, note that each region R λ is a causal diamond which can be defined just by the positions of its left and right endpoints γ L (λ) and γ R (λ), or equivalently by its past and future endpoints γ ± (λ); as λ is varied, these trace out the curves γ L , γ R , γ + , and γ − . These curves obey some useful properties: Proposition 1. Let F = {R λ } be a coarse-graining family in a (1 + 1)-dimensional spacetime, and let the curves γ L , γ R be defined as above. These curves are nowhere timelike, with one of the tangent vectors γ L , γ R (with = ∂/∂λ) always pointing towards R λ and the other always away from R λ . Proof. In addition to γ L and γ R , also define the curves γ ± . In (1 + 1) dimensions, the boundary ∂Σ λ in the definition of coarse-graining families is just the two points γ L (λ) and γ R (λ) and the deviation vector field η a consists of the union of γ L (λ) and γ R (λ). Thus the requirement that ∂Σ λ η 2 > 0 implies that at least one of γ L , γ R must be spacelike. But if one is spacelike and the other is timelike, then at least one of γ ± must be timelike, violating the requirement that γ ± be Cauchy slices. Thus neither can be timelike. 
Moreover, if they ever both point into or out of R_λ, it also must be the case that at least one of the γ_± is timelike. Thus one must always be pointing towards R_λ and the other always pointing away.

This result allows us to unambiguously differentiate "left" from "right": we will choose left and right so that γ′_L points into R_λ and γ′_R points out. Moreover, it also ensures that any discretized version F^(n) of F looks as shown in Figure 7. It is this geometric restriction that allows us to obtain the desired monotonicity property:

Theorem 1. Let F^(n) = {R_i} and F̃^(n) = {R̃_i}, i = 1, ..., n, be discretized coarse-graining families such that (i) F̃ is IR coarser than F and (ii) the intersections R_i ∩ R_{i+1} and R̃_i ∩ R̃_{i+1} are non-empty. Then in any state of any QFT,

    S^(n)_diff[F̃^(n)] ≥ S^(n)_diff[F^(n)].    (3.3)

Proof. We may think of obtaining the R̃_i by "pulling in" the endpoints of each of the R_i. To show that S_diff is non-decreasing under this "pulling in" process, it is sufficient to show that S_diff is non-decreasing when only one endpoint is pulled in a small amount; symmetry guarantees that S_diff is non-decreasing if the other endpoint is pulled in as well. Consider therefore the family F̂^(n) = {R̂_i} obtained from the R_i by keeping the left endpoint unchanged but pulling the right endpoint in, as shown in Figure 8. Now for each i, we apply strong subadditivity to regions A_i, B_i, and C_i defined as Cauchy slices chosen so that A_iB_i is a Cauchy slice of R̂_i, B_i is a Cauchy slice of R̂_i ∩ R̂_{i+1}, and A_iB_iC_i is a Cauchy slice of R_i (so that B_iC_i is a Cauchy slice of R_i ∩ R_{i+1}); see footnote 10. In terms of these regions, we have that

    S_diff[F̂^(n)] − S_diff[F^(n)] = Σ_i [ S(A_iB_i) + S(B_iC_i) − S(B_i) − S(A_iB_iC_i) ] ≥ 0,    (3.4)

with the inequality following from strong subadditivity (3.1) applied to each term in the sum. But clearly by repeating this process, we can continue to shrink the intermediate regions R̂_i to get all the way to F̃^(n), with S_diff non-decreasing at each step.

Footnote 10: Note our abuse of notation: if A and B are causal diamonds such that there exists a Cauchy surface Σ with the property that ...

This result is quite remarkable, yet not unexpected: under IR-coarsening, we trace out over more and more subregions to obtain the coarse-grained state ρ_F. As mentioned above, the irreversibility of removing more subregions is captured by SSA, and thus it is quite reasonable to expect that there should exist an object constructed from the entanglement entropies of the ρ_λ which behaves monotonically under IR-coarsening. Indeed, SSA played a key role in the various entropic proofs of the c-, F-, and a-theorems [61-65], which are statements of the irreversibility of coarse-graining under Wilsonian RG. In fact, in the special case of a Poincaré-invariant vacuum state, the discretized differential entropy (3.2) is just a sum over Casini-Huerta c-functions, so the monotonicity of S_diff really does arise in the exact same way as the entropic c-theorem (though here we consider arbitrary states). We therefore interpret the monotonicity of differential entropy as confirmation that our coarse-graining procedure does what it was designed to do.

For the holographic analysis in the following section, it will be useful to note that the differential entropy is well-defined in the continuum limit n → ∞. In this case, it takes a very simple form if we interpret the entropy S(R_λ) of a region R_λ as a function of its left and right endpoints: S(R_λ) = S(γ_L(λ), γ_R(λ)).
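Before passing to the continuum, Theorem 1 admits a simple numerical check (our illustration): on a ring of eight qubits in a random pure state, the discrete differential entropy (3.2) of length-2 windows is never smaller than that of length-3 windows, exactly as the theorem (via SSA) guarantees.

import numpy as np

rng = np.random.default_rng(2)
n = 8  # qubits on a ring

def ent_entropy(psi, keep):
    """Entanglement entropy of the sites in `keep` for a pure state psi."""
    psi = psi.reshape([2] * n)
    traced = [i for i in range(n) if i not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
    d = 2 ** len(keep)
    evals = np.linalg.eigvalsh(rho.reshape(d, d))
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

def s_diff(psi, length):
    """Discrete differential entropy (3.2) for windows of `length` adjacent sites."""
    total = 0.0
    for i in range(n):
        R_i  = [(i + k) % n for k in range(length)]
        over = [(i + 1 + k) % n for k in range(length - 1)]  # R_i intersect R_{i+1}
        total += ent_entropy(psi, R_i) - ent_entropy(psi, over)
    return total

psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

coarse, fine = s_diff(psi, 2), s_diff(psi, 3)  # shorter windows = IR coarser family
print("S_diff(len 3) = %.4f <= S_diff(len 2) = %.4f" % (fine, coarse))
assert coarse >= fine - 1e-9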
With this interpretation, we define the (continuum) differential entropy of the family {R_λ} as [52,54]:

Definition 5. In any (1+1)-dimensional QFT on a spacetime with compact spatial slices, let F = {R_λ} be a coarse-graining family with left and right endpoints γ_L(λ), γ_R(λ). Then the differential entropy of {R_λ}, obtained in the n → ∞ limit of the discretized differential entropy (3.2), is

    S_diff[{R_λ}] = ∮ dλ [∂S(γ_L(λ), γ_R)/∂γ_R]|_{γ_R = γ_R(λ)} γ′_R(λ)
                  = −∮ dλ [∂S(γ_L, γ_R(λ))/∂γ_L]|_{γ_L = γ_L(λ)} γ′_L(λ);    (3.5)

the two expressions are equal, as can be seen via integration by parts.

This continuum expression conveniently makes clear that differential entropy is not a positive quantity: for instance, since for pure states S[R] = S[R̄], the differential entropy of the family of complementary regions {R̄_λ} is precisely minus that of {R_λ}. However, it is also clear that if the regions R_λ are sufficiently small relative to any other scales, S(γ_L(λ), γ_R(λ)) will behave similarly to how it does in the vacuum, and thus it will increase as γ_L(λ) and γ_R(λ) are moved apart; this implies that for sufficiently small R_λ, S_diff is positive. We may interpret this heuristically as follows: starting with small R_λ, our monotonicity result implies that S_diff decreases as the size of the R_λ increases. Eventually, S_diff may vanish and become negative; this is an indication that the regions R_λ have become large enough that no IR data is lost in the coarse-graining to ρ_F (note that since we are assuming compact spatial slices, the volume of these slices imposes a natural IR cutoff). Further increasing the size of the R_λ morally does not recover any new information. This heuristic interpretation of negative differential entropy is pleasantly consistent with that of [60], in which a negative differential entropy corresponds to a distillation (rather than consumption) of entanglement in the constrained merging protocol.

An Abundance of Area Laws

Although Theorem 1 makes no reference to holography, a remarkable consequence of it is that in a holographic setting, its bulk dual is an area law! In fact, Theorem 1 immediately gives rise to an infinite family of area laws in the bulk, consistent with the expectation laid out in Section 1. This observation follows from the hole-ographic interpretation of differential entropy: S_diff computes the area (or more generally, a local geometric functional) of a particular curve (or in general, curves) in the three-dimensional bulk dual. The full details of this connection can be found in [54]; here we offer a brief review of the salient features for the convenience of the reader.

Assume the existence of an asymptotically AdS dual to the two-dimensional field theory state ρ, and consider a one-parameter family {Γ_λ} of geodesics in the AdS spacetime such that each Γ_λ is anchored at the AdS boundary at the points γ_L(λ), γ_R(λ).

Figure 9. An illustration of the construction of the bulk curves σ±_B from the boundary regions R_λ and the corresponding family of bulk curves Γ_λ. Here we show a single member Γ of the family {Γ_λ}; the two bulk curves labeled σ±_B have the property that where they intersect Γ, the tangent vectors to Γ and to σ±_B span a null plane.

The family {Γ_λ} defines a deviation vector h^a_λ on each Γ_λ; we take h^a_λ to be normal to Γ_λ (concretely, h^a_λ = P^a_b (∂_λ)^b, where P^a_b is the orthogonal projector to Γ_λ). As discussed in detail in [54], for the families of boundary regions we consider (where γ′_L(λ) points into R_λ and γ′_R(λ) points out of R_λ), we are guaranteed that h²_λ will vanish somewhere on Γ_λ, corresponding to h^a_λ becoming null or, non-generically, vanishing entirely. As illustrated in Figure 9, now consider a bulk curve σ_B defined by taking the union of such points on the Γ_λ; that is, letting s be a parameter along each Γ_λ, define s*(λ) such that h²_λ(s*(λ)) = 0, and then define σ_B(λ) = Γ_λ(s*(λ)).
Geometrically, this construction ensures that σ_B intersects each Γ_λ, and where it does, the tangent vectors to σ_B and to Γ_λ span a null plane. (In general there may be more than one choice of s*(λ), and thus more than one such σ_B(λ) may be defined; what follows is true for any particular choice as long as σ_B(λ) is connected.) The main result of [54] is then that in the regime where the lengths of the Γ_λ compute the boundary entanglement entropies S(γ_L(λ), γ_R(λ)) via the HRT formula, as long as σ_B is differentiable and everywhere spacelike,

    length[σ_B] / (4 G_N ħ) = |S_diff[{R_λ}]|.    (3.6)

(The absolute value is required because, as noted above, S_diff need not be positive.) In other words, given a family {R_λ}, we can define a bulk curve σ_B; if σ_B is everywhere smooth and spacelike, its length is computed from the differential entropy of {R_λ}. More generally, σ_B may become null somewhere or have cusps; in such cases, S_diff[{R_λ}] in fact computes a signed length of σ_B, where portions of different sign are joined wherever σ_B fails to be smooth and spacelike. Indeed, this sign ambiguity is the reason for the absolute value in (3.6): even when σ_B is everywhere spacelike and differentiable, S_diff[{R_λ}] could compute the negative of its length. Such signed lengths may be physically interesting (for instance, they contribute to the change in sign of S_diff, which we argued heuristically above can potentially be understood as an indication that no IR data is being lost), but for simplicity we will hereafter restrict to the case where σ_B is everywhere spacelike and differentiable.

We now immediately obtain an infinite class of area laws. Consider any coarse-graining family F such that S_diff[F] > 0, and introduce an IR-coarser family F̃. If the state has a classical geometric dual and the curves σ_B, σ̃_B constructed from F and F̃ are everywhere differentiable and spacelike, then monotonicity of the differential entropy (3.3) combined with the hole-ographic formula (3.6) implies that

    length[σ̃_B] ≥ length[σ_B].    (3.7)

Recall that the inequality is simply strong subadditivity: the removal of long-range entanglement in the boundary maps precisely to an area law in the bulk!

In fact, this construction can be slightly generalized to higher dimensions. Assume that instead of being geodesics, the curves Γ_λ extremize a geometric action

    L_λ[γ] = ∫ ds L(γ(s), γ′(s)) with γ(s_{L,R}) = γ_{L,R}(λ),    (3.8)

such that L_λ[Γ_λ] = S(γ_L(λ), γ_R(λ)), L(γ(s), γ′(s)) is positive and depends only on γ and its tangent vector γ′, and L_λ is invariant under reparametrizations of s. The more general result of [54] is that

    ∫_{σ_B} ds L(σ_B(s), σ′_B(s)) = |S_diff[{R_λ}]|,    (3.9)

where σ_B is a bulk curve constructed from {Γ_λ} in the same way as above. Now, in higher-dimensional geometries, if the family {Γ_λ} obeys a symmetry property dubbed generalized planar symmetry in [54], then the problem of computing codimension-two extremal surfaces essentially reduces to solving for curves that extremize an action of the form (3.8). In these (very restricted) higher-dimensional setups, differential entropy then computes the area of a codimension-two extremal surface in the bulk, and we again obtain an area law. However, because the requirement of generalized planar symmetry is so strong, we will continue to restrict to the case of two boundary dimensions for the remainder of this section. As a final note, it is worth remarking once again that since the physical role of S_diff[{R_λ}] is not well understood, we do not purport to give a physical interpretation to the curves σ_B or their area.
Rather, we have derived an entropic understanding of the monotonicity in the area of a family of curves. To illustrate this construction, we now turn to some explicit examples that explain old and novel area laws in AdS_3.

Spacelike Area Laws in Pure AdS

We will make use of global coordinates (t, r, φ), in terms of which the metric of pure AdS_3 is

ds² = −(1 + r²/ℓ²) dt² + dr²/(1 + r²/ℓ²) + r² dφ².    (3.10)

It will sometimes be useful to convert to a compactified coordinate r* = ℓ arctan(r/ℓ), in terms of which the metric becomes

ds² = sec²(r*/ℓ)(−dt² + dr*²) + ℓ² tan²(r*/ℓ) dφ².    (3.11)

Clearly, null radial geodesics are just given by lines of constant t ± r*. First, consider working on a static time slice t = const. This is just the Poincaré disk, on which the construction of the bulk curves σ_B has been studied extensively; see for instance [66,67]. Spacelike geodesics on this slice are given by

r(φ) cos(φ − φ₀)/√(r(φ)² + ℓ²) = r_min/√(r_min² + ℓ²),    (3.12)

where the endpoints of the geodesics lie at φ = φ₀ ± ∆φ/2 on the boundary. Thus the minimum r reached by a geodesic whose endpoints have angular separation ∆φ is

r_min(∆φ) = ℓ cot(∆φ/2).    (3.13)

Therefore, consider first a set of boundary intervals all of the same angular extent ∆φ ≤ π; the bulk curve σ_B defined by these intervals is just a circle of radius r_min(∆φ), and the differential entropy just computes the circumference of this circle, 2πr_min(∆φ). Now, as ∆φ is decreased, so that the boundary intervals all become smaller, the circumference of the corresponding bulk circle clearly increases: this gives a spacelike area law.

Figure 10. From a general bulk curve σ_B, we can always find a family {I_λ} of boundary intervals whose differential entropy computes the length of σ_B. If σ_B is convex, these intervals define a coarse-graining family and thus yield a monotonicity property of differential entropy; this corresponds to the fact that only if σ_B is convex are we guaranteed that any "outwards" deformation of σ_B will increase its length.

On the other hand, it is worth noting that for ∆φ > π, the differential entropy of the intervals of size ∆φ computes the negative circumference of the bulk circle; in this case, decreasing ∆φ initially decreases the circumference of the bulk circle, which is consistent with monotonicity of the differential entropy: the negative circumference still increases. More generally, given any closed differentiable bulk curve σ_B with no self-intersections on the Poincaré disk, a family of spatial boundary intervals I_λ whose differential entropy computes the length of σ_B can be found by just firing tangent geodesics off of σ_B, as shown in Figure 10. However, only if σ_B is convex are the causal diamonds R_λ = D[I_λ] of the resulting boundary intervals a coarse-graining family; this means that only if σ_B is convex are we guaranteed by SSA that it obeys an area law. The reason for this matches beautifully with geometric intuition: if the boundary intervals are shrunk, then σ_B moves towards its exterior. Now, if σ_B is convex, its outwards-directed expansion is non-negative, and thus any deformation of it towards its exterior cannot decrease its area. On the other hand, if σ_B is concave, it must have at least some portions where its outwards-directed expansion is negative: deforming just these regions outwards would decrease its area, so it cannot obey a general area law. Thus we see that the definition of a coarse-graining family automatically excludes concave curves, which would violate a potential area law.
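The circle example can be checked end to end in a few lines. The sketch below is ours, not taken from the text above: it assumes the standard vacuum entanglement entropy of an interval of angular size ∆φ on the spatial circle, S(∆φ) = (c/3) ln[(2/ε) sin(∆φ/2)], and the Brown-Henneaux relation c = 3ℓ/2G_N; the UV cutoff ε drops out of the derivative. For the rotationally symmetric family of intervals of fixed size, (3.3) reduces to S_diff = 2π dS/d∆φ, which is compared against the (signed) circumference 2πr_min(∆φ) in units of 4G_N.

```python
import numpy as np

ell, G_N, eps = 1.0, 1.0, 1e-6
c = 3 * ell / (2 * G_N)                    # Brown-Henneaux central charge

def S_interval(dphi):
    # Vacuum entanglement entropy of an interval of angular size dphi on
    # the spatial circle; the UV cutoff eps cancels out of derivatives.
    return (c / 3) * np.log((2 / eps) * np.sin(dphi / 2))

def S_diff_uniform(dphi, h=1e-6):
    # Continuum differential entropy (3.3) for the rotationally symmetric
    # family of intervals of fixed size dphi: S_diff = 2*pi * dS/d(dphi).
    return 2 * np.pi * (S_interval(dphi + h) - S_interval(dphi - h)) / (2 * h)

dphis = np.linspace(0.2, 1.4, 7) * np.pi          # includes dphi > pi
s_diff = np.array([S_diff_uniform(d) for d in dphis])
circ = 2 * np.pi * ell / np.tan(dphis / 2)        # signed 2*pi*r_min, eq. (3.13)

assert np.allclose(s_diff, circ / (4 * G_N), atol=1e-4)
assert np.all(np.diff(s_diff) < 0)   # S_diff decreases as the intervals grow
print(s_diff)                        # positive below dphi = pi, negative above
```

Both checks pass: the hole-ographic identity (3.6) holds with the signed circumference, and S_diff decreases monotonically as the intervals are enlarged, changing sign at ∆φ = π exactly as described above.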
Finally, let us note that although here we focused on the Poincaré disk, we could of course consider slightly modifying all the intervals I_λ so that they don't all lie on the same time slice. As long as the resulting causal diamonds R_λ still constitute a coarse-graining family, it is clear that generically the corresponding bulk curve σ_B will no longer lie on some slice of time symmetry. Nevertheless, as long as the modifications to the I_λ are sufficiently small, the resulting area laws obtained by shrinking the causal diamonds will still be spacelike. If the intervals are deformed sufficiently far from all lying on a constant time slice, the signature of the area law may change; this leads us to the next section.

Null Area Laws in Pure AdS

Consider again a family of intervals {I_λ} of angular size ∆φ = π on the slice t = 0. The corresponding bulk geodesics pass through the bulk point (t, r) = (0, 0), and thus the bulk "curve" σ_B degenerates to a point; the length of σ_B vanishes. Next, consider the intervals defined by the intersection of the causal diamonds D[I_λ] with slices of constant time t > 0; these define new spatial intervals of size ∆φ(t) = π − 2t/ℓ. From (3.13), the corresponding bulk curves σ_B are circles at r*(t) = t. But since outgoing radial bulk null geodesics correspond to lines of constant t − r*, these circles correspond to constant-t slices of the future lightcone of the point (t, r) = (0, 0): in other words, the family of intervals of size ∆φ(t) generates bulk curves σ_B(t) which trace out a lightcone, as shown in Figure 11(a). We may consider more general slices of this light cone as follows. This light cone is generated by radial geodesics fired from the point (t, r) = (0, 0); since they are radial, these generators can be labeled by φ. Any (spatial) slice γ of the light cone intersects each of these generators only once, and thus we may parametrize any such slice by the time t(φ) at which the generator at angle φ intersects γ. Now consider a family of intervals parametrized by φ, with each interval centered at φ lying on the time slice t(φ) and with angular size ∆φ(φ) = π − 2t(φ)/ℓ. By construction, each geodesic anchored to these intervals will intersect the light cone precisely on the slice γ, and thus at this point, the tangent vector to γ and to the boundary-anchored geodesic span a null plane. By the hole-ographic construction, this ensures that the bulk curve σ_B constructed from these boundary intervals will correspond precisely to γ. Thus differential entropy can be used to compute the area of any slice of the light cone, not just symmetric ones; see Figure 11(b). As the slice γ is moved to the future, the corresponding boundary intervals shrink into themselves, and thus their differential entropy must be nondecreasing. Thus the monotonicity of the area of any slice of the light cone corresponds directly to the monotonicity of differential entropy. Moreover, note that this light cone also happens to be a causal horizon, and therefore we have an explanation for the Hawking area law along a causal horizon. This may be a simple case, but to our knowledge it is the first entropic understanding of the Hawking area law for non-stationary causal horizons.
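The light-cone case admits the same kind of numerical cross-check. Under the same assumptions as in the previous sketch (vacuum interval entropy plus Brown-Henneaux; our toy, not the text's), the symmetric slices satisfy cot(∆φ(t)/2) = tan(t/ℓ) for ∆φ(t) = π − 2t/ℓ, so the differential entropy of the shrinking intervals should equal one quarter of the growing circumference 2πℓ tan(t/ℓ) in units of G_N:

```python
import numpy as np

ell, G_N = 1.0, 1.0
c = 3 * ell / (2 * G_N)                        # Brown-Henneaux central charge

t = np.linspace(0.0, 0.45 * np.pi * ell, 200)  # height of the slice up the cone
dphi = np.pi - 2 * t / ell                     # interval size on slice t

r_cone = ell * np.tan(t / ell)                 # radius of sigma_B(t): r*(t) = t
area = 2 * np.pi * r_cone                      # circumference of the slice

S_diff = (np.pi * c / 3) / np.tan(dphi / 2)    # (pi c / 3) cot(dphi / 2)

assert np.allclose(S_diff, area / (4 * G_N))
assert np.all(np.diff(area) > 0)  # area grows toward the future along the cone
```

The second assert is precisely the Hawking area law for this causal horizon, here manifest as monotonicity of the differential entropy.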
Mixed-Signature Area Laws in Pure AdS

To obtain the spacelike and null area laws above, we modified the boundary intervals generating the bulk curves σ_B by changing their size and by translating them in time. Here we explore the final degree of freedom, boosts, and show that we can obtain area laws for mixed-signature surfaces: that is, surfaces with timelike, null, and spacelike components. To that end, let us introduce boundary null coordinates u = t/ℓ + φ, v = t/ℓ − φ, and consider boundary causal diamonds defined by left and right endpoints with null separations ∆u, ∆v. In order to ensure that these points are spacelike separated, we require 0 < ∆u < 2π and 0 > ∆v > −2π. Consider now a family of such causal diamonds with ∆u and ∆v fixed, but centered on the time slice t = 0, as shown in Figure 12. In fact, since the differential entropy of regions with ∆φ > π is just the negative of that of regions with ∆φ → 2π − ∆φ, we may simply restrict to ∆φ ≤ π, implying ∆u ≤ 2π − |∆v|.

Figure 12. The boosted intervals used in the construction of mixed-signature area laws. For fixed ∆v, decreasing ∆u shrinks the intervals in the u null direction while leaving their extent in the v null direction unchanged.

Now, by keeping ∆v fixed while varying ∆u (or vice versa), we may stretch or contract these intervals in one null direction while leaving their extent in the other null direction unchanged: it is this deformation that we will exploit to construct a mixed-signature area law. We leave details of the construction to Appendix A; the punchline is that such a family of intervals generates two curves σ±_B lying at a fixed radius r_{σ_B}(∆u, ∆v), whose explicit expression is given in Appendix A. When ∆u = 2π − |∆v|, the curves σ±_B degenerate to a point at r_{σ_B} = 0, and thus have zero length. It is also clear that as ∆u is decreased (with ∆v fixed), σ±_B move monotonically to increasing r, and therefore have monotonically increasing areas: we obtain area laws, as required by positivity of differential entropy. What is more interesting is the signature of these area laws. Indeed, consider the surfaces H± swept out by σ±_B as ∆u is varied between 0 and 2π − |∆v|; these surfaces are shown in Figure 13. As can be seen in the plot, and as we discuss in more detail in Appendix A, H± are always spacelike at small and large r; however, if ∆v > −2 arcsin(√2/3) ≈ −0.31π, then H± will also be timelike at some intermediate r. Thus for sufficiently small |∆v|, we have constructed area laws for surfaces of mixed signature; this is the first entropic explanation for such area laws.

Figure 13. The surfaces H± swept out by σ±_B as ∆u is varied with ∆v held fixed. In (a), ∆v = −π/3, and H± are everywhere spacelike. In (b), ∆v = −π/6, and H± are timelike between the surfaces shown as dotted lines.

It is tempting to compare this area law to the area law for holographic screens [26,49], which also have mixed signature. In fact, future holographic screens may have spacelike and timelike components, but on the timelike components the area must increase towards the past: this behavior is reproduced by the surface H+ (the surface H− behaves like past holographic screens, in which the area of timelike portions increases to the future). Indeed, this behavior is general: for a general family of intervals in any geometry, shrinking the size of the intervals must result in a smaller (that is, nested) entanglement wedge. Since the differential entropy curves σ_B must be tangent to the entanglement wedges of the boundary intervals that generate them, the "futuremost" differential entropy curve σ+_B can only move in spacelike or past-timelike directions when the boundary intervals are shrunk.
(In non-generic cases like the null area law in pure AdS, it may also move in the future-directed null direction, but even there it may never move in a future-directed timelike direction.) This connection between the behavior of area laws constructed from differential entropy and those of holographic screens is tantalizing, and is worth exploring further. Finally, note that although we derived this area law in pure AdS, it is robust under sufficiently small perturbations of either the boundary intervals or of the bulk geometry, since sufficiently small perturbations cannot change the signature of a non-null hypersurface.

Area Laws in Higher Dimensions

Our analysis and interpretation in the previous section relied heavily on differential entropy. The definition of differential entropy and its relation to the area of bulk surfaces do not have known generalizations to more than three bulk dimensions (except in the special case of generalized planar symmetry [54]). Nonetheless, as the coarse-graining procedure itself is well-defined in any dimension (and indeed, in any field theory), we can still make progress in higher dimensions. Subregion/subregion duality, and in particular the reconstructibility of the entanglement wedge W_E[R] from the reduced density matrix ρ_R, immediately imply that any coarse-graining family F in a holographic CFT (with a semiclassical bulk dual) gives rise to natural bulk geometric constructs associated to the information recovery limit of the coarse-grained state. Such regions were alluded to at the end of Section 2, and we now treat them in more detail. First, let us define two regions of interest:

Definition 6. Let F = {R_λ} be a coarse-graining family. In any state ρ with a semiclassical gravitational dual, we define the reconstruction region L[F] ≡ ∪_λ W_E[R_λ], and the unreconstructible region U[F] as its complement in the bulk.

By entanglement wedge reconstruction, the reconstruction region L[F] consists precisely of all points in the bulk at which local bulk operators are known to be recoverable from the reduced states ρ_λ constituting ρ_F. While the nomenclature may suggest otherwise, not all operators in L[F] are recoverable: sufficiently smeared operators will require access to reduced states over unions of the R_λ, which are not data accessible to ρ_F (as an example, consider a bilocal operator O(x₁, x₂) evaluated at two points that do not live in a single W_E[R_λ] for any λ). Conversely, the unreconstructible region U[F] (which in pure states can be written as ∩_λ W_E[R̄_λ]) consists of all points in the bulk at which no local operators can be recovered from ρ_F; the region U[F] was recently considered in [55], and below we will comment on the connection of the area laws of [55] to ours.

A natural expectation from the holographic nature of gravity is that there exists a special surface (of some codimension), defined geometrically from the boundaries of these regions, which characterizes the amount of information lost in the coarse-graining. Intuition from the differential entropy in three bulk dimensions, as well as from general gravitational thermodynamics, suggests that such a surface should be codimension-two, and that its area is a measure of the data rendered inaccessible by our coarse-graining. In general, how might such a surface be obtained? Since U[F] is a domain of dependence, one option is the "rim" of U[F]. Another option, motivated by the construction in three bulk dimensions, is the surface ∂J⁺[L[F]] ∩ ∂J⁻[L[F]]. We sketch all these surfaces in Figure 14(a). With a plethora of options to choose from, it is not immediately clear which surface encodes the missing data in general dimension. It is also certainly possible that more than one surface is relevant: indeed, it is not too difficult to see that even in three bulk dimensions the surface ∂J⁺[L[F]] ∩ ∂J⁻[L[F]] is not irrelevant, as it is constructed by the intersection of null congruences fired from the differential entropy surfaces σ±_B; these null congruences have negative expansion towards ∂J⁺[L[F]] ∩ ∂J⁻[L[F]], and thus its area is bounded above by the differential entropy.
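To make these candidates concrete, the following deliberately crude toy (ours, and set in flat 1+1 Minkowski space rather than AdS) models the entanglement wedges W_E[R_λ] as causal diamonds and locates the candidate surface ∂J⁺[L[F]] ∩ ∂J⁻[L[F]] on a grid. For a union of overlapping diamonds on a time-symmetric slice, the computed points are the two outermost spatial tips, i.e., the rim of the corresponding domain of dependence, so the candidates coincide in this simple case.

```python
import numpy as np

# Flat 1+1 toy: model entanglement wedges as causal diamonds and find the
# candidate edge dJ+[L] ∩ dJ-[L] on a grid. Illustrates the definition only.
N = 401
ts = np.linspace(-2, 2, N)
xs = np.linspace(-2, 2, N)
T, X = np.meshgrid(ts, xs, indexing="ij")

L = np.zeros_like(T, dtype=bool)
for x0, a in [(-1.0, 0.8), (0.0, 0.8), (1.0, 0.8)]:  # diamonds |t|+|x-x0| <= a
    L |= np.abs(T) + np.abs(X - x0) <= a

def causal_future(region):
    # One forward sweep: row i inherits from row i-1 and its light cone
    # (grid spacings in t and x are equal, so light rays move one cell/step).
    J = region.copy()
    for i in range(1, J.shape[0]):
        prev = J[i - 1]
        J[i] |= prev
        J[i, 1:] |= prev[:-1]
        J[i, :-1] |= prev[1:]
    return J

J_plus = causal_future(L)
J_minus = causal_future(L[::-1])[::-1]               # time-reversal trick

def boundary(region):
    # Cells of the region with at least one 4-neighbor outside it.
    b = np.zeros_like(region)
    b[1:-1, 1:-1] = region[1:-1, 1:-1] & ~(region[:-2, 1:-1] & region[2:, 1:-1]
                                           & region[1:-1, :-2] & region[1:-1, 2:])
    return b

sigma = boundary(J_plus) & boundary(J_minus)         # candidate edge points
for i, j in zip(*np.nonzero(sigma)):
    print(f"edge point at (t, x) = ({ts[i]:+.2f}, {xs[j]:+.2f})")
```

Running this prints the two points (t, x) = (0, ±1.8), the outer tips of the union of diamonds.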
To develop a complete understanding, we would need a monotonic field-theoretic object constructed from our coarse-graining family F in any dimension; in Section 6 we will comment more on possible approaches towards finding such a construct (these are clearly related to the a-, c-, and F-theorems). For now, to make some progress, we will take a conservative approach: we focus on the case where all these candidates are coincident on a single surface σ, defined as follows.

Definition 7. If the intersection ∂J⁺[L[F]] ∩ ∂J⁻[L[F]] is a nonempty, compact, codimension-two spacelike surface, then we call σ[F] ≡ ∂J⁺[L[F]] ∩ ∂J⁻[L[F]] the reconstruction edge of L[F] (the "edge of L[F]" for short). In such a case, σ[F] also coincides with the rim of U[F].

Intuitively, an edge separates the degrees of freedom removed by our IR coarse-graining procedure from those removed by Wilsonian-like holographic RG; this is the reason that, as noted in Section 1, the area laws presented below and those of [55] occasionally coincide (this does not apply to the more general area laws of Section 3). However, due to the differences between the two constructions, it is not clear in general when exactly the two classes of area laws (the ones presented in this section and those of [55]) agree. Let us now prove our area laws, corresponding to a monotonicity along continuous IR coarse-grainings. To do so, we first prove some preliminary properties about σ[F] itself.

Lemma 1. Let σ[F] be an edge. Then (i) σ[F] is achronal and C¹ up to a sparse set of points, and (ii) every point p at which σ[F] fails to be C¹ (a "cusp") lies on an HRT surface X[R_λ] ∈ L[F] for some R_λ ∈ F.

Proof. Item (i) follows immediately from Proposition 6.3.1 of [69] and the fact that σ[F] is the intersection of a past set and a future set (see [70] for the desired result on sparseness of the non-C¹ points).

We will now see that the defining feature of an edge of a coarse-graining family prevents it from "bending" too much in the spacetime, as shown in Figure 15. More precisely, this means that edges are so-called normal surfaces:

Lemma 2. Let σ[F] be an edge, and let N be an outward-directed null congruence fired from σ[F]. Then the expansion of N is non-negative.

Figure 15. Edge surfaces are prohibited from sampling different "bulk depths" as shown here; more precisely, they must be normal surfaces.

Now we show directly that N has positive expansion whenever p lies on a cusp. If p lies on a cusp, by Lemma 1 there exists an HRT surface X[R_λ] ∈ L[F] for some R_λ ∈ F such that p ∈ X[R_λ]. Since σ[F] and X[R_λ] are achronally separated, there exists a Cauchy slice Σ containing them both. Now consider a neighborhood U_p of p sufficiently small so that the geometry of Σ ∩ U_p is approximately flat. Since the set of points on which σ[F] is not C¹ is sparse, U_p contains points at which σ[F] is C¹, and therefore at which σ[F] has a well-defined normal (directed towards L[F]) in Σ. As shown in Figure 16, the fact that U_p is small allows us to discuss the convergence or divergence of these normals near p. Now, in order for σ[F] to lie outside of Int[L[F]], it must lie to one side of X[R_λ]; this means that the normals to σ[F] on Σ must be diverging. Thus the null congruences fired off the C¹ portions of σ[F] ∩ U_p do not intersect each other, implying that the null congruence N must also consist of generators originating at p. Since these generators originate at a caustic, they must have positive expansion.
Finally, it is clear that where σ[F] is C¹, N must locally be C¹ as well. Where σ[F] has a cusp, the above implies that the generators of N are diverging, so N is locally C¹ as well. We now obtain the final result:

Theorem 2. Let {F(r)} be a one-parameter family of coarse-graining families, continuous in r, such that F(r₂) is IR-coarser than F(r₁) whenever r₂ > r₁, and suppose that each F(r) admits an edge σ(r) ≡ σ[F(r)]. Then the area of σ(r) is non-decreasing in r.

Proof. Let H be the hypersurface foliated by {σ(r)}, and let h^a be the outward-directed normal vector field in H to the {σ(r)}. Note that h^a cannot be timelike anywhere, since under a coarse-graining the edges σ(r) must move outwards (i.e., for any r₂ > r₁, σ(r₂) ⊂ L[F(r₁)], so σ(r₁) and σ(r₂) cannot be timelike-separated). Since the σ(r) need not be everywhere C¹, h^a may be singular (on a set of measure zero); however, Lemma 2 guarantees that outward-directed integral curves of h^a only start and never end at such singularities of h^a. Now, by Lemma 2, the expansions of the outwards-directed null congruences from each of the σ(r) are non-negative. But the expansion of the integral curves of h^a is just a linear combination of these two null expansions (with non-negative coefficients). Therefore the expansion along the h^a congruence is non-negative as well, and so the area of C¹ portions of the σ(r) is non-decreasing. Moreover, since generators of the h^a congruence never leave H as they flow outward, singularities of h^a where new generators appear can only increase the area of the σ(r). Thus the areas of the σ(r) are non-decreasing under a flow along h^a.

This result is in fact an infinite family of area laws. These area laws apply to non-timelike foliations, but in particular they can include causal horizons: in certain cases, the Hawking area law for causal horizons is a special case of these general area laws (e.g., the early stages of AdS-Vaidya collapse). As an aside, note that it is simple to show that the so-called outer entropy of slices of causal horizons is bounded from above by their area [68], so that our area law immediately suggests an outer entropy increase theorem as well; indeed, such a result was found in [72]. In this section, we had to resort to bulk arguments to prove an area law, in contrast with our results in three dimensions, in which the area law simply manifested from SSA in the boundary theory. Moreover, while in the three-dimensional case we could understand SSA as the dual of the area law, we have no such interpretation here. This is due to the absence of an appropriate generalization of the differential entropy to higher dimensions. The area monotonicity property, however, coupled with the obvious significance of these degenerate surfaces to our coarse-graining procedure, suggests that there exists a higher-dimensional analogue of the differential entropy. In fact, our coarse-graining procedure and the area monotonicity theorem are sufficiently constraining that we expect to be able to use them to find the requisite quantity.
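For orientation, the mechanism driving the proof of Theorem 2 can be compressed into one line; the notation here (null normals k^a, l^a and induced metric q^{ab} on σ(r)) is ours rather than fixed above. Since h^a is normal to σ(r) and nowhere timelike, it decomposes into the two null normals with non-negative coefficients, and because q^{ab} annihilates both normals, derivative terms of the coefficients drop out of the expansion:

\[
h^a = \alpha\, k^a + \beta\, l^a \ (\alpha, \beta \ge 0)
\quad\Longrightarrow\quad
\theta_h = q^{ab} \nabla_a h_b = \alpha\, \theta_k + \beta\, \theta_l \ \ge\ 0,
\]

where the final inequality uses the non-negative null expansions guaranteed by Lemma 2 (with the orientation conventions of the proof).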
Quantum Generalization

We have so far neglected the backreaction of bulk quantum fields on the geometry. While this limit is instructive, any results derived in it should be robust under quantum corrections to the geometry in order to be physically significant. In this regime, a fluctuation to the spacetime metric is viewed as an operator whose expectation value is well-approximated by an expansion in powers of G_N. The holographic computation of the von Neumann entropy therefore incorporates quantum corrections to the HRT surface; consequently the Dong-Harlow-Wall argument for entanglement wedge reconstruction includes quantum corrections as well. Since the edge surface σ[F] is defined via HRT surfaces, it too changes under quantum corrections. This is consistent with the rule of thumb that perturbative quantum gravity effects can violate area monotonicity theorems: the quantity which is monotonic is a "quantum-corrected area" known as the generalized entropy of a surface σ, defined as

S_gen(σ) = A(σ)/(4G_N) + S_out(σ),

where S_out(σ) is the von Neumann entropy of the propagating quantum fields on a Cauchy slice of the exterior of σ. S_gen was first defined by Bekenstein [3,4,6], and the approach of using it as a "quantum-corrected area" has since then been applied with remarkable success to generalize classical theorems to the semiclassical regime (see e.g. [6,71,73-75]). While a comprehensive justification for the replacement A → 4G_N S_gen is beyond the scope of this paper (see e.g. [74] for a recap), we cannot resist pointing out that there is significant evidence that the combined quantity S_gen is UV-finite; this provides evidence that the appropriate generalization of the area in perturbative quantum gravity is the generalized entropy. Further evidence is provided by the appropriate quantum generalization of the HRT prescription: HRT surfaces (which classically extremize the area functional) are replaced by quantum extremal surfaces, which extremize S_gen [73], as proven recently in [76] (the earlier work of [77] computed the corrections at first order, where the generalized entropies of the classical and quantum extremal surfaces agree). We should thus expect that the natural generalization of the area theorem in Section 4 is a Generalized Second Law: a monotonicity theorem for S_gen. This is indeed the case, as we shall now show.

First, let us note that because S_gen is not locally defined, the appropriate quantum generalization of the classical expansion, the so-called quantum expansion [71,73,74], requires a functional derivative of S_gen under local deformations of σ. That is, the quantum expansion of a surface σ at p ∈ σ in the null direction k^a is defined as

Θ_k(p) ≡ (4G_N/√h) δS_gen/δσ_k(p),

where δS_gen/δσ_k(p) schematically refers to a deformation of σ at p in the k^a direction of area δA; see [74] for the precise definition. Bekenstein's famous GSL for causal horizons is equivalent to the statement that Θ is non-negative on future causal horizons (and non-positive towards the future on past causal horizons). Likewise, the quantum extremal surfaces X_quant[R] mentioned above have vanishing quantum expansion in all null directions. We may now examine the quantum generalizations of the results of Section 4 (we will save comments on quantum generalizations of Section 3 for later). First, note that Definition 7 remains unchanged: σ[F] is still the edge of the union of entanglement wedges L[F], although now these entanglement wedges are obtained from quantum rather than classical extremal surfaces. Lemma 1 then remains unaltered under the mild assumption that quantum extremal surfaces are C¹. Similarly, Lemma 2 remains unchanged, except that the null expansion of σ is replaced by the quantum expansion. The reason for this straightforward modification is that the crucial result, which compares the classical expansion of tangent null hypersurfaces, admits a quantum generalization in terms of their quantum expansion [71]. This establishes the result wherever σ is C¹, while at cusps the classical expansion must be strictly positive, and thus perturbative quantum corrections will not alter its sign.
Finally, it then follows that Theorem 2 is replaced by a GSL:

Theorem 3 (GSL). Let {F(r)} be as in Theorem 2, with the edges σ(r) now defined via quantum extremal surfaces. Then S_gen(σ(r)) is non-decreasing in r.

The proof is essentially the same as for the classical case, except that the non-negativity of the quantum expansion of σ(r) in both null directions towards L[F(r)] guarantees that the generalized entropy increases rather than the area. The technology is essentially the same "zigzag argument" used in [75]. Thus we have found that the area law, Theorem 2, associated to our IR coarse-graining admits a quantum generalization as a GSL. A natural question is whether the precise entropic connection to SSA via differential entropy exists in a three-dimensional perturbatively quantum bulk: after all, the monotonicity of S_diff from SSA is a general, purely field-theoretic statement that makes no requirement on the bulk (or even the existence of one). However, the key dictionary entry used to translate the monotonicity of S_diff to a bulk area law, that is, the mapping of S_diff to the area of the differential entropy surfaces, must receive quantum corrections whose behavior is at this point unclear. We plan to investigate these corrections in future work.

Discussion

Coarse-graining is expected to be the fundamental aspect of quantum gravity that permits the emergence of semiclassical spacetime, i.e., the regime in which the UV data of quantum gravity decouples from the low-energy degrees of freedom. A significant challenge in any attempt to understand this process is the lack of a precise notion of UV gravitational data. In this paper, we have used AdS/CFT to investigate this question via the boundary theory, circumventing this problematic issue. The governing principle under which we operate is that, by the UV/IR correspondence, this data is encoded in the CFT IR. This motivates a precise IR coarse-graining, which through the lens of quantum error correction can be viewed as the erasure of bulk data. Explicitly, by restricting our knowledge of a state ρ to reduced density matrices of a set of regions (i.e., a coarse-graining family), we remove IR data such as long-range correlation functions and long-range entanglement. The AdS/CFT dictionary entry of subregion/subregion duality automatically translates this into an erasure of a region of the bulk interior. Erasing larger bulk regions corresponds to discarding a larger sector of the boundary IR. Regardless of how well-motivated our procedure may be, without evidence that it makes contact with the actual coarse-graining mechanism built into quantum gravity, it is nothing more than a framework for removing IR information in a quantum field theory. We find that this is not the case, as our coarse-graining procedure passes a highly nontrivial test: it gives rise to holographic area monotonicity theorems in the classical regime, and Generalized Second Laws in perturbative quantum gravity. Does every area law have a statistical significance in quantum gravity, and if not, why should we expect that ours do? The coincidence of a well-motivated coarse-graining procedure and its realization as an area monotonicity property (which behaves correctly under quantum corrections) is too strong to ignore. However, we accept that some of our readers may remain skeptical at this stage. Not to fear; the connection goes deeper. In three bulk dimensions, the increase of area of a family of surfaces corresponding to a particular (continuous) coarsening is precisely a result of strong subadditivity of the von Neumann entropy.
Since strong subadditivity is a measure of the irreversibility of the removal of a subsystem, our area laws in three bulk dimensions are exactly the gravitational statement of irreversibility of the coarse-graining. Whether or not these particular (three-dimensional) area laws were suspected of having statistical quantum gravity significance prior to our work, the conclusion is now inevitable: they are a result of statistical coarse-graining. Moreover, our mechanism is a generalized version of the one which gives rise to the Casini-Huerta version of the c-theorem [61,62], again indicating the connection with the irreversibility of this IR coarse-graining. This roughly summarizes the framework and its justification; we now briefly comment on some interesting applications beyond the existence of a dual area theorem.

Mixed Signature Area Law: Our construction of mixed-signature area laws is particularly intriguing due to their relation to holographic screens [19], which are essentially local analogs of event horizons and which exist in general spacetimes and can have mixed signature. As shown in [26,49], holographic screens obey an area law irrespective of their signature: the area of so-called future holographic screens is always increasing towards the past on timelike portions and outwards on spacelike ones, and the time-reverse is true of past holographic screens. An entropic explanation of this law on spacelike portions was given in [23] in the context of AdS/CFT, but an explanation for the general mixed-signature case is lacking. It is striking, however, that the directions of area growth (and by extension, of the signature changes) of past and future holographic screens are identical to those of the hypersurfaces H± constructed from differential entropy. We emphasize that while we only constructed explicit examples of such hypersurfaces in pure AdS, this behavior is general. The universality of such mixed-signature area laws indicates that the same mechanism may give rise to them all. We hope that this observation can be used to explain a mysterious aspect of holographic screens in general spacetimes.

Further Work on Differential Entropy in Two Dimensions: In three bulk dimensions, differential entropy provided a remarkably crisp entropic interpretation of bulk area laws. However, while monotonicity of the differential entropy has a clear interpretation in terms of SSA, a precise physical interpretation of differential entropy itself has yet to be provided. There are some hints that such an interpretation may exist. For instance, in the special cases we have studied, the vanishing of S_diff indicates that the coarse-graining family from which it is constructed is not a coarse-graining at all, i.e., it contains essentially all IR data of the boundary theory. Moreover, when evaluated in Poincaré-invariant vacuum states, S_diff is effectively an integrated version of the Casini-Huerta c-function. It is therefore desirable to develop an understanding of the differential entropy beyond the classical bulk regime. When the bulk is perturbatively quantum (and thus the von Neumann entropies of boundary regions are computed from quantum rather than classical extremal surfaces in the bulk), does S_diff correspond to any interesting bulk object? Such a bulk object would be monotonic by virtue of the fact that S_diff is as well (recall that S_diff is monotonic in any unitary relativistic QFT, regardless of the existence or regime of a dual bulk).
It is therefore natural to expect that the quantum-corrected bulk object dual to S_diff should be a bulk generalized entropy S_gen, which would provide tantalizing evidence that S_diff really is computing some fundamentally important object. However, it is conceivable that more generally S_diff may compute some other type of quantum-corrected area in the bulk, suggestive of other possible quantum generalizations of area beyond the generalized entropy; we leave an investigation of this question to future work.

Higher Dimensional Differential Entropy: The precise interpretation of area monotonicity in terms of SSA in three dimensions immediately calls for some extension to higher dimensions. To our knowledge, except for higher-dimensional configurations with sufficient symmetry to reduce to a three-dimensional problem [54], there is presently no such generalization. To some extent, this is because so far S_diff has been understood only in the holographic context as a dual computation of the area of certain bulk surfaces. However, it is manifest from the results in this paper that S_diff has crucially important monotonicity properties independent of the existence of any holographic dual. This observation provides an invaluable guide in constructing higher-dimensional generalizations of differential entropy. For instance, since in two boundary dimensions we may think of S_diff as an integral over a generalized Casini-Huerta c-function of the regions R_λ, natural guesses for higher-dimensional objects would involve integrals over entropic F-functions, a-functions, etc. We might hope that a judicious guess could then produce an object which still computes the area of bulk surfaces constructed from the entanglement wedges of the coarse-graining family used to define it, thus yielding an entropic interpretation of our area laws in higher dimensions.

Bulk Extent of Coarse-Graining Procedure: In a previous paper [79], we found that the differential entropy cannot compute areas of surfaces inside a bulk region associated to holographic screens. In particular, the differential entropy is insensitive to a subset of the interior of holographic screens. Since the presence of a single holographic screen implies the existence of an infinite set of them, there is some outer envelope whose interior cannot be probed by the differential entropy construction. For purposes of hole-ographic bulk reconstruction in dynamical gravity, this posed a serious problem. For our purposes, however, this is instead an interesting feature: our area laws must avoid this region of strong dynamical gravity. This is possibly related to the non-locality of quantum gravity. Interestingly, it is possible in principle for our area laws to still approach close to a singularity by avoiding this hidden region; we plan to determine whether this is indeed the case in future work.

Acknowledgments

Hospitality during various stages of this work was provided by the MIT Center for Theoretical Physics, the Stanford Institute for Theoretical Physics, and Imperial College London. The work of NE is supported in part by NSF grant PHY-1620059 and in part by the Simons Foundation, Grant 511167 (SSG). SF was supported by STFC grant ST/L00044X/1 and thanks the University of California, Santa Barbara for hospitality while some of this work was completed.
Risk of Pneumococcal Disease in US Adults by Age and Risk Profile

Abstract

Background. Older age and certain medical conditions are known to modify the risk of pneumococcal disease among adults. We quantified the risk of pneumococcal disease among adults with and without medical conditions in the United States between 2016 and 2019.

Methods. This retrospective cohort study used administrative health claims data from Optum's de-identified Clinformatics Data Mart Database. Incidence rates of pneumococcal disease (all-cause pneumonia, invasive pneumococcal disease [IPD], and pneumococcal pneumonia) were estimated by age group, risk profile (healthy, chronic, other, and immunocompromising medical condition), and individual medical condition. Rate ratios and 95% CIs were calculated comparing adults with risk conditions with age-stratified healthy counterparts.

Results. Among adults aged 18-49 years, 50-64 years, and ≥65 years, the rates of all-cause pneumonia per 100 000 patient-years were 953, 2679, and 6930, respectively. For the 3 age groups, the rate ratios of adults with any chronic medical condition vs healthy counterparts were 2.9 (95% CI, 2.8-2.9), 3.3 (95% CI, 3.2-3.3), and 3.2 (95% CI, 3.2-3.2), while the rate ratios of adults with any immunocompromising condition vs healthy counterparts were 4.2 (95% CI, 4.1-4.3), 5.8 (95% CI, 5.7-5.9), and 5.3 (95% CI, 5.3-5.4). Similar trends were observed for IPD and pneumococcal pneumonia. Persons with other medical conditions, such as obesity, obstructive sleep apnea, and neurologic disorders, were associated with increased risk of pneumococcal disease.

Conclusions. The risk of pneumococcal disease was high among older adults and adults with certain risk conditions, particularly immunocompromising conditions.

Streptococcus pneumoniae (pneumococcus) is a leading cause of bacterial pneumonia across all ages, especially among young children and older adults. Other serious pneumococcal infections include bacteremia and meningitis. These pneumococcal infections are a substantial cause of morbidity and mortality worldwide [1,2]. Previous research has demonstrated that pneumococcal disease is concentrated in a subset of the population with certain medical conditions [3]. For example, among US adults aged 50-64 years, 67% of invasive pneumococcal disease (IPD) episodes occurred among those with a chronic medical condition (CMC; eg, diabetes mellitus) or an immunocompromising condition (IC; eg, generalized malignancy); this subset comprises 31% of adults aged 50-64 years [3]. Further, adults who had ≥2 CMCs had similar or higher incidence rates of pneumococcal disease compared with those with an IC [4]. In addition to medical conditions, older age, due to immunosenescence and frailty, is a well-recognized risk factor for pneumococcal disease [5]. The prevalence of CMCs and ICs has been shown to increase with age, which compounds the risk of pneumococcal disease [4,6]. In the United States from 2013 to 2015, in adults aged 18-49 years, 50-64 years, and ≥65 years, the prevalence of at least 1 CMC was 11%, 25%, and 39%, respectively. Likewise, the prevalence of at least 1 IC increased from 3% to 6% to 15% in these age groups [4]. Currently, 2 types of pneumococcal vaccines are recommended for adult use in the United States: pneumococcal polysaccharide vaccine (PPSV23) and pneumococcal conjugate vaccine (PCV20 and PCV15).
Although PPSV23 has been recommended in older adults since 1983, effectiveness data of PPSV23 against nonbacteremic pneumonia are conflicting [7]. As nonbacteremic pneumonia represents the largest share of pneumonia in adults [8], considerable disease burden remains in this population. The US Centers for Disease Control and Prevention (CDC) has recommended the use of PCV13 for adults ≥19 years with ICs since 2012 and for all adults aged ≥65 years since 2014 based on increased disease risk [9]. In 2021, PCV20 alone and PCV15 followed by PPSV23 were recommended for all adults aged ≥65 years and adults aged 19-64 years with certain medical conditions [10]. Although there have been studies describing the incidence of pneumococcal disease in the United States, with the most recent analysis conducted from 2013 to 2015 [3,6], we evaluated the epidemiology of pneumococcal disease following the PCV13 recommendations for older adults and adults with an IC, and the continued routine use of PCV13 in children. In this study, we used data from a large health care claims database to determine the incidence of pneumococcal disease (all-cause pneumonia, IPD, and pneumococcal pneumonia) by age, risk profile, and individual medical condition among adults. These data can be used to establish a baseline incidence for evaluating epidemiologic changes due to higher-valency PCV recommendations and use in these populations. We also assessed the compounding effect of multiple risk conditions, referred to as "risk stacking," by analyzing disease rates among adults with more than 1 risk condition. Risk conditions included in the analysis were those known to increase the risk of pneumococcal disease and conditions that might predispose patients to disease based on limited data from other studies.

METHODS

Data Source

The Clinformatics Data Mart (CDM) is derived from a database of administrative health claims for members of large commercial and Medicare Advantage health plans. The de-identified data include patient-level information derived from all medical and pharmacy health care services. The population included in the CDM is geographically diverse, spanning all 50 states.

Study Population

Patients who met the following criteria were included in the analysis: (1) enrollment in a participating health plan on January 1st of each calendar year from 2016 to 2019 (index date), (2) ≥18 years of age on the index date, and (3) at least 1 year of continuous enrollment before the index date of each corresponding calendar year (Figure 1A and B). Episodes of disease (ie, all-cause pneumonia, IPD, and pneumococcal pneumonia) were ascertained over a 1-year period beginning on January 1st of each calendar year and ending on the earliest of: December 31st of that year, the date of death, or the date of disenrollment. Patients who met inclusion criteria in multiple calendar years contributed to the pooled analysis. Patients were excluded from analyses if gender data were missing, if the patient had a death date before January 1st of the index year, or if the patient had overlapping pneumonia inpatient admissions.

Risk Profiles

Patients were classified into risk profiles (healthy, CMCs, ICs, and other medical conditions) based on the presence of certain medical conditions during the baseline period (ie, a minimum of 12 months before January 1st of the corresponding calendar year). CMCs and ICs were aligned with those indicated by CDC for pneumococcal vaccine recommendations (Supplementary Table 1) [10].
Other medical conditions were selected by the authors based on a literature review of conditions that may be associated with an increased risk of pneumococcal disease (Supplementary Table 1) [6,11-16]. CMCs, ICs, and other medical conditions were ascertained using International Classification of Diseases, Ninth Revision/Tenth Revision, Clinical Modification (ICD-9/ICD-10-CM) diagnosis codes, ICD-9/ICD-10-CM/Healthcare Common Procedure Coding System procedure codes, and National Drug Code Directory drug codes (Supplementary Table 2), and profiles were assigned as follows. If an adult had 1 IC, they were counted in the category of ICs even if they had a concurrent CMC, a concurrent other medical condition, or both (Supplementary Figure 1). If an adult had 1 CMC and 1 other medical condition, they were counted in both categories. Adults without evidence of ICs, CMCs, or other medical conditions were classified as healthy.

Episode Identification

Disease episodes were identified using operational algorithms based on ICD-9-CM and ICD-10-CM diagnosis codes (Supplementary Table 2). Disease episodes separated by ≥30 days were considered independent events; claims separated by <30 days were counted as part of a single episode (Figure 1A).

Episode Care Settings

Disease episodes from inpatient hospital admissions, emergency department visits (without hospital admission), outpatient visits, and other settings (all categories not listed previously) were included in the analysis.

Statistical Analyses

Crude disease rates were calculated by episode care setting, within each age group and by risk profile, and, among adults with a CMC or an IC, by number of medical conditions. Within an age group and risk profile, disease rates of persons with select individual medical conditions were also calculated. Disease rates were reported as the number of events per 100 000 person-years. To maintain patient de-identification, disease rates were only presented when there were ≥10 episodes in a given group. Rate ratios and 95% confidence intervals comparing rates with those of healthy adults were estimated using Poisson regression with robust standard errors to account for adults contributing to more than 1 calendar year. Statistical analyses were performed using SAS, version 9.4 (SAS Institute, Cary, NC, USA).

Patient Consent

This study does not include factors necessitating patient consent.
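As an illustration of the episode rule under Episode Identification above, a minimal sketch follows. One detail is our assumption rather than the study's: the text does not state whether the 30-day window is anchored to the first or to the most recent claim of an episode; the sketch uses the most recent claim.

```python
from datetime import date, timedelta

def merge_into_episodes(claim_dates, gap_days=30):
    """Collapse a patient's claim dates into episodes: claims separated by
    <30 days extend the current episode; gaps of >=30 days start a new,
    independent episode."""
    episodes = []
    for d in sorted(claim_dates):
        if episodes and (d - episodes[-1][-1]) < timedelta(days=gap_days):
            episodes[-1].append(d)      # same episode
        else:
            episodes.append([d])        # new, independent episode
    return episodes

claims = [date(2017, 1, 3), date(2017, 1, 20), date(2017, 3, 15)]
print(len(merge_into_episodes(claims)))  # -> 2 episodes
```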
RESULTS

Study Population

Adults aged 18-49 years, 50-64 years, and ≥65 years contributed 15.2 million, 8.5 million, and 14.9 million person-calendar years of observation, respectively, to the analysis (Table 1). The prevalence of medical conditions increased with age. Among adults aged 18-49 years, 50-64 years, and ≥65 years, respectively, approximately 13.0%, 26.3%, and 31.4% had at least 1 CMC, while 10.9%, 25.2%, and 41.6% had at least 1 IC. When the risk groups were combined, the corresponding percentages of adults with at least 1 CMC or IC in the 3 age groups were 23.9%, 51.5%, and 73.0%.

Disease Rates and Rate Ratios: All Care Settings

Among the study population, the incidence rates of all-cause pneumonia (episodes per 100 000 person-calendar years) increased with age and risk profile (Table 2). Among healthy adults, the rate of all-cause pneumonia increased from 577 in those aged 18-49 years to 944 in those aged 50-64 years to 1983 in those aged ≥65 years. The corresponding rates in the 3 age groups among adults with at least 1 CMC were 1667, 3048, and 6328, and among adults with at least 1 IC they were 2432, 5488, and 10 549. Rates of IPD and pneumococcal pneumonia similarly increased with age and risk profile (IPD in Table 2; pneumococcal pneumonia in Supplementary Table 4). The number of episodes and person-time for each disease end point by age group and risk profile are shown in Supplementary Table 5. The rates of all-cause pneumonia among adults with CDC-indicated medical conditions were higher than those in healthy adults in all 3 age groups (Table 2). Among adults aged 18-49 years, 50-64 years, and ≥65 years, the rate of all-cause pneumonia in those with at least 1 CMC was 2.9 (95% CI, 2.8-2.9), 3.3 (95% CI, 3.2-3.3), and 3.2 (95% CI, 3.2-3.2) times the rate in healthy adults in the same age group. In adults with at least 1 IC, the rate of all-cause pneumonia in the 3 age groups was 4.2 (95% CI, 4.1-4.3), 5.8 (95% CI, 5.7-5.9), and 5.3 (95% CI, 5.3-5.4) times the rate in healthy adults in the same age group. The rates of all-cause pneumonia among adults with other medical conditions were also higher than the rates in healthy adults (Table 3). The rate ratios of all-cause pneumonia for adults with other medical conditions vs healthy adults ranged from 2.7 (95% CI, 2.6-2.9) to 8.7 (95% CI, 7.2-10.5) in those aged 18-49 years, from 2.9 (95% CI, 2.7-3.2) to 15.8 (95% CI, 13.2-19.0) in those aged 50-64 years, and from 2.8 (95% CI, 2.6-2.9) to 13.3 (95% CI, 7.9-22.7) in those aged ≥65 years. Notably, among adults with certain other medical conditions, such as neuromuscular or seizure disorders and Down syndrome, the rate of all-cause pneumonia was higher than the rate among adults with an IC. Similar comparisons were present for IPD and pneumococcal pneumonia (IPD in Table 3; pneumococcal pneumonia in Supplementary Table 4).

Disease Rates by Care Setting

For all-cause pneumonia and pneumococcal pneumonia, the overall inpatient incidence rate was lower than the overall outpatient incidence rate among younger adults aged 18-64 years; however, these 2 rates were comparable among older adults aged ≥65 years (Supplementary Table 6). Such trends generally persisted among healthy adults and adults with a CMC; however, the inpatient incidence rates were generally higher than the outpatient incidence rates among older adults aged ≥65 years who had an IC. For IPD, compared with the outpatient incidence rates, higher inpatient incidence rates were observed for all age groups and risk profiles.

Risk Stacking

The prevalence of concurrent medical conditions increased with age (Supplementary Table 2). Among adults aged 18-49 years, 50-64 years, and ≥65 years, respectively, 2.2%, 12.8%, and 28.3% had at least 3 risk conditions. The rates of all-cause pneumonia and IPD in adults with CMCs or ICs increased with the number of concurrent conditions (Figure 2). For all-cause pneumonia, rates were generally similar between adults with 2 CMCs and adults with 1 IC across the age groups; however, rates among adults with 3 or more CMCs were higher than the rates among adults with 1 IC. The trends noted for all-cause pneumonia and IPD were also observed for pneumococcal pneumonia (Supplementary Table 4).
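For concreteness, rates and unadjusted rate ratios of the kind reported above can be sketched as follows. The numbers in the example are illustrative only (not taken from the study's tables), and the closed-form Wald interval is a simplification of the Poisson regression with robust standard errors used in the actual analysis.

```python
import math

def rate_per_100k(events, person_years):
    return 1e5 * events / person_years

def rate_ratio_ci(e1, py1, e0, py0, z=1.96):
    # Unadjusted rate ratio (risk group vs healthy referent) with a Wald
    # 95% CI on the log scale.
    rr = (e1 / py1) / (e0 / py0)
    se = math.sqrt(1 / e1 + 1 / e0)
    return rr, (rr * math.exp(-z * se), rr * math.exp(z * se))

# Illustrative inputs only:
rr, (lo, hi) = rate_ratio_ci(e1=2500, py1=100_000, e0=950, py0=120_000)
print(f"rate = {rate_per_100k(2500, 100_000):.0f} per 100 000; "
      f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```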
DISCUSSION

In this retrospective cohort analysis using a large health care claims database, we assessed the risk of pneumococcal disease by age and risk profile among US adults between 2016 and 2019. We estimated the incidence rates for 3 disease end points, with the primary analysis conducted on all-cause pneumonia and IPD, supplemented with data on pneumococcal pneumonia. We chose to focus on all-cause pneumonia and IPD for the primary analysis because all-cause pneumonia is the most common clinical manifestation of pneumococcal infections and IPD is the most specific disease end point, as its coding in a claims database generally requires confirmation from microbiology testing. Our results showed that the burden of pneumococcal disease was concentrated in adults of older age and adults who had a risk condition, and the results were generally consistent across all 3 disease end points. Previous studies by Shea et al. and Pelton et al. also detailed the incidence rates of pneumococcal disease (pneumonia and IPD separately, as in this study) among US adults [3,6]. Similar to those 2 studies, our study employed a retrospective cohort design using data from health care claims databases; however, differences existed between the studies in the study periods (the earlier analyses drew on data from 2006 onward, vs 2016-2019 here) and in case ascertainment. In particular, our study included disease episodes from all care settings, which allowed us to capture the additional disease burden from outpatient encounters that was not described in the other studies. Although it is difficult to directly compare the disease rates between the studies due to the aforementioned differences and others, our observation that the risk of pneumococcal disease increased with age, risk profile, and number of risk conditions is consistent with the previous findings. Further, the IPD incidence rates from our analysis (3.2, 16.0, and 38.7 per 100 000 person-calendar years for the age groups 18-49, 50-64, and ≥65 years, respectively) are comparable to the rates published by the Active Bacterial Core surveillance (ABCs) report in 2019 (corresponding rates for the 3 age groups: 4.4, 15.6, and 23.6 per 100 000 population) [17]. The higher IPD rate in individuals aged ≥65 years in our study may be due to demographic differences in the catchment area between CDM and the ABCs program. In the recently updated recommendations from the CDC, a higher-valency PCV (PCV20 or PCV15) is now routinely recommended for adults aged ≥65 years and for adults aged 19-64 years with certain CMCs, such as asthma and diabetes mellitus, on the basis of elevated disease risk [10]. In support of these recommendations, our study showed that, for nearly every condition, adults aged ≥65 years (whether healthy or with a CMC or IC) had the highest disease rates compared with younger adults. Our study also showed that, among adults with CDC-indicated medical conditions, disease rates were higher than the rates in their healthy counterparts. Across all age groups, the rate ratios ranged from 2.9 (95% CI, 2.8-2.9) to 24.7 (95% CI, 22.7-26.9) for all-cause pneumonia and from 3.3 (95% CI, 2.9-3.8) to 316.7 (95% CI, 69.6-1441.4) for IPD. Prior research has identified medical conditions, such as autoimmune diseases (including rheumatoid arthritis, systemic lupus erythematosus, and Crohn's disease) and neuromuscular/seizure disorders, as associated with a higher risk of pneumococcal disease [4,6]. In our study, we expanded the list of medical conditions of Shea et al. and Pelton et al. to include additional risk conditions (eg, obesity and stroke) (Table 3) and investigated whether these medical conditions could also confer a higher risk of disease.
Our analysis showed that the rates of all-cause pneumonia in adults with other medical conditions ranged from 2.7 (95% CI, 2.6-2.9) to 15.8 (95% CI, 13.2-19.0) times the rate in healthy adults, and the corresponding rate ratios ranged from 2.3 (95% CI, 0.3-16.7) to 94.8 (95% CI, 23.8-378.1) for IPD. Thus, our study adds to an existing body of evidence that these other medical conditions are also associated with an increased risk of pneumococcal disease. The list of risk conditions indicated for pneumococcal vaccines is periodically updated by vaccine technical committees (VTCs); the other medical conditions identified here may be considered by the VTCs to further reduce the burden of pneumococcal disease. Our study has limitations. First, inherent in the use of operational algorithms, disease episodes and risk profiles may have been misclassified or incomplete. However, it is likely that this limitation affected disease rates in a nondifferential manner across the age and risk groups, leaving the rate ratios largely unaffected. Second, due to the lack of information on pneumococcal serotypes, it was not possible to assess the proportion of disease caused by serotypes included in pneumococcal vaccines (eg, PCV13, PCV15, PCV20, and PPSV23) in various age and risk groups. However, vaccine effectiveness studies have shown that the preventable fraction of all-cause pneumonia in adults aged ≥65 years is between 6% and 11.4% [18-20]. Similarly, lack of data precluded us from stratifying the rates by race/ethnicity. Third, the rate ratios were not adjusted for individual-level covariates. Fourth, although the CDM data are derived from a health claims database for members of large commercial and Medicare Advantage health plans, individuals with public health insurance or no health insurance are not included in the database. Thus, results presented here may not be generalizable to other patient populations.

CONCLUSIONS

In this large observational study, we show that the risk of pneumococcal disease is high among older adults and adults with risk conditions. Our findings suggest that vaccination against respiratory pathogens, including S. pneumoniae, as well as CMC prevention, remain important strategies to prevent and reduce disease. Additionally, our study has identified individuals with other medical conditions, such as obesity, who are at increased risk of disease and could benefit from inclusion of these conditions in pneumococcal vaccine recommendations. Finally, future studies quantifying disease risk will be needed to evaluate the impact of higher-valency pneumococcal vaccines and updated pneumococcal vaccination recommendations. Assessing disease rates in different age groups (eg, ≥50 years) and by race and geography will help focus future prevention strategies.

Supplementary Data

Supplementary materials are available at Open Forum Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.
Brominated Flame Retardants in Antarctic Air in the Vicinity of Two All-Year Research Stations

Continuous atmospheric sampling was conducted between 2010 and 2015 at Casey station in Wilkes Land, Antarctica, and throughout 2013 at Troll Station in Dronning Maud Land, Antarctica. Sample extracts were analyzed for polybrominated diphenyl ethers (PBDEs), and the naturally converted brominated compound, 2,4,6-tribromoanisole, to explore regional profiles. This represents the first report of seasonal resolution of PBDEs in the Antarctic atmosphere, and we describe conspicuous differences in the ambient atmospheric concentrations of brominated compounds observed between the two stations. Notably, levels of BDE-47 detected at Troll station were higher than those previously detected in the Antarctic or Southern Ocean region, with a maximum concentration of 7800 fg/m³. Elevated levels of penta-formulation PBDE congeners at Troll coincided with local building activities and subsided in the months following completion of activities. The latter provides important information for managers of National Antarctic Programs for preventing the release of persistent, bioaccumulative, and toxic substances in Antarctica.

Introduction

Polybrominated diphenyl ethers (PBDEs) are a group of organohalogen compounds used extensively as flame retardants in consumer products during the past 50 years [1]. Their environmental behavior of persistence and long-range dispersal, combined with their biological impact of bioaccumulation and toxicity, has led to the inclusion of all commercial PBDE formulations under Annex A of the Stockholm Convention on Persistent Organic Pollutants (POPs) [2]. The global ban of commercial penta- and octa-BDEs entered into force in 2009, although not until 2019 for deca-BDE [3].

PBDEs have been reported in Antarctic biota since 2004 [4-21], and in case studies of the Antarctic atmosphere since 2012 [22-27]. Unlike agrichemicals, such as organochlorine pesticides, the Antarctic occurrence of which can be attributed solely to long-range environmental transport (LRET), current and recently used POPs, such as PBDEs, are also finding their way to the remote Antarctic region via in situ usage [8-10]. Recent studies focusing on Antarctic research stations as emitters of PBDEs to the local environment have evidenced such emissions at McMurdo station and Scott Base in the Ross Sea Region [10], Casey Station in the East Antarctic sector [28], and Julio Escudero and Gabriel de Castilla Stations on the South Shetland Islands [29]. These findings in turn indicate that all polar research stations can be potential sources of these compounds to the local environment [8,10,28]. In the above-named studies, there were significant differences in both the levels and profiles of PBDEs detected within stations and in the local surrounding area. Although station population capacity appears to be an indicator of absolute contaminant levels, Hale et al. (2008) proposed that station contaminant profiles reflect the host nation of the station, and thereby national chemical legislation and national chemical usage patterns. Increasingly, as stations age and undergo renovation, profiles are also likely to reflect the chemical constituents of chosen building materials at the time of construction.
The published data concerning PBDEs in the Antarctic atmosphere reveal a number of limitations characteristic of POP research in the region, which have previously been discussed in the literature [30,31]. Specifically, the data derive from predominantly short, one-off sampling campaigns with an evident strong spatial bias across the continent. Further, the data have been obtained by a variety of sampling approaches, and the targeted chemical structures of each study are also not uniform [31]. To our knowledge, there have been six case studies that have reported PBDEs in air masses of the Antarctic region, with associated sampling periods dating back to 2001. Four of these studies originate from the Antarctic Peninsula region [23,24,26,27], one from Dronning Maud Land [22], and one from the eastern Antarctic sector [25]. Four of the six studies implicated local emission sources for one or more of the congeners detected [24,26,27,32].

The Global Monitoring Plan (GMP) [33] was implemented to evaluate the effectiveness of the Stockholm Convention in meeting its goals. It seeks to do so through the collection and analysis of comparable samples in UN regions, in order to understand temporal and spatial trends. Data are collected from the core matrices of ambient air, human breast milk, and blood, in addition to surface water for water-soluble POPs. Although the human matrices monitored under the GMP are not applicable in Antarctica due to the absence of a subsisting human population, the detection of chemicals in the Antarctic atmosphere and surface waters can provide unique insight into hemispheric chemical usage patterns and the global reach of chemical emissions. As the most remote region on the planet, the Antarctic plays a special role under the GMP because it is to the greatest extent removed from manufacturing emissions. Detection of chemicals in Antarctica can therefore serve as direct and empirical evidence of chemical persistence and capacity for LRET, i.e., two of the four requisite criteria for categorization of a chemical as a POP [30]. Alternatively, detection may reveal local emissions, in breach of the Protocol on Environmental Protection to the Antarctic Treaty, which prohibits the release of banned substances [34]. In both cases, monitoring plays a pivotal role in accelerating regulatory decision making and in effective, evidence-led chemical policy.

In support of the GMP, continuous atmospheric monitoring for POPs was implemented at Troll station (Norway) and Casey station (Australia) in Dronning Maud Land and East Antarctica, respectively. Air extracts from the two programs were analyzed for PBDEs to further investigate long-range hemispheric sources versus local emissions of these chemicals.

In addition to PBDEs, naturally occurring brominated compounds have been detected in the global environment, including Antarctica [8,35,36]. Many are structurally similar to problematic synthetic equivalents; hence, concerns have been raised with regard to their associated environmental and biological risks. 2,4,6-Tribromoanisole (TBA) is a fungal metabolite of brominated phenolic compounds, such as 2,4,6-tribromophenol, that are often used as fungicides or found as contaminants in pesticides. TBA, the chemical structure of which closely resembles that of PBDEs, was included in the analysis for ancillary insight into the presence of, and regional differences in, levels of this organobromine compound.
Here, we report the results of continuous atmospheric sampling, combined with targeted PBDE analysis, at two all-year research stations in Antarctica. Parallel sampling and analysis control for inter-laboratory method variation and allow comparison of analyte repertoires. Similarly, the extended monitoring period (one year at Troll to five years at Casey) yields the first insight into the seasonal resolution of PBDEs in the Antarctic atmosphere. Finally, TBA is reported for the first time in the Antarctic atmosphere. We describe the dramatic differences in ambient atmospheric levels of PBDEs observed between the two stations. The latter provides important information to the Council of Managers of National Antarctic Programs (COMNAP) for local source identification, and thus for mitigation of breaches of the Antarctic Treaty related to the release of prohibited substances in Antarctica.

Site Descriptions

Casey station is one of Australia's all-year Antarctic research stations. It is located on Wilkes Land in the Australian Antarctic Territory (66°16′56″ S, 110°31′32″ E) (Figure 1). On a local scale, the station is situated on the Bailey Peninsula. A High Flow-Through Passive Air Sampler (HFTPAS) was installed at an upwind "background" site across Newcomb Bay, at the abandoned Wilkes station on the Clark Peninsula, approximately 3 km from Casey station [30].

The Norwegian Troll Atmospheric Observatory is located at 72°00′42″ S, 2°32′06″ E, in Dronning Maud Land, Antarctica (Figure 1). It is situated 235 km inland from the Antarctic coast and 1553 m above sea level. The South African SANAE IV station lies 190 km west-northwest of the observatory, and the German Neumayer station lies 420 km to the east-northeast. Troll station is serviced by a blue-ice airfield on the Jutulsessen glacier, 7 km north of the main station.

Air Sampling
High Flow-Through Passive Air Sampler: Casey Station, East Antarctica

Atmospheric monitoring in the Australian Antarctic Territory was performed with a High Flow-Through Passive Air Sampler (HFTPAS) specifically developed for measuring the trace contaminant levels encountered in remote regions (Figure S2) [31,37]. The sampling equipment has previously been described in detail [30,37]. In brief, the sampling unit consists of three polyurethane foam (PUF) plugs loaded into a cartridge in series and mounted in an aerodynamically shaped housing on a post with a rotatable joint. The sampler-housing unit is designed to automatically face into oncoming wind, thereby increasing airflow across the sampling media. This serves to increase the sampling rate compared to other non-powered passive air samplers, thus permitting remote sampling away from power sources and inhabited areas. The ambient wind speed is measured via an anemometer mounted on a post, at a similar height to the sampler, two meters from the sampler. At Casey station, each sampling set of cartridges included two PUF disk field blanks (one 7.62 cm and one 2.54 cm), in addition to the three PUF disks (two 7.62 cm and one 2.54 cm) that made up the sampling train. Field blanks were handled in the same manner as the sample PUFs. Upon deployment and retrieval of the sample PUFs, the field blank jars were opened and the PUF disks removed and replaced using pre-cleaned tongs.

High-Volume Active Sampling: Troll Atmospheric Observatory, Dronning Maud Land

Air samples from the Troll Atmospheric Observatory, Dronning Maud Land, Antarctica, were collected using a High-Volume Active Air Sampler (HVAAS) (DHA-80, DIGITEL, Hegnau, Switzerland). Air samples were collected during 2013 on a weekly basis, each sample covering seven days, drawing air at 15-25 m3/h across a glass fiber filter (GFF) (particulate fraction) and two PUF plugs (gas-phase fraction), for a target volume of 2500-3500 m3. Flow rates and sampling conditions were digitally monitored and documented.

Sample Preparation

Prior to deployment, the PUF disks for Casey station were scrubbed under hot water and pre-cleaned by Soxhlet extraction for 24 h with petroleum benzene, followed by 24 h with acetone. PUF disks were dried in a desiccator under pure nitrogen flow and sealed in furnaced glass jars with Teflon-lined lids until sampling. All solvents, adsorbents, and gases used were of the highest standard and selected for ultra-trace analysis. The PUF plugs for Troll station were pre-cleaned by Soxhlet extraction for 24 h in toluene, 8 h in acetone, and 8 h in toluene, followed by drying in a desiccator. Glass fiber filters were baked at 450 °C. Both media were wrapped in foil and sealed in airtight Ziplock bags.

Upon collection, the exposed sample media from both Troll and Casey stations were sealed in gas-tight containers for storage and transported to the Norwegian Institute for Air Research (NILU) for processing and quantification.
Sample Extraction and Analysis

All sample media (glass fiber filters, and PUF plugs from the HVAAS and the HFTPAS) were Soxhlet extracted for 8 h in hexane/diethyl ether (9:1) at NILU's laboratory. The glass fiber filter and PUF plugs from the HVAAS at Troll were extracted together, so the concentrations from Troll represent the bulk concentrations of the gas and particle phases. Prior to extraction, each sample was spiked with 10 ng of 13C-labelled internal standard. The extract was cleaned by acid treatment and on a preconditioned silica column topped with sodium sulphate. Once the solvent volume of the cleaned extract was concentrated down to 0.5 mL, 10 ng of tetrachloronaphthalene (TCN) was added as the recovery standard. Further concentration of the sample to ~100 µL was completed by applying a gentle stream of pure nitrogen gas.

Quality Assurance

To account for inherent PUF contamination or contamination that may have occurred during analysis and/or transportation, lab blanks and field blanks were included in the experimental protocol [38]. The following quantification conditions were fulfilled for all data presented: (i) the retention time of the native compound was within three seconds of that of the corresponding 13C-labelled isomer; (ii) the isotope ratio of the two monitored masses was within ±20% of the theoretical value; (iii) the signal-to-noise ratio was >3/1 for quantification; (iv) the recovery of the added 13C-labelled internal standards was within 30% to 140%; (v) prior to each new series of samples, the blank values of the complete clean-up and quantification procedures were determined, and clean-up of samples only commenced when a sufficiently low blank value was obtained. At least once per year the laboratory participates in an international laboratory inter-calibration exercise.

The final reported chemical concentrations (fg/m3) in Casey station air samples were calculated by adding together the chemical masses extracted from the individual sample PUF plugs of the sampling train, and subtracting twice the chemical mass extracted from the 7.62 cm field blank plus that of the 2.54 cm field blank. Where levels in the blanks were below detection, the reported level of quantification (LOQ) was used. In 2014, all three PUF samples of the sampling train were extracted as one sample, and all blank PUFs were extracted as a single sample. The final volume of air (m3) for each sampling period was calculated by multiplying the average wind velocity (m/s) by the total sampling time (s) and by the cross-sectional area of the flow through the sampler (m2), adjusting this to standard conditions [30]. On six occasions the instrument data logger failed to record wind data for the complete sampling period; on these occasions, surrogate wind data obtained from the Bureau of Meteorology Casey station observatory (30017) were applied for some or all of the sampling period. Finally, the total chemical mass was divided by the sampling volume to give the chemical concentration for each of the sampling periods.

Sampling Schedule

Continuous sampling was conducted at Casey station between December 2009 and November 2014. Twenty-six cartridge sample sets were obtained during this period, each set representing sampling periods of 4 to 17 weeks (average 6.3 weeks) (Table S1). The 39 weekly samples from Troll station covering February-November 2013 were combined into ten monthly mean concentrations and one annual mean concentration. Troll sample IDs, sampling periods, and captured wind volumes are presented in Table S2.
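To make the Casey calculation concrete, the following is a minimal Python sketch of the volume and blank-correction arithmetic described above. All numerical inputs, including the sampler cross-section, are illustrative placeholders rather than values from the study, and the standard-conditions adjustment is omitted for brevity.

```python
# Minimal sketch of the Casey HFTPAS concentration calculation described above.
# The cross-sectional area and all input values are illustrative placeholders,
# not values from the study.

def sampled_air_volume(avg_wind_speed_ms, sampling_time_s, cross_section_m2):
    """Volume of air (m^3) passed through the sampler: v * t * A."""
    return avg_wind_speed_ms * sampling_time_s * cross_section_m2

def blank_corrected_concentration(sample_masses_fg, blank_large_fg,
                                  blank_small_fg, volume_m3):
    """Blank-corrected concentration (fg/m^3) for one analyte.

    The sampling train holds two large (7.62 cm) and one small (2.54 cm) PUF,
    so the large field blank is subtracted twice and the small one once.
    """
    corrected_mass = sum(sample_masses_fg) - 2.0 * blank_large_fg - blank_small_fg
    return max(corrected_mass, 0.0) / volume_m3

# Example with made-up numbers: a 6-week deployment at 5 m/s average wind.
volume = sampled_air_volume(5.0, 6 * 7 * 24 * 3600, 0.01)
conc = blank_corrected_concentration([120.0, 40.0, 15.0], 8.0, 3.0, volume)
print(f"sampled volume: {volume:.0f} m^3, concentration: {conc:.4f} fg/m^3")
```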
Statistics

A simple linear regression (R2) was performed on homologue groups to evaluate atmospheric concentrations relative to temperature. Similarly, Pearson's correlation coefficients were calculated for the breakthrough of key congeners in each of the HFTPAS samples, relative to temperature, average wind speed, and total air volume captured.

The cosine theta similarity metric ($\cos\theta$) [39] was employed to compare the similarity of the BDE congener profiles from the two stations. This metric calculates the cosine of the angle between two multivariate vectors, and is computed from the formula for the Euclidean dot product of two vectors according to

$\cos\theta = \dfrac{\sum_{k=1}^{n} x_{ak}\, x_{sk}}{\sqrt{\sum_{k=1}^{n} x_{ak}^{2}}\,\sqrt{\sum_{k=1}^{n} x_{sk}^{2}}}$,

where $x_{ak}$ is the concentration (fg/m3) of congener k in Casey station air samples, $x_{sk}$ is the concentration of the same congener in Troll station air samples, and n is the number of BDE congeners analyzed. Values of $\cos\theta$ can range from 0.0 to 1.0, with 1.0 representing a perfect match and 0.0 indicating perpendicular vectors and no similarity between the congener profiles [40]. This approach has been used previously to quantify the similarity of PCB congener profiles in sediment and air [40-42].
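As an illustration of the metric, here is a minimal Python sketch; the two congener profiles are hypothetical placeholders, not measured data from the study.

```python
import math

def cos_theta(profile_a, profile_s):
    """Cosine similarity between two congener concentration profiles (fg/m^3)."""
    dot = sum(a * s for a, s in zip(profile_a, profile_s))
    norm_a = math.sqrt(sum(a * a for a in profile_a))
    norm_s = math.sqrt(sum(s * s for s in profile_s))
    return dot / (norm_a * norm_s)

# Hypothetical profiles over, e.g., (BDE-28, -47, -99, -100, -209):
casey = [2.0, 11.0, 4.9, 2.9, 140.0]
troll = [190.0, 3000.0, 280.0, 120.0, 2100.0]
print(f"cos(theta) = {cos_theta(casey, troll):.2f}")  # 1.0 means identical profiles
```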
Results and Discussion

TBA was measured and detected for the first time in Antarctic air and was found to be a ubiquitous contaminant in the air profiles of both stations, with a range of 30-27,000 fg/m3 at Troll and 8.6-860 fg/m3 at Casey. At both stations, TBA and BDE-209 contributed the two highest mean concentrations: 110 and 140 fg/m3 at Casey, and 27,000 and 2100 fg/m3 at Troll, respectively. Summary statistics are presented in Table 1, full sample data are presented in Tables S3 and S4, and annual station average concentrations are presented in Table 2 alongside recent atmospheric measurements at the Canadian Arctic station, Alert [43].

Regional Differences

The concentrations of PBDEs in air samples from Troll station throughout 2013 were on average 70 times higher than those observed in Casey air samples during the same period. The differences were most pronounced for the tri- and tetra-BDEs (e.g., BDE-47 being 270 times greater), but comparable for nona-BDE-206. Despite the comparatively elevated ambient PBDE air concentrations at Troll station, recent measurements in the Canadian Arctic remained, on average, eight times higher than at Troll for comparable congeners [44]. Dickhut et al. (2012) [24] performed high-volume air sampling for PBDEs at three Antarctic locations during four austral summer seasons between 2001 and 2005. The measured average ambient concentrations of key BDE congeners at these locations, together with the Troll and Casey levels, are presented in Table 3.

From these comparisons, it is evident that the measured concentrations of the tri- and tetra-BDE congeners (BDE-28, -47, -66) at Troll station are the highest measured in Antarctic air to date, higher even than those detected in Marguerite Bay in 2001 following a laboratory fire at Rothera station in spring 2001 (Table 3). By contrast, the BDE-100 and -209 concentrations at Troll were not the highest in this comparison (third and second highest of the five stations, respectively). The higher levels observed at Troll station compared to Casey station may be attributed to the closer proximity of the Troll air monitoring observatory to its main station (200 m), compared to the 3 km separating the Casey monitoring site from the main station buildings. Further, the presence of an ice runway and flight traffic just 7 km from Troll station remains a plausible source of contamination. Finally, the elevated concentrations of the tri- and tetra-BDEs at Troll station, which are the main constituents of the commercial penta-formulation, may be related to the construction of a new sampling container in the vicinity of the Troll Observatory in February-March 2013. Although the original Troll and Casey stations were erected in 1990 and 1988, respectively, local emissions of penta-BDE may originate from re-emission following land disturbance, or from materials and products present during construction. This latter possibility is supported by the significant drop in tri- and tetra-BDEs throughout 2013 at Troll station (Figure 2). This observation highlights the constraints of short-term, case-study air sampling in the region for accurately determining background levels, and underscores the need for longitudinal monitoring in the determination of robust temporal trends.

Although interpretation of the deca-BDE-209 results requires caution due to associated analytical challenges, e.g., frequent analytical contamination by this congener, it is interesting to note the dominance of this and other highly brominated congeners in the samples in which they were quantified. Penta-formulations containing BDE-47, -99, and -100 were listed under the Stockholm Convention in 2009. The deca-BDE formulation was used as a replacement for octa- and penta-BDEs until its own inclusion under Annex A in 2019 [3]. We may expect the impact of this global chemical policy action to be reflected in residential and environmental levels, and in the homologue profiles of the different stations (Figure 3).
Breakthrough Considerations at Casey Station

The elevated levels of PBDEs, in addition to the naturally occurring TBA, at Troll compared to Casey station flag the possibility that the air volumes sampled with the HFTPAS at Casey may have resulted in breakthrough. Breakthrough occurs when the sampling media become saturated with an analyte before the sampling period is finished, so that the calculated ambient chemical concentrations are lower than they should be. Analysis of the last PUF in the HFTPAS sampling train found that it contained on average 19% of the bulk analyte mass, ranging from an average of 8% for BDE-154 to an average of 34% for BDE-100. The back PUF (size adjusted) represents one third of the sampling media, so an analyte proportion of 33% or greater suggests that complete saturation of the media has occurred. BDE-28, -47, -49, -99, and -100 approached this threshold, indicating that the Casey station measures for these analytes should be considered an underestimate of ambient concentrations (Table S5), and consequently the differences to Troll station an overestimate.

There was little apparent relationship between the level of breakthrough of the key compounds (TBA, BDE-28, -47, -49, -100, -153, -154, -206, and -209) and temperature, with the exception of BDE-153 (r = 0.52); all other key compounds had r values ranging from low negative values to 0.19. Similarly, the captured wind volume showed little correlation with the level of breakthrough; indeed, negative relationships were observed for TBA, BDE-47, -100, -154, -206, and -209, and the only apparent positive relationship was observed for BDE-49 (r = 0.4). Wind speeds were likewise found not to impact the level of breakthrough (r = 0.009 to 0.20). Deployment duration impacted breakthrough of the lightest key congeners (BDE-47, -49, -100) in a negative manner (r = -0.35, -0.42, -0.26, respectively). This is a counterintuitive relationship and may suggest that saturation occurs quickly and that these trace levels are easily influenced by minor fluctuations.
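The saturation check described above can be sketched as follows; the per-PUF masses are invented for illustration and are not values from the study.

```python
# Minimal sketch of the breakthrough check: the back PUF holds one third of the
# (size-adjusted) sampling media, so a back-PUF share of the bulk analyte mass
# at or above ~33% suggests saturation. Masses below are illustrative only.

SATURATION_SHARE = 1.0 / 3.0

def back_puf_share(front_fg, middle_fg, back_fg):
    """Fraction of the bulk analyte mass found on the back PUF."""
    total = front_fg + middle_fg + back_fg
    return back_fg / total if total > 0 else 0.0

samples = {"BDE-154": (70.0, 22.0, 8.0), "BDE-100": (40.0, 26.0, 34.0)}
for congener, masses in samples.items():
    share = back_puf_share(*masses)
    flag = "possible breakthrough" if share >= SATURATION_SHARE else "ok"
    print(f"{congener}: back-PUF share = {share:.0%} ({flag})")
```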
Seasonal Trends

The Casey station dataset offers the first multi-year data on ambient PBDE air concentrations in Antarctica. Lighter PBDE congeners may be expected to be more prone to temperature-dependent volatilization due to their higher vapor pressures. This may lead to an increase in atmospheric levels at higher summer temperatures, as previously shown for legacy POPs [30]. As the Casey station measurements for the lighter BDE congeners were affected by breakthrough, however, further interpretation of such trends was not performed. The larger congeners (hepta-, octa-, nona-, and deca-) showed no significant relationship with temperature, season, or wind speed, although correlations may have been obscured by, e.g., the temporarily elevated measurements during 2013 (Figure 4).

Interestingly, seasonal analysis of the Troll data revealed a strong winter peak in TBA concentrations (Figure 5). This pattern corresponds to that previously found in Norwegian air [35]. Although the authors of that previous study could not explain the pattern, they emphasized the prerequisites of both the precursor of 2,4,6-TBA, namely bromophenols, and the airborne fungi or bacteria responsible for the biotransformation of bromophenols to TBA.

Conclusions

Levels of BDE-47 detected at Troll station were higher than those previously detected in the Antarctic or Southern Ocean region [24,42], and in the range of those previously detected in ambient air in southern Taiwan [45] and the Bay of Bengal [46]. Levels of BDE-99, -100, and -209 corresponded well with measurements made previously in the Antarctic in the vicinity of active research stations. Although the on-station PBDE sources at both Casey and Troll stations remain unidentified, and indeed are likely to be numerous and varied, the atmospheric PBDE levels observed in the vicinity of these active stations emphasize the growing importance of local sources for Antarctic chemical contamination, and represent important quality assurance data for untangling local versus long-range contaminant sources in long-term monitoring studies in the region.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/atmos12060668/s1. Figure S1: Trollhaugen atmospheric monitoring observatory. Figure S2: Photograph of the HFTPAS (right) and anemometer plus logger unit (left) installed at Casey station. Table S1: Casey station sample sets (A-I1) together with corresponding sampling periods and sample volumes; '-' denotes sample sets in the series for which brominated compound measurements are not available. Table S2: 2013 Troll station samples together with corresponding sampling periods and captured wind volumes. Table S3: Casey station chemical concentrations by sample set; values are travel-blank corrected (LOQ); concentrations are presented in fg/m3 to two significant figures; 'ND' denotes non-detected values; in 2010, congeners BDE-49 and BDE-71 were co-quantified (*). Table S4: Troll station chemical concentrations by sample set.

Figure 1. Antarctic air sampling locations in the current study.
Figure 2. Levels of key penta-formulation congeners detected at Troll station throughout 2013.
Figure 3. Relative congener homologue contributions to the respective station air profiles.
Figure 4. Levels of higher-brominated PBDEs according to season and year.
Table 1. Summary statistics for brominated compounds in Antarctic air in the vicinity of Casey and Troll stations, where the mean concentration (fg/m3) is obtained only from samples in which compounds were detected at >LOQ. The % detection is the percentage of samples that were >LOQ, and non-detect (ND) denotes concentrations <LOQ. * Summary statistics based upon 2012-2014 data in which these congeners were independently quantified. ** Data associated with uncertainties.

Table 2. Annual average brominated compound concentrations (fg/m3) at Casey and Troll stations, presented alongside recent measurements from the Arctic. Casey and Troll station annual mean concentrations were obtained only from samples >LOQ. Non-detect (ND) denotes concentrations <LOQ, and '-' denotes an analyte not targeted.

Table 3. Comparison of selected average PBDE congener concentrations in Antarctic air in the vicinity of active research stations (fg/m3).
2021-07-27T00:05:49.522Z
2021-05-24T00:00:00.000
{ "year": 2021, "sha1": "db12500686d47fce7bb97c88a041674e532ecd0c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4433/12/6/668/pdf?version=1621853939", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "8585781a04b0ffadc4c71d7d047bbd5367a03b35", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
210116517
pes2o/s2orc
v3-fos-license
Vector-like Quark Interpretation for the CKM Unitarity Violation, Excess in Higgs Signal Strength, and Bottom Quark Forward-Backward Asymmetry

Due to a recent more precise evaluation of $V_{ud}$ and $V_{us}$, the unitarity condition of the first row in the Cabibbo-Kobayashi-Maskawa (CKM) matrix, $|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 = 0.99798 \pm 0.00038$, now stands at a deviation of more than $4\sigma$ from unity. Furthermore, a mild excess in the overall Higgs signal strength appears at about $2\sigma$ above the standard model (SM) prediction, as well as the long-lasting discrepancy in the forward-backward asymmetry ${\cal A}_{\rm FB}^b$ in $Z\to b\bar b$ at LEP. Motivated by the above three anomalies, we investigate an extension of the SM with vector-like quarks (VLQs) associated with the down-quark sector, with the goal of alleviating the tension among these datasets. We perform global fits of the model under the constraints coming from the unitarity condition of the first row of the CKM matrix; the $Z$-pole observables ${\cal A}_{\rm FB}^b$, $R_b$, and $\Gamma_{\rm had}$; the electroweak precision observables $\Delta S$ and $\Delta T$; the $B$-meson observables $B_d^0$-$\overline{B}_d^0$ mixing, $B^+ \to \pi^+ \ell^+ \ell^-$, and $B^0 \to \mu^+ \mu^-$; and direct searches for VLQs at the Large Hadron Collider (LHC). Our results suggest that adding VLQs to the SM provides better agreement with the data than the SM alone.

I. INTRODUCTION

The Standard Model (SM) particle content includes three families of fermions in identical representations of the gauge symmetries $SU(3)_c \times SU(2)_L \times U(1)_Y$. Each fermion family includes a quark sector (up-type and down-type quarks) and a lepton sector (charged leptons and a neutrino). The well-known quark mixing between the families is an indispensable ingredient of flavor physics. One can rotate the interaction eigenbasis to the mass eigenbasis in the quark sector through a unitary transformation, which generates nonzero flavor mixings across the families in the charged-current interactions with the $W$ boson. The quark mixing for the three generations in the SM can be generally parameterized by the $3\times 3$ Cabibbo-Kobayashi-Maskawa (CKM) matrix $V^{\rm SM}_{\rm CKM}$ [1,2]. Since $V^{\rm SM}_{\rm CKM}$ is composed of two unitary matrices, the unitarity of the CKM matrix must be maintained. The existence of additional quarks beyond the three SM families would extend the CKM matrix to a larger dimension; in such a case, the unitarity of the original 3 by 3 submatrix no longer holds.

The recently updated measurements and analyses of $V_{ud}$ and $V_{us}$ are briefly outlined as follows. The most precise determination of $|V_{ud}|$ is extracted from the superallowed $0^+ \to 0^+$ nuclear $\beta$ decay measurements [3,4],
$|V_{ud}|^2 = \dfrac{0.97147(20)}{1+\Delta_R^V}$,
where $\Delta_R^V$ accounts for the short-distance radiative correction. Recently, the updated lattice calculation of the inner radiative correction with reduced hadronic uncertainties gave $\Delta_R^V = 0.02467(22)$ [5], which significantly modified the extracted value to $|V_{ud}| = 0.97370(14)$ [4]. On the other hand, one can use various kaon decay channels to independently extract the values of $|V_{us}|$ and $|V_{us}/V_{ud}|$. Based on the analysis of semileptonic $K_{l3}$ decays [6] and the comparison between the kaon and pion inclusive radiative decay rates $K \to \mu\nu(\gamma)$ and $\pi \to \mu\nu(\gamma)$ [7], the values $|V_{us}| = 0.22333(60)$ and $|V_{us}/V_{ud}| = 0.23130(50)$ are obtained in Ref. [4].
As a result, the squared matrix elements of the first row of $V^{\rm SM}_{\rm CKM}$ sum to
$|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 = 0.99798 \pm 0.00038$,
which deviates from unitarity by more than $4\sigma$ [4]. If this deviation is further confirmed, it may call for additional quarks to extend the CKM matrix. (Another proposed explanation for this deviation involves new physics in the neutrino sector with lepton-flavor effects.)

After the final piece of the SM, the Higgs boson, was discovered in 2012 [11,12], precise measurements of its properties have become more and more important. The SM fully predicts the signal strengths of this 125 GeV scalar boson, so that deviations from the SM predictions can help us trace the footprint of new physics beyond the SM. Recently, the average of the Higgs signal strengths from the ATLAS and CMS Collaborations indicated an excess at the level of $1.5\sigma$. If one looks more closely into each individual signal-strength channel, one finds mild $1\sigma$ excesses in the majority of channels. Taking into account all available data from the Higgs measurements, the average of the 125 GeV Higgs signal strengths was obtained as [15]
$\mu_{\rm Higgs} = 1.10 \pm 0.05$.
One simple extension of the SM, an $SU(2)$ doublet of vector-like quarks (VLQs) with hypercharge $-5/6$, can be introduced to account for the excess by reducing the bottom Yukawa coupling by about 6% from its SM value [15]. Since the $h \to b\bar b$ mode takes up around 58% of the 125 GeV Higgs total decay width, the above extension can reduce the total Higgs width and universally raise the signal strengths by about 10% to fit the data.

Finally, the measurement of the forward-backward asymmetry $A^b_{FB}$ of the bottom quark at the $Z^0$ pole has exhibited a long-lasting $-2.4\sigma$ deviation from the SM prediction [7]. Again, this anomaly can be reconciled by introducing an $SU(2)$ doublet of VLQs with hypercharge $-5/6$. The mixing of the isospin $T_3 = 1/2$ component of the VLQ doublet with the right-handed SM bottom quark, with mixing angle $\sin\theta_R \simeq 0.2$, can enhance the right-handed bottom-quark coupling to the $Z$ boson, while the left-handed bottom-quark coupling remains intact [15]. However, the mixing between the VLQs and the SM bottom quark is subject to further constraints, and an economical way to address all of the above anomalies is to introduce VLQs of more than one type. A review of the various types of VLQs can be found in Ref. [18]. In this study, we need to modify both the left-handed and right-handed down-quark sectors in order to alleviate the above three anomalies. In general, both left-handed and right-handed mixing angles are generated and related to each other for each type of VLQ, though one may be suppressed relative to the other. This means that we need at least two types of VLQs to simultaneously explain these anomalies. We show that the minimal model requires the coexistence of both doublet and singlet VLQs, $B_{L,R}$ and $b'_{L,R}$.

This paper is organized as follows. In Sec. II, we first write down the general model and study the interactions between the VLQs and SM particles, especially the modifications of the couplings to the $W$, $Z$, and $h$ bosons; we then boil the model down to the requirements of the minimal setup. The various constraints from relevant experimental observables are discussed in Sec. III. In Sec. IV, we perform the chi-square fitting and show numerical results; in particular, we discuss the allowed parameter space that can explain all three anomalies. We summarize in Sec. V.
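Before proceeding, a quick numeric cross-check of the values quoted above is possible; $|V_{ub}| = 0.00382$ is an assumed PDG-like input, since the excerpt does not quote it explicitly.

```python
import math

# Numeric check of the first-row CKM unitarity deficit quoted above.
# |V_ud| and |V_us| follow from the values cited in the text; |V_ub| = 0.00382
# is an assumed PDG-like value, not quoted in this excerpt.
delta_R_V = 0.02467                      # inner radiative correction
V_ud = math.sqrt(0.97147 / (1 + delta_R_V))
V_us, V_ub = 0.22333, 0.00382

row1 = V_ud**2 + V_us**2 + V_ub**2
print(f"|V_ud| = {V_ud:.5f}")            # ~0.97370
print(f"first-row sum = {row1:.5f}")     # ~0.99798
print(f"deviation = {(1 - row1) / 0.00038:.1f} sigma")  # more than 4 sigma
```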
II. STANDARD MODEL WITH EXTRA VECTOR-LIKE QUARKS

In this work, a doublet and a singlet of vector-like quarks (VLQs) are introduced, with hypercharges $(Y/2)_{B_{L,R}} = -5/6$ and $(Y/2)_{b'_{L,R}} = -1/3$, respectively, under the SM $U(1)_Y$ symmetry. The upper component of the doublet and the singlets have the same quantum numbers as the SM down-type quarks, and thus they are allowed to mix with the SM down-type quarks if nontrivial Yukawa interactions exist among them. It was pointed out that the Yukawa interaction between $B_L$ and $b_R$ will induce a mixing between the right-handed $b_R$ and $B_R$, and so reduce the bottom Yukawa coupling. At the same time, it will increase the coupling of the $Z$ boson to the right-handed $b$ quark [15]. The reduction in the bottom Yukawa coupling gives rise to a decrease in the Higgs total decay width, and thus can help alleviate the overall Higgs signal-strength excess, while the increase in the $Z$ coupling to the right-handed $b$ quark can bring the prediction of the forward-backward asymmetry $A^b_{FB}$ down to the experimental value. On the other hand, the mixing between $b_L$ and $B_L$ is suppressed due to the absence of a Yukawa interaction between $B_R$ and $b_L$, so the modification of the CKM matrix is negligible. However, the Higgs-induced Yukawa interaction between the singlet $b'_{L,R}$ and the SM down quarks will give a larger left-handed mixing than the right-handed one. Thus, the non-negligible left-handed mixing can further modify the original $3\times 3$ CKM matrix, and the extra VLQs extend the CKM matrix to $5\times 5$ to restore unitarity.

A. Yukawa couplings and fermion masses

The generalized interactions between the VLQs, SM quarks, and the Higgs doublet are expressed as in Eq. (5), where $U, D$ represent the SM up- and down-quarks with $i, j = 1, 2, 3$ as flavor indices, and the superscript 0 indicates flavor eigenstates, for which the SM Yukawa matrices $y_{u,d}$ have been diagonalized. Note the implicit sum over repeated indices. After electroweak symmetry breaking (EWSB), $\langle H \rangle = (0, v/\sqrt{2})^T$, and the mass matrix $\mathcal{M}$ of the down-type quarks is obtained. Since both $\mathcal{M}\mathcal{M}^\dagger$ and $\mathcal{M}^\dagger\mathcal{M}$ are Hermitian matrices, they can be diagonalized, and the mass eigenstates are related to the flavor eigenstates via the unitary matrices $V_{R,L}$. Similarly, for the up-type quarks the mass eigenstates are related to the flavor eigenstates by unitary rotations; since the VLQs do not mix with the up-type quarks, the up-type quark mass matrix remains the same as in the SM. Due to the mismatch between the mass matrix and the Higgs interaction matrix, the Higgs couplings of the down-type quarks are modified from the SM Yukawa couplings. The coupling for $b_L b_R h$, for example, can be extracted from the matrix element $(\mathcal{Y})_{33}$. Since we only introduce vector-like quarks that can mix with the bottom quark, the Higgs couplings to the up-type quarks stay the same as the SM ones.

B. Modifications to the W couplings with SM quarks

The charged-current interactions via the $W$ boson with the SM quarks and vector-like quarks involve the chiral projectors $P_{L,R} = (1 \mp \gamma_5)/2$. We define the $5\times 5$ CKM matrix as in Eq. (12); since the VLQs do not modify the up-quark sector, we simply extend the $3\times 3$ matrix $W_L$ in Eq. (12) to a $5\times 5$ matrix. The exact parameterization of $V^{5\times5}_{\rm CKM}$ is given in Appendix A. We further parameterize the charged-current interactions in the simple form of Ref. [19], where $q$ runs over all SM quarks and VLQs.
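The bi-unitary diagonalization used in Sec. II A can be illustrated numerically with a singular value decomposition, since any mass matrix $M$ satisfies $V_L^\dagger M V_R = {\rm diag}(m_i)$. The toy matrix below (three SM down quarks plus one heavy state, in arbitrary units) is purely illustrative and is not the model's mass matrix.

```python
import numpy as np

# Illustration of bi-unitary diagonalization: from the SVD M = U S W^dagger,
# one can take V_L = U and V_R = W, so that V_L^dagger M V_R = diag(S).
M = np.array([
    [0.003, 0.0,  0.0,  0.02],
    [0.0,   0.06, 0.0,  0.05],
    [0.0,   0.0,  2.9,  0.40],
    [0.0,   0.0,  0.30, 1500.0],
])

U, S, Wh = np.linalg.svd(M)
V_L, V_R = U, Wh.conj().T

# Check: V_L^dagger M V_R is diagonal with the singular values (the masses).
D = V_L.conj().T @ M @ V_R
print("masses:", np.round(S, 4))
print("max off-diagonal residue:", np.max(np.abs(D - np.diag(S))))
```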
The couplings $A^L_{ij}$ and $A^R_{ij}$ are summarized accordingly, where $\alpha$ runs from 1 to 3 and $\beta$ from 1 to 5, with $(U_1, U_2, U_3)$ denoting the up-type quarks and the $D_\beta$ the five down-type mass eigenstates.

C. Modifications to the Z couplings

According to $T_{3f} - Q_f x_w$, the $Z$ boson couplings to the SM down-type quarks and VLQs follow, where $Q_f$ ($T_{3f}$) is the electric charge (third component of isospin) of the quark, the gauge coupling is $g_Z = g_2/\cos\theta_w$, and $x_w = \sin^2\theta_w$ is the sine squared of the Weinberg angle $\theta_w$. Again, the $Z$ boson couplings to the SM up-type quarks are exactly the same as in the SM and are not modified by the VLQs. We further parameterize the $Z$ boson couplings to the SM down-type quarks and VLQs in the simple form of Ref. [19], with $X^L_{ij}$ and $X^R_{ij}$ defined accordingly.

D. Minimal models

In this subsection, we narrow down to the couplings most relevant to the experimental anomalies. First, we consider non-zero couplings $g^B_3$ and $g^{b'}_1$, with $M_{1,2}$ at the TeV scale. According to Ref. [15], the tensions in the Higgs signal strength and $A^b_{FB}$ can be alleviated by the $g^B_3$ coupling of the doublet VLQ; the CKM unitarity violation, due mainly to $|V_{ud}|$, is then addressed by $g^{b'}_1$ of the singlet VLQ. The other parameters in Eq. (5) are set to zero. This simplifies the down-type quark mass matrix and $V_{L,R}$; here we have taken the liberty of setting the masses of the first two generations of SM down-type quarks to zero. If the couplings $g^B_3, g^{b'}_1$ are of $\mathcal{O}(1)$, the parameters follow the ordering $M_{1,2} > \Delta, \tilde\Delta \gg m$. This also implies $s^L_{34} \ll s^R_{34}$, due to the suppression factor $\mathcal{O}(m/M_1)$ on $s^L_{34}$. After diagonalizing the mass matrix, the mass of the bottom quark follows, and according to Eq. (10) the coupling for $(h/v)\, b_L b_R$ gives rise to a reduction of the Higgs Yukawa coupling by the factor $C_{hbb} \equiv c^R_{34}/\sqrt{1 + \Delta^2/M_1^2}$, and thus the enhancement of the Higgs signal strengths. The modification of the CKM matrix is indicated by Eq. (12): the first three elements of the first row of $V^{5\times5}_{\rm CKM}$ violate unitarity, but unitarity of the full first row of $V^{5\times5}_{\rm CKM}$ is restored by the other two elements. If $s^L_{15} \sim s^L_{34}$, we anticipate that the contribution from $V_{ub'}$ will be dominant. Finally, from Eq. (16) the $Zbb$ couplings are modified; since $s^R_{34}$ enhances $(g_b)_R$, it alleviates the tension between the $A^b_{FB}$ observation and the SM prediction.

Second, we include one more non-zero coupling, $g^{b'}_3$. The mass matrix and the unitary transformation matrices change accordingly. Here we diagonalize $\mathcal{M}\mathcal{M}^\dagger$ via a four-step block-diagonalization procedure, using rotation matrices in the order $R(\theta_{15})$, $R(\theta_{35})$, $R(\theta_{34})$, and $R(\theta_{45})$ at each step, so that $V_L$ and $V_R$ can finally be approximated as in Eq. (25). The mass of the bottom quark is the same as before. The first three elements in the first row of $V^{5\times5}_{\rm CKM}$ violate unitarity; similarly, unitarity in the first row of $V^{5\times5}_{\rm CKM}$ can be restored by the other two elements, and once again the contribution from $V_{ub'}$ is the dominant one. The $Zdd$, $Zbb$, and $Zdb$ couplings are then modified; a FCNC is generated from $(g_{db})_L$ and shall be constrained accordingly. More details are given in the following sections.

B. Z boson measurements

Once the $d, s, b$ couplings to the $Z$ boson are modified, we find that the following observables are modified.

1. Total hadronic width. At tree level, the change to the decay width into $d\bar d$, $s\bar s$, or $b\bar b$ changes the total hadronic width accordingly.

2. $R_b$.
$R_b$ is the fraction of the hadronic width into $b\bar b$.

3. $A^b_{FB}$. There is a large tension between the experimental measurement and the SM prediction of the forward-backward asymmetry of $b$-quark production at the $Z$ resonance. The couplings of fermions to the $Z$ boson are basically given by $T_3 - Q x_w$ in the SM, and for the electron it takes this simple form. It was pointed out in Ref. [15] that the relevant interaction term can modify $A^b_{FB}$. For the second minimal model, where $g^B_3$ and $g^{b'}_{1,3}$ are non-zero couplings, the modifications of $(g_b)_L$ and $(g_b)_R$ can be found from Eq. (30). Both $s^R_{34}$ and $s^L_{35}$ can reduce the forward-backward asymmetry $A^b_{FB}$ of the $b$ quark at the $Z$ pole, and both are suited to fitting the measured $A^b_{FB}$ at a value lower than the SM prediction. On the other hand, $s^L_{35}$ reduces $R_b$ while $s^R_{34}$ increases $R_b$; we can use both to maintain $R_b$ at the SM value. At leading order this is achieved by requiring
$(s^R_{34})^2 = \left(\dfrac{3}{2x_w} - 1\right)(s^L_{35})^2$
in order to maintain $R_b$ at the SM prediction. A rough estimate is possible by setting $x_w \approx 1/4$, which gives $(s^R_{34})^2 \approx 5 (s^L_{35})^2$. Unfortunately, we will see from Fit-2b in Sec. IV that the $B$-meson observables are too restrictive to fulfill this relation; subsequently, the mixing angles are chosen to fit the anomaly in $A^b_{FB}$.

C. 125 GeV Higgs precision measurements

The data for the Higgs signal strengths from the combined 7 + 8 TeV data of ATLAS and CMS [20], and all the most updated 13 TeV data, were summarized in Ref. [21]. The overall average signal strength is $\mu_{\rm Higgs} = 1.10 \pm 0.05$ [21], which is moderately above the SM prediction.

D. Electroweak precision observables

The oblique parameters $\Delta S$ and $\Delta T$ are defined relative to their SM values, and we consider the $3\sigma$ allowed regions of $\Delta S$ and $\Delta T$ in our fitting. The general form of the $S$ parameter can be represented as in Refs. [19,22,23], where $M_{q_i}$ are the quark masses and $A^{L,R}_{ij}$, $X^{L,R}_{ij}$ are defined in Eqs. (14) and (17), respectively; the loop functions inside $S$, and the SM $t$- and $b$-quark contributions to $S$, follow accordingly. Similarly, the general form of the $T$ parameter can be represented as in Refs. [19,22,24], where the functions inside $T$ are
$\theta_+(y_1, y_2) = y_1 + y_2 - \dfrac{2 y_1 y_2}{y_1 - y_2}\,\log\dfrac{y_1}{y_2}$ and
$\theta_-(y_1, y_2) = 2\sqrt{y_1 y_2}\left(\dfrac{y_1 + y_2}{y_1 - y_2}\,\ln\dfrac{y_1}{y_2} - 2\right)$,
with the SM $t$- and $b$-quark contributions to $T$ represented accordingly. Here $U^2_{\rm std-db}$ is the SM contribution from the top-$W$ box diagram, and $-U_{db} \equiv V^*_{L35} V_{L15}$ comes from the $Z$-boson FCNC induced by the singlet VLQ. On the other hand, the FCNC contribution from the doublet VLQ, $V^*_{L34} V_{L14}$, is much smaller than that from the singlet VLQ, because the pattern of the mass matrix suppresses the left-handed mixing angle of the doublet VLQ with the down and bottom quarks [15].

E. $B^0_d$-$\overline{B}^0_d$ mixing

Here $y_t \equiv m_t^2/m_W^2$ and the loop function $f_2(y)$ is defined in Ref. [27]. Taking the most updated experimental values $|V_{tb}| = 1.019 \pm 0.025$ and $|V_{td}| = (8.1 \pm 0.5)\times 10^{-3}$ [7], the SM reproduces the central value of the current experimental measurement [7], $x_d|_{\rm exp} = 0.770 \pm 0.004$. However, the theoretical uncertainty is much larger than the experimental one. For a conservative limit, we require the new-physics contribution to be less than the SM contribution, which is a much weaker constraint than those from $B^+ \to \pi^+ \ell^+ \ell^-$ and $B^0 \to \mu^+\mu^-$ in the next two subsections. In addition, due to the large theoretical uncertainties, we do not use this observable in our global analysis. On the other hand, the mixings between the second-generation quarks and the new VLQs are irrelevant in this study.
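Returning to the $R_b$-preserving relation of Sec. III B above, a small numeric check of the prefactor $3/(2x_w) - 1$ is shown below; the value of $x_w$ and the sample mixing angle are assumed for illustration.

```python
import math

# Numeric check of the R_b-preserving relation quoted above:
#   (s34_R)^2 = (3/(2*x_w) - 1) * (s35_L)^2.
# x_w = 0.2315 is an assumed sin^2(theta_w)-like value; s35_L is illustrative.
x_w = 0.2315
factor = 3.0 / (2.0 * x_w) - 1.0
print(f"(s34_R)^2 / (s35_L)^2 = {factor:.2f}")   # ~5.5; exactly 5 for x_w = 1/4

s35_L = 0.05                                     # illustrative mixing-angle sine
s34_R = math.sqrt(factor) * s35_L
print(f"s35_L = {s35_L}, required s34_R = {s34_R:.3f}")
```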
In order to avoid the stringent constraints from $D^0$-$\overline{D}^0$, $K^0$-$\overline{K}^0$, and $B^0_s$-$\overline{B}^0_s$ mixing, we suppress all the interaction terms between the second-generation quarks and the new VLQs for simplicity.

F. $B^+ \to \pi^+ \ell^+ \ell^-$

The FCNC coupling $(g_{db})_L$ generated from Eq. (30) contributes to $B^+ \to \pi^+ \ell^+ \ell^-$ [28] through the effective Hamiltonian. Incorporating the SM contribution, the differential branching ratio is given in Ref. [28], with the SM Wilson coefficients $C^t_{9,P} \simeq 3.97 + 0.03i$, $C^u_{9,P} \simeq 0.84 - 0.88i$, and $C_{10} \simeq -4.25$. Following the effective-operator notation of Ref. [28], the VLQ-induced Wilson coefficients follow from $(g_{db})_L$. In the chi-square fitting below, we combine the experimental error with a 30% theoretical uncertainty on the SM prediction [28] to obtain conservative constraints.

G. $B^0 \to \mu^+ \mu^-$

The same operator also contributes to $B^0 \to \mu^+\mu^-$ through the branching-ratio expression of Ref. [29], where $f_B = 225$ MeV. In our framework, $(g_{db})_R = 0$ from Eq. (30) guarantees no mixing among the right-handed $d$ and $b$ quarks, and thus the primed coefficient $C'_{10}$ defined in Ref. [29] is zero.

H. Direct searches for the vector-like bottom quarks

The vector-like bottom quarks can be pair produced by QCD processes or singly produced; the corresponding searches can be found in Refs. [34,35], and those constraints are similar to Ref. [33]. On the other hand, searches for single production of vector-like bottom quarks depend not only on their masses, but also on their mixing with the SM down-type quarks. Recently, the ATLAS Collaboration published a search for single production of a vector-like bottom quark decaying into a Higgs boson and a $b$ quark, followed by $H \to \gamma\gamma$ [36]; again, this constraint is roughly the same as the above ones. Similarly, searches for pair and single production of the vector-like quark $p'$ with electric charge $-4/3$ can be found in Refs. [37,38]; a lower mass limit of about 1.30 TeV at 95% confidence level is set on the $p'$. In order to escape the constraints from these direct searches at the LHC, we can raise $m_{b'}$, $m_{p'}$, and $m_{b''}$ above the lower bounds of the mass constraints; we therefore safely set their masses to 1.5 TeV in the analysis.

In fact, the SM does not fit well to the above datasets, as it gives a total $\chi^2({\rm SM})/{\rm d.o.f.} = 88.946/75$, which translates into a goodness of fit of only 0.130. Note that during the parameter scan, the unitarity condition $\sum_{i=d,s,b,b',b''} |V_{ui}|^2 = 1$ always holds by virtue of our analytic parameterization; the unitarity violation only occurs in $\sum_{i=d,s,b} |V_{ui}|^2$. According to the minimal models of additional VLQs with the various options for the parameters in subsection II D, we perform several fits to investigate whether these models provide better explanations of the data. Without loss of generality, we fix the VLQ masses at 1.5 TeV, which is above the current VLQ mass lower bounds from the ATLAS and CMS searches [33,36,38-41].

It is shown in both Table II and Fig. 1 that the best-fit points prefer non-zero values of $g^B_3 = \pm 1.177$ and $g^{b'}_1 = \pm 0.335$, at levels of more than $2.5\sigma$ and $4\sigma$ from zero, respectively. Furthermore, the bottom-quark Yukawa coupling deviates from the SM prediction by more than $2\sigma$: the best-fit points give $C_{hbb} = 0.98$, about 2% smaller than the SM value. This helps to enhance the overall Higgs signal strengths. In fact, the Higgs signal-strength dataset alone prefers a bottom Yukawa coupling 6% smaller than the SM value [15].
Since the relevant $Z$-pole observables were quite precisely measured and are consistent with the SM prediction, the deviation of the bottom-Yukawa coupling cannot exceed more than a couple of percent. From the $(g^B_3, g^{b'}_1)$ panel, no correlation between $g^B_3$ and $g^{b'}_1$ is apparent. The $(V_{R34}, \Delta S)$ and $(V_{R34}, \Delta T)$ panels show that the best-fit regions are consistent with the oblique parameters from electroweak precision measurements.

In Fit-2, both couplings $g^{b'}_1$ and $g^{b'}_3$ can vary from zero. In this case, according to Eq. (30), the flavor-changing coupling $(g_{db})_L$ is induced and is therefore constrained by $B^0_d$-$\overline{B}^0_d$ mixing, $B^+ \to \pi^+\ell^+\ell^-$, and $B^0 \to \mu^+\mu^-$. ($B^0_d$-$\overline{B}^0_d$ mixing is not included in any of the fits.) In Fig. 2 for Fit-2a, which does not include these flavor-changing constraints in the global fit, both couplings $g^{b'}_1$ and $g^{b'}_3$ are allowed to deviate significantly from zero. Indeed, we see that the best-fit points prefer $g^B_3 = \pm 1.651$ and $g^{b'}_3 = \pm 0.614$, and that $(s^R_{34})^2 \approx 5(s^L_{35})^2$ holds along the correlation visible in the $(V_{L35}, V_{R34})$ panel. This is in accordance with our discussion at the end of subsection III B, where the VLQ contributions to $R_b$ cancel among themselves while the $A^b_{FB}$ anomaly is explained by $(g_b)_L$. Since the VLQ contributions to $R_b$ cancel, the bottom-Yukawa coupling is now allowed to deviate from the SM by more than 6%, and the best-fit points give $C_{hbb} = 0.96$, which deviates from the SM prediction by more than $3\sigma$. Hence, Fit-2a further lowers the minimum chi-square relative to Fit-1, giving $\chi^2_{\rm min}/{\rm d.o.f.} = 59.185/70$ and thus a goodness of fit equal to 0.818. Unfortunately, the constraints from $B^0_d$-$\overline{B}^0_d$ mixing, $B^+ \to \pi^+\ell^+\ell^-$, and $B^0 \to \mu^+\mu^-$ restrict simultaneously large non-zero values of $g^{b'}_1$ and $g^{b'}_3$.

In order to study the effects of those $B$-physics constraints, we further include both $B^+ \to \pi^+\ell^+\ell^-$ and $B^0 \to \mu^+\mu^-$ in Fit-2b. In Fig. 3 for Fit-2b, we can see how the constraints from $B^+ \to \pi^+\ell^+\ell^-$ and $B^0 \to \mu^+\mu^-$ affect the allowed parameter region. In the $(g^{b'}_3, \Delta\chi^2)$ panel, the coupling $g^{b'}_3$ is restricted to be small within $3\sigma$; more precisely, $|g^{b'}_3| \leq 0.076$ is required. Since $g^{b'}_3$ is restricted to be close to zero, the best-fit points and the corresponding $C_{hbb}$ of Fit-2b overlap with those of Fit-1. In the same panel, we can observe two local minima at $g^{b'}_3 \simeq \pm 0.6$ at the $4\sigma$ level, correlated with $g^{b'}_1 \simeq 0$ in the $(g^{b'}_1, \Delta\chi^2)$ panel. From the $(U_{db}, \Delta\chi^2)$ panel, we learn that the flavor constraint from $B^+ \to \pi^+\ell^+\ell^-$ is more stringent than that from $B^0_d$-$\overline{B}^0_d$ mixing, owing to the more precise theoretical uncertainty in the former. Around the minimum, we can identify a two-tine fork-shaped structure, which is due to the interference between the VLQ and SM contributions to $B^+ \to \pi^+\ell^+\ell^-$ from Eq. (50). Finally, compared with $B^+ \to \pi^+\ell^+\ell^-$, the $B^0 \to \mu^+\mu^-$ mode gives a similar but weaker constraint on $(g_{db})_L$. The corresponding numerical results can also be found in Table II.

V. DISCUSSION

We have advocated an extension of the SM with vector-like quarks, including a doublet and a singlet, with the aim of alleviating a few experimental anomalies. An urgent one is a severe unitarity violation in the first row of the CKM matrix, standing at a level of more than $4\sigma$ due to a recent more precise evaluation of $V_{ud}$ and $V_{us}$. Another is the long-lasting discrepancy in the forward-backward asymmetry $A^b_{FB}$ in $Z \to b\bar b$ at LEP. Furthermore, a mild excess in the overall Higgs signal strength appears at about $2\sigma$ above the standard model prediction. We offer the following comments before closing.

1. By extending the CKM matrix to $5\times 5$ with the extra VLQs, the unitarity condition in the first row is fully restored.
2. Without taking the $B$-meson constraints into account, the best fit (see Fit-2a) can allow the bottom-Yukawa coupling to decrease by about 6%, which can then adequately explain the $2\sigma$ excess in the Higgs signal strength. At the same time, it can also account for $A^b_{FB}$ without upsetting $R_b$, owing to a nontrivial cancellation between the two contributions. However, the resulting branching ratios for $B^+ \to \pi^+\ell^+\ell^-$ and $B^0 \to \mu^+\mu^-$ become exceedingly large, above the experimental values.

3. When the $B$-meson constraints are included, however, the allowed parameter space in $g^{b'}_3$ is restricted to be very small due to the presence of the FCNC $Z$-$b$-$d$ coupling.

Last but not least, the extra five physical CP phases in the $V^{5\times5}_{\rm CKM}$ matrix can be a trigger for electroweak baryogenesis. In order to generate the strong first-order electroweak phase transition, one needs to add an extra singlet complex scalar [43,44]. On the other hand, adding an extra $Z'$ boson as in Ref. [29] could cancel the FCNC contributions from the VLQs. Therefore, a gauged $U(1)$ extension of our minimal model with a singlet complex scalar may simultaneously alleviate the constraints from the $B$-meson observables and explain the matter-antimatter asymmetry of the Universe. However, this extension is beyond the scope of this work, and we would like to study this possibility in the future.

Appendix A

We first parameterize the original $3\times 3$ CKM matrix in the usual form, with $s_{ij} = \sin\theta_{ij}$ and $c_{ij} = \cos\theta_{ij}$ [42]. We then further parameterize the full $5\times 5$ CKM matrix based on $V^{3\times3}_{\rm CKM}$. Notice that there is some freedom in arranging the positions of the extra five CP phases in those matrices; we assign no CP phase to the rotation matrices of $\theta_{34}$ and $\theta_{35}$ in this study. On the other hand, since we do not include the vector-like up-type quarks $t', t''$ in the model, only the measurable $3\times 5$ submatrix of $V^{5\times5}_{\rm CKM}$ is relevant to our study here.
Spleen Stiffness Correlates with the Presence of Ascites but Not Esophageal Varices in Chronic Hepatitis C Patients

Although spleen stiffness has recently been identified as a potential surrogate marker for portal hypertension, the relationship between spleen stiffness and portal hypertension has not been fully elucidated. We attempted to determine the relationship between liver or spleen stiffness and the presence of ascites or esophageal varices by acoustic radiation force impulse (ARFI) imaging. A total of 33 chronic hepatitis C (CHC) patients (median age 68; range 51-84) were enrolled. We evaluated the relationship between the liver or spleen stiffness and indicators of portal hypertension as well as clinical and biochemical parameters. Fourteen healthy volunteers were used for validating the accuracy of ARFI imaging. The liver and spleen stiffness increased significantly with progression of liver disease. A significant positive correlation was observed between the liver and spleen stiffness. However, spleen stiffness, but not liver stiffness, was significantly associated with the presence of ascites (P < 0.05), while there was no significant association between the spleen stiffness and the spleen index or the presence of esophageal varices in CHC patients. The area under the receiver operating characteristic curve based on the spleen stiffness was 0.80. In conclusion, spleen stiffness significantly correlates with the presence of ascites but not esophageal varices in CHC patients.

Introduction

In cirrhotic patients with portal hypertension, the increase in spleen size is due mainly to tissue hyperplasia associated with a parallel increase in splenic blood flow, which probably participates in the pathogenesis of portal hypertension [1]. However, the role of spleen stiffness among the spleen changes associated with portal hypertension has not been fully elucidated. Recently, liver stiffness has been measured easily using noninvasive modalities including transient elastography (TE), MR elastography, and acoustic radiation force impulse (ARFI) imaging. Although a significant correlation between liver stiffness as assessed by TE and portal hypertension as evaluated by hepatic venous pressure gradient (HVPG) measurement has been reported [2][3][4][5][6], only a few studies have applied these techniques to measurement of the spleen stiffness. To the best of our knowledge, only a few reports [6][7][8][9][10] have evaluated the relationship between the spleen stiffness and the presence of portal hypertension, using different noninvasive modalities, that is, TE, MR elastography, and ARFI imaging. These studies evaluated the relationship between the spleen stiffness and the presence of esophageal varices as an indicator of portal hypertension. However, the results were contradictory. On the other hand, there has been no evaluation of the relationship between spleen stiffness and the presence of ascites as another indicator of portal hypertension. ARFI imaging is a new elastographic technology integrated into conventional B-mode ultrasonography. It is an inexpensive and noninvasive modality, conveniently performed in patients who are not suitable candidates for MRI. ARFI imaging allows quantification of the shear-wave velocity (SWV) (m/sec), which has been shown to be correlated with the liver fibrosis stage [11][12][13]. The higher the tissue stiffness, the greater the SWV (m/sec).
The longitudinal waves from the push pulses are transmitted through ascites, whereas the shear waves are measured only in the region of interest. Therefore, SWV assessment is possible in the presence of ascites, differing from the measurement by TE [11]. In addition, liver stiffness by ARFI imaging is comparable to that by TE from the point of view of diagnostic accuracy and is unaffected by the serum ALT levels, again differing from the measurement by TE [12]. In this study, we evaluated the relationship between liver or spleen stiffness as assessed by ARFI imaging and the presence of ascites or esophageal varices as indicators of portal hypertension.

Patients and Study Design. Thirty-three consecutive hepatitis C patients (median age 68; range 51-84) with chronic hepatitis or liver cirrhosis seen at our institution were enrolled. The control group, which was used for validating the accuracy of ARFI imaging, was composed of 14 healthy volunteers (median age 33; range 24-52). The median age was lower in the control group than in the chronic hepatitis group or liver cirrhosis group. The inclusion criteria for the controls were age 18 years or older, no history of chronic liver disease, and normal serum liver enzyme levels at the time of enrollment. Laboratory tests, including the serum bilirubin and albumin, platelet count, prothrombin time, and serum markers of fibrosis (type IV collagen, hyaluronate), were performed in all patients within 1 month from the beginning of the study. In all patients, fiberoptic upper gastrointestinal endoscopy was performed to check for the presence of esophageal varices, and abdominal ultrasonography was performed to check for the presence of ascites and to determine the spleen index. Esophageal varices were evaluated by the endoscopist without knowledge of the liver and spleen stiffness measurements. Calculation of the median liver and spleen stiffness was possible in all subjects. Liver cirrhosis was diagnosed by the predictive model of cirrhosis in patients with CHC based on standard laboratory tests [14]. This study was conducted with the approval of the institutional ethics committee. Informed consent was obtained from all the volunteers and all the patients, in accordance with the principles of the Declaration of Helsinki.

Liver and Spleen Stiffness Measurement. Liver and spleen stiffness were measured by ARFI imaging with Virtual Touch Tissue Quantification using an ACUSON S2000 apparatus (Mochida SIEMENS Medical System, Tokyo). One experienced sonographer operated the apparatus. ARFI imaging involves targeting an anatomic region to be interrogated for elastic properties with the use of a region-of-interest (ROI) cursor while performing real-time B-mode imaging. Transmission of longitudinal acoustic pulses leads to tissue displacement, which results in shear-wave propagation away from the region of excitation. The SWV is measured within a defined ROI (central window of 5 mm axial by 4 mm width) by using ultrasound tracking beams laterally adjacent to the single push beam. The shear-wave propagation velocity is proportional to the square root of the tissue elasticity. Results are expressed in meters per second. We used a curved array at 4 MHz for B-mode ARFI imaging. We measured the liver stiffness in the right liver lobe, 2-3 cm below the liver capsule, via an intercostal approach, and the spleen stiffness in the lower pole of the spleen, 2-3 cm below the spleen capsule, also via an intercostal approach.
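Because ARFI reports a shear-wave velocity while much of the elastography literature quotes a stiffness modulus, the standard conversion may be helpful. The relation below assumes a purely elastic, nearly incompressible medium (Poisson ratio ≈ 0.5), which is the usual idealization behind the statement that the velocity scales with the square root of elasticity:

c_s = sqrt(μ/ρ) ≈ sqrt(E/3ρ), i.e., E ≈ 3ρ c_s²,

where c_s is the SWV, μ the shear modulus, E the Young's modulus, and ρ the tissue density.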
In each patient, 5 valid ARFI imaging measurements were performed in the liver and the spleen based on previous reports [12,13,15]. The median values were then calculated, the results being expressed as SWV (m/sec). Measurement failure was defined as zero valid shots, as with TE, and unreliable measurements were defined as an interquartile range (IQR) to median value ratio greater than 30% or a success rate less than 60% [12,16].

Statistical Analysis. All data were presented as median and range. Analysis of variance for comparison of more than two groups was performed using the Kruskal-Wallis test followed by the Mann-Whitney U-test with post-hoc Bonferroni correction. Comparison of two groups was performed by the Mann-Whitney U-test. Comparisons of percentages between the ascites and non-ascites groups or between the esophageal varices and non-varices groups were performed using Fisher's exact test. The correlations between the liver stiffness and the spleen stiffness or between the spleen index and spleen stiffness were assessed by calculation of Spearman's correlation coefficient. We assessed the diagnostic performance of ARFI imaging by receiver operating characteristic curve (ROC curve) analysis. The ROC curve represents the sensitivity versus (1-specificity) for all possible cut-off values. The area under the ROC curve (Az) and 95% confidence intervals of the Az values were calculated. Differences were regarded as significant when the P values were less than 0.05.

Characteristics of Study Subjects. The clinical and biochemical characteristics of the study subjects are listed in Table 1. Among the 24 cirrhotic patients, 8 (33.3%) were classified as Child-Pugh class A, 10 (41.7%) as Child-Pugh class B, and 6 (25.0%) as Child-Pugh class C. Esophageal varices and ascites were present in 12 (50.0%) and 15 (62.5%) patients, respectively.

Liver Stiffness in Healthy Volunteers and Patients with Chronic Hepatitis and Liver Cirrhosis. Determination of liver stiffness was possible in all volunteers and patients. There were no subjects who met the definition of unreliable measurements of liver stiffness. Figure 1 shows the liver stiffness as determined by ARFI imaging in the control, chronic hepatitis, and liver cirrhosis groups. The median liver SWV values were 1.17 m/sec (IQR 1.03-1.21 m/sec), 1.36 m/sec (IQR 1.19-1.76 m/sec), and 2.4 m/sec (IQR 2.07-2.97 m/sec), respectively. The liver stiffness differed significantly between the groups; the significant differences included: control versus cirrhosis (P < 0.001); chronic hepatitis versus cirrhosis (P < 0.001).

Spleen Stiffness in Healthy Volunteers and Patients with Chronic Hepatitis and Liver Cirrhosis. Determination of spleen stiffness was possible in all volunteers and patients. There were no subjects who met the definition of unreliable measurements of spleen stiffness. Figure 2 shows the spleen stiffness as determined by ARFI imaging in the control, chronic hepatitis, and liver cirrhosis groups. The significant differences between the groups were as follows: control versus cirrhosis (P < 0.001); chronic hepatitis versus cirrhosis (P = 0.002).

Correlation between the Liver Stiffness and Spleen Stiffness in Healthy Volunteers and Patients with Chronic Hepatitis and Liver Cirrhosis. Figure 3 shows the correlation between the liver stiffness and spleen stiffness. In the overall subject population, there was a statistically significant positive correlation between the liver stiffness and spleen stiffness (rs = 0.68, P < 0.01).
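As a concrete illustration of the measurement-reliability rule defined at the start of this section (the median of five valid shots is reported; a session is unreliable if IQR/median exceeds 30% or the success rate falls below 60%), a minimal sketch follows. The function name and example values are hypothetical, not taken from the study's records.

```python
import numpy as np

def swv_session(valid_shots_m_per_s, attempted_shots):
    """Return (median SWV, reliable?) for one ARFI measurement session."""
    shots = np.asarray(valid_shots_m_per_s, dtype=float)
    if shots.size == 0:
        return None, False  # measurement failure: zero valid shots
    median = np.median(shots)
    iqr = np.percentile(shots, 75) - np.percentile(shots, 25)
    success_rate = shots.size / attempted_shots
    reliable = (iqr / median <= 0.30) and (success_rate >= 0.60)
    return median, reliable

# Example: five valid shots out of five attempts in a stiff (cirrhotic) liver.
median_swv, ok = swv_session([2.31, 2.45, 2.38, 2.52, 2.40], attempted_shots=5)
print(f"median SWV = {median_swv:.2f} m/sec, reliable = {ok}")
```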
Correlation between the Spleen Stiffness and Spleen Index in Healthy Volunteers and Patients with Chronic Hepatitis and Liver Cirrhosis. In the overall subject population, no significant correlation was found between the spleen stiffness and spleen index.

Relationship between the Presence of Ascites and the Liver/Spleen Stiffness in Patients with Liver Cirrhosis (Figure 4). The spleen stiffness was significantly elevated in the group with ascites as compared with that in the group without ascites (Figure 4) (P < 0.05).

Relationship between the Presence of Esophageal Varices and the Liver/Spleen Stiffness in Patients with Liver Cirrhosis. The median liver SWV was 2.39 m/sec.

Clinicopathological Factors Associated with the Presence of Ascites in Patients with Liver Cirrhosis. The clinical characteristics of the patients with cirrhosis are shown in Table 2. In the univariate analysis, the prothrombin time (P = 0.012) and spleen stiffness (P = 0.012) were found to be significantly different between the cirrhotic patients with and without ascites. Figure 5 shows the ROC curves for the prediction of the complication of ascites by ARFI imaging in the patients with liver cirrhosis. The areas under the ROC curves based on liver stiffness (a) and spleen stiffness (b) were 0.62 (0.11 standard error; 95% confidence interval 0.39-0.84) and 0.80 (0.09 standard error; 95% confidence interval 0.63-0.98), respectively.

Discussion

The major findings of our study were that the spleen stiffness, but not the liver stiffness, was correlated with the presence of ascites and that the spleen stiffness did not correlate with the presence of esophageal varices in chronic hepatitis C patients. We also showed that ARFI imaging is an excellent modality for measurement of not only the liver stiffness but also the spleen stiffness, even in the presence of ascites. Our study is the first to evaluate the association between spleen stiffness and the presence of ascites. Ascites can be detected relatively easily by conventional abdominal ultrasonography. However, the finding is of potential clinical significance in view of the possibility of predicting the development of ascites in chronic liver disease by measuring the spleen stiffness, even before ascites can be detected by conventional abdominal ultrasonography. Specifically, ARFI imaging might be useful for predicting the development of ascites after transarterial chemoembolization by measuring the spleen stiffness before the procedure. In addition, when detection of ascites by conventional ultrasonography is difficult for miscellaneous reasons, such as marked obesity or prior hepatic resection, ARFI imaging may be useful for suggesting the presence of ascites by measuring the spleen stiffness in an inexpensive and noninvasive way. Previous studies reported that the rates of unsuccessful measurements of liver and spleen stiffness using ARFI imaging were 2.8% [17] and 4.8% [8], respectively. In our study, we could measure liver and spleen stiffness in all volunteers and patients. Because the number of our study subjects is relatively small, we believe that our results are consistent with these data [8,17].
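In passing, the ROC analysis reported above is straightforward to reproduce in outline with standard tools. The sketch below uses scikit-learn on made-up spleen-stiffness values and ascites labels, purely for illustration; the numbers are not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

spleen_swv = np.array([2.6, 2.8, 3.1, 3.3, 3.4, 3.6, 2.9, 3.5, 3.7, 3.8])  # m/sec
has_ascites = np.array([0, 0, 0, 1, 0, 1, 0, 1, 1, 1])  # 1 = ascites present

az = roc_auc_score(has_ascites, spleen_swv)             # area under the ROC curve
fpr, tpr, cutoffs = roc_curve(has_ascites, spleen_swv)  # sensitivity vs. 1-specificity
print(f"Az = {az:.2f}")
```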
One of the possible reasons for the finding that the liver stiffness was not significantly correlated with the presence of ascites is as follows. Portal hypertension develops as a result of an increase in intrahepatic resistance to portal blood flow due to the profound morphologic changes in the liver characterized by fibrosis and regenerative nodules compressing the sinusoids, which lead to vascular obliteration, activation of the hepatic stellate cells, and vasoconstriction due to intrahepatic nitric oxide deficiency and enhanced vasoconstrictor activity [3]. In the advanced stage of cirrhosis, several extrahepatic factors such as the hyperdynamic circulation, splanchnic vasodilatation, and resistance to portal blood flow posed by the portosystemic collaterals contribute to the rise in the portal pressure [18]. Thus, the liver stiffness may not be sufficient for reflecting the complex hemodynamic changes characteristic of impending portal hypertension.

One of the possible reasons for the positive correlation between the spleen stiffness and presence of ascites is as follows. The anatomic features and microcirculation of the spleen are well characterized. Splenic tissue is composed primarily of red pulp tissue and, to a lesser degree, white pulp. Within the red pulp, blood is received by the penicillar arterioles, which open directly into the venous sinuses and trabecular vein. Blood exits through the splenic vein into the splanchnic venous circulation. White pulp is composed of a central artery surrounded by lymphoid tissue. Penicillar arterioles originate from the central arteries outside the white pulp and drain into the venous sinuses and the red pulp [7]. Thus, portal hypertension leads to pulp hyperplasia, which may increase the spleen stiffness. If this hypothesis were true, the spleen stiffness would be expected to correlate with the spleen index. However, there was no correlation between the spleen stiffness and spleen index in our study. To the best of our knowledge, a consistent relation between splenomegaly and portal venous pressure has not yet been identified [19][20][21]. Splenomegaly is related not only to blood congestion as a consequence of increased portal pressure and augmented resistance to splenic vein outflow but also to multiple other histopathologic changes, such as arterial aneurysms [22], endothelin-related changes [23], and the immunologic pathway from the liver [24]. Therefore, it may not be valid to simply replace the spleen index by the spleen stiffness.

No significant correlation was found between the presence of esophageal varices and the liver stiffness or spleen stiffness in our study. This is probably because esophageal varices are only one of the components of the collateral circulation and represent only a part of portal hypertension. Thus, the correlation may depend on the collateral circulation dynamics in the study population. In fact, previous reports on the association between esophageal varices and the liver stiffness or spleen stiffness are inconsistent. Some studies showed that the liver stiffness was positively correlated with the presence of esophageal varices [6,10] and that spleen stiffness predicts or is associated with the presence [6,7,9] or grade of esophageal varices [6]. However, no correlation was seen between the grade or size of the esophageal varices and the liver stiffness or spleen stiffness [2,9]. Bota et al. reported that the spleen stiffness as assessed by ARFI imaging could not predict the presence or severity of esophageal varices [8], lending support to our data.
These contradictory findings may support our hypothesis, that is, that the correlation may depend on the collateral circulation dynamics in the study population. Previous studies with ARFI imaging [7,8] showed a systematic association between the stage of fibrosis and the liver stiffness. Bota et al. [8] showed that the spleen stiffness increased with progression of the stage of liver disease. Talwalkar et al. [7] showed that the liver stiffness was significantly correlated with the spleen stiffness. Our data were consistent with these reports.

Our study had limitations. First, we did not measure HVPG as the portal venous pressure; therefore, the correlation between the spleen stiffness and the portal venous pressure could not be evaluated. Another limitation was the relatively small sample size.

Conclusions

Spleen stiffness, but not liver stiffness, was positively correlated with the presence of ascites as a representative marker of portal hypertension in patients with liver cirrhosis, and the spleen stiffness did not correlate with the presence of esophageal varices in chronic hepatitis C patients. ARFI imaging has the advantage of being feasible regardless of the presence of ascites, unlike measurement by TE, and is applicable in patients who are unsuitable candidates for MRI, unlike MR elastography. However, our study is only preliminary due to the relatively small sample size, and a larger cohort study to verify our results is warranted in the future.
Combined magnetron sputtering and pulsed laser deposition of TiO2 and BFCO thin films

We report the successful demonstration of a hybrid system that combines pulsed laser deposition (PLD) and magnetron sputtering (MS) to deposit high quality thin films. The PLD and MS simultaneously use the same target, leading to an enhanced deposition rate. The performance of this technique is demonstrated through the deposition of titanium dioxide and bismuth-based perovskite oxide Bi2FeCrO6 (BFCO) thin films on Si(100) and LaAlO3 (LAO) (100). These specific oxides were chosen due to their functionalities, such as multiferroic and photovoltaic properties (BFCO) and photocatalysis (TiO2). We compare films deposited by conventional PLD, MS and PLD combined with MS, and show that under all conditions the latter technique offers an increased deposition rate (+50%) and produces films denser (+20%) than those produced by MS or PLD alone, and without the large clusters found in the PLD-deposited films. Under optimized conditions, the hybrid technique produces films that are two times smoother than either technique alone.

Figure 1. Combined Pulsed Laser Deposition + Magnetron Sputtering System. The target is exposed to the PLD laser (the dark purple vertical plume in the picture) and to the plasma generated by the sputtering system (in light purple in the picture) at the same time. B represents the magnetic field while E is the electric field.

Techniques to overcome the limitations of PLD for large surface applications are also being proposed and demonstrated 18,19 . One promising approach is to combine magnetron sputtering and pulsed laser deposition (MSPLD) into a hybrid system. In most implementations, MSPLD comprises the use of two targets (one for sputtering and one for PLD) that can be used simultaneously [20][21][22][23][24] . This technique has shown promise for the preparation of functional gradient transition metal carbides and multilayer structures in the TiC and diamond-like carbon (DLC) material systems as well as for other nanocomposites [25][26][27][28][29] . Another hybrid approach involved combining PLD with radio frequency (RF) sputtering at the substrate, where the applied RF power can be used to exercise some control over film properties such as composition and crystallinity 30,31 . Here, we implement a different hybrid approach that circumvents some of the limitations of traditional MS and PLD by using a single target for both the MS and PLD. Under this hybrid approach, the pulsed laser can be used to trigger and maintain the magnetron discharge at lower pressures, adding another control mechanism for tuning the deposited film structure. To demonstrate the advantages of the hybrid technique we have studied the growth of TiO2 thin films on Si(100), as well as the growth of the complex bismuth-based perovskite Bi2FeCrO6 (BFCO) on LaAlO3 (LAO) (100). TiO2 is a widely studied semiconducting material with attractive characteristics such as a high refractive index, wide band gap, and photocatalytic properties. BFCO has recently been widely studied due to its multiferroic and photovoltaic properties. We have investigated the effects of substrate temperature, operating pressure, and pulsed laser and MS power density on the properties of the films, and the process has been optimized for power efficiency. Our results show that the hybrid technique leads to improved quality of the deposited films, as well as increased film uniformity and deposition rate.
Results and Discussion

Figure 1 shows a schematic diagram of the hybrid system. In a hybrid PLD/MS system, the plasma plume liberated by the laser pulse triggers and maintains the magnetron discharge at a lower pressure than in a standard MS system. The laser-generated plasma increases the overall plasma density and hence the sputtering rate. Both neutral species and ions pass through the confining magnetic field and are deposited on the substrate, directly influencing the deposited film thickness and structure. The combined PLD/MS system increases the deposition rate through two mechanisms: (1) an increased plasma sputtering rate and (2) direct PLD deposition of neutral atoms and clusters.

Figure 2 displays representative AFM images of TiO2 samples obtained using MS, PLD and the hybrid technique at an operating pressure of 5 mTorr. Regardless of which technique is used, the surface roughness of the TiO2 films decreases with decreasing argon pressure (Fig. 3a). Samples fabricated at 1 mTorr, corresponding to the minimum operational pressure, have low roughnesses (~0.18 nm RMS) that are identical within uncertainty for all deposition techniques. However, the growth rates were very slow at 1 mTorr. At low power and pressure, sputtering is inefficient since collisions are limited, reducing the yield, while the low power limits the movement of the ions. At 5 mTorr the hybrid system consistently operated at a higher growth rate (+50%) while achieving roughness values significantly lower than those obtained through PLD or MS alone (0.25 ± 0.01 nm RMS vs. 0.34 ± 0.02 nm RMS and 0.34 ± 0.02 nm RMS). At higher pressure the surface mobility of the deposited species is reduced due to the impinging argon flux, which manifests in the formation of grains with an average lateral size of 35 ± 4 nm (Figure S2).

Figure 4 displays the x-ray reflectivity (XRR) data for TiO2 samples obtained using different pressures and deposition techniques. The period of oscillation depends on the film thickness, with shorter periods corresponding to thicker films 32 . XRR data also indicate a critical angle, θc, which is related to the density of the film. For θ < θc total reflection occurs, whereas above θc, the XRR signal decreases rapidly with increasing incident angle. The film deposited with the hybrid technique at 5 mTorr exhibits a larger critical angle (Fig. 4b), indicating that it has a higher density compared to the other films. On average, slower decays in reflectivity were observed for the samples grown with the hybrid technique, indicating that the surface roughness is smaller. An exception is the sample obtained at 40 mTorr with the hybrid technique, which has the largest XRR-determined roughness, consistent with AFM measurements. In general, at higher pressure the surface mobility of diffusing species is reduced due to the reduced energy of the impinging particles, leading to rougher surfaces. Figure 4d shows the Scattering Length Density (SLD) obtained from analysis of the XRR data for TiO2 layers obtained at 40 mTorr and 5 mTorr. The SLD is directly related to the density of the film, with higher values indicating more tightly packed scattering entities. This analysis reveals that, independent of the deposition pressure, the hybrid technique produces films with 15-25% higher SLD than MS alone.
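Both XRR readouts used in this comparison, thickness from the Kiessig fringe period and density from the critical angle, follow from textbook small-angle relations. A rough numerical sketch is given below; the angle values are illustrative placeholders, not the measured ones.

```python
import numpy as np

WAVELENGTH_CU_KA = 1.5406e-10  # m, Cu K-alpha, as used on the diffractometer

def thickness_from_fringes(delta_theta_deg):
    """Film thickness (m) from the angular period of Kiessig fringes,
    t ~ lambda / (2 * delta_theta), valid well above the critical angle."""
    return WAVELENGTH_CU_KA / (2.0 * np.deg2rad(delta_theta_deg))

def relative_density(theta_c_sample_deg, theta_c_ref_deg):
    """Density ratio of two films from their critical angles (rho ~ theta_c^2)."""
    return (theta_c_sample_deg / theta_c_ref_deg) ** 2

print(f"t ~ {thickness_from_fringes(0.05) * 1e9:.0f} nm for a 0.05 deg fringe period")
print(f"density ratio ~ {relative_density(0.285, 0.260):.2f} for the chosen angles")
```

With these relations, a roughly 10% larger critical angle corresponds to a roughly 20% higher density, which is the scale of the difference reported between the hybrid and MS films.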
At the optimal pressure of 5 mTorr the film has the highest value of SLD, which is similar to the tabulated value for bulk TiO2 (34.8 vs 34.5 × 10−6 Å−2) 33 .

The hybrid technique was also used to fabricate BFCO films on LAO(100), to investigate whether the use of sputtering combined with PLD could enhance the growth rate and/or uniformity of the film. Since epitaxial growth is necessary to control the crystalline phase and cation ordering of BFCO, which are linked to its functional properties 34 , we focused on obtaining epitaxial film growth using the hybrid technique. Through a systematic variation of parameters, we identified the optimum growth conditions (temperature of 750 °C, pressure of 10 mTorr, laser power of 4.4 W and MS discharge of 30 W), for which the film presented a smooth surface with an AFM-measured RMS roughness of less than 6 nm. To evaluate the crystallinity of the samples, we measured XRD patterns. Diffraction from samples fabricated on Si(100) did not reveal any crystalline phases. Diffraction from films grown on LAO(100) is shown in Fig. 5. The sample made with the hybrid technique at 750 °C on LAO(100) produces only diffraction peaks from the substrate and the BFCO layer. Only the 00l (l = 1, 2, 3) cubic reflections of the films are visible, indicating that the films are highly (001)-oriented. These peaks are consistent with data found in the literature [35][36][37] and confirm the epitaxial growth of the film. Rocking curve measurements (Fig. 6c) reveal the highly crystalline quality of the BFCO layer, with a full width at half maximum (FWHM) of 0.2° around BFCO(002). The epitaxial ordering of BFCO using the hybrid technique is confirmed by the diffraction pattern analysis (ϕ-scan) reported in Fig. 6b and d, in which we observe a fourfold symmetry of the epitaxial relationship between the BFCO layer and the LAO substrate. The film grown at 560 °C using the hybrid technique (blue line in Fig. 5) exhibits additional reflections that can be attributed to the presence of Bi-rich phases [36][37][38] . The film also has a higher RMS roughness value (~12 nm), and AFM shows large particles 196 ± 50 nm in lateral size and 48 ± 19 nm in height uniformly covering the surface (Fig. 7b and Figure S3). For films fabricated using only sputtering, the result is similar: non-epitaxial growth with some secondary phases (BFC*), as indicated in the diffractogram (black line in Fig. 5). AFM data obtained from this film show elongated structures with grains ~290 ± 63 nm in lateral size and 14 ± 5 nm in height (Fig. 7a and Figure S3). The differences between the MS and the PLD processes for BFCO on Si(100) are clearly visible in the SEM images displayed in Fig. 7d and Fig. 7e. The film deposited by PLD contains droplets with a lateral size of ~480 ± 76 nm (Figure S4) and a height of 100 nm, whereas the film grown by MS is uniform and much finer-grained. Figure 7f shows the film grown with the hybrid technique on LAO(100). In general the film is uniform, with occasional grains visible on the surface. To verify that the hybrid technique produced a functional BFCO thin film, we investigated its ferroelectric properties by piezoresponse force microscopy (PFM) in several randomly chosen areas. Representative PFM images of the ferroelectric domains in epitaxial BFCO obtained by the hybrid technique are reported in Fig. 8. The out-of-plane and in-plane components of the PFM signal are simultaneously recorded together with topography (Fig. 8a-c).
In general, for the out-of-plane measurement (Fig. 8b), yellow areas indicate regions with the z-component of the spontaneous polarization oriented upwards, while blue areas denote a polarization component oriented in the opposite direction, downwards. In the in-plane measurement, the different colors indicate opposite lateral components of the spontaneous polarization. Figure 8d reports the piezoresponse hysteresis loop of the out-of-plane component recorded from the area highlighted in Fig. 8a and b. In Figure S5, we also show the in-plane hysteresis loop recorded on the same area. The existence of hysteresis loops confirms the presence of a switchable ferroelectric polarization, hence confirming the ferroelectric character of the BFCO film produced by the hybrid technique 36,38 . We carried out a similar investigation on films grown by PLD and MS alone, but no piezoresponse was observed (Figure S6), confirming that the hybrid technique was the only deposition approach capable of synthesizing a functional BFCO film.

Conclusions

We have demonstrated the efficacy of simultaneously combining PLD and MS in a hybrid deposition system for the synthesis of functional materials. TiO2 films grown on Si(100) using the hybrid approach exhibit favorable physical properties. When deposited under optimal conditions, they are smoother and denser (~20% more dense for deposition at 5 mTorr) than films produced by either PLD or MS alone. The hybrid technique was also capable of growing epitaxial films of BFCO on LAO(100), suggesting that it could be useful for the controlled synthesis of multifunctional oxides. For both types of materials, the deposition rate of the hybrid technique is 50% higher than either PLD or MS in isolation. Since only minimal changes are required to modify an existing PLD or MS system for hybrid deposition, this approach may be appealing for industrial applications. These initial results indicate that the hybrid PLD/MS approach could be a useful, easily implemented addition to the existing repertoire of film deposition techniques.

Methods. For the TiO2 depositions, the laser power was up to 4.6 W (measured at 20 Hz), and the sputtering power was in the range 35-60 W. The repetition rate was fixed at 10 Hz and the deposition duration at 1 hr. Following previous work 35,36 , BFCO films were deposited on Si(100) and LAO(100) at different argon/oxygen pressures (ratio 3:1; 10 and 70 mTorr) and temperatures (560-750 °C). The laser power (λ = 248 nm) was varied between 0.56 W and 4.6 W (measured at 20 Hz) and the sputtering power was in the range 26-60 W. The minimum working pressure for maintaining a stable plasma was found to be 1 mTorr, while the minimum laser power was 0.56 W. Under these conditions the minimum power applied on the magnetron for successful generation of a plasma was 35 W. XRD and XRR were carried out to analyze the crystal quality, film orientation and film thickness using a high-resolution Panalytical X'pert Pro diffractometer equipped with a Cu Kα source. The XRR data were fitted using the MOTOFIT add-on 39 for the IGOR Pro data analysis software (WaveMetrics, Inc.), which uses the Abeles matrix method to calculate reflectance in stratified media, allowing for the extraction of parameters such as the roughness of the interfaces and the thickness and the scattering length density of each of the layers in the sample. The reported uncertainty values for the XRR data were generated by MOTOFIT. For a detailed explanation of the method, refer to the Supplementary Info.
The average growth rates were calculated from final film thicknesses determined by XRR, AFM, or a profilometer. At least three measurements were collected using each technique, and the reported uncertainties reflect the spread of values obtained from these measurements. The morphology of the deposited thin films was characterized using SEM (FE-SEM LEO 1525 microscope) and AFM (Veeco Enviroscope). The RMS roughness values were calculated as an average over different areas (at least three) of 5 × 5 μm². For PFM, an oscillating testing voltage of 0.5 V was applied between the tip and the substrate. Hysteresis measurements were obtained using an auxiliary digital-to-analog converter of the computer-controlled lock-in amplifier by sweeping an additional direct current (DC) bias voltage between −30 V and +30 V.
Towards a Sub-percent Precision Measurement of sin²θ13 with Reactor Antineutrinos

Measuring the neutrino mixing parameter sin²θ13 to the sub-percent precision level could be necessary in the next ten years for the precision unitarity test of the PMNS matrix. In this work, we discuss the possibility of such a measurement with reactor antineutrinos. We find that a single liquid scintillator detector on a reasonable scale could achieve the goal. We propose to install a detector of ~10% energy resolution at about 2.0 km from the reactors with a JUNO-like overburden. The integrated luminosity requirement is about 150 kton·GW·year, corresponding to 4 years' operation of a 4 kton detector near a reactor complex of 9.2 GW thermal power like the Taishan reactor. Unlike the previous θ13 experiments with identical near and far detectors, which can suppress the systematics (especially the rate uncertainty) by the near-far relative measurement and whose optimal baseline is at the first oscillation maximum of about 1.8 km, a single-detector measurement prefers to offset the baseline from the oscillation maximum. At low statistics ≲ 10 kton·GW·year, the rate uncertainty dominates the systematics, and the optimal baseline is about 1.3 km. At higher statistics, the spectral shape uncertainty becomes dominant, and the optimal baseline shifts to about 2.0 km. The optimal baseline remains ~2.0 km for an integrated luminosity up to 10⁶ kton·GW·year. We have assumed that the TAO experiment will improve our understanding of the spectral shape uncertainty; it gives the highest-precision measurement of the reactor antineutrino spectrum for neutrino energies in the range of 3-6 MeV. We find that the optimal baseline is ~2.9 km with a flat input spectral shape uncertainty, as provided by a future summation or conversion method's prediction.

Introduction

Since they were first detected in 1956 by Reines and Cowan at the Savannah River reactor, neutrinos have played an inspiring role in particle physics. Many experiments have established that there are three flavors of neutrinos and that they are massive. Neutrinos are created via electroweak interactions in the flavor states νe, νμ, and ντ. The neutrino oscillation phenomenon reveals that the three mass eigenstates ν1,2,3 with masses m1,2,3 are non-degenerate and distinct from the three flavor eigenstates. The Pontecorvo-Maki-Nakagawa-Sakata (PMNS) [1,2] matrix U is proposed to describe the mixing among massive neutrino states, να = U_αi νi, where α indexes flavor states and i indexes mass states. The mixing matrix can be parameterized with three mixing angles, θ12, θ13, and θ23, plus a CP violation phase δCP. If neutrinos are Majorana fermions, there will be two additional phases irrelevant to neutrino oscillation. See Ref. [3] for comprehensive reviews and perspectives of neutrino physics. Table 1 summarizes the current precision estimates of the oscillation parameters and the dominant types of experiments, taken from PDG2020 [4], as well as the projected precision in the near future. Most of them are determined with a precision within a few percent, except for δCP, which is expected to be determined by the next-generation neutrino experiments.
The measurements of sin²θ12 and Δm²21 are currently dominated by the solar neutrino experiments SNO [5] and Super-Kamiokande [6,7], and the long-baseline reactor neutrino experiment KamLAND [8]. The accelerator and atmospheric neutrino experiments explore oscillation physics via the same oscillation channels and are sensitive to the same parameters, Δm²32/Δm²31, sin²θ23, and δCP. The dominant measurements are from the IceCube [9], MINOS [10], NOvA [11], Super-Kamiokande [12], and T2K [13] experiments. The precision of the smallest mixing angle sin²θ13 is dominated by the reactor experiments with baselines at ~O(1) km, including the Daya Bay [14], Double Chooz [15], and RENO [16] experiments.

Table 1. Current relative precision estimates of the oscillation parameters from PDG2020 [4] and the projected precision in the near future with the corresponding data-taking time. The 1σ relative uncertainties and the dominant types of experiments (Exps.) are also listed. The reactor experiments (Rea.) measure sin²θ13 and Δm²21 with the highest precision. The solar experiments (Sol.) contribute substantially to the sin²θ12 and Δm²21 determination. The accelerator (Acc.) and atmospheric experiments (Atm.) dominate the measurements of sin²θ23, Δm²32, and δCP. The estimates for both normal ordering (NO) and inverted ordering (IO) are presented. Table 1 (excerpt): δCP/π (NO, IO) = 1.36 ± 0.17, relative precision 12.5%, dominated by accelerator and atmospheric experiments, projected precision TBD [19,20].

Towards sub-percent precision measurements of the oscillation parameters, Table 1 also lists the prospects for the next ten years. The increasing data volume at the NOvA and T2K experiments will further improve the precision of Δm²31/Δm²32 to about 1% [21,22]. The under-construction medium-baseline reactor neutrino experiment JUNO will start operation in 2023 and can measure sin²θ12, Δm²21, and Δm²31/Δm²32 to the sub-percent precision level within one year [17,23]. The next-generation experiments DUNE [19] and Hyper-Kamiokande [20] are designed to determine the CP-violation phase δCP, the octant of sin²θ23, and the mass ordering. After ten years of data taking, they can measure sin²θ23 to a precision from sub-percent to ~3.4%, depending on the true octant. DUNE and Hyper-Kamiokande will also measure Δm²32/Δm²31 to the sub-percent precision level with channels other than JUNO's. The DUNE measurement of sin²θ13 can also approach the precision of Daya Bay with high exposure [19]. In the next ten years, we can expect sub-percent precision measurements for most oscillation parameters: sin²θ12, sin²θ23, Δm²21, and Δm²32/Δm²31. However, for sin²θ13, no further experiments are under construction for a more precise measurement. The only project under active exploration is SuperChooz [24], which started as a preliminary effort in 2018 and was officialized in 2022. The Daya Bay experiment was shut down in 2020. The precision of sin²θ13 with the full dataset of neutron capture on gadolinium (nGd) is expected to be about 2.7% [14]. At the recent Neutrino 2022 conference, the Daya Bay experiment's latest precision on sin²θ13 using the full nGd data was about 2.9% [18]. Measuring sin²θ13 to a sub-percent precision level would be important for various research fields, including particle physics, astrophysics, and cosmology [3,17].
For example, it will enable more stringent tests of the standard three-flavor neutrino mixing picture, such as probing the unitarity of the PMNS matrix [25][26][27][28][29][30] and exploring physics beyond the standard model. Sub-percent precision knowledge of the leptonic mixing matrix may help reveal its fundamental structure and provide important clues for identifying the theoretical mechanisms behind neutrino mass and mixing generation [31]. It can also reduce the parameter space for searching for leptonic CP violation [32,33] and neutrinoless double beta decay [34][35][36]. Finally, with high-precision oscillation parameters, we can employ neutrinos as a more reliable messenger in probing the deep interiors of astrophysical objects such as the Sun, supernovae, and the Earth.

In this work, we discuss the possibility of measuring sin²θ13 to the sub-percent precision level with reactor antineutrinos via the Inverse Beta Decay (IBD) interaction. The electron antineutrinos (ν̄e) from nuclear reactors are a very powerful source for measuring the θ13 mixing angle: we can collect large statistics and predict the energy spectrum precisely. The analysis methods used in this work are similar to the classical reactor neutrino analyses [37][38][39]. There have been some discussions of the precise measurement of sin²θ13, both before its value was determined [37,40] and after [41]. Unlike those proposals to build multiple detectors for suppressing the reactor-related uncertainties, we find that a single detector can also measure sin²θ13 to sub-percent precision. The same single-detector methodology was implemented in the Double Chooz experiment before its near detector came online [42][43][44], although the sin²θ13 relative precision was poor due to its setup.

This paper is organized as follows. We present the survival probability calculation for ν̄e, the observed spectrum prediction, and the analysis strategy in Sec. 2. In Sec. 3, the numerical results show that, by installing a single 4 kton detector of ~10% energy resolution at a baseline of ~2.0 km from a reactor complex like the Taishan reactor, the experiment could measure sin²θ13 to the sub-percent precision level within about 4 years. We also study the impact of different factors that may increase or decrease the sensitivity in that section. Finally, the summary and conclusions are presented in Sec. 4.

2 Reactor antineutrino detection and statistical analysis

The IBD interaction, ν̄e + p → n + e+, is the typical channel to detect ν̄e in the few-MeV range with liquid scintillator (LS) detectors, owing to its large cross-section. In this reaction, the electron antineutrino interacts with a proton (p) in the LS, creating a positron (e+) and a neutron (n). The e+ takes most of the energy of the original neutrino, quickly deposits its energy, and annihilates into gammas, giving a prompt signal. Thus, the experiment can extract neutrino oscillation parameters by looking at the observed positron energy spectrum. Meanwhile, the neutron is thermalized in the detector and captured by a nucleus, producing a delayed signal. The time, space, and energy correlations between the prompt and delayed signals are powerful for suppressing backgrounds. This section presents the approach to predicting the visible energy spectrum of the reactor antineutrinos with the IBD reaction. We then introduce the statistical method used to calculate the sin²θ13 sensitivity, and we carefully estimate the systematic uncertainties based on the experiences of previous and current experiments.
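The statement that the positron carries most of the neutrino energy follows from the standard IBD kinematics. Neglecting the small neutron recoil, the prompt (visible) energy is the positron kinetic energy plus the two annihilation gammas, i.e., the positron total energy plus one electron mass:

E_prompt ≈ E_e+ + m_e ≈ E_ν̄e − (m_n − m_p) + m_e ≈ E_ν̄e − 0.78 MeV.

This fixed offset is the textbook relation linking the measured prompt spectrum to the neutrino energy used throughout this section.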
Reactor antineutrino spectrum prediction

Commercial nuclear power plants are a "free" and powerful artificial ν̄e source for measuring sin²θ13; they generate electron antineutrinos via the subsequent β decays of the fission products of mainly four isotopes: 235U, 238U, 239Pu, and 241Pu. After creation, the ν̄e propagates in mass eigenstates on the way to the detector and is then detected in the weak eigenstate ν̄e. Under the assumption of three-flavor mixing, the survival probability P(ν̄e → ν̄e) can be expressed as

P(ν̄e → ν̄e) = 1 − sin²2θ13 (cos²θ12 sin²Δ31 + sin²θ12 sin²Δ32) − cos⁴θ13 sin²2θ12 sin²Δ21,  (2.1)

where Δji = Δm²ji L/(4E), L is the baseline, and E is the electron antineutrino energy. Terrestrial matter effects can influence the oscillation pattern [45,46]. For a several-kilometer baseline experiment, the matter effects are relatively small. Nonetheless, we include the matter effects in this work [47,48] with a typical constant matter density ρ = 2.6 g/cm³. The matter effects may distort the survival probability by up to 0.2% in relative terms, with negligible uncertainty.

The visible prompt energy (E_prompt) spectrum from reactor r at detector d at time t can be predicted as

S_d(E_prompt, t) = ε_d N_p,d Σ_r [1/(4π L²_dr)] [W_r(t)/Σ_i f_ir(t) e_i] Σ_i f_ir(t) φ_i(E) σ_tot(E) P_ee(E, L_dr) ⊗ G(E_e+, σ_d),  (2.2)

where ε_d and N_p,d are the detection efficiency and the number of target protons of detector d, respectively. A 12% hydrogen fraction from the Daya Bay experiment [49] is used to calculate the number of target protons for the corresponding target mass. P_ee(E, L_dr) is the ν̄e survival probability at energy E and baseline L_dr, the distance from detector d to reactor r. W_r(t) and f_ir(t) are the thermal power and fission fraction of reactor r at time t. The nuclear reactors release energy by fission reactions, and e_i and φ_i(E) are the energy yield and neutrino spectrum per fission of isotope i, respectively. In this work, we set the average fission fractions to 0.564, 0.076, 0.304, and 0.056 [50] for 235U, 238U, 239Pu, and 241Pu, respectively. The Huber-Mueller model (235U, 239Pu, and 241Pu from Ref. [53]; 238U from Ref. [54]) is a widely used model for calculating the ν̄e energy spectrum of the isotopes. The measurements from reactor neutrino experiments such as Bugey-4 [55], Daya Bay [56], DANSS [57], Double Chooz [15], NEOS [58], Neutrino-4 [59], PROSPECT [60], RENO [61], and STEREO [62] reveal that both the measured neutrino flux and shape are inconsistent with the Huber-Mueller model. Nevertheless, we find that the measured antineutrino flux and spectrum from the Daya Bay experiment [50] and the Huber-Mueller flux model give consistent sin²θ13 sensitivities, as discussed in detail in Sec. 3.5. For simplicity and without losing accuracy, we employ the Huber-Mueller flux model here. The total cross-section of the IBD reaction, σ_tot(E), can be precisely calculated [63][64][65]. The term "⊗ G(E_e+, σ_d)" represents the Gaussian smearing that takes into account the energy resolution σ_d of detector d. The energy resolution is not a key factor for the sin²θ13 measurement at several kilometers; thus, we set the energy resolution of the detector to 10%/√E(MeV). A detector with such a resolution would be sufficiently sensitive to sin²θ13 and not too expensive. Nonetheless, we study the impact of the energy resolution in Sec. 3.6. Given the prompt energy interval [E_prompt,k, E_prompt,k+1], we can calculate the expected number of signals T_d,k as

T_d,k = ∫ over t_DAQ dt ∫ from E_prompt,k to E_prompt,k+1 S_d(E_prompt, t) dE_prompt,  (2.3)

where t_DAQ is the total data-taking time.
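For concreteness, Eq. (2.1) translates directly into a few lines of code. The sketch below implements the vacuum survival probability only (the ~0.2% matter-effect correction is omitted), with placeholder parameter values in the spirit of the PDG numbers rather than the fit inputs used in the analysis.

```python
import numpy as np

def p_ee(E_MeV, L_km, s2_12=0.307, s2_13=0.0218,
         dm2_21=7.53e-5, dm2_31=2.453e-3):
    """Vacuum survival probability of reactor anti-nu_e, Eq. (2.1);
    normal mass ordering is assumed. Delta m^2 in eV^2, E in MeV, L in km."""
    dm2_32 = dm2_31 - dm2_21
    def sin2_delta(dm2):  # sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV])
        return np.sin(1.267 * dm2 * L_km / (E_MeV * 1e-3)) ** 2
    sin2_2x13 = 4.0 * s2_13 * (1.0 - s2_13)
    sin2_2x12 = 4.0 * s2_12 * (1.0 - s2_12)
    cos2_12 = 1.0 - s2_12
    return (1.0
            - sin2_2x13 * (cos2_12 * sin2_delta(dm2_31)
                           + s2_12 * sin2_delta(dm2_32))
            - (1.0 - s2_13) ** 2 * sin2_2x12 * sin2_delta(dm2_21))

print(f"P_ee(4 MeV, 2.0 km) = {p_ee(4.0, 2.0):.3f}")  # ~0.91: near the theta_13 dip
```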
Statistical analysis and systematics

To extract the sin²θ13 sensitivity of the experiment, we first generate a binned Asimov dataset with the nominal setup and the approach described above. Then we fit the pseudo-data with hypotheses using the Poisson-likelihood χ² with nuisance parameters and pull terms to account for the systematic uncertainties [66]:

χ² = 2 Σ_{d,k} [T_d,k − D_d,k + D_d,k ln(D_d,k/T_d,k)] + Σ_{s,d,k} (ε_s,d,k/σ_s,d,k)² + (ε_corr − ⟨ε_corr⟩)ᵀ V⁻¹_corr (ε_corr − ⟨ε_corr⟩),  (2.4)

where D_d,k is the event rate in the k-th energy bin of detector d, and T_d,k is the predicted value given the oscillation parameters θ and the nuisance parameters ε. σ_s,d,k in the pull term represents the estimate of the s-th systematic uncertainty for the k-th energy bin of detector d, and ε_s,d,k is the corresponding uncorrelated nuisance parameter. ε_corr, ⟨ε_corr⟩, and V_corr are the correlated nuisance parameters, their central values, and their covariance matrix, respectively. In this work, we perform the binned analysis using 320 equal bins for prompt energy from 0.8 MeV to 12 MeV. To extract the sensitivity of sin²θ13, we minimize the χ² defined in Eq. (2.4) with respect to all the oscillation parameters and nuisance parameters. For the oscillation parameters, we use the prior central values and uncertainties in Table 1 from PDG2020 [4] to constrain sin²θ12, Δm²21, and Δm²31. The sin²θ13 sensitivity depends only weakly on these parameters, as shown in Sec. 3.4. To find the best location and luminosity requirements of the experiment, we define the 1σ relative precision sensitivity of sin²θ13 as (x_up − x_low)/(2 x_bft), where x_up (x_low) is the upper (lower) bound for the parameter x (i.e., sin²θ13) at the 1σ level, i.e., the value at which the marginalized Δχ²(x) ≡ χ²(x) − χ²_min is equal to 1, and x_bft is the best-fit value of the parameter.
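To illustrate how a pull term tames a normalization nuisance in a Poisson-likelihood fit of the kind in Eq. (2.4), here is a toy sketch with a single overall rate nuisance parameter. It is a schematic illustration only, not the analysis code, and all the numbers are invented.

```python
import numpy as np

def chi2(data, prediction, eps_rate, sigma_rate=0.03):
    """Poisson chi-square with one correlated rate nuisance and its pull term."""
    t = prediction * (1.0 + eps_rate)  # the rate nuisance scales every bin
    d = np.asarray(data, dtype=float)
    stat = 2.0 * np.sum(np.where(d > 0.0, t - d + d * np.log(d / t), t))
    pull = (eps_rate / sigma_rate) ** 2
    return stat + pull

pred = np.array([120.0, 240.0, 180.0, 90.0])  # toy bin predictions
data = np.array([118.0, 251.0, 172.0, 95.0])  # toy observed counts

# Profile the nuisance on a grid (a real analysis would use a minimizer).
grid = np.linspace(-0.1, 0.1, 2001)
print(f"profiled chi2_min = {min(chi2(data, pred, e) for e in grid):.2f}")
```

Because every bin shares the same rate nuisance while the oscillation signature varies across bins, the fit can disentangle a normalization shift from a genuine sin²θ13-driven deficit; this is the mechanism behind the statistics-dependent optimal baseline discussed later.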
The systematic uncertainties are especially important for the precision measurement of sin²θ13 with large statistics. Based on the experiences and prospects of the reactor experiments Daya Bay [14,67,68], Double Chooz [69], KamLAND [70], RENO [16], and JUNO [23], we list the estimates of the systematic uncertainties used in this work in Table 2. We pack the systematics into several groups: rate uncertainty, spectral shape uncertainty, and energy calibration uncertainty. We estimate their values and assess their impact on the sin²θ13 precision measurement sensitivity as follows.

Overall rate uncertainty

The overall event-rate systematic uncertainty contains all the effects that may affect the normalization of the total event number. We set its value to 3% as the quadratic sum of all independent sources. In Eq. (2.4), we assign a nuisance parameter ε_rate with constraint σ_rate to take this uncertainty into account. The major source of the rate uncertainty is the predicted number of antineutrinos yielded by the nuclear reactor. The model prediction uncertainty can be constrained by the absolute reactor neutrino flux measurements of the near detectors of the Daya Bay [56], Double Chooz [15], and RENO [61] experiments, which include the uncertainty of the IBD cross-section; the overall uncertainty is less than 2%. Other short-baseline reactor neutrino experiments, such as Bugey-4 [55] and Rovno 91 [74], also provide precise rate measurements of the reactor neutrino flux [73]. In this work, we put a 2% uncertainty on the flux rate prediction. We also include the rate uncertainties per reactor from the fission fractions (0.6% in total) and the thermal power (0.5%) [15,61,67]. On the detection side, we assume that the uncertainty on the number of target protons is 0.9%, close to that of the Daya Bay experiment [67]. In the pioneering kton-scale LS reactor neutrino experiment KamLAND, the IBD selection efficiency uncertainty is about 2% and is dominated by the fiducial volume cut uncertainty [70]. For a future experiment, we anticipate that more attention will be paid to detector calibration and event reconstruction. These efforts would enable the experiment to control the IBD selection uncertainty to a level close to or better than that of the Daya Bay [67] and JUNO [17] experiments. Nonetheless, we assign a 2% rate uncertainty to the IBD selection.

Spectral shape uncertainty

The spectral shape uncertainty refers to the factors that may distort the spectral shape and are uncorrelated from bin to bin. It is essential for a high-precision measurement. The reactor antineutrino flux shape prediction, based either on the direct measurements of the reactor neutrino experiments [15,50,61] or on the summation and conversion methods [53,54,75,76], has an uncertainty of 2% to more than 5%. In this work, we take the reactor antineutrino spectral shape uncertainty to be constrained by the future short-baseline experiment, which is <1% for a bin width of about 35 keV in most of the signal energy range (2-5 MeV). In Eq. (2.4), we assign each energy bin a nuisance parameter ε_shape,k with constraint σ_shape,k to take this uncertainty into account. The Taishan Antineutrino Observatory (TAO) [71], with a ton-level gadolinium-doped liquid scintillator (GdLS) detector at a baseline of 30 m from a reactor core of the Taishan Nuclear Power Plant (NPP), will start operation in 2023 as a satellite experiment of the JUNO experiment. Thanks to its almost full optical coverage with high photon detection efficiency (>50%) Silicon Photomultipliers (SiPMs), the TAO energy resolution is better than 2% at 1 MeV. After six years of data taking, TAO could measure the reactor spectral shape to a precision of better than 1% in the prompt energy range of 2-5 MeV, with a bin width of about 35 keV to investigate possible fine structure in the spectrum. The reactor ν̄e flux shape constrained by the direct measurement of the TAO experiment is more precise than the current summation (or ab initio) [75,76] or conversion [53,54] methods, and its shape uncertainty distribution is dominated by statistics. By contrast, the relative shape uncertainty distribution of the summation or conversion model prediction is approximately flat in most of the energy range. We find that the relative shape uncertainty distribution has a large impact on the high-precision sin²θ13 measurement; more details are described in Sec. 3.2. We also include the uncertainties of other sources that may distort the spectrum's shape. The antineutrinos from the spent nuclear fuel (~0.3%) and non-equilibrium contributions (~0.6%) are taken into account using the calculation from Ref. [77], with negligible shape uncertainty. We assign both contributions a 30% relative rate uncertainty based on the evaluations in Ref. [78] and Ref. [54]. The background subtraction may also induce spectral shape uncertainty. We do not include specific backgrounds in this work; instead, we assess their impact within the total spectral shape uncertainty. For a detector with a baseline of several kilometers, we assume the overburden for suppressing the cosmogenic backgrounds is close to that of the JUNO experiment (~650 m). We also assume that, with a technology similar to JUNO's [79], the radioactive background would be sufficiently small.
We calculate the reactor antineutrino event rate and roughly estimate the rates of the possible backgrounds after an IBD selection with criteria similar to those of the KamLAND experiment [8]. We find that the relative spectrum uncertainty from the background subtraction should be at the level of 0.1%. The backgrounds considered in the estimates are the radioactive background [79], the cosmogenic background [67,80], and geo-neutrinos and atmospheric neutrinos [81]. The energy detection uncertainties may also distort the spectrum's shape. Their major sources are the energy nonlinearity and the relative energy scale. The spectral shape uncertainty provided by the TAO experiment already takes into account the uncertainty of the LS physics nonlinearity [71]. With a similar LS technology, we assume the LS nonlinearity and its uncertainty in our future experiment are mostly correlated with the TAO experiment. The residual nonlinearity from the SiPM readout system of the TAO experiment can be neglected in the energy range of 1 MeV to 10 MeV [82]. With the novel dual calorimetry and a dedicated calibration strategy, the JUNO experiment can control the instrumental nonlinearity uncertainty to the 0.3% level [72]. This work assumes the same event-level instrumental nonlinearity uncertainty of 0.3% as the JUNO experiment, treated as fully uncorrelated with the TAO experiment. We assign each energy bin a nuisance parameter ε_RNL,k to take the residual nonlinearity uncertainty into account. The covariance matrix V_RNL for these parameters is evaluated with 10,000 toy MC simulations for each experimental setup.

Energy scale uncertainty

The relative energy scale uncertainty can be controlled by a proper calibration strategy, as verified by the Daya Bay experiment [68]. This work assumes the relative energy scale calibration uncertainty to be 0.5%, which is close to the expected value of the coming large detector JUNO [72]. To account for this uncertainty, we assign a nuisance parameter ε_calib with constraint σ_calib. In predicting the observed energy spectrum of Eq. (2.2), we analytically replace the expected energy E_0 by (1 + ε_calib)·E_0, on which the energy resolution is defined.

3 Sub-percent precision measurement of sin²θ13

In this work, we quote the sin²θ13 precision measurement sensitivity for different statistics using the integrated luminosity [kton·GW·year] under the assumption of a single reactor. Take the Daya Bay experiment [83] as an example. The reactor thermal power is 17.4 GW. The total target mass of the four far detectors is 80 tons. Thus, the total integrated luminosity is about 12.5 kton·GW·year from the start of data taking in 2011 to the shutdown in 2020.
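As a quick arithmetic check of the quoted figure, the Daya Bay numbers above multiply out consistently, taking roughly nine years of running between 2011 and 2020:

L ≈ 0.080 kton × 17.4 GW × 9 yr ≈ 12.5 kton·GW·year.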
Between the two detectors, we set a 0.2% uncorrelated rate uncertainty, a 0.1% uncorrelated spectral shape uncertainty, and a 0.1% uncorrelated energy calibration uncertainty. We find the optimal baseline is ∼1.8 km, which is well known and consistent with Ref. [37]. With the optimal baseline, we find that identical near and far detectors could measure sin²θ₁₃ to the sub-percent precision level with a luminosity of ∼50 kton·GW·year. Considering the baseline spread of a real reactor complex, the required luminosity for a sub-percent precision measurement of sin²θ₁₃ could be larger. At the same time, based on the experience of the Daya Bay experiment [38], the cost of identical near and far detectors could instead be used to build a roughly four times larger single far detector. Besides, if we were to build an experiment with identical near and far detectors, building a kton-level near detector at a very short baseline with no visibility of the oscillation effect seems less feasible. Another strategy is to build non-identical near and far detectors; one example is the recently proposed SuperChooz experiment [24]. For non-identical detectors, the systematic correlations and physics potential are similar to the proposals of using very short baseline experiments as effective near detectors. To measure sin²θ₁₃ to sub-percent precision efficiently, for the reasons listed above, we discuss only the feasibility of building a single detector alone or of employing the TAO experiment as an effective near detector. The latter strategy is attractive since we can make use of the under-construction TAO experiment and save the budget of building a new near detector. With the spectrum prediction and systematic uncertainty estimation of Sec. 2, we numerically calculate the precision sensitivity of sin²θ₁₃ to find the best choice of baseline and other configurations. Then, we propose the nominal setup and discuss the impact of various factors on the sensitivity. Fig. 1 shows the unoscillated measurable antineutrino energy spectrum (flux multiplied by the total IBD cross-section) at 1 km, the relative spectral shape uncertainty constrained by the future TAO experiment [71], together with the ν̄e disappearance probability at different baselines. The ν̄e→ν̄e disappearance amplitude encodes the oscillation parameter sin²θ₁₃; thus, a high-precision measurement of sin²θ₁₃ requires low uncertainties around the energy of the disappearance probability peak. The green line in the figure shows that the direct measurement can constrain the shape uncertainty to a sub-percent level for neutrino energies of 3-6 MeV [71]. Due to the limited statistics, however, the relative uncertainties are large at the low- and high-energy ends. Thus, as discussed in Sec. 2.2, if future summation or conversion methods could predict the reactor antineutrino flux consistently with the measurement of the TAO experiment, the spectral shape uncertainty of their combined prediction would be better than 1%. Therefore, hereafter we set a 1% relative spectral shape uncertainty to study the optimal baseline of a future sin²θ₁₃ measurement experiment.

Figure 1. The unoscillated measurable reactor antineutrino energy spectrum φ×σ_tot(Eν̄e) (flux multiplied by the total IBD cross-section, in units of ×10⁵ per kton·GW·year) at 1 km and the ν̄e disappearance probability 1−Pee at different baselines. For a single detector, the rate uncertainty would mimic the oscillation pattern of sin²θ₁₃/Δm²₃₁ when the oscillation maximum is at the peak of the antineutrino spectrum. The relative spectral shape uncertainty constrained by the TAO experiment (green line, same scale as the left y-axis) [71] shows that the most precise neutrino energy region is 3-6 MeV.
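The disappearance probability plotted in Fig. 1 is straightforward to reproduce. The sketch below is illustrative only: it uses vacuum three-flavor oscillations with assumed PDG-like central values (not the paper's exact inputs) and scans the baseline at a representative 4 MeV, finding the θ₁₃-driven disappearance maximum near 2 km at this energy, consistent with the ∼1.8 km spectrum-averaged first oscillation maximum quoted above.

```python
import numpy as np

# Three-flavor reactor-antineutrino survival probability in vacuum (sketch).
# Oscillation parameters: assumed PDG-like central values, not fit results.
S12, S13 = 0.307, 0.0218          # sin^2(theta_12), sin^2(theta_13)
DM21, DM31 = 7.53e-5, 2.53e-3     # mass splittings, eV^2

def p_ee(L_km, E_MeV):
    """Survival probability; phases use 1.267 * dm2[eV^2] * L[km] / E[GeV]."""
    d21 = 1.267 * DM21 * L_km * 1e3 / E_MeV
    d31 = 1.267 * DM31 * L_km * 1e3 / E_MeV
    d32 = 1.267 * (DM31 - DM21) * L_km * 1e3 / E_MeV
    c13_sq = 1.0 - S13
    return (1.0
            - 4 * S13 * c13_sq * ((1 - S12) * np.sin(d31)**2
                                  + S12 * np.sin(d32)**2)
            - c13_sq**2 * 4 * S12 * (1 - S12) * np.sin(d21)**2)

E = 4.0  # MeV, near the peak of the detectable spectrum
baselines = np.linspace(0.5, 3.5, 301)
dis = 1.0 - p_ee(baselines, E)
print("max disappearance at L = %.2f km" % baselines[np.argmax(dis)])
for L in (1.3, 1.8, 2.0, 2.9):
    print("L = %.1f km : 1 - Pee = %.4f" % (L, 1.0 - p_ee(L, E)))
```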
Experiment setup and precision measurement sensitivity

With a 1% spectral shape uncertainty, Fig. 2 shows the 1σ contour of the precision measurement sensitivity on sin²θ₁₃ for different baselines and integrated luminosities. The optimal baseline for the most efficient sin²θ₁₃ precision measurement varies as the integrated luminosity increases. The top pad of Fig. 2 shows that the optimal baseline is about 1.3 km for low statistics (60 kton·GW·year) and gradually shifts to about 2.9 km as the statistics increase. At ∼2.9 km, the experiment could measure sin²θ₁₃ to a sub-percent precision level with a ∼10⁴ kton·GW·year luminosity. Besides, we find that the precision is limited to the ∼1% level by the spectral shape uncertainty, as the sensitivity does not improve with increasing luminosity beyond ∼10⁴ kton·GW·year. With such a spectral shape uncertainty, a single detector at the first oscillation maximum of ∼1.8 km would be unable to measure sin²θ₁₃ to a sub-percent precision level. This baseline preference differs from that of reactor experiments with identical near and far detectors [37], whose optimal baseline is about 1.8 km, the first oscillation maximum for reactor antineutrinos. The major difference is that the 3% relative rate uncertainty is the dominant systematics for the single-detector strategy, whereas it is suppressed to ∼0.1% for the identical near and far detector configuration. The oscillation parameter sin²θ₁₃ characterizes the disappearance amplitude, which can be mimicked by the rate uncertainty nuisance parameter when the disappearance maximum sits at the peak energy of the measurable reactor antineutrino spectrum. The offset between the oscillation maximum and the unoscillated measurable antineutrino energy spectrum is shown in Fig. 1; the rate uncertainty is important for a single detector at low statistics. As the statistics increase, the spectral shape distortion contributes more and more sensitivity, since each energy bin has enough statistics to reflect the oscillation effect, and all bins share the same absolute rate uncertainty. The correlation among different bins thus suppresses the impact of the rate uncertainty at high statistics. Shifting the baseline away from the rate oscillation maximum further reduces the impact of the rate uncertainty by offsetting the oscillation-maximum energy bin from the peak of the reactor neutrino spectrum. At larger baselines, there are more oscillation cycles in the measured spectrum; thus, the error cancellation due to the correlation among different bins is enhanced.

Figure 2. The luminosities are arbitrarily selected to present the evolution of the baseline preference. The optimal baseline shifts from ∼1.3 km at low statistics to ∼2.9 km at high statistics. The ultimate precision of the Daya Bay experiment is also shown with a horizontal blue dash-dotted line. The bottom pad shows the required luminosity to reach a given precision, from 2.9% to 1.0% as labeled on the curves, at different baselines. The spectral shape uncertainty is set to 1% for a bin width of 35 keV.
Furthermore, the multiple-cycle measurement helps to reduce energy-correlated uncertainties. The optimal baseline results from the balance between systematic uncertainty suppression and statistics loss. With the flat spectral shape uncertainty given by a future nuclear-theory prediction, the optimal baseline remains ∼2.9 km. We find that the optimal baseline is the same for different input shape uncertainty values (0.1%, 0.5%, 1%, 2%, 5%), and that a 1% shape uncertainty would be the dominant systematics for luminosities larger than 20 kton·GW·year. To verify the impact of the spectral shape uncertainty, we numerically calculate the sensitivity with different input shape uncertainties for an experiment with a luminosity of 10⁴ kton·GW·year at 2.9 km. Fig. 3 shows the sin²θ₁₃ precision measurement sensitivity for different input shape uncertainties. It shows that the shape uncertainties are the bottleneck of the high-precision measurement of sin²θ₁₃.

Figure 3. The sin²θ₁₃ precision sensitivity for different input spectral shape uncertainties. The x-axis is the spectral relative shape uncertainty in %. The baseline is 2.9 km, and the luminosity is 10⁴ kton·GW·year. The sin²θ₁₃ precision sensitivity is almost linearly proportional to the spectral shape uncertainty.

The impact of the shape uncertainty distribution

At large luminosity, the dominant uncertainty is the spectral shape uncertainty; thus, the optimal baseline depends strongly on the energy distribution of the shape uncertainty. In the discussion above, we used a 1% relative spectral shape uncertainty. In the near future, after the TAO experiment starts running, we can use the direct measurement of the reactor ν̄e flux to constrain the spectral shape uncertainty. The baseline preference will be different with such a spectral shape uncertainty setup. Fig. 4 shows the 1σ contour of the precision measurement sensitivity on sin²θ₁₃ using the TAO measurement as an external spectral shape uncertainty constraint. The optimal baseline is about 1.3 km for low statistics (10 kton·GW·year) and shifts to about 2.0 km as the statistics increase. With a luminosity of ∼150 kton·GW·year, the experiment could measure sin²θ₁₃ to a sub-percent precision level. With the TAO-based spectral shape uncertainty, the required luminosity for measuring sin²θ₁₃ to 1% precision is less than with the flat 1% assumption. The reason is that the TAO-based uncertainty is <1% for Eν ∈ (3, 6) MeV, which covers the peak of the unoscillated reactor antineutrino spectrum and thus the largest statistics. At 2 km, the oscillation maximum is at the same energy as the peak of the unoscillated measurable reactor antineutrino energy spectrum, so more information on the sin²θ₁₃ oscillation can be extracted with the spectral shape uncertainty constrained by the TAO experiment. In contrast, with the 1% flat uncertainty model, the absolute spectral shape uncertainty is proportional to the statistics of the unoscillated measurable reactor antineutrino spectrum at all energies. Larger statistics bring a larger absolute spectral shape uncertainty; thus, the optimal baseline in Fig. 2 is 2.9 km, offsetting the oscillation maximum from the spectrum's peak.

Systematics breakdown and sub-percent precision

Based on Fig. 2 and Fig. 4, the most efficient baseline for a sub-percent measurement of sin²θ₁₃ is about 2.0 km.
Thus, we set the nominal baseline to 2.0 km and study the impact of the different systematic uncertainties. Fig. 5 shows the breakdown of the statistical and systematic uncertainties for the precision measurement sensitivity of sin²θ₁₃ at different luminosities. It helps us to identify the most important systematic uncertainties at different luminosities. The rate uncertainty is the dominant systematics at low luminosity (below ∼60 kton·GW·year); then the shape uncertainty becomes dominant. The impact of the energy scale calibration uncertainty is negligible at low statistics and has only a minor impact as the luminosity increases. The shape uncertainty completely dominates the sin²θ₁₃ precision measurement for luminosities larger than ∼10³ kton·GW·year. With an integrated luminosity of about 150 kton·GW·year, the experiment can measure sin²θ₁₃ to sub-percent precision. One feasible design is to install a 4 kton liquid scintillator detector with 10% energy resolution near a reactor complex like the Taishan reactor, whose thermal power is about 9.2 GW. With such a setup, considering a typical >80% signal selection efficiency and an 11/12 reactor duty cycle [67], the experiment could measure sin²θ₁₃ to the sub-percent precision level within four years.

Figure 5. The rate uncertainty is the dominant systematics at low statistics for the single-detector strategy. With an integrated luminosity larger than ∼60 kton·GW·year, the shape uncertainty gradually dominates the precision measurement of sin²θ₁₃. The ultimate precision sensitivity is approximately 0.8%, limited by the spectral shape uncertainty.

The spectral shape uncertainty is crucial for the sub-percent precision measurement of sin²θ₁₃. The shape uncertainties given by different methods have an energy distribution close to either the TAO-based model (direct measurement) or the flat 1% model (theoretical calculation). We calculate the sin²θ₁₃ precision sensitivity by multiplying the TAO uncertainty curve by different factors (>1). The optimal baseline is the same for the different factors, and the precision sensitivity has a dependence on the shape uncertainty similar to Fig. 3. The results show that with a spectral shape uncertainty 1.3 times as large as the TAO uncertainty curve, the single-detector strategy's ultimate precision sensitivity would be worse than 1%. This 30% buffer in the shape uncertainty provides ample margin for a future experiment to measure sin²θ₁₃ to sub-percent precision.

The impact of the oscillation parameters

The optimal baseline (∼2.0 km) we propose in this work depends on the central values of the oscillation parameters and on the reactor antineutrino energy spectrum. Thus, we generate several Asimov data sets with different oscillation parameter values and mass orderings to assess the impact. The results using the nominal luminosity of 150 kton·GW·year are shown in Fig. 6, where we always update Δm²₃₂ correspondingly, assuming Δm²₃₂ = Δm²₃₁ − Δm²₂₁.

Figure 6. The sin²θ₁₃ precision measurement sensitivity for Asimov pseudo-data generated with the oscillation parameters shifted from PDG2020 [4]. The green and orange lines lie on top of each other since the roles of the sin²θ₁₂ and Δm²₂₁ true values are almost the same for the sin²θ₁₃ measurement. The shaded area represents the 1σ region given in PDG2020. The normal and inverted ordering hypotheses yield almost the same results.
It can be seen that the true value of sin²θ₁₃ has more impact than all other oscillation parameters, and that the sin²θ₁₃ measurement is almost independent of the true values of the solar oscillation parameters sin²θ₁₂ and Δm²₂₁ at such a baseline. The sensitivity depends only weakly on the true value of Δm²₃₁. As expected, the relative precision sensitivity is almost inversely proportional to the true value of sin²θ₁₃. This dependence shows that the ability of the detector to locate the absolute value of sin²θ₁₃ is robust. The true value of sin²θ₁₃ matters for the sub-percent relative precision measurement: if the true value is smaller, the detector will have to take more data to reach sub-percent precision. The central value estimated by PDG2020 [4] is 0.0218, and recent results from global-fit groups yield a central value of 0.0223 (∼2% larger) [84-86]. With a larger true value, the experiment can measure sin²θ₁₃ to the sub-percent precision level with fewer statistics. In Sec. 2.2, we constrain sin²θ₁₂, Δm²₂₁, and Δm²₃₁ to the PDG2020 central values and 1σ uncertainties by adding a pull term, defined in Eq. (3.1) as χ²_pull = Σ_θ [(θ − θ₀)/σ_θ]², where θ runs over the oscillation parameters sin²θ₁₂, Δm²₂₁, and Δm²₃₁, and θ₀ and σ_θ are their central values and 1σ uncertainties. With the nominal setup, we find that the sin²θ₁₃ precision sensitivity remains almost unchanged (relative difference within 0.2%) whether we constrain, fix, or free Δm²₃₁. In fact, the experiment can also measure Δm²₃₁ to a precision of ∼0.6% with a 150 kton·GW·year luminosity. As a future experiment, we can anticipate future external information on the oscillation parameters. With sin²θ₁₂, Δm²₂₁, and Δm²₃₁ constrained to the projected relative precisions listed in Table 1, the sin²θ₁₃ precision sensitivity would be slightly better (relative difference ∼0.5%). When we fix all other oscillation parameters, sin²θ₁₂, Δm²₂₁, and Δm²₃₁, the sin²θ₁₃ precision sensitivity is almost the same as with the projected relative precisions (relative difference within 0.1%). The experiment can therefore measure sin²θ₁₃ to the sub-percent precision level with the other oscillation parameters fixed or constrained by external information. However, since this detector is not designed for sin²θ₁₂ and Δm²₂₁ measurements, when we free all other oscillation parameters the nominal sensitivity degrades from 1% to ∼2.6%. Thus, for a high-precision sin²θ₁₃ measurement experiment, using external information helps accomplish the major physics goal.

The impact of the reactor antineutrino anomaly and excess

As shown in Fig. 1, the offset of the optimal baseline depends on the unoscillated measurable reactor ν̄e energy spectrum. In Sec. 2.1, we use the Huber-Mueller model [53, 54] as the nominal flux model to predict the observed reactor antineutrino energy spectrum. However, a ∼6% absolute flux deficit with respect to the Huber-Mueller model has been observed by many reactor neutrino experiments [15, 55-62], with the leading evidence provided by the Daya Bay [56], Double Chooz [15], and RENO [61] experiments. This deficit is the so-called reactor antineutrino anomaly. Besides, the experiments also find an excess around 5 MeV in the spectral shape, called the "5 MeV excess" [3].
We can use the flux measured by previous reactor neutrino experiments to explore the impact of the reactor antineutrino anomaly and excess. This work employs the unfolded isotope flux from the Daya Bay experiment given in Ref. [50], which naturally includes both effects. To study the flux-model dependence of the sensitivity, we switch the flux calculation of all isotopes in Eq. (2.2) from Σᵢ fᵢ φᵢ(Eν̄) to Eq. (8) of Ref. [50]. With all other settings, including the systematics, the same as in Sec. 2, Table 3 gives the 1σ precision sensitivity of sin²θ₁₃ with the different isotope flux models. The reactor antineutrino anomaly and the "5 MeV excess" have a minor impact on the sub-percent precision measurement of sin²θ₁₃. The latter effect means there are more events at neutrino energies around 6 MeV, which helps the measurement by providing more statistics at this energy. As shown in Fig. 1, more statistics around 6 MeV slightly improve the sensitivity, as shown in Table 3. The Double Chooz experiment demonstrated a similar conclusion using its single-detector configuration data [15]. With the existing measurements and the coming high-precision measurement from TAO as input, the reactor antineutrino anomalies will not bias the sub-percent measurement of sin²θ₁₃.

Table 3. The sin²θ₁₃ 1σ precision sensitivity (in %) under different treatments of the uncertainties, with the Huber-Mueller flux model [53, 54] and the Daya Bay unfolded flux model. The latter naturally includes the reactor antineutrino anomaly and the "5 MeV excess". The 6% deficit reduces the statistics, while the "5 MeV excess" slightly benefits the sin²θ₁₃ precision measurement. All results are for the detector installed at 2.0 km from the reactor and a 150 kton·GW·year luminosity.

The impact of the detector performance

In Sec. 2.1, we assume the detector's energy resolution to be 10%. High energy resolution is usually expensive and technically challenging for a large-volume liquid scintillator detector. Thus, with the nominal setup, we study the sin²θ₁₃ precision measurement sensitivity for a detector with different energy resolutions. As shown in Fig. 7, the relative variation of the sensitivity is within 5% for energy resolutions from 3%/√E(MeV) to 15%/√E(MeV). In general, the energy resolution is not a key factor for the sin²θ₁₃ precision measurement at a baseline of ∼2.0 km; a 10% energy resolution is sufficient for measuring sin²θ₁₃ to the sub-percent precision level. For a detector with a volume at the kton level, we assume that a high-accuracy energy scale calibration (uncertainty <0.5%) can be achieved, as in the Daya Bay experiment [68] and in the prospects of the JUNO experiment [72]. Nonetheless, even if the energy scale uncertainty is at the 1% level, the sin²θ₁₃ precision measurement sensitivity remains almost the same (relative difference <0.1%). This result is consistent with the calibration uncertainty contribution we observe in Fig. 5.

Conclusion

For measuring sin²θ₁₃ to sub-percent precision, the crucial requirements are the statistics, the baseline, and the control of the spectral shape uncertainty. We perform a numerical calculation of the sin²θ₁₃ precision measurement sensitivity and find that the optimal baselines for a single liquid scintillator detector setup differ from those for an identical near and far detector setup.
The latter setup can suppress the rate uncertainties through the near-far relative measurement, and its optimal baseline is about 1.8 km. The optimal baseline for the former setup is about 1.3 km at low luminosities (10 kton·GW·year), where the dominant systematics is the rate uncertainty. For larger statistics, as the shape uncertainty becomes dominant, the optimal baseline shifts to about 2.0 km and remains there for integrated luminosities up to 10⁶ kton·GW·year. The reason is that sin²θ₁₃ characterizes the disappearance amplitude; thus, the rate uncertainty plays an important role in the measurement when the disappearance maximum is at the peak of the unoscillated antineutrino energy spectrum. For a single-detector experiment with a large rate uncertainty, the optimal baseline shifts away from the baseline of the maximum rate oscillation. With the spectral shape uncertainty constrained by the TAO experiment, a single liquid scintillator detector at a baseline of ∼2.0 km with a JUNO-like overburden could measure sin²θ₁₃ to the sub-percent precision level within a 150 kton·GW·year integrated luminosity. The energy resolution is not a key factor for an experiment at a baseline of several kilometers. Thus, we propose to install a single 4 kton detector with 10% energy resolution at ∼2.0 km from a 9.2 GW reactor complex like the Taishan reactor. An experiment with such a setup could measure sin²θ₁₃ to the sub-percent precision level within four years. Various factors that may increase or decrease the sensitivity are discussed; the dominant ones are the reactor antineutrino spectral shape uncertainty and the true value of sin²θ₁₃. With the flat relative spectral shape uncertainty given by a future nuclear-theory prediction, the optimal baseline is 2.9 km at larger luminosities. The relative shape uncertainty is the same at all energies in this model; thus, the optimal baseline shifts further out, away from the 1.8 km first oscillation maximum, to offset the oscillation maximum from the peak of the reactor neutrino spectrum. The detector's performance in energy resolution and energy scale uncertainty has a minor impact on the sensitivity. Since we set a 0.1% spectral shape uncertainty for the background subtraction, the experiment should maintain good control of the backgrounds, such as natural radioactivity and cosmogenic backgrounds.
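A quick numerical cross-check of the integrated-luminosity bookkeeping used throughout this analysis (a sketch; the detector masses, reactor powers, and live-time factors are the values quoted in the text, and folding efficiency and duty cycle into the exposure is our own illustrative choice):

```python
# Integrated luminosity [kton*GW*year] = target mass x thermal power x time.
def luminosity(mass_kton, power_gw, years, duty=1.0, eff=1.0):
    """Exposure bookkeeping; duty and eff optionally fold in the reactor
    duty cycle and the signal selection efficiency."""
    return mass_kton * power_gw * years * duty * eff

# Daya Bay benchmark: 80 t far-detector target mass, 17.4 GW, 2011-2020 (~9 yr).
print(luminosity(0.080, 17.4, 9.0))        # ~12.5 kton*GW*year, as quoted

# Proposed setup: 4 kton detector, 9.2 GW Taishan-like complex, 4 years.
print(luminosity(4.0, 9.2, 4.0))           # ~147, i.e. the ~150 quoted

# Folding in an assumed ~85% selection efficiency and an 11/12 duty cycle
# shows why somewhat more calendar time yields the same effective exposure.
print(luminosity(4.0, 9.2, 4.0, duty=11/12, eff=0.85))
```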
2022-07-01T01:15:43.800Z
2022-06-30T00:00:00.000
{ "year": 2022, "sha1": "1c537301c279d5e7d2957abd42d05584554703d1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c7fcdd05fff7c5a1ecfc89b1e6254461bbbf9369", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
56110642
pes2o/s2orc
v3-fos-license
Advanced search for the extension of unresolved TeV sources with H.E.S.S.: First measurement of the extension of the Crab nebula at TeV energies

The resolution power of current Imaging Atmospheric Cherenkov Telescopes is presently restricted to scales of a few arcminutes. In the very-high-energy (VHE; E > 100 GeV) gamma-ray regime, the measurement of source sizes that are comparable to or smaller than the resolution of the instrument is usually limited by statistics and in particular by the uncertainties in the characterisation of the instrument Point Spread Function (PSF). The PSF varies strongly with observation and instrument conditions, demanding time-dependent simulations of these conditions. Employing such simulations, we substantially improve our understanding of the H.E.S.S. PSF and are now able to probe source extensions well below the one-arcminute scale. We present the results of this new approach applied to known VHE gamma-ray sources and show how it enables us to reveal for the first time the extension of the Crab nebula at TeV gamma-ray energies, with a width of 52 arcsec assuming a Gaussian source shape.

Instrument and Dataset

H.E.S.S. is an array of five Imaging Atmospheric Cherenkov Telescopes (IACTs) located in the Khomas Highlands of Namibia. The Crab nebula has been observed with the H.E.S.S. telescopes since the beginning of operations in 2004. For the measurement presented here we only use observations with the four IACTs of the first phase of H.E.S.S. In addition to the standard run-quality selection criteria [1], we apply additional cuts, e.g. on the maximum wind speed (<3 m/s), the observation wobble offset (<0.8°), and the zenith angle of the observations (<55°), to define a high-quality dataset for extension measurements. The resulting Crab nebula dataset consists of observations performed between February 2004 and November 2011, amounting to a total dead-time-corrected live time of 25.7 h. Since the Crab nebula is a Northern source, the typical observation zenith angles at the H.E.S.S. site in the Southern hemisphere are 45°-50°.

Analysis and Results

The data analysis is performed using semi-analytical air-shower templates [2]. To further improve the angular resolution, we use a tight cut on the direction reconstruction uncertainty for the event selection. Furthermore, we demand energies of E > 0.7 TeV to eliminate potential systematic effects near the energy threshold of the dataset¹. We detect the Crab nebula with a statistical significance of 137σ (using Eq. 17 from [3]) and obtain a signal-to-background ratio of 58 within 0.1° around the source position. We find a total number of about 4600 excess events, the majority of them with energies between the lower cut of 0.7 TeV and 10 TeV. We show the resulting distribution of events as a histogram in squared angular distance (ϑ²) in the top panel of Fig. 1. The value of ϑ² is calculated with respect to the centroid of the gamma-ray excess count distribution in the sky. The centroid position in equatorial coordinates (J2000) is α = 5h34m30.9s ± (1.2s)stat ± (20s)sys, δ = +22°00′44.5″ ± (1.1″)stat ± (20″)sys (systematic error from [4]), at a distance of 16.8″ from, and within uncertainties compatible with, the Crab pulsar location. Dedicated run-wise Monte Carlo (MC) simulations of the dataset, including the actual instrument and observation conditions at the time the data were recorded, are generated as described in [5].
We re-weight the simulations to mimic the energy spectrum of the Crab nebula and analyse them with the same algorithms and analysis configurations as the actual data. The resulting ϑ² histogram of this MC analysis serves as the PSF for this source and dataset and is shown in the upper panel of Fig. 1. The 68%, 80%, and 90% containment radii of our PSF are 0.05°, 0.07°, and 0.09°, respectively. As shown in Fig. 1, the PSF is highly inconsistent with the distribution of the gamma-ray excess counts. The residuals in the lower panel of Fig. 1 indicate a clear broadening of the data compared to the simulated PSF. To study this further, we perform a 2D morphology fit with Sherpa [6], using the sky images of gamma-ray-like events around the Crab nebula, of gamma-ray-like background events estimated from a ring well outside the source [7], as well as the simulated PSF. To determine the best fit, we convolve the PSF with 2D Gaussians of different widths. For each width, we calculate a likelihood value to assess the compatibility of the data and the convolved PSF. We find the best-fit extension to be σ_2D,G = 52.2″ ± (2.9″)stat ± (7.8″)sys, with a preference for an extension of the Crab nebula over a point-source assumption of TS ≈ 83². As the systematic uncertainty of the extension we quote the quadratic sum of uncertainties related to the calibration and analysis method, to the spectral shape used to re-weight the MC PSF, and to the fit method. The resulting best-fit convolution is also plotted in Fig. 1. It clearly provides a good description of the data, both in the upper panel and in the residuals in the lower panel. To demonstrate the robustness of our results, we apply the same analysis using time-dependent simulations to two other bright and highly significant extragalactic gamma-ray sources, the active galactic nuclei PKS 2155-304 and Markarian 421. As illustrated in Fig. 2, we find both sources compatible with being point-like and show upper limits on their extension. In both cases, these limits are well below the measured extension of the Crab nebula. We note that Markarian 421 culminates at large zenith angles of θ > 60° at the H.E.S.S. site, making this source a particularly convincing test of our PSF understanding: even under such challenging observation conditions, Markarian 421 appears to be point-like. As we also show in Fig. 2, we tested the Crab nebula dataset for a zenith-angle dependence by splitting the observations into two datasets above and below 46°. The measured extensions are compatible with each other. Also shown in Fig. 2 is the extension of the Crab nebula cross-checked with an independent calibration, reconstruction, and analysis method [8]. We find this second extension measurement slightly larger than our nominal value, and incorporate the difference as one of our systematic uncertainties mentioned above. Our VHE gamma-ray extension of the Crab nebula is compared to the morphology found at UV wavelengths and X-ray energies in the left and right panels of Fig. 3, respectively. The H.E.S.S. extension covers a good fraction of the optical nebula. Comparing the TeV gamma-ray emission from inverse Compton scattering to the keV synchrotron X-ray emission, we find that the nebula measured with H.E.S.S. is significantly larger than when measured with Chandra. This result is naturally explained by the radiation cooling of electrons.

² TS is the likelihood ratio test statistic and √TS can be interpreted as a statistical significance.
The Crab nebula size therefore decreases with increasing electron energy, and the energies of the electrons producing the TeV gamma rays are well below those of the electrons emitting the hard X-rays measured by Chandra. The measured size of the TeV gamma-ray nebula can be reproduced within the standard magnetohydrodynamic model of Kennel and Coroniti [12, 13], assuming a magnetisation parameter σ ≈ 0.01.

Conclusions

Here we document the ability of the H.E.S.S. IACT array to robustly measure VHE gamma-ray source extensions down to 30″-40″. The performance boost provided by using time-dependent simulations allows us to resolve for the first time the inverse Compton component of the Crab nebula. The emission region size we find is well below the previously most constraining upper limit of 1.5′ [11] and is determined with a high accuracy of ≈15%. Compared to the synchrotron Crab nebula seen in keV X-rays, the VHE gamma-ray emission region is clearly more extended.
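A toy version of the extension fit described above, reduced to one radial dimension: the sketch below assumes both the PSF and the source are symmetric 2D Gaussians, so the convolved width is the quadrature sum and the ϑ² histogram has a closed-form expectation; the bin counts are Asimov-style pseudo-data built from the numbers quoted in the text, not real H.E.S.S. data.

```python
import numpy as np

# PSF: 68% containment radius 0.05 deg = 180"; for a symmetric 2D Gaussian,
# r68 = sigma * sqrt(2*ln(1/0.32)), giving sigma_psf ~ 119".
sigma_psf = 180.0 / np.sqrt(2.0 * np.log(1.0 / 0.32))
sigma_src_true = 52.2          # assumed true extension, arcsec
n_excess = 4600.0              # total excess events, as quoted in the text

edges = np.linspace(0.0, 600.0**2, 41)     # theta^2 bin edges, arcsec^2

def expected(sigma_src):
    """Expected counts per theta^2 bin for PSF (x) Gaussian source.
    For Gaussians the convolution is Gaussian; widths add in quadrature."""
    s2 = sigma_psf**2 + sigma_src**2
    cdf = 1.0 - np.exp(-edges / (2.0 * s2))  # 2D-Gaussian containment in theta^2
    return n_excess * np.diff(cdf)

data = expected(sigma_src_true)              # Asimov pseudo-data

def cash(sigma_src):
    mu = expected(sigma_src)
    return 2.0 * np.sum(mu - data * np.log(mu))  # Poisson NLL, up to a constant

grid = np.linspace(0.0, 120.0, 481)
stats = np.array([cash(s) for s in grid])
best = grid[np.argmin(stats)]
ts_point = cash(0.0) - stats.min()
print("best-fit sigma_src = %.1f arcsec, TS(point) = %.1f" % (best, ts_point))
```

Because the toy ignores the background and the detailed PSF shape, its TS value is only indicative; the quadrature addition of widths holds only under the Gaussian-PSF assumption.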
2017-07-13T16:13:36.000Z
2017-07-13T00:00:00.000
{ "year": 2017, "sha1": "aaba61711293eea0d129dd6b7549313aed59d562", "oa_license": "CCBYNCND", "oa_url": "https://pos.sissa.it/301/676/pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "cc12e4d1e70e8a6652333b5b6830654fd99a85aa", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
126132666
pes2o/s2orc
v3-fos-license
STRUCTURAL BEHAVIOUR OF WELDED STEEL BEAM-COLUMN CONNECTIONS UNDER CONCENTRATED LOAD.

Yassin Ali Ibrahim. This paper consists of an experimental study of a solid cantilever steel beam with a span of 1000 mm and a rectangular cross-section of 75 mm x 20 mm; the beam is connected to a solid column by 6 mm arc welding. A strain gauge and strain indicator were used to measure strains, and a dial gauge to measure deflections. The experimental results for the bending stresses and deflections are compared with the results from the theoretical Macaulay Method and the computer program Strand7. The agreement between the experimental and Strand7 results is found to be satisfactory for both deflection and bending stress. For the theoretical results, there is good agreement of the bending stress values with the experimental values. Owing to the absence of shear and torsional forces in the Macaulay Method, there are high percentage differences between the Macaulay Method deflection values and both the experimental and Strand7 values.

Introduction:-

Welding is a process of connecting pieces of metal together by the application of heat, with or without pressure. For welding structural steel work, the most commonly used technique is the electric arc process, in which the pieces of metal to be joined are fused together by an electric arc, with additional metal being added at the same time from a rod or wire (1). Dimitrakis (2) used a specially designed part, termed a stress diffuser, with beam-column connections in which the beam web and bottom flange are separated near the weld access hole; it was found to eliminate the high stress concentration at the weld access hole, to reduce the high degree of local bending in the beam flange by 49%, and to reduce the maximum principal stress at 2.54 mm from the column face by 31%. The weld access hole is a structural technique that allows welding the flanges of T-beams and I-beams across their full width while minimizing the induction of thermal stresses, by partially releasing the welded section, avoiding welding the T-section where the flange joins the web, and improving the cooling conditions, figure (1). The configuration adopted for web access holes also affects the performance of moment connections due to factors such as stress concentrations. A simple case study of an I-beam subjected to asymmetric loads was carried out by FEA. Three different models of the I-beam were prepared and analysed separately using 1D, 2D, and 3D elements. The results of these models were compared with mathematical models of the I-beam. The FEA results of these models showed respectable agreement with the theoretical calculation, with only small and negligible errors in the analysis.

Experimental Work:-

Preparation of Specimen:-

The solid steel beam was connected to a 2.5 m high column by using arc welding.
A strain gauge indicator was attached by wires to the steel beam using gel pads to measure the strains. Four pads near the fixed end are connected to the beam and are shown in Figure (

Testing Procedure:-

The load was applied in increments of 50 N up to 1000 N. For each load increment, the readings of the dial gauge deflection at the free end and of the strain gauges at the fixed end of the solid beam were taken. The overall results for the deflection at the free end and the bending stress at the fixed end are shown in Table (1).

Strand7:-

Strand7 is an easy and affordable finite element analysis package designed to be used by small and large consulting and engineering companies involved in many different types of industries. As an additional feature, Euro Technology has developed structural steel code compliance to SANS (4) 10162:1-2005 with the use of the Strand7 API. The Strand computer software was first developed by a group of academics from the University of Sydney and the University of New South Wales. Following this early research work, an independent company called G+D Computing was established in 1988 to develop an FEA program that could be used commercially for industrial applications. In 1996 the company commenced work on a completely new software development specifically for the Windows platform. This product was first released in 2000 and was named Strand7. In 2005 the company also changed its name to Strand7 to better reflect its primary focus (5). To check the deflection and bending stress of the tested cantilever solid steel beam within the elastic range, a finite element analysis was carried out using the Strand7 program to allow comparison with the experimental results. The solid steel beam was modelled in this paper as a beam of 985 mm using Hexa20 solid elements with 8 nodes, Figure (5). The load attributes start from 50 N up to 1000 N, applied directly at the free end of the beam. The beam is fixed at one end and free at the other, and it was subdivided into 5 plates. Table (2) presents the dimensions and material properties (6) of the beam used in the Strand7 analysis. The Strand7 results for deflection and bending stress are shown in Table (3).

When a structural member like a beam or slab is loaded by dead load or superimposed load, it deflects in a particular pattern. The pattern is parabolic for an applied uniformly distributed load, triangular for a point or concentrated load, trapezoidal for two concentrated loads acting at the middle third of the span, or another shape depending on the loading pattern. All the above patterns are for simple supports, but in this paper a cantilever beam is studied. The theoretical comparison is one of the procedures used to compare the bending stress and deflection of a beam across different methods. The cantilever beam is fixed at the support with 6 mm welding to a plate and loaded at the free end using load increments from 50 N up to 1000 N. The bending stress at the fixed end can be found by using the equation f_y = M_x·y/I_xx. The Macaulay Method is used to determine the deflection at the free end. Here, A is the area of the beam section, y is the half depth of the beam, M_x is the bending moment, I_xx is the beam's second moment of area, and f_y is the bending stress due to the applied load (P). In this case, it is easier to take the bending moment from the free end, as the reactions at the fixed end then need not be determined.
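For reference, a minimal sketch of the hand calculation follows; the assumptions are flagged in the comments: the beam is taken to bend about its strong axis with depth h = 75 mm and width b = 20 mm, and a nominal steel modulus E = 200 GPa is used, since the values from the paper's Table (2) are not reproduced here.

```python
# Cantilever with end load P: bending stress at the fixed end and tip
# deflection by elementary beam theory (shear and torsion neglected,
# as in the Macaulay Method used in the paper).
L = 1.0            # span, m
b = 0.020          # assumed width, m
h = 0.075          # assumed depth, m (strong-axis bending assumed)
E = 200e9          # assumed Young's modulus for steel, Pa

I_xx = b * h**3 / 12.0        # second moment of area, m^4
y = h / 2.0                   # half depth, m

for P in range(50, 1050, 50):            # load increments of 50 N up to 1000 N
    M = P * L                            # bending moment at the fixed end, N*m
    f_y = M * y / I_xx                   # bending stress, Pa
    delta = P * L**3 / (3.0 * E * I_xx)  # tip deflection, m
    if P % 250 == 0:
        print(f"P = {P:5d} N : f_y = {f_y/1e6:6.2f} MPa, "
              f"delta = {delta*1000:5.2f} mm")
```

These closed-form values include only the bending contribution, which is consistent with the paper's observation that the Macaulay deflections diverge from the experimental ones where shear and torsion matter.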
Results and Discussion:-

The parameters investigated in this study are the deflection and the bending stress of the cantilever beam, using load increments starting from 50 N up to 1000 N. The load-deflection curves for the entire load range for the welded beam-column connection are shown in Figure (9). The deflection of the beam obtained from the Strand7 model is close to the experimental results, with an average difference of 6.7%, while the average difference between the theoretical and experimental deflections at the free end is 45%. The difference between the deflection values obtained from the theoretical and experimental work for the beam-column connection is high, and this is due to effects such as torsion and shear lag: in the theoretical calculation, the deflection components due to shear force and torsional force are not included in the deflection equation. This paper is therefore concerned only with the elastic range of this beam, and future work is needed to study the beam behaviour up to failure.
2019-04-22T13:02:26.640Z
2016-07-31T00:00:00.000
{ "year": 2016, "sha1": "810db23a2fb337e30f137439d983015ea89c0f80", "oa_license": "CCBY", "oa_url": "http://www.journalijar.com/uploads/581_IJAR-11234.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "96083bcae862f0e6b19a59bbd525114fa0a3da00", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
254152522
pes2o/s2orc
v3-fos-license
The oocyte cumulus complex regulates mouse sperm migration in the oviduct

As the time of ovulation draws near, mouse spermatozoa move out of the isthmic reservoir, which is a prerequisite for fertilization. However, the molecular mechanism remains unclear. The present study revealed that mouse cumulus cells of oocyte-cumulus complexes (OCCs) expressed transforming growth factor-β ligand 1 (TGFB1), whereas ampullary epithelial cells expressed the TGF-β receptors TGFBR1 and TGFBR2, and all were upregulated by luteinizing hormone (LH)/human chorionic gonadotropin (hCG). OCCs and TGFB1 increased natriuretic peptide type C (NPPC) expression in cultured ampullae via TGF-β signaling, and NPPC treatment promoted spermatozoa moving out of the isthmic reservoir of the preovulatory oviducts. Deletion of Tgfb1 in cumulus cells and of Tgfbr2 in ampullary epithelial cells blocked OCC-induced NPPC expression and spermatozoa moving out of the isthmic reservoir, resulting in compromised fertilization and fertility. Oocyte-derived paracrine factors were required to promote cumulus cell expression of TGFB1. Therefore, oocyte-dependent and cumulus cell-derived TGFB1 promotes the expression of NPPC in the oviductal ampulla, which is critical for sperm migration in the oviduct and subsequent fertilization.

In mammals, large numbers of spermatozoa are deposited in the female reproductive tract during coitus. A subpopulation of the ejaculated spermatozoa migrates to the lower isthmus of the oviduct and quickly binds to the epithelial cells to form a functional spermatozoa reservoir (known as the isthmic reservoir), where spermatozoa are sequestered for variable periods, hours to months, depending on the species 1. In mice, spermatozoa are stored in the isthmic reservoir for several hours 2,3. As the time of ovulation draws near, spermatozoa move out of the isthmic reservoir 4,5. Finally, only a few spermatozoa reach the ampulla for fertilization 2. The migration of spermatozoa from the isthmic reservoir to the ampulla is a prerequisite for fertilization and is believed to be achieved by one or a combination of factors, such as oviduct peristaltic contractions, the adovarian flow of oviduct fluid 6, cilia beating 7, and/or signaling molecules in the oviductal microenvironment 8,9. During estrus, luteinizing hormone (LH) from the pituitary acts on the granulosa cells of preovulatory follicles to induce oocyte maturation and ovulation. Ovulated oocyte-cumulus complexes (OCCs) are picked up by the infundibulum and then are held in the ampulla for fertilization 10. In vitro in cattle, the spermatozoa bound in the isthmic reservoir leave the reservoir as soon as vital OCCs reach the ampulla 9. In vivo in pigs, the number of spermatozoa is increased in the ampulla and cranial isthmus segments after the transfer of OCCs into the oviductal ampulla 11. In mice deficient in miR-34b/c and miR-449a/b/c (miR-dKO), which lack functional motile cilia, the number of spermatozoa in the oviductal lumen is dramatically lower because the ovulated OCCs cannot enter the infundibulum 7. Thus, the interaction between OCCs and the oviductal ampulla may initiate a signaling cascade that plays a crucial role in triggering spermatozoa to move out of the isthmic reservoir. OCC-derived factors can promote gene expression in the oviductal epithelial cells to regulate the physiological function of the oviduct 10,12. We previously reported that OCCs induce the expression of natriuretic peptide type C (NPPC) in the oviductal ampulla 13.
NPPC can promote human and mouse sperm motility by binding to its receptor, natriuretic peptide receptor 2 (NPR2) 13,14. Furthermore, spermatozoa derived from Npr2 mutant mice are unable to reach the ampullae for fertilization 13. Thus, OCC-induced NPPC in the ampulla may promote spermatozoa moving out of the isthmic reservoir for fertilization by increasing sperm motility. NPPC expression is upregulated by estrogen 15,16, dexamethasone 17, gonadotropin-releasing hormone 18, and transforming growth factor-β (TGF-β) 19 in various cell types. We recently reported that TGF-β in granulosa cells promotes NPPC expression to maintain oocyte meiotic arrest 20. In mammals, the TGF-β family contains three highly conserved ligands, TGFB1, TGFB2, and TGFB3 21. These ligands bind to the type 2 receptor (TGFBR2) followed by the type 1 receptor (TGFBR1) to phosphorylate SMAD2 (Sma- and Mad-related protein 2) and SMAD3. Phosphorylated SMAD2 (p-SMAD2) and p-SMAD3 then form a complex with SMAD4 to regulate target gene expression 22. TGF-β ligands are expressed in cumulus cells 20,23 and their receptors in the oviduct 24,25. Therefore, we examined the role of TGF-β signaling in OCC-stimulated NPPC expression in the oviductal ampulla and the mechanisms by which NPPC promotes fertilization. Based on this background, we hypothesize that oocyte-dependent and cumulus cell-derived TGFB1 promotes NPPC expression by binding to TGFBR in the ampullary epithelial cells; NPPC then promotes sperm migration from the isthmic reservoir to the ampulla for fertilization.

Results

LH/human chorionic gonadotropin (hCG) promotes TGFB1 expression in cumulus cells.

We first examined the expression patterns of TGF-β ligands and receptors in OCCs and the ampulla. In cumulus cells of OCCs, the relative abundance of Tgfb1 mRNA was clearly higher than that of both Tgfb2 and Tgfb3 mRNAs (Supplementary Fig. 1a). Tgfb2 was highly expressed in oocytes (Supplementary Fig. 1b, c), consistent with our previous findings 20. TGFB1 mRNA and protein levels were significantly higher in cumulus cells than in oocytes and the ampulla (Supplementary Fig. 2). Conversely, TGFBR1 and TGFBR2 mRNA and protein levels were significantly higher in the ampulla than in cumulus cells (Supplementary Fig. 2). The regulation of TGFB1 expression in cumulus cells was studied during LH/hCG-induced ovulation. Immunofluorescence analysis revealed that TGFB1 was located in the cytoplasm of cumulus cells (Fig. 1a). No green fluorescence signals were found in isotype-specific IgG staining, indicating the specific binding of TGFB1 (Supplementary Fig. 3a). The fluorescence intensity of TGFB1 in the hCG treatment group was dramatically higher than that in the no-hCG treatment group (Fig. 1a). In line with these findings, hCG treatment significantly increased TGFB1 mRNA and protein levels in cumulus cells (Fig. 1b, c). LH/hCG induces epidermal growth factor (EGF)-like growth factors in mural granulosa cells, which transactivate EGF receptor (EGFR) signaling in cumulus cells 26. Thus, we studied the effect of EGFR signaling on LH/hCG-promoted TGFB1 expression. LH significantly increased Tgfb1 mRNA levels in cumulus cells from cultured follicles (Fig. 1d), an increase that was completely abolished by the EGFR tyrosine kinase inhibitor AG1478 (Fig. 1d). Furthermore, EGF significantly increased TGFB1 mRNA and protein levels in cumulus cells from cultured cumulus-oocyte complexes (COCs; Fig. 1d, e), and this was also completely blocked by AG1478 (Fig. 1d, e).
All these results suggest that LH-EGFR signaling promotes TGFB1 expression in cumulus cells. EGFR signaling-induced gene expression in cumulus cells usually requires the participation of oocyte-derived paracrine factors (ODPFs) 27. Therefore, we studied the possible role of ODPFs in regulating EGF-induced TGFB1 expression in cultured oocytectomized (OOX) complexes. EGF showed no effect on TGFB1 expression in OOX cumulus cells (Fig. 1f, g), but significantly increased TGFB1 mRNA and protein levels when the OOX complexes were cocultured with fully-grown denuded oocytes, GDF9, BMP15, FGF8B, and/or TGFB2 (Fig. 1f, g). These results indicate that ODPFs participate in the LH-EGFR signaling-promoted TGFB1 expression in cumulus cells.

LH/hCG promotes TGFBR expression in the ampulla.

We studied the effect of LH/hCG on the expression of TGF-β receptors and their downstream signaling molecules in the ampulla. Tgfbr1, Tgfbr2, and Smad4 mRNA levels were significantly increased in the ampulla at 11 h (without OCCs in the ampulla) and 13 h post-hCG (with OCCs in the ampulla) (Fig. 2a), suggesting that LH/hCG promotes the expression of these genes independently of OCC stimulation. The protein levels of TGFBR1, TGFBR2, SMAD4, p-SMAD2/3, and p-SMAD3 in the oviductal ampullae from the hCG treatment group were significantly higher than those from the no-hCG treatment group (TGFBR1, P < 0.001; TGFBR2, P = 0.0012; SMAD4, P = 0.002; p-SMAD2/3, P = 0.0016; p-SMAD3, P = 0.0068; Fig. 2b). Immunofluorescence analysis also showed that hCG treatment dramatically increased the cytoplasmic accumulation of TGFBR1 and TGFBR2, and the nuclear accumulation of p-SMAD2/3, p-SMAD3, and SMAD4 (Fig. 2c). No fluorescence signals were found in isotype-specific IgG staining, indicating the specific binding of these primary antibodies (Supplementary Fig. 3b). Although hCG and LH bind to the same receptor, they may have different roles. We collected oviductal ampullae from mice at the diestrus and estrus stages to detect the effect of endogenous LH.

Figure 1. Large follicles and COCs were isolated from eCG-primed mice. OOX cumulus cells were produced by microsurgically removing oocytes from the COCs. OCCs were collected from the ampullae of superovulated mice (at 13 h post-hCG treatment). a Immunofluorescence analysis of TGFB1 (green) in cumulus cells before and after hCG injection (at 13 h post-hCG treatment) (n = 3 independent experiments). Nuclei were counterstained by DAPI (blue). The small white box indicates the location of the enlarged area, shown in the lower left corner. Scale bars represent 50 μm. b, c Comparison of steady-state mRNA (b) and protein (c) levels of TGFB1 in cumulus cells before and after hCG injection (at 13 h post-hCG treatment) (n = 5 independent experiments in (b) and n = 4 independent experiments in (c)). d, e The effects of LH and EGF on TGFB1 mRNA (d) and/or protein (e) expression in cumulus cells. Follicles were cultured in medium supplemented with LH (1 μg/ml) and/or AG1478 (1 μM) for 12 h, and COCs were cultured in medium supplemented with EGF (10 ng/ml) and/or AG1478 for 12 h (n = 3-5 independent experiments). f, g The effects of EGF, oocytes, and ODPFs on TGFB1 mRNA (f) and protein (g) expression in cumulus cells. COCs or OOX cumulus cells were cultured in a medium supplemented with EGF, oocytes (3 oocytes/μl), GDF9 (500 ng/ml), BMP15 (500 ng/ml), FGF8B (FGF8, 100 ng/ml), and/or TGFB2 (50 ng/ml) for 12 h. ODPFs, GDF9 + BMP15 + FGF8B + TGFB2; AG, AG1478 (n = 3-5 independent experiments). β-actin was used as a loading control in (c), (e), and (g). Bars indicate the mean ± SD. Each data point represents a biologically independent experiment. Statistical analysis was performed by two-tailed unpaired Student's t-test for two groups, and by one-way ANOVA Tukey test for experiments involving more than two groups. ns, no significance (P ≥ 0.05). *P < 0.05, **P < 0.01, and ***P < 0.001.
The results showed that the gene and protein levels of TGFBR1, TGFBR2, and SMAD4 in the ampullae from estrous mice were significantly higher than those from diestrous mice (Supplementary Fig. 4). Thus, LH/hCG upregulates the expression of TGFBR and SMAD4 in the ampulla and activates the TGF-β signaling pathway. In the ampulla, the relative abundance of Lhcgr (encoding the LH receptor) mRNA was very low (Supplementary Fig. 5a) and was lower than that of Egfr (encoding the EGF receptor) mRNA (Supplementary Fig. 5b). Furthermore, EGF, but not LH, significantly increased the mRNA levels of Tgfbr1, Tgfbr2, and Smad4 in cultured ampullae (Supplementary Fig. 5c, d). These results show that LH/hCG promotes the expression of TGFBR1, TGFBR2, and SMAD4 in the ampulla, possibly via the EGFR signaling pathway.

OCC-stimulated NPPC in the ampulla promotes sperm migration in the oviduct.

Our previous work shows that ovulation induces Nppc expression in the oviductal ampullae 13. OCCs promoted NPPC mRNA and protein expression in the cultured ampullae, which was completely blocked by the TGFBR1 kinase inhibitors SB431542 and SD208 (Fig. 3a, b). We used TGFB1 to activate TGF-β signaling in the cultured ampullae, since TGFB1 is highly expressed in the cumulus cells of OCCs. The addition of TGFB1 significantly increased the mRNA and protein levels of NPPC in the cultured ampullae (Fig. 3a, b), and this increase was abolished by SB431542 and SD208 (Fig. 3a, b). Thus, OCCs stimulated NPPC expression in the ampulla through cumulus cell-derived TGFB1 binding to TGFBR. We examined the role of OCC-stimulated NPPC in spermatozoa moving out of the isthmic reservoir by monitoring sperm migration in the lower isthmus using B6D2-Tg (CAG/su9-DsRed2, Acr3-EGFP) RBGS002Osb (RBGS) mice, whose spermatozoa express a red fluorescent protein in their mitochondria 3. Whole oviducts were dissected from female mice after coitus at the stated times, and the movement of spermatozoa in the lower isthmus was observed using a Zeiss LSM 880 confocal laser-scanning microscope. In postovulatory oviducts with OCC-stimulated NPPC expression, large numbers of spermatozoa moved out of the isthmic reservoir during the observation time, and many shifted in the flow of the adovarian oviduct fluid (Supplementary Video 1). A few spermatozoa were observed in the ampullae after time-lapse imaging (60 min) (Fig. 3c, d). In contrast, only a small number of spermatozoa moved out of the isthmic reservoir in preovulatory oviducts without OCC-stimulated NPPC expression (Supplementary Video 1), and fewer spermatozoa were detected in the ampullae (Fig. 3c, d). The addition of NPPC to the preovulatory oviducts promoted the migration of spermatozoa from the isthmic reservoir to the oviductal lumen (Supplementary Video 1), and a few spermatozoa were detected in the ampullae (Fig. 3c, d). Taken together, these results show that OCC-stimulated NPPC promotes sperm migration from the isthmic reservoir to the ampulla.

Figure 2. a Tgfbr1, Tgfbr2, and Smad4 mRNA in the ampullae isolated from mice before and after hCG injection (n = 3 independent experiments).
b Protein levels of TGFBR1, TGFBR2, SMAD4, p-SMAD2/3, and p-SMAD3 in the ampullae isolated from mice before and after hCG injection (at 13 h post-hCG treatment) (n = 3 independent experiments). Total levels of SMAD2/3 and SMAD3 were used as the corresponding loading controls for p-SMAD2/3 and p-SMAD3, respectively, and β-actin was used as the loading control for TGFBR1, TGFBR2, and SMAD4. Bars indicate the mean ± SD. Each data point represents a biologically independent experiment. Statistical analysis was performed by one-way ANOVA Tukey test in (a) and by two-tailed unpaired Student's t-test in (b). ns, no significance (P ≥ 0.05). **P < 0.01 and ***P < 0.001. c Immunofluorescence analysis of TGFBR1, TGFBR2, SMAD4, p-SMAD2/3, and p-SMAD3 (green) in the oviductal ampulla isolated from mice before and after hCG injection (at 13 h post-hCG treatment) (n = 3 independent experiments). Nuclei were counterstained by DAPI (blue). The amplified views of the boxed area are shown on the right-hand side. Scale bars represent 50 μm.

Conditional deletion of Tgfb1 and Tgfbr2 blocks NPPC expression and sperm migration in the oviduct.

To study whether cumulus cell-derived TGFB1 promoted NPPC expression in the ampulla, we produced conditional knockout (cKO) mice with inactivated TGFB1 in the cumulus cells (Tgfb1 cKO) by crossing Tgfb1 fl/fl mice with Fshr-Cre mice, and cKO mice with inactivated TGFBR2 in epithelial cells (Tgfbr2 cKO) by crossing Tgfbr2 fl/fl mice with Wnt7a-Cre mice. Immunofluorescence analysis revealed that TGFB1 and TGFBR2 were specifically reduced in cumulus cells and ampullary epithelial cells, respectively (Fig. 4a). The knockout efficiency was also confirmed by Western blotting (Fig. 4b). Histological and co-staining analyses indicated that Tgfbr2 deletion in epithelial cells had no obvious effect on the microstructure of the oviducts or on the distribution or relative number of ciliated and secretory cells in the oviductal epithelium (Supplementary Fig. 6).

Figure 3. The ampullae were isolated from superovulated mice at 11 h post-hCG (without OCCs in the ampullae), and OCCs were collected from the ampullae of superovulated mice at 13 h post-hCG treatment. The ampullae were cultured with OCCs (3 OCCs/μl), TGFB1 (5 ng/ml), SB431542 (SB, 5 μM), and/or SD208 (SD, 1 μM) in a 50 μl drop for 3 h, and NPPC mRNA and protein levels were determined at the end of the culture. β-actin was used as a loading control. Each data point represents a biologically independent experiment. c Representative frames from the time-lapse imaging of the oviductal lower isthmus (left panel) and representative images of the ampulla at the end of the time-lapse imaging (right panel) (n = 6 mice in each group). The dashed line represents the edge of the lower isthmus. Oviducts were isolated from mice at the stated time. Amplified views of the boxed area are shown on the right-hand side. The arrow indicates a spermatozoon in the oviductal ampullae. Scale bars represent 100 μm. Posto, postovulatory oviduct; Preo, preovulatory oviduct. d The number of spermatozoa in the ampullae at the end of the time-lapse imaging (n = 6 mice in each group). Each data point represents a single mouse. Statistical analysis was performed by one-way ANOVA Tukey test. Bars indicate the mean ± SD. ns, no significance (P ≥ 0.05). *P < 0.05 and ***P < 0.001.
the oviduct with a marker of smooth muscle (αSMA) and found no differences in the thickness of the myosalpinx in either the ampullary or isthmic regions between the Tgfbr2 fl/fl and Tgfbr2 cKO mice (Supplementary Fig. 7). hCG induced ovulation-related NPPC mRNA and protein expression in the oviductal ampullae of Tgfb1 fl/fl and Tgfbr2 fl/fl mice (Fig. 4c, d), which was not detected in those of Tgfb1 cKO or Tgfbr2 cKO mice (Fig. 4c, d). In line with this, luciferase immunoassay analysis revealed significantly lower concentrations of NPPC in the ampullae from Tgfb1 cKO and Tgfbr2 cKO mice compared with the corresponding control mice (Supplementary Fig. 8). Thus, cumulus cell-derived TGFB1 promoted ampullary NPPC expression via TGFBR.
i Representative images of the two-cell embryos. Scale bars represent 100 μm. j, k Rate of two-cell embryos (j) and the number of pups per litter (k) (n = 6-17 mice in each group). Two-cell embryos were obtained from mice at 1.5 days post coitum. β-actin was used as a loading control in (b, d, and e). Bars indicate the mean ± SD. Each data point represents a biologically independent experiment in (b-e), or represents a single mouse in (g, h, j, and k). Statistical analysis was performed by two-tailed unpaired Student's t-test in (b, g, h, j, and k), and by one-way ANOVA Tukey test in (c-e). ns, no significance (P ≥ 0.05). *P < 0.05, **P < 0.01, and ***P < 0.001.
Interestingly, NPPC protein levels in the ampullae of Tgfbr2 cKO mice were significantly lower than those of Tgfb1 cKO mice in the hCG treatment group (Fig. 4e), and Tgfbr2 deletion also decreased NPPC levels in the ampullae in the group without hCG treatment (Fig. 4c, d). This suggests that TGF-β ligands from the oviductal microenvironment promote basal NPPC expression in the ampulla by binding to TGFBR2, independently of ovulation. We next examined the physiological role of TGFB1-promoted NPPC in sperm migration in the oviduct. In postovulatory oviducts of Tgfb1 fl/fl and Tgfbr2 fl/fl mice, large numbers of spermatozoa moved out of the isthmic reservoir, and many shifted in the flow of the adovarian oviduct fluid (Supplementary Videos 2, 3). In contrast, small numbers of spermatozoa moved out of the isthmic reservoir in the postovulatory oviducts of Tgfb1 cKO and Tgfbr2 cKO mice (Supplementary Videos 2, 3). The addition of NPPC to the postovulatory oviducts of Tgfb1 cKO and Tgfbr2 cKO mice markedly promoted spermatozoa moving out of the isthmic reservoir (Supplementary Videos 2, 3). We also examined the distribution of spermatozoa in the oviducts from mice at ~3 h post-copulation (at 16 h post-hCG treatment). Compared with the corresponding control mice, fewer spermatozoa were observed in the ampullae, and many more spermatozoa in the isthmic reservoir, of Tgfb1 cKO and Tgfbr2 cKO mice (Fig. 4f, g). There were no differences in the total number of spermatozoa in the oviducts between Tgfbr2 fl/fl and Tgfbr2 cKO mice (Supplementary Fig. 9), suggesting that conditional deletion of Tgfbr2 in epithelial cells has no overt effect on spermatozoa entering the oviducts. Therefore, cumulus cell-derived TGFB1 induces NPPC expression in the ampulla, and NPPC then promotes sperm migration in the oviduct. We further examined the effects of Tgfb1 and Tgfbr2 deletion on fertilization and found that 50.3% of oocytes in Tgfb1 cKO mice and 7.9% of oocytes in Tgfbr2 cKO mice underwent successful fertilization and development to the two-cell stage after natural mating (Fig. 4i, j).
Fertility testing revealed that Tgfb1 cKO mice had significantly fewer pups per litter than Tgfb1 fl/fl mice (3.05 ± 0.90 versus 6.15 ± 1.14, P < 0.001; Fig. 4k). Unlike Tgfbr2 fl/fl mice, which produced 6.3 pups per litter, Tgfbr2 cKO mice produced only 0.9 pups per litter (P < 0.001; Fig. 4k): six Tgfbr2 cKO mice with a copulatory plug failed to produce pups, and the other eleven mice with a copulatory plug produced 1-3 pups (Fig. 4k). However, conditional deletion of Tgfb1 and Tgfbr2 had no effect on ovulation, and the ovulated oocytes could be fertilized in vitro and underwent the first round of cleavage normally (Fig. 4h, Supplementary Fig. 10). Collectively, Tgfb1 deletion in cumulus cells and Tgfbr2 deletion in epithelial cells blocked ovulation-induced NPPC expression in the ampulla, resulting in a dramatic decrease in sperm migration in the oviduct and compromised fertility. Conditional deletion of Tgfbr2 in epithelial cells changes the oviduct transcriptome. To uncover the potential molecular mechanisms underlying the sperm migration defects, transcriptome analysis was performed on the oviductal cells of Tgfbr2 fl/fl and Tgfbr2 cKO mice (at 13 h post-hCG; Supplementary Fig. 11a). A total of 348 transcripts were identified as significantly dysregulated in the oviductal cells of Tgfbr2 cKO mice, of which 202 transcripts were downregulated and 146 transcripts were upregulated (Fig. 5a, Supplementary Data 1). The significant dysregulation of representative transcripts was validated by qRT-PCR (Fig. 5b). Gene enrichment analysis showed that the upregulated transcripts were mainly enriched for cell adhesion and immune responses (Fig. 5c), whereas the downregulated transcripts were mainly enriched for the mitotic cell cycle (Supplementary Fig. 11b). Gene set enrichment analysis (GSEA) suggested that cell adhesion and NF-kappa B signaling were activated (Fig. 5d, e), and that the cell cycle and cholesterol metabolism were inactivated, in the oviductal cells of Tgfbr2 cKO mice (Supplementary Fig. 11c, d). Further analysis of gene enrichment showed that the expression of Has1 was upregulated, and the expression of Cemip and Serpina1e was downregulated, in the oviductal cells of Tgfbr2 cKO mice (Fig. 5f). Discussion Stored spermatozoa move out of the isthmic reservoir during ovulation. The present study indicated that cumulus cell-derived TGFB1 from ovulated OCCs promoted NPPC expression by binding to its receptors on the ampullary epithelial cells, and that NPPC then triggered sperm migration from the isthmic reservoir to the ampulla for fertilization. A small number of spermatozoa moved out of the isthmic reservoir of preovulatory oviducts without OCC-stimulated NPPC expression (Supplementary Video 1), and this movement was promoted by the addition of NPPC (Supplementary Video 1). Large numbers of spermatozoa moved out of the isthmic reservoir of postovulatory oviducts with OCC-stimulated NPPC expression (Supplementary Video 1), which was blocked in Tgfb1 cKO and Tgfbr2 cKO mice (Supplementary Videos 2 and 3). Thus, NPPC produced in the ampulla in response to cumulus cell-derived TGFB1 promotes sperm migration from the isthmic reservoir to the ampulla. The ovulated OCCs are wrapped by the ampulla to form a relatively closed microenvironment from the ampulla to the isthmus, which would be beneficial for NPPC diffusion from the ampulla to the isthmus. There is some evidence that mouse spermatozoa detach from the epithelium before they move out of the isthmic reservoir 4.
NPPC increases the motility of mouse and human spermatozoa by binding to NPR2 13,14, which may promote spermatozoa detachment from the epithelial cells by overcoming the adhesive forces. The lack of a cumulus matrix impairs oocyte fertilization in bikunin-deficient female mice 28, and spermatozoa from Npr2 mutant mice cannot reach the ampulla for fertilization 13, possibly because of sperm migration failure. In humans, TGFB1 and TGFBR are also expressed in the granulosa cells 29 and oviduct 30, respectively, which may promote NPPC expression in the ampulla for sperm migration in the oviduct. The decrease in the number of spermatozoa migrating from the isthmic reservoir to the ampulla led to reduced fertilization in Tgfb1 cKO and Tgfbr2 cKO mice. Tgfbr2 cKO mice had lower levels of NPPC in the oviductal ampulla than Tgfb1 cKO mice, suggesting that conditional deletion of Tgfbr2 in oviductal epithelial cells not only blocks OCC-stimulated NPPC expression but also decreases basal levels of NPPC in the ampulla. TGF-β ligands from the oviductal microenvironment, particularly TGFB1 31, may promote NPPC expression via TGFBR in the oviduct. Lower levels of NPPC in the oviducts of Tgfbr2 cKO mice resulted in fewer spermatozoa moving out of the isthmic reservoir, lower numbers of two-cell embryos, and smaller litter sizes. Thus, NPPC produced in the ampulla is critical for sperm migration in the oviduct for fertilization. Although ovulated OCCs were unable to induce NPPC expression in the ampullae of Tgfb1 cKO mice, a small number of spermatozoa still moved out of the isthmic reservoir, consistent with the findings of a previous study that reported a small number of free-swimming spermatozoa in the oviductal lumen of miR-dKO mice without OCCs in the ampulla 7. This small number of spermatozoa may migrate from the isthmic reservoir to the ampulla owing to the basal levels of NPPC in the oviduct. Conditional deletion of Tgfb1 in cumulus cells resulted in a fertilization rate of around 50%. A previous study showed that global deletion of Tgfb1 has only a slight effect on the fertilization rate 32, which may be because mice with global deletion of Tgfb1 have around 50% fewer spontaneously ovulated oocytes than wild-type mice. Nppc mRNA has been identified in the secretory cells of the ampullary epithelium 33. In the present study, the deletion of Tgfbr2 in epithelial cells using Wnt7a-Cre blocked NPPC expression in the oviducts, resulting in a dramatic decrease in sperm migration in the oviduct and severely compromised fertilization, but had no overt effect on the oviductal microstructure. The deletion of Tgfbr1 and Tgfbr2 in the smooth muscle compartment using Amhr2-Cre reportedly impairs the integrity and function of the oviduct and uterus but has no effect on fertilization 24,25. This indicates that TGFBR signaling plays different roles in the oviductal epithelial cells and the smooth muscle compartment, a concept supported by a previous study reporting that the functions of estrogen receptor α signaling also differ between the oviductal epithelial cells and the smooth muscle compartment 34. The activation of the LH receptor by LH/hCG promoted TGFB1 expression in cumulus cells, consistent with a previous report that the mRNA and protein levels of TGFB1 are increased in cumulus cells when follicles are cultured in the presence of LH 23. EGF-like growth factors are mediators of LH/hCG action in cumulus cells 26.
Consistent with this, LH/hCG promoted TGFB1 expression in cumulus cells via the transactivation of EGFR. LH-EGFR signaling-induced TGFB1 expression in cumulus cells required the cooperation of ODPFs, similar to the regulation of cumulus expansion-related transcripts 27. Thus, oocytes are involved in sperm migration by promoting TGFB1 expression in cumulus cells and thereby NPPC expression in the oviductal ampulla. Previous reports show that oocytes promote follicular development by controlling the proliferation and metabolic activities of granulosa cells 35, and promote ovulation by participating in cumulus expansion 36. Therefore, oocytes coordinate follicular development, ovulation, and fertilization. LH/hCG also promoted TGFBR expression in the ampulla, possibly via the EGFR signaling pathway. Thus, LH/hCG promotes the expression of TGFB1 in cumulus cells and of TGFBR in the ampulla, which is a prerequisite for ovulation-stimulated NPPC expression and subsequent sperm migration in the oviduct. The whole-mount oviduct was used for transcriptomic analysis because of a potential interaction between the stromal cells and epithelial cells 25. In Tgfbr2 cKO mice, the upregulation of Has1 and the downregulation of Cemip may prevent sperm migration in the oviduct by increasing hyaluronan levels 37, and the upregulation of Serpina1e may impair spermatozoa motility by inhibiting serine proteases 38. The activation of cell adhesion and NF-kappa B signaling in the oviductal cells of Tgfbr2 cKO mice may impair spermatozoa moving out of the isthmic reservoir 39. The inactivation of cholesterol metabolism in the oviductal cells of Tgfbr2 cKO mice may impair the production of steroid hormones, and in turn spermatozoa survival and release 8,40. Thus, these altered transcripts in the oviduct of Tgfbr2 cKO mice may also participate in the block of sperm migration in the oviduct. The findings of the present study reveal a complex regulatory network for ovulation-triggered mouse sperm migration in the oviduct. LH promotes the expression of TGFB1 in cumulus cells and of TGFBR in the ampullary epithelial cells during ovulation. The interaction between OCCs and the ampulla promotes NPPC expression through the binding of cumulus cell-derived TGFB1 to TGFBR in the ampullary epithelial cells. NPPC promotes sperm migration from the isthmic reservoir to the ampulla for fertilization. Oocytes are involved in sperm migration by promoting TGFB1 expression in cumulus cells (Fig. 6). The mechanism of ovulation-triggered sperm migration in the oviduct revealed here will be crucial for deciphering the potential role of NPPC in sperm migration in the oviduct in humans and other species. Methods Animals and chemicals. Female C57BL/6J mice were purchased from Guangdong Medical Laboratory Animal Center. Tgfb1 fl/fl, Tgfbr2 fl/fl, and Wnt7a-Cre mice were purchased from The Jackson Laboratory (Bar Harbor, ME, USA). Mice with Tgfb1 deletion in cumulus cells were generated by crossing Tgfb1 fl/fl mice with Fshr-Cre mice 20. Mice with Tgfbr2 deletion in epithelial cells were generated by crossing Tgfbr2 fl/fl mice with Wnt7a-Cre mice 34,41. RBGS mice, used for monitoring spermatozoa movement 3, were purchased from RIKEN BioResource Center (Ibaraki, Tsukuba, Japan). Female mice (21-23 days old) were intraperitoneally injected with 5 IU equine chorionic gonadotropin (eCG) 48 h before use to stimulate follicle growth.
In some experiments, female mice were intraperitoneally injected with 5 IU eCG followed by 5 IU hCG 48 h later to induce ovulation (superovulation). Fertility was analyzed in 6-week-old female mice. All mice were raised under controlled temperatures of 23°C ± 2°C with a 12/12-h light/dark cycle. Mouse experimental procedures and protocols were approved by the Institutional Animal Care and Use Committee of the South China University of Technology and were conducted in accordance with the institutional guidelines for the care and use of laboratory animals. Reagents were purchased from Sigma-Aldrich (St. Louis, MO, USA) unless otherwise stated.
Fig. 5 Distortion of the oviduct transcriptome in the Tgfbr2 cKO oviduct. a Volcano plot illustrating the differentially expressed genes in the oviductal cells (at 13 h post-hCG) of Tgfbr2 fl/fl and Tgfbr2 cKO mice determined by RNA-seq analysis (n = 3 mice in each group). The pink spots denote the transcripts that were validated by qRT-PCR. b qRT-PCR validation of changes in representative transcripts selected from the RNA-seq data (n = 3 independent experiments). Bars indicate the mean ± SD. Each data point represents a single mouse. Statistical analysis was performed by a two-tailed unpaired Student's t-test. **P < 0.01 and ***P < 0.001. c Bar graph showing the enriched GO/KEGG terms associated with the significantly upregulated transcripts in oviducts isolated from the Tgfbr2 cKO mice identified by RNA-seq, with the color indicating the P value (n = 3 mice in each group). d, e GSEA plots illustrating enrichment of the gene sets of cell adhesion molecules (d) and the NF-kappa B signaling pathway (e) in oviductal cells of Tgfbr2 fl/fl and Tgfbr2 cKO mice (n = 3 mice in each group). f Heatmap showing differences between the Tgfbr2 fl/fl and Tgfbr2 cKO oviducts in the expression of a group of transcripts involved in various processes (n = 3 mice in each group).
Fig. 6 Schematic showing that cumulus cell-derived TGFB1 promotes NPPC expression in the ampulla for mouse sperm migration in the oviduct. a, b Molecular signaling (a) and cellular interactions (b) for sperm migration in the oviduct. When ovulation occurs, LH activates EGFR signaling in cumulus cells via EGF-like growth factors, which then cooperate with oocyte-derived paracrine factors (ODPFs) to promote cumulus cell TGFB1 expression. TGFB1 induces NPPC production by binding to TGFBRs in the ampullary epithelial cells, and NPPC then promotes sperm migration from the isthmic reservoir to the ampulla for fertilization. AIJ, ampullary-isthmic junction.
Immunofluorescence and histology analyses. COCs from eCG-primed mice and OCCs from superovulated mice (at 13 h post-hCG treatment) were fixed in 4% paraformaldehyde (PFA) for 30 min at room temperature. After permeabilization with phosphate-buffered saline (PBS) containing 0.5% Triton X-100 for 5 min, samples were blocked with 5% bovine serum albumin for 1 h at room temperature. Oviducts were dissected from female mice under a dissection microscope before and after hCG injection (at 13 h post-hCG treatment) and fixed in 4% PFA at 4°C overnight. After dehydration, the oviducts were embedded in paraffin and cut into 5-μm sections. Sections were blocked with 10% normal donkey serum. After blocking, all samples were incubated with primary antibodies (Supplementary Table 1) and then with Alexa Fluor 488- or 555-conjugated secondary antibodies (1:200, Thermo Fisher Scientific). Finally, samples were counterstained with DAPI.
Isotype-specific immunoglobulins (IgG) at the same protein concentration as the primary antibodies were used as negative controls (Abcam, ab172730, Cambridge). Immunofluorescent staining was examined using a Zeiss LSM 800 confocal microscope (Zeiss, Oberkochen, Germany). Histological analysis was performed using sections of oviductal tissues stained with hematoxylin and eosin. Western blotting. In each group, 4-8 oviductal ampullae, 100-300 oocytes, or the cumulus cells from 50-100 OCCs or OOXs were used. Total proteins were extracted in WIP buffer (Cell Chip Biotechnology, Beijing, China) with 1 mM phenylmethylsulfonyl fluoride on ice. Protein concentrations were quantified using the BCA Protein Assay Kit (Beyotime, Shanghai, China). Normalized protein amounts were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and then electrically transferred to polyvinylidene fluoride membranes (Millipore). After blocking with 5% nonfat milk in Tris-buffered saline (TBS) for 1 h at room temperature, the membranes were incubated with primary antibodies (Supplementary Table 1) overnight at 4°C. Membranes were then washed three times for 5 min each in TBS with 0.1% Tween-20 and incubated with horseradish peroxidase-conjugated secondary antibodies (each diluted 1:5000, ZSGB-BIO, Beijing, China). Proteins were detected using the SuperSignal West Pico Kit (Thermo Fisher Scientific) and visualized using the Tanon 5200 chemiluminescent imaging system (Tanon, Shanghai, China). β-actin was used as a loading control. Uncropped scans of Western blotting membranes are shown in Supplementary Fig. 12. RNA extraction and qRT-PCR analysis. In each group, 4 oviductal ampullae, 50 oocytes, or the cumulus cells from 40 OCCs, OOXs, or COCs were used. Total RNA was extracted and purified using the RNeasy micro-RNA Isolation Kit (Qiagen, Valencia, CA, USA), and no more than 1 μg RNA was reverse transcribed into cDNA using the QuantiTect Reverse Transcription System (Qiagen) according to the manufacturer's instructions. qRT-PCR was conducted in 15-μl reaction volumes and analyzed on a LightCycler 96 instrument (Roche, Basel, Switzerland) using a standard protocol, with TransStart Tip Green qPCR SuperMix (TransGen, Beijing, China) as the reaction reagent. Relative gene expression was quantified using the threshold cycle value and normalized to ribosomal protein L19 (Rpl19) as a housekeeping gene. Details of the qRT-PCR primers are shown in Supplementary Table 2.
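The relative quantification described above (threshold-cycle values normalized to Rpl19) follows the standard 2^(−ΔΔCt) logic. The sketch below is a minimal, hypothetical Python illustration of that calculation, not the authors' analysis script; all Ct values are invented for demonstration.

```python
# Minimal sketch of 2^(-ddCt) relative quantification, assuming the
# standard Livak method; the Ct values below are hypothetical examples.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene versus a control group,
    normalized to a housekeeping gene (here, Rpl19)."""
    d_ct_sample = ct_target - ct_ref              # normalize sample to Rpl19
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control to Rpl19
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical example: Nppc in OCC-treated vs. untreated cultured ampullae
fold = relative_expression(ct_target=24.1, ct_ref=18.0,
                           ct_target_ctrl=26.5, ct_ref_ctrl=18.2)
print(f"Relative Nppc expression: {fold:.2f}-fold")
```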
RNA-seq analysis. RNA-seq analysis was performed on triplicate samples of Tgfbr2 fl/fl and Tgfbr2 cKO oviducts (at 13 h post-hCG). Total RNA was extracted and purified from the oviducts using the RNeasy micro-RNA Isolation Kit (Qiagen) according to the manufacturer's instructions. RNA quality was assessed on an Agilent 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA) and checked using RNase-free agarose gel electrophoresis. Libraries were prepared with the NEBNext Ultra RNA Library Prep Kit for Illumina (NEB #7530, New England Biolabs, Ipswich, MA, USA). The resulting cDNA libraries were sequenced on an Illumina NovaSeq 6000 by Gene Denovo Biotechnology Co. (Guangzhou, China). Correlation analysis was performed in R; the correlation between parallel experiments provides an evaluation of the reliability of the experimental results as well as operational stability. Differentially expressed transcripts between the two groups were identified with DESeq2 software. Gene enrichment analysis (Gene Ontology/KEGG) was conducted using Metascape (http://metascape.org), a gene annotation and analysis resource, following the online instructions provided by the web developer. Gene set enrichment analysis (GSEA) was performed using the GSEA software. Measurement of NPPC levels. Oviductal ampullae were collected from the superovulated mice at 13 h post-hCG treatment. Ampullae from 10 mice per group were transferred into a 1.5-ml centrifuge tube for NPPC analysis. Samples were prepared as previously reported, with slight modifications 13,44. Briefly, ampullae were boiled in 1.0 M acetic acid for 5 min and then lysed using an ultrasonic cell disruptor (Scientz-IID, Ningbo, China). The samples were centrifuged at 20,000×g at 4°C for 30 min, and the supernatant containing 1.0-5.0 mg of protein was lyophilized and assayed using fluorescent enzyme immunoassay kits (Phoenix Pharmaceuticals, Belmont, CA, USA) according to the manufacturer's instructions. Analysis of ovulation, fertilization, and fertility. Ovulation was analyzed in female mice (21-23 days old) of different genotypes that were injected intraperitoneally with 5 IU eCG followed by 5 IU hCG 48 h later to induce ovulation. Oocytes collected from the oviductal ampullae of mice at 13 h post-hCG treatment were fertilized with capacitated epididymal spermatozoa isolated from the cauda epididymis of 10-week-old fertile male mice. In vitro fertilization was performed in human tubal fluid (HTF, MR-070) medium. After culturing at 37°C in an atmosphere of 5% CO2 for 6 h, oocytes were washed, and fertilization was determined by the presence of two pronuclei at 8-9 h after IVF. The fertilized oocytes were then transferred to potassium simplex optimized medium (KSOM) for further development 45. Images of two-cell embryos were obtained using a Zeiss Axio Vert.A1 microscope. Fertility was analyzed by mating the 6-week-old female mice with 10-week-old fertile male mice. The number of pups per litter was recorded at birth. Analysis of sperm migration in the oviducts. Sperm migration was analyzed using RBGS male mice, because their spermatozoa appear red under a fluorescence microscope. The 6-week-old superovulated female mice were caged with 10-week-old RBGS male mice starting at 9 h post-hCG injection. The presence of a copulation plug was checked every 20 min. Female mice mated at 10-11 h post-hCG (without OCCs in the ampulla) were used for preovulatory studies, and female mice mated at 12-13 h post-hCG (with OCCs in the ampulla) were used for postovulatory studies. The ovary-oviduct-uterus complex was removed from the mouse ~30 min after copulation and placed in PBS. Entire oviducts were dissected from the ovary by opening the ovarian bursa. The oviduct was uncoiled and separated from the uterine horn by sharp dissection to yield a single intact oviduct preparation comprising the infundibulum, ampulla, isthmus, and intramural segments. The mesosalpinx was carefully cut under a stereo microscope (Zeiss) to avoid pulling and damaging the tubes. All operations were performed in PBS. In some experiments, the oviducts were incubated with NPPC (10 nM) for ~30 min. The whole-mount oviduct was transferred into a glass-bottomed dish (Nest, Jiangsu, China) and fixed to the bottom of the dish using low-melting agarose. The dish was then filled with PBS and maintained at a physiologically relevant temperature of 37°C ± 0.5°C in a thermal chamber.
The movement of the spermatozoa within the lower isthmus was imaged using a Zeiss LSM 880 confocal microscope. The sites of the movies in the oviducts from preovulatory and postovulatory mice are shown in Supplementary Fig. 13. Time-lapse imaging was performed as previously described 46 for 1 h with a time interval of ~1 min. Imaging files were processed using ImageJ software. At the end of the time-lapse imaging, images of the oviductal ampullae were captured using a Zeiss LSM 880 confocal laser-scanning microscope, and spermatozoa in the ampullae were counted as previously described 34. In some experiments, whole oviducts were dissected from female mice ~3 h post-copulation (at 16 h post-hCG treatment) to observe the spermatozoa distribution and to count the total number of spermatozoa in the oviducts. Statistics and reproducibility. Statistical analyses were performed using GraphPad Prism version 8.3 (GraphPad Software, La Jolla, CA, USA). A two-tailed unpaired Student's t-test was used for comparisons between two groups, and P < 0.05 was considered significant. For experiments involving more than two groups, differences between groups were compared by one-way ANOVA Tukey test, and P < 0.05 was considered significantly different. Data are represented as the mean ± SD. All experiments were repeated at least three times using different mice, and detailed information is presented in the figure legends. Reporting summary. Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability All data needed to evaluate the conclusions in this study are present in the manuscript and/or Supplementary information, and are available from the corresponding author upon reasonable request. The source data underlying graphs presented in the main figures are provided in Supplementary Data 2. Uncropped scans of Western blotting results are provided in Supplementary Fig. 12. RNA-seq data are deposited in the NCBI Sequence Read Archive (accession number PRJNA888862).
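As a rough Python analogue of the statistical workflow described under Statistics and reproducibility (the analysis itself was run in GraphPad Prism), the sketch below applies a two-tailed unpaired t-test to two groups and one-way ANOVA followed by Tukey's test to three; all group data are placeholders, not the study's measurements.

```python
# Hedged Python analogue of the two-group and multi-group tests described
# above; the original analysis used GraphPad Prism, and these data are fake.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
ctrl = rng.normal(6.2, 1.1, size=8)   # e.g., pups/litter in fl/fl mice
cko = rng.normal(3.0, 0.9, size=8)    # e.g., pups/litter in cKO mice

# Two groups: two-tailed unpaired Student's t-test
t, p = stats.ttest_ind(ctrl, cko)
print(f"t = {t:.2f}, P = {p:.4f}")

# More than two groups: one-way ANOVA followed by Tukey's HSD
groups = [ctrl, cko, rng.normal(4.5, 1.0, size=8)]
f, p_anova = stats.f_oneway(*groups)
values = np.concatenate(groups)
labels = np.repeat(["A", "B", "C"], [len(g) for g in groups])
print(f"ANOVA: F = {f:.2f}, P = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```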
Association of AID and MUM1 by Immunohistochemistry in Diffuse Large B-Cell Lymphoma
1Division of Hematology and Medical Oncology, Department of Internal Medicine, Faculty of Medicine, Public Health and Nursing, Universitas Gadjah Mada/Dr. Sardjito General Hospital, Jl. Farmako, Yogyakarta, Indonesia 2Department of Anatomical Pathology, Faculty of Medicine, Public Health and Nursing, Universitas Gadjah Mada/Dr. Sardjito General Hospital, Jl. Farmako, Yogyakarta, Indonesia 3Department of Internal Medicine, Universitas Kristen Duta Wacana/Bethesda Hospital, Jl. Dr. Wahidin Sudirohusodo No.5-25, Yogyakarta, Indonesia
Introduction As the most frequent type of B-cell lymphoma, diffuse large B-cell lymphoma (DLBCL) comprises 30%-40% of non-Hodgkin lymphomas of this type, and even more than 60% in Indonesia. (1,2) For the past two decades, DLBCL has been recognized as a heterogeneous disease, both molecularly and clinically, with two distinct types identified by gene expression profiling, namely germinal center B-cell (GCB) and activated B-cell (ABC), or non-GCB. (3) This classification is grounded in the different developmental stages of B cells, whose disruption may explain the genesis of DLBCL. (1,3,4) Somatic mutations can be viewed as one explanation of cancer, in which the accumulation of such defects may transform cells with normal characteristics into malignant phenotypes. (5) It is known that several mutations in DLBCL are associated with the mechanisms of class switch recombination (CSR) and somatic hypermutation (SHM). Activation-induced cytidine deaminase (AID), encoded by the AICDA gene, is an essential enzyme for both of these physiological processes. (6) On the other hand, AID is also recognized as a mutator enzyme that contributes to the lymphomagenesis of DLBCL. (1,7) AICDA was previously classified as a gene predominantly expressed in germinal center cells, but in DLBCL, AID expression is known to be more prominent in ABC than in GCB. (8) Clinically, AID overexpression is associated with lower overall survival and progression-free survival (PFS), in addition to a poor response to salvage therapy after relapse in DLBCL. (9,10) A transcription factor named multiple myeloma oncogene 1 (MUM1), or interferon regulatory factor 4 (IRF4), has been recognized as a marker of B-cell differentiation into plasma cells, and its expression plays an important role in upregulating AID expression for the initiation of CSR. (11,12) Immunohistochemical staining for MUM1 is a marker of the non-germinal center (non-GC) type of DLBCL. (3) Moreover, MUM1 expression in DLBCL is reported to have an impact on DLBCL clinical outcomes. (13,14) Previous studies have reported a correlation between AID and MUM1 expression, with known close physiological interactions between them. (10,11) However, the expression of AID and MUM1 in local DLBCL samples and its association with clinical presentation have not been studied further in Indonesia, although such cases are known to be quite frequent. Thus, we aimed to examine the expression of AID and MUM1 in our local DLBCL samples using an immunohistochemistry method, to observe the association between them and any further correlation with clinical findings.
Immunohistochemistry Analysis For immunostaining, 4-µm-thick sections were cut from formalin-fixed paraffin-embedded (FFPE) tissue blocks and placed on electrostatically charged, poly-L-lysine-coated slides (Biogear Microscope Slide, Biogear Scientific, BioVentures, Inc., Coralville, Iowa, USA). Sections were dried at 45°C overnight. All immunostaining procedures, including deparaffinization, were performed on a semi-automatic Intellipath FLX (Biocare Medical, Concord, Massachusetts, USA) with an open kit. The antigen retrieval process was performed in a Decloaking Chamber from Biocare Medical. Counterstaining with hematoxylin was performed on a semi-automatic slide stainer. After that, the dehydration process was completed, followed by clearing with xylene and, finally, mounting to end the entire immunostaining process. The following primary antibodies were used in this study: AID (Invitrogen, eBioscience, San Diego, California, USA) and MUM1 (dilution: 1/50, clone: MUM1p, Dako SA, Glostrup, Denmark). Reactive lymph node tissue samples were used as positive controls. Negative controls were treated with the same immunohistochemical method, omitting the primary antibody. The cut-off level for interpreting MUM1 as positive was >30% tumor cell staining. (15) The optimum cut-off for AID was determined using subsequent ROC analysis, which yielded a threshold of 1%. For this study, we grouped DLBCL cases based on AID and MUM1 IHC expression into concordant and discordant groups. The concordant group consisted of AID+/MUM1+ and AID−/MUM1− cases, whereas the discordant group consisted only of AID−/MUM1+ cases. Statistical Analysis Comparative, correlation, and survival analyses were performed using RStudio (Version 1.3.959; R Foundation for Statistical Computing, Vienna, Austria). A receiver operating characteristic (ROC) curve was used to obtain the optimum cut-off, and a Kaplan-Meier curve was graphed to perform time-to-event analysis. Statistical significance was determined at p<0.05. Overall survival was defined as the time from diagnosis to death resulting from any cause.
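The ROC-based cut-off selection described in the Statistical Analysis section can be sketched as follows. This is a minimal Python illustration using Youden's J statistic (the study itself used RStudio); the AID staining percentages and binary outcome labels are hypothetical, not the study data.

```python
# Minimal sketch of ROC-based cut-off selection via Youden's J statistic,
# assuming hypothetical AID staining percentages and binary reference labels.
import numpy as np
from sklearn.metrics import roc_curve

aid_percent = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 0.0, 3.0, 0.0, 10.0, 1.5])
reference = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])  # fake binary status

fpr, tpr, thresholds = roc_curve(reference, aid_percent)
youden_j = tpr - fpr                         # maximize sensitivity + specificity - 1
best = thresholds[np.argmax(youden_j)]
print(f"Optimal AID cut-off by Youden's J: {best:.1f}% tumor cell staining")
```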
Results Of the 20 cases of DLBCL involved in this study, most had good ECOG performance status (95%) and limited-stage disease (70%). Only a few patients had multiple extranodal sites (10%). Many patients did not undergo LDH examination at their initial visit; among those with available LDH values (n=4), only one patient was known to have an increased LDH level (25%). Most patients were treated with a rituximab-based regimen (90%). The baseline characteristics of the subjects are shown in Table 1. Representative expression of AID and MUM1 is shown in Figure 1. A diverse degree of AID expression was found in either the dark or light zones of the germinal centers of the tumor area. Moreover, AID was mainly located in the cytoplasm of the tumor, although in some cases it was also positively stained in the nucleus (Figure B1). MUM1 was mainly stained in the nuclear area; however, some cases also presented with weakly to moderately stained cytoplasm (Figure A2; Figure B2). IHC examination revealed a significant association between AID and MUM1 in our local DLBCL samples (p=0.008). The percentages of expression among the DLBCL samples are provided in Figure 2. All AID-positive samples had concordant positive MUM1 expression, with AID-positive samples being 2.25 times more likely to have the same MUM1 expression status (PR: 2.25; 95% CI: 1.08-4.67). The statistical significance of the concordant expression between AID and MUM1 is presented in Table 2. Concordant expression occurred in 80% of samples, with a Cohen's kappa of 0.578 (p=0.004). The clinicopathological comparison between concordant-positive, concordant-negative, and discordant expression of AID and MUM1 is presented in Table 3. There was no statistically significant difference in terms of age, stage, extranodal involvement, performance status, or LDH level (p>0.05). Survival analysis with Kaplan-Meier curves based on the concordant expression status of AID and MUM1 among the DLBCL samples is presented in Figure 3. Overall, no significant difference was found between the concordant and discordant expression groups, with the latter having a lower hazard ratio of 0.99 (95% CI: 0.10-9.65; p=0.992) compared with the former. In addition, the restricted mean survival time was shorter in the concordant group, but no significant difference was observed (21.16 vs. 22.5 months; p=0.531). Discussion The clinical characteristics of the DLBCL subjects involved in this study were dominated by patients with predicted favorable factors, for instance, limited-stage disease, good performance status, and limited extranodal involvement. In addition, the ages of the DLBCL subjects were similar to those in a previous epidemiological study of DLBCL in Indonesia, which was predominated by patients aged 50-59 years. (2) On the other hand, the patients were relatively younger than American patients but comparatively similar to Asian patients, particularly in China. (16) Most patients were also treated with rituximab (anti-CD20)-containing regimens, as real-world data on DLBCL have shown favorable outcomes with this therapy. (17) Using the previously explained expression cut-offs, our preliminary study observed a high rate of concordant expression between AID and MUM1 among our DLBCL samples. Moreover, the relatively moderate degree of expression agreement indicated by the kappa index obtained in this study may suggest an interplay between the expression of both genes in DLBCL. A similar association between AID and MUM1 expression was also observed by another study group. (10) Moreover, an earlier study reported a predominance of non-GC or ABC cases expressing AID along with MUM1, in which MUM1 was formerly recognized as a genetic marker of ABC DLBCL. (8) It is known that MUM1/IRF4 is a transcription factor that may increase AID expression, directly or indirectly. (11,18) For translational relevance, this study may further confirm the diversity of DLBCL using the expression of AID and MUM1, potentially adding molecular information on DLBCL. The predominance of the non-GC type of DLBCL in Indonesia also suggests an alternative measure to explain the diversity of this subtype using the expression of AID and MUM1, as the latter gene's expression is commonly found in this subtype. (19) The clinicopathological comparison between the concordant and discordant expression of AID and MUM1 did not reveal any significantly different clinical parameters for either group. The survival analysis also revealed insignificant survival differences between the groups. Interestingly, the trend toward longer survival, even if insignificant, in samples that were AID-negative but MUM1-positive, namely the discordant group, suggests a tendency toward a favorable prognosis in AID-negative DLBCL. Molecularly, AID as a mutator enzyme may generate other oncogenic mutations such as IGH-MYC or IGH-BCL6 translocations, which may later transform the disease into a more aggressive type of lymphoma, namely molecular high-grade B-cell lymphoma.
(4,20,21) It has also been proposed that the expression of AID could indicate alteration into a more refractory type of lymphoma. (10,22) Thus, this preliminary study is still limited and needs to be expanded to examine any significant impact of expression status on patient survival. Another limitation of the study was that the discordant group consisted only of AID−/MUM1+ samples. Therefore, expanding the sample size in later research may increase the probability of finding AID+/MUM1− cases among DLBCL. The existence of this group would further confirm the distinct characteristics of each group and add to the molecular heterogeneity of DLBCL. Conclusion In conclusion, our study confirmed a significant association between AID and MUM1 expression by immunohistochemistry in our local DLBCL samples. This association may potentially add molecular information on DLBCL type, despite the insignificant differences in clinical presentation and survival, which are likely due to the limited sample size. Expanding the sample numbers and prolonging the evaluation period of the study may further confirm any clinical impact of AID and MUM1 expression in DLBCL.
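For readers who want to reproduce the agreement statistics used in this study (the overall concordance rate and Cohen's kappa), a minimal Python sketch follows; the paired AID/MUM1 status vectors are fabricated stand-ins, not the 20-case dataset.

```python
# Sketch of the agreement statistics reported above (concordance rate and
# Cohen's kappa) for paired AID/MUM1 calls; the vectors are fabricated.
import numpy as np
from sklearn.metrics import cohen_kappa_score

aid  = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])   # 1 = IHC positive
mum1 = np.array([1, 1, 0, 1, 1, 0, 0, 1, 1, 0])

concordance = (aid == mum1).mean()                 # fraction of matching calls
kappa = cohen_kappa_score(aid, mum1)               # chance-corrected agreement
print(f"Concordance rate: {concordance:.0%}, Cohen's kappa: {kappa:.3f}")
```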
An Improved Interference Alignment Algorithm With User Mobility Prediction for High-Speed Railway Wireless Communication Networks
The enhancement of the carrying capacity of high-speed railways and the acceleration of train speeds lead to a proliferation of signal interference in the communication network, which degrades network performance and user service quality. To address this issue, we first classify the cell users in the high-speed railway wireless communication environment based on the Fuzzy C-Means algorithm and a user mobility prediction model, dividing the communicating users on trains into cell center users and cell edge users. Then, differentiated power distribution schemes are applied to center users and edge users according to the classification results. Finally, a Max-SINR-based interference alignment algorithm is used to realize interference management. Simulation results show that the proposed algorithm fully takes into account the mobility of high-speed train users and effectively manages interference, significantly improving the performance of the high-speed railway wireless communication network. The technology is becoming one of the key technologies of public communication systems [10]-[14]. In the HSR communication environment, the train runs along the track, and the wireless communication users on the train continuously move from the edge of a chain-distributed BS cell to its center, and from the center to the edge of the cell. Therefore, the channel quality of the users on the train changes constantly: the channel quality of cell center users is relatively good, while that of edge users is not optimistic. As a result, in order to improve the network performance for center users and meet the service quality requirements of edge users, it is necessary to reasonably distinguish edge users from central users in high-speed rail communication networks according to the prediction of user mobility, and adaptive resource management and interference elimination strategies should be adopted for the different user types. In terms of user classification, current research mainly focuses on setting a fixed threshold according to one or several performance criteria of the users; those better than the threshold are judged as cell center users, while those worse than the threshold are judged as cell edge users. The advantage of this approach is that it is simple and easy to implement, but it is often unable to maximize the efficiency of the system because it considers only one or a few performance indexes. In addition, user mobility, which definitely has a great impact on changes of user type in the HSR communication environment, is not fully considered in the classical algorithms. Fuzzy theory focuses on the fuzziness of sets and adopts membership functions to describe the degree to which objects belong to different sets. The membership function takes values between 0 and 1, and the closer it is to 1, the higher the degree to which the object belongs to the set. In user classification, the judgment of whether a user is a center user or an edge user is relative and fuzzy. Therefore, fuzzy theory is more suitable than crisp zero-or-one set determination for user classification. Some studies have used power allocation techniques in interference-aligned radio networks to improve network performance [15].
Applying the user classification results to the power allocation algorithm can improve the effect of IA, effectively suppress the interference signals of classified users, and improve the overall performance of the system. Based on the analysis above, in this article we study a mobile-user-classification-based IA algorithm for HSR wireless communication networks. The structure of this article is organized as follows. Section II outlines existing IA and user classification algorithms. In Section III, we set up an HSR communication system model and propose a novel user classification algorithm based on the prediction of train user mobility; on this basis, we also study an improved interference alignment algorithm. The simulation environment description, network parameter settings, and the experimental results and analysis are given in Section IV. Finally, Section V summarizes the article and discusses some future work. II. RELATED WORK In HSR scenarios, the design of the network architecture is particularly important due to Doppler frequency shift, severe carriage penetration loss, the complex channel environment, and the "signaling storm" caused by group handover. The dual-hop network architecture based on on-board relays installs a train relay station (TRS) on the top of the train [16]. The user equipment (UE) can be connected to the TRS through access points (APs) deployed in the carriages, which can effectively reduce the transmission loss of the carriage, improve the quality of the received signals, and save energy at the user terminal. In order to improve the handover success rate and system stability, Tian et al. proposed a dual-antenna scheme based on the TRS, in which two antennas are installed as train relay stations at the front and rear of the train, respectively [17], [18]. However, this dual-hop network architecture with added relays increases end-to-end delay, carries the risk of "all-or-none" group handover, and must solve integration problems with the train control systems [1]. On the other hand, traditional cellular systems for high-speed trains support direct communication between the passengers and the BSes [1]. With the large-scale construction of 4G and 5G BSes and the maturity of MIMO technology, the public mobile communication network can meet the real-time communication requirements of high-speed mobile, massive, and mass-data services. A growing number of HSR passengers wirelessly connect their mobile terminals, such as mobile phones and tablets that support cellular communications, to the public mobile communication BSes to access the Internet. The interference alignment method studied in this paper is based on this single-hop communication network architecture. In recent years, owing to its feasibility and effectiveness, IA has attracted many scholars to this field. Current research focuses on various IA algorithms in ideal and non-ideal channels, combined with power allocation [19]-[21]. Many scholars believe that IA combined with power allocation is more effective for network resource allocation, which makes power-allocation-based IA one of the research hotspots. It is critical to consider both the time-varying channel state information (CSI) and reasonable power allocation to the user.
On the one hand, scholars dynamically calculate the precoding matrix and power allocation according to the time-varying channel matrix, as in [15], [19]. On the other hand, game theory is used to calculate the power allocation that maximizes system performance, as in [20], [21]. Literature [20] is a representative study of dynamic power allocation: for the case of CSI with delay and error, a joint interference phase alignment algorithm based on Bayesian estimation and inter-stream power allocation for multi-cell multiuser MIMO broadcast channels (MIMO-BC) is proposed. In [21], in order to improve the transmission rate of cognitive radio MIMO (CR-MIMO) systems, a game-theory-based IA algorithm is proposed. The algorithm uses the water-filling algorithm to allocate power to the primary user and designs the secondary users' precoding so that their signals fall into the sub-channels left unassigned power by the primary user; the multiple interference links between the different users then constitute a game, and the model is solved to achieve efficient IA between the secondary users. However, there are not many algorithms verified for high-speed movement in UDNs. Furthermore, the existing methods lack consideration of the classification of mobile users. It is necessary to carry out reasonable power distribution and interference management strategies according to the user classification results. In terms of user classification, many scholars believe that user classification is the basis for improving communication quality and system resource allocation. Existing user classification algorithms are mainly designed around three indicators: the distance between the user and the BS, the signal-to-interference-plus-noise ratio (SINR) received by the user, and the reference signal received power (RSRP) [22]-[24]. Czerwinski et al. set a threshold on the distance between the user and the BS in literature [22]: if the distance is greater than this threshold, the user is judged to be an edge user; otherwise, a center user. SINR better reflects the interference situation and signal quality of users and is an important indicator for evaluating the channel state. Therefore, Novlan et al. set a rational SINR threshold to distinguish user categories according to experience in literature [23]: if the user's received SINR is greater than this threshold, the user is a center user; otherwise, an edge user. In literature [24], a user classification method based on an RSRP double-threshold escalation mechanism was presented by Qi et al. This method sets an upper limit and a lower limit on RSRP, with a transition zone between the two limits for user type discrimination. In this way, even if the user is in a channel with drastic performance changes, the user category will not change frequently, effectively reducing the waste of network resources. However, the factors that affect users' channel quality are multiple, and classifying users by only one attribute tends to be one-sided. Hence, some researchers have proposed using multiple attributes for user classification. Based on fuzzy C-means (FCM) theory, Cai and Yan [25] presented an improved granulation feature weighted multi-label classification algorithm.
The algorithm granulates the label space, calculates the membership degree of each feature parameter to the labelled granules and their correlation by information gain, and determines the weight coefficient of each feature for weighting, thus determining the user category. The algorithm solves the problems of feature-label correlation and label combination explosion, and achieves good results on the evaluation indexes. In literature [26], Zhan designed a two-attribute classification algorithm. The algorithm divides the users into three categories according to their distances to the BS, and then classifies the medium-distance users according to their SINR values. Finally, the classification results are used for interference management of the system by an interference coordination algorithm, which greatly improves the SINR of the central users and the total throughput of the system. In the HSR scenario, the high-speed mobility of trains has a great impact on the accuracy of user classification, so the user classification algorithm needs to consider mobility prediction. In literature [27], Lin et al. proposed a Markov location prediction algorithm based on similarity clustering of user mobile behavior, which can help solve the user classification problem requiring mobility prediction in HSR scenarios. It can be seen that there are many user classification algorithms, and the various algorithms have their own characteristics in solving practical problems. However, there are not many studies and discussions on the communication quality of users in HSR communication systems. Therefore, some problems are still worthy of consideration and discussion. Firstly, how to reasonably determine the edge users based on location prediction of mobile users in a railway wireless communication system. Secondly, how to solve the problem of channel quality degradation caused by high-speed mobility in railway wireless communication. With the above questions and research objectives, this article carries out research on an IA algorithm considering user classification in the HSR environment. A. SYSTEM MODEL In the HSR environment, the wireless BSes are generally deployed along the railway track in a strip. With the development of wireless communication in recent years, the types of BSes distributed on both sides of the track have become more diversified, including GSM-R, LTE-R, and 5G BSes. We consider single-hop communication between the on-board passengers and the BSes, and the train passes through the heterogeneous network covered by these BSes at high speed, as shown in Figure 1. In the above scenario, a user on the train will enter the boundary of a cell at high speed, cross the central area of the cell, reach the boundary of the cell again, and then cross into the next base station's coverage area. In this process, the Doppler effect caused by the rapid movement keeps the users' signal quality in a state of instantaneous random fluctuation, thus affecting the communication service quality of the users on the train. For the BS, as the train moves quickly into its coverage area, there is a surge in the number of users, which impacts network performance. Therefore, the BS needs to sensitively distinguish the user types and provide differentiated services to different users to ensure the QoS of each user.
B. FUZZIFICATION OF MOBILE USERS' LOCATION ATTRIBUTES In the HSR environment, the location of mobile users relative to the BS is highly time-varying and needs to be considered from multiple aspects. Thus, we take into account three attributes: the distance from user i to the serving BS $BS_c$, denoted as $d_i^{BS_c}$; the distance from user i to the next BS, denoted as $d_i^{BS_{next}}$; and the SINR of the received signals. In the process of discretizing continuous data, the traditional algorithm generally divides the continuous value domain into several discrete intervals by setting several thresholds and defines a certain output value in each interval as the quantization result, which often leads to a large quantization error. The fuzzy C-means (FCM) clustering algorithm [12] can obtain the membership degree of each sample point to all centers by optimizing an objective function, thereby determining the category of the sample points and automatically classifying the sample data, which is more objective, reasonable, and scientific. Therefore, we use the FCM clustering algorithm for fuzzy processing of the continuous attributes, including $d_i^{BS_c}$ and SINR. According to the FCM clustering method, we first need to determine the objective function to be optimized, and then achieve user classification by minimizing this objective function. The membership degree $U$ indicates how strongly a user belongs to a certain type. The fuzzy c-partition is performed over the user space $X = \{x_1, x_2, \ldots, x_M\}$. The objective function of FCM is defined as shown in Equation (1):
$$J(U, Z) = \sum_{i=1}^{M}\sum_{j=1}^{c}\mu_{ij}^{m}\,l_{ij}^{2} \tag{1}$$
where c represents the number of user categories and M represents the number of users. $\mu_{ij}$ represents the degree to which the i-th user belongs to the j-th category, $\mu_{ij} \in [0, 1]$, and the higher the value, the more user i is inclined to type j. $x_i$ represents the attributes of user i, $z_j$ represents the cluster center of type j, $l_{ij} = \|x_i - z_j\|$ represents the Euclidean distance between user i and the clustering center $z_j$, and m represents the fuzzy weight of the function. The memberships of each user across the different categories add up to 1, as shown in Equation (2):
$$\sum_{j=1}^{c}\mu_{ij} = 1, \quad i = 1, 2, \ldots, M \tag{2}$$
We construct a new objective function as follows to obtain the necessary conditions for the minimum of the objective function:
$$\bar{J}(U, Z, \lambda) = \sum_{i=1}^{M}\sum_{j=1}^{c}\mu_{ij}^{m}\,l_{ij}^{2} + \sum_{i=1}^{M}\lambda_i\Big(\sum_{j=1}^{c}\mu_{ij} - 1\Big) \tag{3}$$
Under the requirements of Equation (2), we seek the minimum value of the objective function: setting the partial derivatives with respect to $z_j$ and $\mu_{ij}$ equal to 0 and combining this with the solutions of Equations (1) and (3), we obtain the clustering centers as shown in Equation (4):
$$z_j = \frac{\sum_{i=1}^{M}\mu_{ij}^{m}\,x_i}{\sum_{i=1}^{M}\mu_{ij}^{m}} \tag{4}$$
Then, the membership can be calculated by Equation (5):
$$\mu_{ij}^{(k+1)} = \frac{1}{\sum_{s=1}^{c}\left(l_{ij}/l_{is}\right)^{2/(m-1)}} \tag{5}$$
where k is the number of iterations. The final effect of clustering is that the intraclass distance is minimized and the distance between classes is maximized; at this point, the sum of the weighted distances between the points and the centers is the smallest. Therefore, once the minimum of the objective function is reached, we can obtain the cluster center of each class and the membership value of each user relative to each class, and thus the final fuzzy clustering discrimination result. Finally, we calculate the final decision set based on the fuzzy multi-attribute decision-making (FMADM) algorithm.
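The center and membership updates of Equations (4) and (5) translate directly into a short iterative routine. The sketch below is a generic FCM loop consistent with those equations, not the paper's implementation; the attribute vectors and parameters (c = 2, m = 2) are illustrative. Normalizing the attributes before clustering keeps the distance metric from being dominated by the raw distance values.

```python
# Generic fuzzy C-means loop implementing the center/membership updates of
# Equations (4) and (5); data and parameters are illustrative only.
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, eps=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    M = X.shape[0]
    U = rng.random((M, c))
    U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1
    Z = np.zeros((c, X.shape[1]))
    for _ in range(iters):
        Um = U ** m
        Z = (Um.T @ X) / Um.sum(axis=0)[:, None]       # Eq. (4): centers
        dist = np.linalg.norm(X[:, None, :] - Z[None, :, :], axis=2) + 1e-12
        ratio = dist[:, :, None] / dist[:, None, :]    # l_ij / l_is
        U_new = 1.0 / (ratio ** (2.0 / (m - 1))).sum(axis=2)   # Eq. (5)
        if np.abs(U_new - U).max() < eps:
            return U_new, Z
        U = U_new
    return U, Z

# Illustrative attributes per user: [distance to serving BS (m), SINR (dB)]
X = np.array([[120., 18.], [700., 3.], [650., 5.], [80., 22.], [760., 2.]])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # normalize attributes first
U, Z = fcm(X, c=2)
print(np.round(U, 2))   # row i: membership of user i in each category
```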
C. MOBILITY PREDICTION ALGORITHM When calculating the distance from user i to the adjacent BS, which is one of the characteristic parameters of the user's position, we comprehensively consider the speed and acceleration of the user's motion in the high-speed rail carriage and construct a mobility prediction model. We set $v_e$ as the maximum speed of user i and a as the acceleration, which is obtained from Equation (6), where $a_{max}$ and $a_{min}$ are the maximum and minimum accelerations of user i, respectively. Assuming that $v(t)$ is the instantaneous speed at moment t, and letting $\beta = (a_{max} - a_{min})/v_e$, we can obtain the predicted acceleration $a(t)$ at moment t from Equation (7) and the velocity $v(t)$ from Equation (8), where $v_0$ is the initial velocity. Solving the differential Equations (7) and (8) yields $v(t)$, as shown in Equation (9), in which $\zeta = \frac{v_0 + a_{min} + (v_e - v_0)\,a_{max}}{a_{max} - a_{min}}$ is a constant that can be calculated by solving the differential equation. Assuming that $\beta_0 = a_{max} - \frac{(a_{max} - a_{min})\,v_0}{v_e}\,e^{-\beta t}$, we can obtain a simpler expression for calculating $v(t)$, as shown in Equation (10). As can be seen from the above equation, as long as the initial speed of the train and the current time t are known, the speed at the next moment can be calculated. We set the position of user i at the current time as $(x_0, y_0)$ and the position of the next adjacent base station as $(X, Y)$. Given that the current speed of the train is $v(t)$, after T seconds the position of user i can be calculated by Equation (11), and $d_i^{BS_{next}}$ can be obtained by Equation (12). Here, in order to ensure the accuracy of the prediction algorithm, we define T as the update period of the user's measured position and predicted position information; that is, every T seconds, we update the user's current measured position information and predict the position information at the beginning of the next period. Next, we compare the obtained distance $d_i^{BS_{next}}$ with the radius $r_{next}$ of the next adjacent cell. If $d_i^{BS_{next}} < r_{next}$, the current location of user i is in the overlapping coverage area at the edge of the two cells, and the user will enter the coverage area of the next adjacent base station at high speed; we then adopt the FCM algorithm to calculate the membership for each attribute and judge the user's category. If $d_i^{BS_{next}} \geq r_{next}$, user i is far from the next adjacent cell and will stay in the current cell for a period of time; user i is then directly determined to be a central user and the membership function value is set accordingly, so as to reduce the resource waste and computation caused by path loss.
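A numerical sketch of the per-period prediction step follows. Because the display bodies of Equations (7)-(10) are not reproduced here, the velocity model below assumes the simple law a(t) = a_max − βv(t), which is consistent with β = (a_max − a_min)/v_e and an exponential e^(−βt) solution; treat that law, and the first-order position update standing in for Equation (11), as assumptions rather than the paper's exact formulas. The distance test against r_next follows the text directly.

```python
# Per-period mobility prediction sketch. The acceleration law
# a(t) = a_max - beta * v(t) is an ASSUMED closed form, and the position
# update is a first-order stand-in for Eq. (11); the distance comparison
# against r_next follows the text.
import math

def predict_user(x0, y0, v0, a_max, a_min, v_e, T, next_bs, r_next):
    beta = (a_max - a_min) / v_e
    # closed-form solution of dv/dt = a_max - beta * v under the assumed law
    v_T = (a_max / beta) * (1 - math.exp(-beta * T)) + v0 * math.exp(-beta * T)
    x_T = x0 + v_T * T                       # train advances along the track
    X, Y = next_bs
    d_next = math.hypot(X - x_T, Y - y0)     # Eq. (12): Euclidean distance
    if d_next < r_next:
        return "candidate edge user (run FCM classification)", d_next
    return "center user (skip FCM)", d_next

state, d = predict_user(x0=0.0, y0=0.0, v0=60.0, a_max=1.0, a_min=0.1,
                        v_e=97.0, T=1.0, next_bs=(800.0, 30.0), r_next=800.0)
print(state, round(d, 1))
```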
D. DECISION-MAKING OF MOBILE USER CLASSIFICATION

Through the above two sections, we obtained the membership function values of the position attributes of user i and its mobility prediction result, respectively. Next, we set reasonable weights for the different attributes and use the weighted-sum multi-attribute decision-making (MADM) algorithm to calculate the final type decision parameter Q_i of user i. Referring to Table 1, the attributes are fuzzified through the fuzzy decision process. Here, w_j denotes the weight of the j-th attribute and must satisfy Σ_j w_j = 1; μ_ij represents the maximum membership of user i under attribute j; and q_ij represents the category corresponding to that maximum membership, with q_ij ∈ {0, 1}. If q_ij = 1, then, considering the single attribute j only, user i is an edge user of the current cell; if q_ij = 0, user i is a central user of the cell under attribute j alone. To achieve a more scientific division of user i, we assign the corresponding weight w_j to each attribute and compute the weighted sum of the user's attributes to obtain the classification discrimination parameter Q_i of user i:

  Q_i = Σ_j w_j · μ_ij · q_ij.  (13)

Finally, we determine the type of the user by comparing Q_i with 0.5:

  D_i = 1 if Q_i ≥ 0.5; D_i = 0 otherwise,  (14)

where D_i ∈ {0, 1} denotes the final type of user i. We also define U_c as the central user set and U_e as the edge user set. If D_i = 0, user i is judged to be a central user; if D_i = 1, user i is judged to be an edge user.

The implementation steps of the mobility-prediction-based user classification algorithm are shown in Algorithm 1; its core steps are: if d_i^{BS_next} < r_next, compute d_i^{BS_next} by Equations (11) and (12) and obtain the membership grade μ_i2 and its type q_i2; otherwise, treat the train user as a central user with membership grade μ_i2 = 1; then make all train users' decisions using the weighted sum of Equations (13) and (14).
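Equations (13) and (14) amount to a weighted vote over the per-attribute fuzzy results. The sketch below assumes the Q_i = Σ_j w_j·μ_ij·q_ij form given above; the membership values are illustrative.

```python
def classify_user(mu, q, w, threshold=0.5):
    """Eq. (13): Q_i = sum_j w_j * mu_ij * q_ij; Eq. (14): threshold Q_i at 0.5."""
    assert abs(sum(w) - 1.0) < 1e-9, "attribute weights must sum to 1"
    Q = sum(wj * mj * qj for wj, mj, qj in zip(w, mu, q))
    D = 1 if Q >= threshold else 0           # 1 -> edge user (U_e), 0 -> central user (U_c)
    return Q, D

# Three attributes (d_i^{BS_c}, d_i^{BS_next}, SINR) with the weights used later (0.5/0.3/0.2).
Q, D = classify_user(mu=[0.8, 0.9, 0.6], q=[1, 1, 0], w=[0.5, 0.3, 0.2])
print(f"Q_i = {Q:.2f} -> {'edge' if D else 'central'} user")
```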
E. A MOBILITY PREDICTION BASED INTERFERENCE ALIGNMENT ALGORITHM

In this section, an IA algorithm based on the mobile user classification proposed above is used to realize interference management in HSR wireless communication networks. First, we assume that the current base station provides communication and interference management services for the central users of the cell. Because the edge users will move and be handed over to the next adjacent cell, we arrange for the next adjacent base station to provide their communication service and interference management. Assume there are M transmitters corresponding to M receivers in the MIMO limited-feedback channel, each BS has N_t transmit antennas, each user has N_r receive antennas, and the BS transmit power is p_t; the signal power received by user i is then

  p_i^{rx} = η_{m,i} · p_i,  (15)

where η_{m,i} is the transmission path loss (gain) of the channel between transmitter m and user i, and p_i is the transmit power allocated to user i. If i ∈ U_c, the transmit power is assigned by the current serving BS; if i ∈ U_e, the next adjacent BS allocates the transmit power for the service.

As an effective interference management scheme, IA divides the signal space into a desired-signal subspace and an interference subspace from the perspective of maximizing the degrees of freedom. Since the dimension of the desired-signal subspace determines the degrees of freedom and the throughput of the system, IA enlarges the desired-signal subspace by compressing the dimension of the interference subspace, so that the system capacity is not constrained by the network size and the system performance can be improved. Constructing the transmitter precoding matrices and the receiver interference-suppression matrices is the key to achieving IA. To simplify the model, we consider only the downlink channel from the BS to the user, owing to the reciprocity of the wireless channel.

We define x_i as the transmit data of transmitter i and H_ij as the channel matrix from transmitter j to receiver i, of dimension N_r × N_t; V_j is the precoding matrix of transmitter j. When user i communicates with BS i, the interference reaching the receiver consists of all transmitted signals except the one from transmitter i. In the IA technique, the channel matrices H_ij of the interfering signals at receiver i are therefore combined with the corresponding precoding matrices V_j so that the signal subspaces of this set are compressed into as few dimensions as possible, effectively reducing the number of interference signals to be distinguished. In this way, receiver i can construct a receive suppression matrix U_i that forces the interference to zero without affecting the reception of the desired signal:

  U_i^H H_ij V_j = 0 for all j ≠ i, rank(U_i^H H_ii V_i) = d_i,  (16)

where d_i is the number of desired data streams of user i. We apply the above user-classification-based power allocation to the classical Max-SINR IA algorithm [28] to realize the improved interference management strategy. The signal ŷ_i finally received by user i can then be represented as

  ŷ_i = U_i^H H_ii V_i x_i + Σ_{j≠i} U_i^H H_ij V_j x_j + U_i^H n_i,  (17)

where n_i is additive complex white Gaussian noise (ACWGN) with mean 0 and variance σ². The received SINR of user i is

  SINR_i = p_i |U_i^H H_ii V_i|² / (Σ_{j≠i} p_j |U_i^H H_ij V_j|² + σ²),  (18)

and, according to Shannon theory, the total throughput of the network is

  C = Σ_{i=1}^{M} log₂(1 + SINR_i).  (19)

Equations (18) and (19) show that the system throughput in a high-speed mobile environment is determined by the quality and power of each user's received signal. Reasonable user classification significantly improves the effectiveness of IA, which is particularly important for guaranteeing the performance of the communication system. In particular, for edge users, if the current base station keeps serving them without considering the classification results, the received signal quality will keep declining as the users rapidly move away from the current BS, and the BS must raise its transmit power to maintain service quality, which degrades system performance. With user classification, since an edge user is about to be handed over to the next adjacent BS, we let the adjacent BS serve it. This not only reduces the load of the current BS but also avoids instantly bringing a large number of users into a single cell, improving the total system capacity to a certain extent.
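The per-user SINR and sum rate of Equations (18) and (19) can be sketched as follows; random unit-norm beamformers stand in for the actual Max-SINR iteration of [28], and the toy dimensions and noise level are illustrative.

```python
import numpy as np

def received_sinr(H, V, U, p, sigma2, i):
    """Eq. (18): desired-signal power over interference-plus-noise after combining."""
    M = len(p)
    desired = p[i] * abs(U[i].conj() @ H[i][i] @ V[i]) ** 2
    interference = sum(p[j] * abs(U[i].conj() @ H[i][j] @ V[j]) ** 2
                       for j in range(M) if j != i)
    return desired / (interference + sigma2)  # unit-norm U[i] keeps noise power at sigma2

def sum_throughput(H, V, U, p, sigma2):
    """Eq. (19): Shannon sum rate over all users."""
    return sum(np.log2(1.0 + received_sinr(H, V, U, p, sigma2, i)) for i in range(len(p)))

rng = np.random.default_rng(1)
M, Nt, Nr = 3, 2, 2
# H[i][j]: Rayleigh channel from transmitter j to receiver i; the path loss eta_{m,i}
# of Eq. (15) is folded into the channel scaling here.
H = [[(rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
      for _ in range(M)] for _ in range(M)]
unit = lambda x: x / np.linalg.norm(x)
V = [unit(rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) for _ in range(M)]  # precoders
U = [unit(rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)) for _ in range(M)]  # combiners
print(f"sum rate: {sum_throughput(H, V, U, p=[1.0] * M, sigma2=0.1):.2f} bit/s/Hz")
```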
A. ENVIRONMENT SETTING

In this section, we numerically evaluate our IA algorithm using the simulation topology shown in Figure 2. Based on the single-hop HSR communication network architecture, we deployed a railway line with a length of 3 km. Six LTE BSes were deployed on both sides of the track at 800 m intervals, each cell having a coverage radius of 800 m. Each train is 100 m long and carries 10 users, whose positions are evenly distributed in the cars. In this simulation scenario, the high-speed movement of the train produces a significant Doppler frequency shift; however, increasing the train speed does not change the multipath model, it only increases the Doppler shift of each path. Therefore, during the experiment, to keep the channel matrix realistic, we set the Doppler frequency shift on each path according to the train's running speed and captured the influence of train mobility on the channel by establishing a small-scale fading model of the high-speed railway wireless channel. In the initial stage of the experiment, the train departed from the beginning of the track at a random speed and random acceleration, and each user randomly attached to one of the nearest BSes on either side of the track, with the user type set to its default value. Once the train was running, the users were classified according to the algorithm in this paper. The interval between experiments was 2 s; the attribute weights were w_1 = 0.5, w_2 = 0.3, and w_3 = 0.2; and the other key parameters are summarized in Table 2. Based on this setting, the network performance results were obtained and summarized.

B. NETWORK PERFORMANCE ANALYSIS

The performance indicators in our study are the average bit error rate (BER), the throughput, and the SINR of the users. We examined how each indicator changes as the downlink SNR varies from −5 dB to 20 dB, with the train running at five average speeds: 100 km/h, 150 km/h, 200 km/h, 250 km/h, and 300 km/h. The simulation results are shown in Figures 3, 4, and 5; each value is the mean obtained from 60 runs of the algorithm per experiment. From Figures 3, 4, and 5 we can see that the performance of the wireless network in the railway environment is strongly affected by the train speed. As the train speed increases, the network throughput and the SINR of the signal received by the user both decrease, while the average BER increases. Specifically, Figure 3 reflects the relationship between the system BER and the train speed. Under high-speed movement, the user has a large radial velocity relative to the BS, so the received signal experiences a large Doppler frequency shift and the BER performance deteriorates as the speed increases. In addition, when the channel transmission quality is poor, i.e., SNR < 0 dB, the BER degrades significantly with increasing train speed; when SNR > 5 dB, the BER performance tends to stabilize. As Figure 4 shows, when the train runs at different speeds, the value of the attribute d_i^{BS_next} changes, so the user classification results based on mobility prediction change accordingly. When the train speed is high, a user is likely to be outside the coverage of the current BS by the next classification instant and will therefore be judged an edge user, which lowers the throughput of the current BS. Figure 5 shows that the received SINR is strongly affected by the channel quality, especially when the channel transmission quality is poor. Comparing the SINR curves at the five average speeds shows that, under the mobility-prediction-based IA algorithm presented in this article, the trend of the received SINR is consistent across speeds, and when the channel transmission conditions are good, the performance stabilizes at about 10 dB.
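The speed dependence observed above enters the channel through the per-path Doppler shift. A minimal sketch, assuming the classical f_d = (v/c)·f_c·cos θ relation and an illustrative 2 GHz carrier (not a value taken from Table 2):

```python
import numpy as np

def path_doppler_shifts(speed_kmh, fc_hz, angles_rad):
    """Per-path Doppler shift f_d = (v / c) * f_c * cos(theta) for each arrival angle."""
    c = 3.0e8                                # speed of light, m/s
    v = speed_kmh / 3.6                      # km/h -> m/s
    return (v / c) * fc_hz * np.cos(angles_rad)

# At 300 km/h and a 2 GHz carrier the maximum shift is about 555 Hz (head-on path).
angles = np.array([0.0, np.pi / 4, np.pi / 2, np.pi])
print(path_doppler_shifts(300.0, 2.0e9, angles).round(1))   # [555.6 392.8 0. -555.6]
```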
C. PERFORMANCE COMPARISON ANALYSIS

To further verify the performance of the algorithm, we applied the mobility-prediction-based user classification algorithm (Classification-New), the SINR-based user classification algorithm (Classification-SINR), the distance-based user classification algorithm (Classification-Distance), the improved distance-and-SINR user classification algorithm (Improved D&SINR), and the classification-free algorithm (non-Classification) to the IA algorithm model constructed in this article. In the HSR wireless communication network architecture established in Section 4.1, we compared the network performance, including average BER, total system throughput, and user SINR, at train average speeds of 50 km/h, 100 km/h, 200 km/h, and 300 km/h, respectively. The SINR threshold of Classification-SINR was 5 dB, the distance threshold of Classification-Distance was 2/3·r, the distance threshold L of Improved D&SINR was 0.4·r, and the area window size M was 0.2·r. The simulation results are shown in Figures 6, 7, and 8.

Figure 6 shows that the system BER obtained with the IA algorithm proposed in this paper is lower than that of the other algorithms. The main reason is that, when classifying users, we consider not only the relationship between the users and the current serving BS but also, through mobility prediction, the positional relationship between the users and the adjacent BSes; the more comprehensive the factors, the more reasonable the result. In addition, when classifying the mobile users on the train, our algorithm uses the FCM algorithm for single-attribute user division, which is more reasonable and scientific than hard partitioning based on a fixed threshold.

As shown in Figure 7, in terms of total system throughput, Classification-New performs best and non-Classification performs worst. In a system without user classification, when the train completely enters a cell, all the users on the train switch serving BS at the same time. Users with different channel qualities then share the same BS: users with good channels obtain good signal quality while users with poor channels obtain poor signal quality, which tends to polarize performance. From the BS's point of view, when a train just enters a cell, the number of users in the cell surges instantly and a large number of users share limited spectrum resources, decreasing system throughput, while the uneven channel states also increase the system BER. In a system with the user classification algorithm, the edge users are handed over to the BS they are about to enter, which both reduces the load of the current BS and avoids bringing a mass of mobile users into a single cell instantly; the total network capacity can therefore be increased to a certain extent. In particular, as shown in Figure 7(a), when the train runs at a high average speed of 300 km/h, the IA algorithm based on Classification-New greatly improves the total system throughput when the channel SNR is relatively low: when SNR < 15 dB, the total throughput with Classification-New is optimal.
This is because the algorithm takes into account the mobility of users and fully considers the positional relationship between the users and the current serving BS as well as between the users and the adjacent BSes, so as to make reasonable classification and handover decisions; the total system throughput is thereby improved. When SNR ≥ 15 dB, the channel state is good and the users' channel quality is mainly determined by their distance to the serving BS; the Improved D&SINR algorithm divides users into three parts according to distance, which is more fine-grained than the other algorithms, so its system throughput is better in this regime. In terms of the SINR received by users, as shown in Figure 8, the IA algorithm based on Classification-New performs best; its advantages are most significant under poor channel transmission quality and high train average speed. This is because the Classification-New algorithm is designed to improve the SINR of edge users, thereby closing the gap between the SINR of central and edge users, and therefore performs well in SINR.

V. CONCLUSION

In this work, to improve the service quality of mobile communication users in the high-speed railway environment and to minimize the signal interference between users during high-speed train operation, we classify the communicating users according to their mobility characteristics and improve the efficiency of interference management. To obtain a practical solution, combining the distance from the user to the current BS (d_i^{BS_c}), the distance from the user to the next BS (d_i^{BS_next}), and the current SINR, our FCM algorithm computes the user's type under each single attribute for d_i^{BS_c} and SINR. By setting reasonable weights and summing over the three attributes, the user's type is obtained. We then improve the power allocation for users at the cell edge and achieve effective interference management based on the Max-SINR IA algorithm. Our experimental evaluation shows that the proposed algorithm is an effective method for improving user service quality and network performance in the high-speed railway communication environment. In future work, we will consider using machine learning methods to optimize the attribute weights in the user classification algorithm.
2020-04-30T09:04:12.394Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "f7e39ed938ca30a8bcd495fa9e1394ffb34258c1", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09076633.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "ca1e04136a4fe16422302f22c1192d2827d9cf9c", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
114519401
pes2o/s2orc
v3-fos-license
Investments in oil field development by the example of Tomsk Oblast

The article describes the geologic structure of a formation located not far from Strezhevoy, Tomsk Oblast. The formation has been poorly studied by seismic methods. The reserves of categories C1 and C2 as well as the hydrocarbon potential are presented. Depending on geologic knowledge and formation conditions, 4 exploratory and 39 production wells are designed to be drilled. The article presents the investment plan, including development and oil export expenditures and the implementation cost calculation.

Introduction

The field license area of 1192 km² is located 460 km north-west of Tomsk and 300 km south-east of Strezhevoy. The nearest settlement is Kedrovy, located 90 km south-east of the field; it hosts the major oil-and-gas producing industry of the southern territory. There are no highways or railways in this area, and the territory of the license area is considered of little value for agriculture and forestry. The seismic grid interval of CDPM is 0.77 km/km² in total and 0.01 km/km² for 1986-99; the field is poorly studied by seismic CDP methods [5]. The geologic structure of the formation is complex; the oil deposit is confined to the terrigenous Upper Jurassic sandstone formations J1² and J1³. The major oil-and-gas formation consists of horizons J1 and J2. In terms of reserves it is classified as small, with up to 5 mln tons. The field has been developed since 2009, and in 2012 the first production wells were drilled. The project drilling depth is 2961 m. In total, 4 exploratory and 39 production wells are designed to be drilled [3].

Hydrocarbon potential and reserves

The oil formation is confined to sandstone formations J1² and J1³ of horizon J1 of the Vasyugan suite. Formation J1² is a hanging type controlled by the OWC from west to south at minus 2345 m and, from the east, lithologically capillary sealed at minus 2300 m to minus 2320 m. Formation J1³ is developed throughout the elevation area but is significantly shaled out in the east, forming a lithological seal; there, oil accumulation is controlled by the OWC from the west to the south at minus 2355 m. From the east, the boundary of the oil field is a lithological seal revealed within the hypsometric limits from minus 2320 m to minus 2300 m. The oil deposit is a blanket-like accumulation with an overlying seal. According to the State Reserves Register, the parameters of the prospected deposits are: gross/net pay thickness of producing reservoir J1² of 3.0/2.6 m; effective porosity of 15%; hydrocarbon saturation of 0.59; and oil recovery efficiency of 0.3.

Forecasting development parameters

To forecast the development parameters, the ultimate reserves of categories C1 and C2 were used, with an assumed confirmation factor for category C2 of 50%. The following geologic information was used for forecasting: the layout of net formation thickness distribution (indirectly reflecting the variability of reservoir properties) and the layout of oil-saturated formation thickness distribution [4]. The layout of net formation thickness distribution of formations J1² and J1³ shows improved filtration characteristics at the crest. The production well pattern is based on the layout of oil-saturated formation thickness distribution of formations J1² and J1³.
In terms of exploration and reserves statement, the field development is planned in stages. Stage I: production testing of well No. 1 after its recommissioning, according to the well test operation plan (1 year), and recommissioning of the exploration well with its test implementation (2 years). Within this period, appraisal drilling of 4 wells is performed to delineate the field and identify other formations; construction of development and pipeline facilities begins at this stage. Stage II: drilling of six production wells and field development; simultaneously, grid-type seismic-geophysical operations are performed to refine the structural field plan and field junction zones. Construction of development and pipeline facilities is completed. Artificial lift is implemented (electric centrifugal pumps (ECP) and beam-balanced pumps (BBP) with capacities from 10 m³/day to 60 m³/day); the ratio of wells equipped with ECP to BBP is 40/60. This period lasts 2 years, after which the field development plan is designed. Stage III: implementation of the field development plan.

Over a period of 15 years, to explore the geological structure of the field, the seismic grid is planned to be made denser. The initial cost of geologic exploration for additional field study is about 7 mln roubles (100 linear km of 2D common-depth-point seismic) (Figure 1) [2]. The formation is planned to be drilled in a production well pattern of 700 x 700 m with a boundary net-pay thickness for well placement of 2 m; in total, 10 production wells are designed to be drilled, 3 of them injection wells. During field development, special attention should be paid to the dynamics of the gas-oil ratio, as a gas cap may be present. At the first stage of field development, the recovery rate of recoverable resources will not exceed 3% (of the total); the forecast annual oil production is 18,400-33,000 tons, with cumulative oil of 51,400 tons. At the second stage, six production wells are brought in, two of them with hydraulic fracturing treatment. The recovery rate reaches about 10% of all recoverable resources, annual recovery rises from 40,000 to 112,000 tons, water cut increases to 1.5%, and oil production at the second stage totals 182,400 tons. According to the field development plan, at the third stage 4 more wells are drilled, 3 of them with hydraulic fracturing treatment. Annual production reaches 149,000 tons and water cut reaches 92% by the end of the period, while the recovery rate of recoverable resources declines from 14.5% to 1%. At this stage, 840,800 tons of oil are planned to be recovered. A spot contour waterflooding system will be implemented in the field; in total, 3 wells will be converted to injection after oil production. All in all, over 15 years 1,075,000 tons of oil will be produced (97.0% of the approved reserves of categories C1+C2).

Economic analysis and calculations

The commercial efficiency of investments in field development is estimated on the basis of the accepted drill footage, production level, and well stock conditions. The key economic indicators are calculated for 15 years of field development. The calculation includes the taxes and duties reported and paid by an enterprise to the budgets of different levels in the course of its economic activity (Table 1): (1) unified production tax of 340 roubles/t, with a coefficient reflecting the dynamics of world oil market prices; (2) property tax of 2%; (3) income tax of 24%.
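The payback and NPV calculations in the economic analysis that follows reduce to simple discounted cash-flow arithmetic. The sketch below uses made-up annual cash flows (the article's detailed flows are not reproduced here); the 40% discount factor is the one cited in the text.

```python
def npv(rate, cash_flows):
    """Net present value of year-indexed cash flows (year 0 = initial investment)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_years(cash_flows):
    """First year in which cumulative (undiscounted) cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0.0:
            return t
    return None  # does not pay back within the horizon

# Illustrative flows in mln roubles: capital outlays up front, revenue ramping with production.
flows = [-400.0, -150.0, 60.0, 120.0, 180.0, 210.0, 200.0, 180.0, 150.0, 120.0,
         100.0, 80.0, 60.0, 40.0, 30.0, 20.0]
print(f"NPV at 40% discount: {npv(0.40, flows):.1f} mln roubles")
print(f"payback period: {payback_years(flows)} years")
```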
(4) road fund tax of 1%; (5) unified social tax of 35.6%. The commercial evaluation of field development was made under the following conditions: 30% export sales and 70% domestic sales. The sale price in the domestic market is 8200 roubles/t (with VAT), and in the international market 50 USD/barrel. The export cost for oil transport to the border is calculated at a rate of 22 USD/t, and the duty is calculated in accordance with the RF Law "On Customs Tariff", taking into account the average price of Urals crude oil in the world market [1]. The most critical input for investment project calculations is the oil price: cuts in the oil sale price in the domestic market and for export increase the payback period, whereas an oil price increase (domestic up to 8200 roubles/t and export up to 50 USD/barrel) allows the investments in field development to be paid back within 5.1 years. The present paper assumes confirmation of category C2 reserves at 50%. The maximum risk in the field development is non-confirmation of category C2 reserves by more than 50%; in that case, the dynamics of fluid and oil withdrawal change and, by expert appraisal, the payback period for these operations may exceed 8 years, which makes them economically unattractive under the existing norms [3]. The field development is planned in stages, taking into account the geological terms and reserves conditions described above. The commercial efficiency of field development is calculated using the system of indicators specified by the current "Methodical recommendations for investment project evaluation and selection for support". According to the decisions taken in the present paper, the development of the promising N fields carries high investment risk, as evidenced by the small cash flow (net present value, NPV) at a discount factor of 40%. Changes in oil prices and a possible decrease in the tax burden (MET, profit tax relief) could raise the performance indicators of the project and make it attractive for investors. Testing the sensitivity of the indicators to oil price changes: with a price decline of 28.5% (export down to 35 USD/barrel and domestic prices down to 4500 roubles/t), the payback period of capital investments exceeds 10 years; with price increases (export oil up to 55 USD/barrel and domestic prices up to 8000 roubles/t), the payback period decreases to 5.1 years.

Conclusion

Thus, the greatest effect can be achieved by reducing the cost of constructing new wells by 10%; the capital exposure then decreases by nearly 8% and, consequently, the payback period of oil field development decreases. The evaluation of performance factors for oil field development is carried out on the basis of proven oil reserves and the proposed operational process design. To forecast the production indicators, the ultimate reserves of categories C1 and C2 are used, assuming 50% confirmation of category C2; at lower confirmation, the development is non-commercial under current economic conditions. The oil field has a well-developed infrastructure: there are special facilities for oil and gas extraction, various vehicles, cranes, construction equipment, and relocatable buildings for personnel. Oil produced at the field will be transported by pipeline and then further processed.
2018-10-22T07:17:24.964Z
2015-10-01T00:00:00.000
{ "year": 2015, "sha1": "73c2d04ae99bf35d719bed2682e2a50060dbf5fc", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/27/1/012017", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "20678f47d8ba054d66a81b800db606db540fbd15", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Engineering" ] }
196611041
pes2o/s2orc
v3-fos-license
Double Maximum Ratios of Viruses to Bacteria in the Water Column: Implications for Different Regulating Mechanisms

Viruses play an important role in limiting bacterial abundance in oceans and, hence, in regulating bacterial biogeochemical functions. A cruise was conducted in September 2005 along a transect in the deep South China Sea (SCS). The results showed double maxima in the ratio of viral to bacterial abundance (VBR) in the water column: a deep maximum at 800-1000 m coinciding with the oxygen minimum zone (OMZ) and a subsurface maximum at 50-100 m near the subsurface chlorophyll maximum (SCM) layer. At the deep VBR maximum, both viral and bacterial abundances were lower than in the upper layer, but the former was reduced less than the latter. In contrast, at the subsurface VBR maximum, both viral and bacterial abundances increased to their maxima, with viral abundance increasing more than bacterial abundance. The results suggest that the two VBR maxima were formed by different mechanisms. In the SCM, the VBR maximum is due to an abundant supply of organic matter, which increases bacterial growth and stimulates viral abundance even faster. In contrast, in the OMZ, organic matter has been consumed and limits bacterial growth, but viruses are less limited by organic matter and continue to infect bacteria, leading to the maximum VBR. The OMZ in the deep-water column of the oceans is over hundreds of years old and receives a constant supply of organic matter from the water above; however, the oxygen level is not depleted to anoxia. Bacterial respiration is largely responsible for oxygen consumption in the OMZ, and hence any process that limits bacterial abundance and respiration contributes to the variation in the OMZ. Viral control of bacterial abundance can be a potential mechanism responsible for slowing oxygen consumption toward anoxia in the OMZ. Our finding provides preliminary evidence that viruses are an important player in controlling bacterial abundance when bacterial growth is limited by organic matter, and thus in regulating the decomposition of organic matter, oxygen consumption, and nutrient remineralization in deep oceans.

INTRODUCTION

The viral shunt plays an important role in biogeochemical processes in oceans (Suttle, 2005). Viruses are present at concentrations on the order of 10⁸ ml⁻¹ in coastal waters, 10⁷ ml⁻¹ in offshore waters, and 10⁶ ml⁻¹ in oceanic waters, one order of magnitude higher than the bacterial abundance in coastal, offshore, and oceanic waters, respectively. Most observations of viruses are focused on surface waters. Direct counts of viral particles in the deep oceans (>1000 m) are relatively scarce and have been conducted only in the southeastern Gulf of Mexico (Boehme et al., 1993), the Mediterranean Sea (Magagnini et al., 2007; Winter et al., 2009; Fonda Umani et al., 2010; Magiopoulos and Pitta, 2012), the North Pacific (Hara et al., 1996), the Atlantic (Parada et al., 2007; De Corte et al., 2010; De Corte et al., 2012; De Corte et al., 2016), the Southern Ocean (Yang et al., 2014), and the tropical and subtropical waters of the global ocean (Lara et al., 2017).
The subsurface maximum of viral abundance has often been observed, for example at 50 m in the southeastern Gulf of Mexico (Boehme et al., 1993) and in the North Pacific (Hara et al., 1996), at 75-100 m at all stations of the Ionian, Libyan, and South Aegean Seas (Magiopoulos and Pitta, 2012), and at 50-100 m in the Mediterranean Sea (Magagnini et al., 2007). In the North Aegean Sea, the maximum viral abundance occurred at 2 to 50 m (Magiopoulos and Pitta, 2012). The ratio of viral to bacterial abundance (VBR) is often used as an indicator of the relationship between bacteria and viruses. The ratio of viral abundance to prokaryotic abundance results from a comprehensive balance of factors such as viral production, the transport of viruses on sinking particles, decay rates, and life strategies (Hara et al., 1996; Weinbauer et al., 2003; Wigington et al., 2016). Viruses affect bacterial ecological functions such as the decomposition and respiration of organic matter and thus play an important role in biogeochemical processes in the deep ocean. A previous study (Liu et al., 2015) showed that viruses reduce bacterial abundance and bacterial respiration in laboratory cultures compared with virus-free bacterial cultures. However, such viral effects depend on the nutrient conditions: viruses exert more control on bacterial abundance and activities in eutrophic seawater, but sustain the bacterial population through viral lysates of bacteria in nutrient-limited seawater. This clearly suggests that viruses play a more important role where the organic supply is limited. Dissolved oxygen (DO) is a key indicator of biological activity and biogeochemical processes in deep oceans. Because of the thermohaline circulation of the ocean and biological consumption of oxygen, an oxygen minimum zone (OMZ) exists in the middle water column of the deep oceans; the OMZ occurs in most world oceans. The formation of an OMZ is thought to result from the balance between organic-matter-sustained oxygen consumption and the thermohaline circulation (Paulmier and Ruiz-Pino, 2009). Given the long residence time of the OMZ water mass, DO should be depleted; however, DO has rarely been depleted in the OMZ. We hypothesize that viruses play a regulating role in controlling bacterial abundance where bacterial growth is limited by the organic matter in the water column. To test this hypothesis, a cruise was conducted in the northern South China Sea (SCS). The SCS is the largest inland sea in the tropical region, extending from the equator to 22°N and from 99°E to 121°E, with a surface area of approximately 3.5 × 10⁶ km². The maximum water depth is approximately 5000 m (Shaw and Chao, 1994), with an average depth of approximately 1200 m. The SCS exchanges water with the Pacific and Indian Oceans through several passages, most of which are very shallow (less than 100 m); only the Luzon Strait, with a depth of approximately 2500 m, allows substantial exchange of deeper waters with the Philippine Sea of the western Pacific (Wang et al., 2018). The SCS is oligotrophic, with high sea-surface temperature, low nutrients, low chlorophyll a, and low primary productivity (Liu et al., 2002; Wu et al., 2003; Ning et al., 2005), but relatively high viral and bacterial abundance (He et al., 2009). However, there is little information on the occurrence and characteristics of viruses and bacteria in the deep oceanic water column of the SCS.
The major objectives of this study were to test this hypothesis by investigating the vertical distribution of viruses and bacteria and to examine the role of viruses in regulating bacterial abundance and biogeochemical processes in the deep oceanic water column of the SCS.

The sampling depths covered the entire water column from the surface to 4000 m. A CTD rosette with 12 Niskin bottles was used to record vertical profiles of salinity and temperature. Water samples were collected at twelve discrete depths throughout the water column: three in the epipelagic zone (3, 50, and 100 m), four in the mesopelagic zone (200, 300, 500, and 800 m), and five in the bathypelagic zone (1000, 1500, 2000, 3000, and 4000 m). A YSI 6600 probe was slowly lowered into the water column, recording chlorophyll fluorescence down to 100 m. Samples for DO were collected following the water overflow procedure, and DO was determined by the Winkler titration method as outlined by Parsons et al. (1984).

Counting of Viruses and Bacteria

Samples for bacterial and viral abundance were transferred to 2 ml centrifuge tubes, fixed immediately with glutaraldehyde (final concentration, 0.5%), and stored in darkness for 15 min. After fixation, 0.8 ml subsamples were filtered onto 0.02 µm Anodisc filters (Whatman, Maidstone, United Kingdom) at a vacuum pressure of about 20 kPa. The filters were placed on a drop of SYBR Green I solution (final concentration, 0.25%) and stained in darkness for 15 min, according to Noble and Fuhrman (1998). The slides with the stained filters were prepared within about 30 min and stored at −20 °C, and the counting of bacteria and viruses was performed within 4 weeks. Using immersion oil (type A, Nikon), viral and bacterial particles were counted at 1000× magnification under epifluorescence illumination (100 W mercury lamp) with an Olympus BX41 microscope in the laboratory. At least 200 bacteria and viruses were counted in at least 10 fields (Noble and Fuhrman, 1998).

Nutrients

Nitrite samples were filtered through glass fiber filters (GF/F) and frozen immediately (−20 °C) until analysis. All plasticware was pre-cleaned with 10% HCl. Nitrite was analyzed colorimetrically with a SKALAR (San Plus) autoanalyzer following the JGOFS Protocols (Knap et al., 1996). Dissolved organic carbon (DOC) samples were filtered through pre-combusted 0.7 µm GF/F filters, acidified with 50 µl of 50% H₃PO₄ to pH < 2 to remove inorganic carbon, and purged with ultra-high-purity nitrogen for about 10 min before analysis to drive off the inorganic carbon (Knap et al., 1996).

Statistical Analysis

The Spearman correlation coefficient was used to test the relationships between variables. Significant differences in all parameters between depth zones were examined using analysis of variance with a Tukey post-test. All statistical analyses and the principal components analysis (PCA) were performed using SPSS Statistics 19.0 software (SPSS Inc.).
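The conversion from epifluorescence field counts to abundance per millilitre, together with the Spearman correlation used here, can be sketched as follows; the filter and field-of-view areas are typical values rather than the actual microscope calibration, and the depth profiles are illustrative numbers.

```python
import numpy as np
from scipy.stats import spearmanr

def abundance_per_ml(mean_count_per_field, filter_area_mm2, field_area_mm2, volume_ml):
    """Scale the mean field count to the whole filtration area, then to 1 ml of sample."""
    return mean_count_per_field * (filter_area_mm2 / field_area_mm2) / volume_ml

# Assumed effective filtration area ~201 mm^2, field of view ~0.01 mm^2 at 1000x,
# and the 0.8 ml filtration volume from the protocol above.
va = abundance_per_ml(35.0, 201.0, 0.01, 0.8)   # viruses ml^-1
ba = abundance_per_ml(6.0, 201.0, 0.01, 0.8)    # bacteria ml^-1
print(f"VA = {va:.2e} ml^-1, BA = {ba:.2e} ml^-1, VBR = {va / ba:.1f}")

# Spearman rank correlation between illustrative viral and bacterial depth profiles.
va_profile = np.array([21.6, 10.2, 8.9, 2.4, 3.2, 1.9, 1.5]) * 1e6
ba_profile = np.array([2.6, 1.5, 1.1, 0.4, 0.3, 0.25, 0.2]) * 1e6
rho, p = spearmanr(va_profile, ba_profile)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```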
Vertical Profiles of Salinity and Temperature

The salinity was 33.97 in the mixed layer of approximately 30 m at E409 (Figure 2), and the halocline depth was 70 m at E409. Below 70 m, the salinity increased gradually and reached a maximum (34.65) at approximately 150 m throughout the water column. There was a salinity minimum of 34.42 at 445 m, from which the salinity increased to 34.60 at approximately 1500 m; below 1500 m, the salinity was constant. The temperature was 29.50 °C in the mixed layer of approximately 30 m at E409. The thermocline was thick; the temperature decreased with depth through the mesopelagic zone, reaching 4.42 °C at 1000 m and 2.50 °C at 2000 m, below which it was constant throughout the bathypelagic layer (2.40 °C). The vertical distributions of salinity and temperature at the other deep stations, E407 (Figure 3), E701 (Figure 4), and E703 (Figure 5), were similar to those at E409, except for variations in the depth of the mixed layer, the thickness of the halocline, and the depth of the salinity minimum. The average values of salinity and temperature among stations are summarized for the epipelagic, mesopelagic, and bathypelagic zones (Table 1).

Viral Abundance

In general, viral abundance decreased from the surface to the deeper layers. At E409 (Figure 2), viral abundance was 21.59 × 10⁶ ml⁻¹ at the surface, decreased to 2.43 × 10⁶ ml⁻¹ at 200 m, and remained lower below 200 m; however, there was a deep maximum (3.15 × 10⁶ ml⁻¹) at 800 m. At E407, the deep maximum of viral abundance was more pronounced, extending from 300 to 1000 m (Figure 3), although the vertical distribution of viral abundance was otherwise similar to that at E409. At E701 (Figure 4) and E703 (Figure 5), there were one or two maxima above 100 m, and viral abundance was low below 100 m.

VBR

The VBR value remained constant or decreased on average from the epipelagic to the mesopelagic and bathypelagic zones (Table 1). The distinct feature in the vertical distribution of bacterial and viral abundance was the occurrence of two VBR maxima in the water column at the deep stations: a subsurface maximum between 50 and 100 m and a deep maximum at approximately 800 m. At E409 (Figure 2), VBR was 8.21 at the subsurface maximum and higher (12.33) at the deep maximum. The double maximum also occurred at E407 (Figure 3), E701 (Figure 4), and E703 (Figure 5).

Chemical and Biological Parameters

The DO in the epipelagic zone was higher than that in the mesopelagic and bathypelagic zones (Table 1). At the deep station E409 (Figure 2), DO was 6.12 mg L⁻¹ at the surface, increased to 7.13 mg L⁻¹ at 50 m, decreased to 2.91 mg L⁻¹ at 800 m, and increased again to 3.90 mg L⁻¹ at 3800 m; a deep DO minimum occurred at approximately 800 m. The vertical distributions at the other deep stations, E407 (Figure 3), E701 (Figure 4), and E703 (Figure 5), were similar to that at E409: DO presented a maximum in the subsurface layer and a deep minimum at approximately 800 m. The chlorophyll fluorescence at E409 was 0.80 µg L⁻¹ at the surface, increased to 1.68 µg L⁻¹ at 65 m, and then decreased to 1.20 µg L⁻¹; at the other stations, the chlorophyll fluorescence likewise increased from the surface layer to the subsurface layer. The DOC in the epipelagic zone was higher than that in the mesopelagic and bathypelagic zones (Table 1). At E703 (Figure 5), there were two DOC maxima, at 50 m and 150 m, and DOC remained at about 50.00 µmol L⁻¹ below 300 m. Nitrite was low at all depths: at the deep stations E409 (Figure 2), E701 (Figure 4), and E703 (Figure 5), nitrite was below 0.10 µmol L⁻¹ throughout the water column, while at E407 (Figure 3) there were two maxima, at 200 m (0.10 µmol L⁻¹) and 500 m (0.14 µmol L⁻¹), with low values at the other depths.
Relationships Between Viruses and Other Variables

The viral abundance was positively correlated with the bacterial abundance (Figure 6A; r = 0.874, p < 0.001, n = 48). The regression slope of VA against BA was 5.67, smaller than the 10:1 line (Figure 6A). Whether the correlations among VA, BA, and VBR are significant depends on the layer. For the epipelagic zone, the correlation between VBR and BA (r = −0.454, p = 0.058, n = 18) was not significant at p < 0.05 but was stronger than that between VBR and VA (r = −0.039, p = 0.879, n = 18); meanwhile, in the mesopelagic-bathypelagic zone, the correlation between VBR and BA was not significant (r = −0.299, p = 0.109, n = 30), whereas the correlation between VBR and VA (r = 0.504, p = 0.005, n = 30) was significant. This indicates that in the epipelagic zone the variation in BA dominated the variation in VBR, whereas in the mesopelagic and bathypelagic zones VA drove the variation in VBR (Figure 6B). However, no correlation was found between viral abundance and chlorophyll a concentration (r = 0.201, p = 0.531, n = 12). A PCA was applied to the epipelagic and the mesopelagic-bathypelagic zones separately.

The Subsurface VBR Maximum

The variability of viral abundance is largely driven by bacteria; however, it is also influenced by other organisms as well as environmental factors. Viral abundance is known to fluctuate with bacterial variation, which is coupled with the organic matter supply. The subsurface chlorophyll maximum (SCM, also known as the deep chlorophyll maximum, DCM) exists near the bottom of the surface mixed layer or the top of the pycnocline and nutricline in the water column of oligotrophic oceans (Cullen, 2015); it is also a common permanent feature of the SCS observed in many studies (Lu et al., 2010; Wang et al., 2015). Because phytoplankton have access to nutrients at the nutricline, they can carry out photosynthesis and produce organic matter at the SCM layer. The photosynthetic rate peaks just above the SCM rather than in the surface mixed layer (Ghai et al., 2010), and as a result the extracellular production of organic carbon is also highest in the zone of maximum photosynthetic activity (Avril, 2002). The subsurface DO maximum (Figure 2) above the depth of the chlorophyll maximum in our study indicates the presence of this zone of maximum photosynthetic activity. The increased organic matter in turn increases bacterial activities and stimulates viral infection and viral abundance in the SCM layer (Santinelli et al., 2010). As a result, this organic carbon-bacteria-viruses sequence could produce the subsurface VBR maximum observed in our study (Figure 7). Our previous study found a significant positive correlation between viral abundance and chlorophyll concentration in the northern SCS (He et al., 2009). The more rapid increase in viral abundance relative to bacterial abundance likely caused the subsurface VBR maximum. The observation of a subsurface maximum of viral abundance is consistent with other studies. The subsurface maximum has been reported at varying depths, from 15 m (Maranger and Bird, 1995) to 150 m (Cochlan et al., 1993; Hara et al., 1996), and is related to discontinuities in the water column (e.g., the pycnocline) or to gradients of chemical and biological parameters (e.g., the nutricline or SCM) (Weinbauer, 2004). Viral phage production and the frequency of bacteria containing mature phages increased with bacterial abundance (Steward et al., 1992; Weinbauer et al., 1993).
Vertical profiles showing a maximum of viral abundance were also reported in the Southern California Bight (Cochlan et al., 1993), the southeastern Gulf of Mexico (Boehme et al., 1993), and the northern Adriatic Sea (Weinbauer and Peduzzi, 1995). In our study, the subsurface virus maximum usually occurred above the depth of the SCM, at 75 m (E409, E407) to 100 m (E703). Both the depth of the subsurface virus maximum and that of the SCM increased as the water column became deeper and further offshore.

The Deep VBR Maximum

Herndl and Reinthaler (2013) noted that the function of the deep-sea microbial community is fundamentally different from that of surface-water communities. One concept for measuring the efficiency of bacterial utilization of organic matter is the remineralization length scale in the OMZ. With regard to the abnormally high remineralization length scale, one of the five hypotheses summarized by Cavan et al. (2017) is a low utilization rate of sinking organic matter by microbes. Our observations suggest that one mechanism is viral control of bacterial activities, which slows the utilization of sinking organic matter. In open oceans, the oxygen minimum results from bacterial consumption of oxygen during the utilization of organic particles, which sink through the subsurface pycnocline and slow down through the deep pycnocline between 700 and 1200 m (Paulmier and Ruiz-Pino, 2009; Liu et al., 2011; Cavan et al., 2017). In the SCS, the OMZ is a permanent feature, as is often observed in other studies (Wang et al., 2017; Wang et al., 2018). Because seawater of the western Pacific Ocean invades the SCS through the Luzon Strait over the 2400 m sill (Wang et al., 2018), it brings in the OMZ water mass and moves westward with it (Yang, 1991; Liu et al., 2011). However, the OMZ in the western part of the SCS is much thicker than that across the Luzon Strait along the same latitude, indicating that local biological activities contribute to the vertical expansion of the OMZ in the SCS. The DO concentrations in the minimum zone varied from 2.72 to 3.21 mg L⁻¹ in our study, comparable with other studies, which have reported 2.00 ml L⁻¹ (1.40 mg L⁻¹; Yang, 1991) and 83.50 µmol L⁻¹ (2.66 mg L⁻¹; Liu et al., 2011). However, DO has rarely been depleted in the OMZ. This suggests that the biological consumption of DO is limited by in situ factors, considering that the intermediate water mass has a residence time of at least 40 years (Li and Qu, 2006). The coincidence of the deep VBR maximum and the OMZ at the same depth suggests that they are coupled. Compared with the SCM, the decrease in both bacterial and viral abundance in the OMZ indicates substrate limitation. The NO₂ maximum indicates the occurrence of denitrification, suggesting oxygen limitation, as reported in other studies (Ganesh et al., 2014). Bacterial growth is limited by refractory DOC, since DOC is estimated to be 3700 and 6000 years old in the North Atlantic and North Pacific Oceans, respectively (Loh et al., 2004). As bacteriophages are not directly subject to this substrate limitation, they maintain the same lytic rate. A recent study synthesizing many data sets from various ecosystems found that the slope of viral (Y-axis) versus bacterial abundance (X-axis) becomes more horizontal at lower bacterial abundance, signifying that VBR is relatively higher when bacterial abundance decreases (Knowles et al., 2016). In the study by Arrieta et al.
(2015), viruses increased when the concentrated bacterial abundance collapsed due to substrate limitation, increasing VBR dramatically. These studies support our notion that substrate limitation of bacterial growth causes the maximum VBR in the OMZ. In a laboratory study, when the substrate in the culture was limited, viral lysates of bacteria supported more bacterial cell growth (Liu et al., 2015), suggesting greater recycling of the substrate. The same study also found that respiration per bacterial cell increased in the presence of viruses compared with virus-free controls. This also supports our hypothesis of viral control of bacterial depletion of DO when organic matter is limited. High VBR in bathypelagic waters has also been reported from the open North Atlantic (Parada et al., 2007; De Corte et al., 2010; De Corte et al., 2012), the South Atlantic Ocean (De Corte et al., 2016), and the Pacific (Yang et al., 2014). The formation of the maximum VBR can also have contributions from responses to other factors influencing bacteria and viruses. A possible explanation of the high VBR at depth is a longer viral turnover time (that is, lower decay rates) in deeper waters than in surface waters, where viruses remain infective for 1-2 days (Wilhelm et al., 1998; Yang et al., 2014; De Corte et al., 2016). Another possible factor is the physical transport of viruses attached to sinking particles from the euphotic layers and their subsequent dissociation in deeper waters (Proctor and Fuhrman, 1991; Taylor et al., 2003; Bochdansky et al., 2010; Yang et al., 2014). Temperature can also affect changes in VBR in the water column: the decay rates of viral assemblages increased between 4 and 25 °C, suggesting a positive effect of decreasing temperature on the survival of viruses (Cottrell and Suttle, 1995; Garza and Suttle, 1998; Wei et al., 2018). Therefore, the decreased temperature (about 12 °C) of the mesopelagic waters of the SCS may increase the survival rate of viruses at 800-1000 m. Viral abundance and VBR have been reported to be high during anoxia events in deep waters of the Cariaco Trench (Taylor et al., 2001) and the Mediterranean Sea (Weinbauer et al., 2003), which supports our notion of viral control of bacterial activities, because anoxia often indicates depletion of organic matter as a result of organic matter consumption. The decreased organic concentrations associated with the DO minimum limited the bacterial abundance; the long viral turnover time, sinking-particle transport, and the lower temperature and near-hypoxic waters favoring viral survival are together responsible for the deep VBR maximum observed.

Biogeochemical Implications of the Findings

The decomposition of host cells by marine viruses releases DOC and particulate organic carbon (POC) back into the environment, where they can be absorbed by microorganisms or exported from surface waters to the deep ocean (Figure 7). This virus-mediated organic matter recycling process is known as the "viral shunt" (Wilhelm and Suttle, 1999). It in turn affects nutrient cycling and alters the way organic carbon is utilized by prokaryotes (Fuhrman, 1999; Wilhelm and Suttle, 1999; Wommack and Colwell, 2000; Weinbauer, 2004; Suttle, 2005). Refractory dissolved organic carbon (RDOC) in the deep ocean is the largest carbon pool on Earth (Siegenthaler and Sarmiento, 1993) and is very old (Santinelli et al., 2010).
In the North Atlantic and North Pacific Oceans, the weighted mean turnover time for DOC in deep water, estimated from ¹⁴C, is 3700 and 6000 years, respectively (Loh et al., 2004). However, a recent study investigating the dilution hypothesis found that it is the very low DOC concentrations that limit bacterial utilization in the deep ocean (Arrieta et al., 2015). In their incubation experiments, viral abundance increased after the bacterial abundance had reached a plateau and then declined. This suggests that viral abundance is stimulated by bacterial growth and that the rate of increase of viral abundance exceeded that of bacterial abundance when the substrate became limiting again. The finding supports our hypothesis of viral control of bacterial abundance and bacterial respiration in the OMZ, indicated by the deep VBR maximum. In the OMZ, DOC is very low, such that bacterial growth is more limited than in both the upper and lower zones; however, viruses are less limited by the low DOC and continue to lyse bacteria, which could result in the VBR maximum. Zhang et al. (2014) recently commented on the role of viruses and suggested that viruses can "kill the winner" among bacteria and therefore leave more DOC in the water column. The VBR maximum in the deep ocean appears to corroborate this notion: it suggests that viruses can control bacterial growth when DOC is already reduced in association with the oxygen minimum, slowing further utilization of DOC, which would otherwise be consumed along with more oxygen.

DATA AVAILABILITY

The datasets generated for this study are available on request to the corresponding author.

ACKNOWLEDGMENTS

Wang provided valuable advice. Rongyu Chen and Mr. Huabin Mao provided the CTD data. The Coastal Marine Lab of Hong Kong University of Science and Technology provided equipment support. The results of this study were presented at the Ocean Deoxygenation Conference in Kiel on 3-7 September 2018.
2019-07-16T14:32:34.499Z
2019-07-16T00:00:00.000
{ "year": 2019, "sha1": "26b8002e21a21e401bf9796e33f309b64adf0380", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2019.01593/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "26b8002e21a21e401bf9796e33f309b64adf0380", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
269721981
pes2o/s2orc
v3-fos-license
Correlation between serum 1,25-dihydroxy-vitamin D and 25-hydroxyvitamin D in response to analytical procedures: a systematic review and meta-analysis

Objectives: In this study, the aim is to provide a more detailed understanding of vitamin D metabolism by evaluating the correlation between 1,25-dihydroxyvitamin D (1,25(OH)2D) and 25-hydroxyvitamin D (25(OH)D) according to variations in measurement methods and clinical conditions. Methods: We searched PubMed, Embase, and Web of Science for studies reporting correlation results between 1,25(OH)2D and 25(OH)D, and performed a meta-analysis based on the correlations between 1,25(OH)2D and 25(OH)D in different clinical conditions. We included a total of 63 studies plus our laboratory's results in the meta-analysis. The studies were categorized into a high-quality methods group (HQMG), a medium-quality methods group (MQMG), and a low-quality methods group (LQMG) based on the 25(OH)D and 1,25(OH)2D measurements. Results: In the healthy, renal disease, and other disease groups, the highest correlation values were observed in the studies categorized as HQMG, with values of 0.35 (95% CI 0.23-0.48), 0.36 (95% CI 0.26-0.42), and 0.36 (95% CI 0.22-0.48), respectively. Significant statistical heterogeneity was observed in the healthy, renal disease, and other disease groups, with I² values of 92.4, 82.7, and 90.7%, respectively (p<0.001). Both funnel plots and the results of Egger's and Begg's tests indicated no statistically significant bias across the studies. Conclusions: A significantly low correlation was found between 25(OH)D and 1,25(OH)2D; however, higher correlations were found in the studies categorized as HQMG. Various factors, including methodological inadequacies and disparities, might contribute to this. In the future, with more accurate and reproducible measurements of 1,25(OH)2D, a clearer understanding of vitamin D metabolism will be achieved.

Introduction

Vitamin D deficiency (VDD) is highly common worldwide, and more than 1 billion people are known to have a deficiency [1]. Although VDD was known long ago, rickets and osteomalacia were first distinctly described in 1645 [2]. Experimental animal studies have since clarified its synthesis, mechanism, and treatments for its deficiencies [3-6]. There has been a considerable increase in the number of individuals affected by VDD in recent years; it is especially common in the elderly and those who stay indoors for long periods, people with pigmented skin, pregnant women, vegans, and children of developmental age.

The regulation of vitamin D metabolism

Vitamin D metabolism and regulation are very complex (Figure 1). Vitamin D3 is produced in the skin by ultraviolet-B (290-315 nm) irradiation of 7-dehydrocholesterol (7-DHC). Irradiation of 7-DHC produces pre-D3 (which later becomes vitamin D3), lumisterol, and tachysterol [7]. Melanin in the skin absorbs UV radiation, potentially reducing the skin's ability to produce vitamin D from sunlight; this could be a key factor contributing to lower 25-hydroxyvitamin D (25(OH)D) levels (a well-established indicator of vitamin D status) in Black and Hispanic individuals living in regions with less direct sunlight [8]. The 25(OH)D levels can also exhibit significant seasonal fluctuations, with elevated concentrations in summer and decreased levels in winter.
Vitamin D is initially metabolized to 25(OH)D, predominantly in the liver, and then further converted to 1,25-dihydroxyvitamin D (1,25(OH)2D), mainly in the kidney. This final form, 1,25(OH)2D, is the primary active form responsible for most of vitamin D's effects in the body [9].

The 25-hydroxylase CYP2R1 is tightly controlled by a variety of diseases (obesity, diabetes mellitus, starvation, infection, inflammation, cancer, etc.), and although many regulatory factors have now been identified, significant gaps remain. Genetic silencing mutations in CYP2R1 can cause rickets, osteomalacia, or other clinical conditions. Serum 25(OH)D may therefore not reflect only the vitamin D supplied by diet and skin; in particular, hepatic 25(OH)D synthesis has a complex regulation involving many possible hormones and factors.

The 1α-hydroxylase CYP27B1 is mainly present in the renal proximal tubule, and its activity is influenced by changes in parathyroid hormone (PTH), fibroblast growth factor 23 (FGF23), 1,25(OH)2D, calcium, and phosphate levels [10]. A specific region in the enhancer region of renal CYP27B1 is responsible for responding to regulation by PTH, FGF23, and 1,25(OH)2D [11]. However, this region is not open to such regulation in non-kidney tissues such as skin, immune cells, and adipose tissue. In non-renal tissues, a different enhancer region of CYP27B1 is regulated by various factors such as interferon-γ (IFN-γ), cytokines such as tumor necrosis factor-α (TNF-α), or leptin [12,13]. These feedback loops play a vital role in regulating 1,25(OH)2D production in the kidney, and they differ from the regulation of CYP27B1 in other cell types, including distal renal tubular cells, which are minimally affected by PTH [14]. CYP24A1 regulates 1,25(OH)2D levels in other tissues and is stimulated by TNF-α and IFN-γ. However, in some conditions such as sarcoidosis, where macrophages produce excess 1,25(OH)2D without proper CYP24A1 regulation, this can lead to hypercalcemia and hypercalciuria [15-17].

25(OH)D and 1,25(OH)2D are carried in the bloodstream by a protein called vitamin D-binding protein (DBP); only a small fraction is freely available [9]. Total 25(OH)D or 1,25(OH)2D thus comprises three fractions: DBP-bound, weakly albumin-bound (also called bioavailable, as these compounds can easily dissociate from albumin), and free. Changes in DBP levels due to genetic factors, hormonal status, or liver and kidney conditions can compromise the accuracy of 25(OH)D levels as a marker of vitamin D status, especially in pregnancy, with estrogen-containing contraceptives, and in liver or kidney disease. By measuring total 25(OH)D, DBP, and albumin and applying the principles of protein-ligand binding kinetics, it is possible to calculate the DBP-bound and albumin-bound (and hence free) fractions of 25(OH)D and 1,25(OH)2D [9,18]. However, these calculations are used only to a limited extent because of their impracticality and complexity.
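To make the binding-kinetics calculation mentioned above concrete, the following minimal Python sketch estimates the free 25(OH)D concentration from total 25(OH)D, DBP, and albumin under the common linear (far-from-saturation) approximation. The affinity constants and the example inputs are illustrative assumptions, not values reported in this paper; the same form applies to 1,25(OH)2D with its own constants.

import math  # not strictly needed; kept for clarity of the arithmetic

# Illustrative affinity constants (assumptions, L/mol), not from this paper.
KA_DBP = 7e8   # assumed DBP affinity for 25(OH)D
KA_ALB = 6e5   # assumed albumin affinity for 25(OH)D

def free_25ohd(total_25ohd_nmol_l, dbp_umol_l, albumin_umol_l):
    """Approximate free 25(OH)D (pmol/L), assuming binding proteins are far
    from saturation: free = total / (1 + Ka_alb*[Alb] + Ka_dbp*[DBP])."""
    dbp = dbp_umol_l * 1e-6      # mol/L
    alb = albumin_umol_l * 1e-6  # mol/L
    free_nmol_l = total_25ohd_nmol_l / (1 + KA_ALB * alb + KA_DBP * dbp)
    return free_nmol_l * 1000    # convert nmol/L -> pmol/L

# Example: total 25(OH)D 75 nmol/L, DBP 5 umol/L, albumin 600 umol/L (assumed)
print(round(free_25ohd(75, 5, 600), 2), "pmol/L free 25(OH)D")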
The clinical decision limits for vitamin D

Determining the clinical decision limits of 25(OH)D is complicated. In 1997, the Food and Nutrition Board of the Institute of Medicine identified serum 25(OH)D as a good marker for evaluating vitamin D status [19]. However, there were not enough data at that time to fully establish its normal range, the estimated values proved insufficient, and it has since become evident that vitamin D deficiency is more prevalent than previously thought. 25(OH)D levels are affected by various factors such as season, diet, medications, and inaccurate measurement methods. Studies have investigated the relationship between serum PTH and 25(OH)D levels: PTH levels rise when 25(OH)D is low but plateau at 25(OH)D concentrations of 75-110 nmol/L [20-22]. In this context, high PTH levels suggest the body is adapting to lower calcium intake; whether this adaptation indicates better health is still uncertain [23]. To date, no definitive functional change in 25(OH)D levels is known at the point where PTH stabilizes at the lower end of the healthy reference range (or clinical decision threshold) of 25(OH)D. Vitamin D supplementation significantly increases circulating 1,25(OH)2D concentrations, but in vitamin D users this increase is suppressed by calcium co-administration.

The relationship between 25(OH)D and 1,25(OH)2D is multifaceted and complex. In this study, we planned a meta-analysis and systematic review to elucidate the relationship between 25(OH)D and 1,25(OH)2D, aiming to gain a better understanding of vitamin D metabolism. The correlation was examined under two scenarios. In the first, the measurement methods for 25(OH)D and 1,25(OH)2D, along with the analytical limitations associated with these methods, were considered. In the second, various clinical conditions, namely healthy individuals, kidney diseases (where 1,25(OH)2D synthesis takes place), and systemic diseases, were evaluated individually, considering their specific effects on vitamin D metabolism.

Materials and methods

This meta-analysis was conducted following the guidelines recommended by the PRISMA statement [29].

The strategy of publication search

Comprehensive search strategies were created to identify publications. Articles published between 2005 and 2023, without language restrictions, were sought in MEDLINE (via PubMed), Embase, and Web of Science using the keywords "(25-hydroxy D OR 25-hydroxycholecalciferol OR 25OHD OR 25-OH vitamin D OR calcidiol) AND (1,25-dihydroxy vitamin D OR 1,25-dihydroxycholecalciferol OR 1,25-dihydroxy vitamin D OR calcitriol)) NOT (animal)) NOT (review)) NOT (case report)) AND (correlation))."

The selection and extraction criteria of publications

Four independent reviewers, blinded to the study details, read the titles and abstracts of all reports found through the electronic searches (MAS, FDA, NYS, DY). For studies that appeared to fulfill the inclusion criteria, or when the title and abstract provided insufficient data for a definitive decision, the complete report was obtained. Inter-reviewer reliability was assessed using Cohen's kappa, with an acceptable threshold set at 0.74. Disagreements about whether to include or exclude particular studies were resolved by discussion among the reviewers. The relationship between 1,25(OH)2D and 25(OH)D levels was analyzed across various clinical conditions and analytical techniques.
For this meta-analysis, studies were included if they reported correlation test results (Pearson correlation with or without log transformation, Spearman correlation, or regression analysis) obtained in serum or plasma, were published in English, had accessible full texts, and were conducted on human subjects. Studies were excluded if they provided interpretations without correlation values, were animal studies, or involved other biological samples (such as cord blood or cerebrospinal fluid).

The classification of analytical methods

High-quality methods group (HQMG): automated and traceable 25(OH)D and 1,25(OH)2D measurements, commercial liquid chromatography-tandem mass spectrometry (LC-MS/MS) measurement (ImmuTube® LC-MS/MS assay), in-house LC-MS/MS measurement, or automated, repeatable immunoassay (such as LIAISON) methods were used, and analytical performance was reported for both tests (limit of detection, limit of quantification, repeatability, linearity, etc.).

Low-quality methods group (LQMG): the methods provided no or insufficient information regarding the 25(OH)D and 1,25(OH)2D measurements.

The classification of clinical conditions

The clinical conditions were evaluated in three groups: healthy, kidney diseases, and other illnesses.

The results of 1,25(OH)2D and 25(OH)D in our laboratory

In addition to the published studies, a total of 25,457 results from 5,424 patients who presented for various reasons between 2015 and 2023 and had both 1,25(OH)2D and 25(OH)D measured in the Acıbadem Labmed Laboratory were included. These patients were assigned to the same three groups based on their ICD codes, diagnoses, clinical information, and other laboratory test results.

Presentation of data

Data extraction was performed by four authors of the meta-analysis (MAS, DY, NSY, FDA). Detailed data from the 63 articles, along with our laboratory results, are included in this meta-analysis. The following information was extracted from each study: year of the research, place where it was conducted, study design, sample size, gender, age, diseases, analytical measurement procedures, correlation results between 1,25(OH)2D and 25(OH)D, and analytical quality considerations. Data were compiled into evidence tables, and a descriptive summary was formulated to assess the volume of data, study characteristics, and outcomes (Table 1).

For the 25(OH)D and 1,25(OH)2D measurements in this study, information about the manufacturers, methods, sample types, analytical performance, and interferences of the most commonly used brands is provided in Supplemental Table 1 (for 25(OH)D) and Supplemental Table 2 (for 1,25(OH)2D).

Statistical analysis

The meta-analysis addressed the correlation between 25(OH)D and 1,25(OH)2D. Analyses were performed using Stata MP17 4.6.241 (Stata Corp LLC, Texas, USA). Random-effects meta-analyses were performed using the DerSimonian-Laird method. The χ2 and I2 statistics were used to assess statistical heterogeneity among the included studies. Forest plots were drawn to describe the weighted correlations with 95 % confidence intervals (CI). In addition, funnel plots and Egger's and Begg's tests were applied to explore sources of bias.
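As a concrete illustration of the statistical approach just described, the following minimal Python sketch pools correlation coefficients with the DerSimonian-Laird random-effects model on Fisher z-transformed values and reports the I2 heterogeneity statistic. The example study values are hypothetical, and dedicated meta-analysis software such as Stata handles the many refinements this sketch omits.

import math

def pooled_correlation_dl(studies):
    """DerSimonian-Laird random-effects pooling of correlations.
    studies: list of (r, n) tuples. Values are Fisher z-transformed,
    pooled with random-effects weights, then back-transformed with tanh.
    Returns (pooled_r, ci_low, ci_high, i_squared_percent). Sketch only."""
    z = [math.atanh(r) for r, _ in studies]            # Fisher z transform
    v = [1.0 / (n - 3) for _, n in studies]            # within-study variances
    w = [1.0 / vi for vi in v]                         # fixed-effect weights
    z_fe = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
    q = sum(wi * (zi - z_fe) ** 2 for wi, zi in zip(w, z))   # Cochran's Q
    df = len(studies) - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]             # random-effects weights
    z_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    return (math.tanh(z_re), math.tanh(z_re - 1.96 * se),
            math.tanh(z_re + 1.96 * se), i2)

# Hypothetical input: three studies reporting (r, n)
print(pooled_correlation_dl([(0.35, 120), (0.21, 300), (0.45, 80)]))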
Results

The flowchart of the study, following the PRISMA statement, is presented in Figure 2. Initially, the search strategy retrieved 1388 references. After screening the titles and abstracts, 1325 articles were excluded because of unrelated topics. The entire texts of the remaining 264 articles were assessed, and 63 studies were included in the meta-analysis. In total, correlation results from 25,147 people, drawn from the 63 studies and our laboratory data, were evaluated. The meta-analysis outcomes are presented according to clinical condition in Figures 3-5.

In the healthy group, a total of 24 studies were evaluated: ten classified as HQMG, eleven as MQMG, and three as LQMG. The correlation values were 0.35 (95 % CI, 0.23-0.48) with 91.2 % heterogeneity (I2) for HQMG, 0.21 (95 % CI, 0.10-0.31) with 91.3 % for MQMG, and 0.22 (95 % CI, 0.03-0.42) with 38.0 % for LQMG. For the healthy group overall, the correlation was 0.26 (95 % CI, 0.18-0.34) with an I2 of 92.4 %. The correlation value was highest in HQMG, followed by MQMG and then LQMG. Significant heterogeneity was detected in all groups except LQMG and in the overall evaluation.

For renal diseases, a total of 19 studies were assessed: nine categorized as HQMG, seven as MQMG, and three as LQMG. The correlation values were 0.34 (95 % CI, 0.26-0.42) with an I2 of 78.6 % for HQMG, 0.28 (95 % CI, 0.17-0.38) with 75.0 % for MQMG, and 0.27 (95 % CI, 0.25-0.37) with 92.6 % for LQMG. For the renal disease group overall, the correlation was 0.31 (95 % CI, 0.25-0.37) with an I2 of 82.7 %. Again, the correlation value was highest in HQMG, followed by MQMG and then LQMG. Significant heterogeneity was detected in all groups except LQMG and in the overall assessment.

For other diseases, 36 studies were examined: 12 classified as HQMG, 13 as MQMG, and 11 as LQMG. The correlation values were 0.36 (95 % CI, 0.22-0.48) with an I2 of 94.8 % for HQMG, 0.19 (95 % CI, 0.09-0.30) with 71.6 % for MQMG, and 0.16 (95 % CI, 0.01-0.32) with 89.6 % for LQMG. For this group overall, the correlation was 0.25 (95 % CI, 0.17-0.32) with an I2 of 90.7 %. Once more, the correlation value was highest in HQMG, followed by MQMG and then LQMG, and significant heterogeneity was detected in all groups.

In summary, the correlation values obtained from measurements conducted with HQMG are higher than those of MQMG and LQMG.

The assessment of publication bias is presented in Figure 6. According to both the funnel plots and the results of Egger's and Begg's tests, there was no statistically significant bias.

Discussion

The relationship between 25(OH)D and 1,25(OH)2D is complex. Beyond the regulatory reasons discussed above, it is particularly affected by the measurement of 1,25(OH)2D. The characteristics of the 25(OH)D and 1,25(OH)2D methods are presented in Supplemental Tables 1 and 2. Since 2010, the U.S.
National Institutes of Health, Office of Dietary Supplements (NIH-ODS), through the Vitamin D Standardization Program (VDSP), has been working to standardize the measurement of serum total 25(OH)D, the primary indicator of vitamin D status. Studies have shown that the results of assays used to determine serum total 25(OH)D, comprising both 25-hydroxyvitamin D2 [25(OH)D2] and 25-hydroxyvitamin D3 [25(OH)D3], may vary depending on the specific assay method employed [93-95].

The VDSP is a cooperative venture involving the National Institutes of Health, the National Institute of Standards and Technology (NIST), the Office of Dietary Supplements (NIH-ODS) [96], the Centers for Disease Control and Prevention (CDC), national survey laboratories in multiple countries, and vitamin D investigators worldwide [97].

The VDSP has implemented a reference measurement system that includes reference measurement procedures conducted at NIST and the CDC, along with NIST Standard Reference Materials [98-102]. Additionally, it comprises the CDC Vitamin D Standardization Certification Program and partnerships with the College of American Pathologists and the Vitamin D external quality assessment scheme (DEQAS) [103-105].

The VDSP has set strict criteria for assay performance, requiring that measurement variability and bias meet a coefficient of variation (CV) of ≤10 % and a mean bias of ≤5 % [106,107]. Despite highly successful standardization efforts in 25(OH)D measurement, issues remain. In the VDSP's intra-laboratory assessment study, 12 assays were compared, and 9 of the 12 demonstrated a mean bias within ≤5 %. Samples with high levels of 25(OH)D2 were essential for evaluating the effectiveness of the immunoassays, highlighting possible differences in response or recovery between 25(OH)D2 and 25(OH)D3 across assays [108].

Serious problems were also encountered with the LC-MS/MS method, which had been presented as the better approach. Only 53 % of the LC-MS/MS assays met the VDSP criterion of mean %bias ≤5 %; among those that did not, four assays showed a mean %bias between 12 % and 21 %. A regression study using the concentrations of four vitamin D metabolites in 50 single-donor samples found that several LC-MS/MS assays were affected by the presence of 3-epi-25(OH)D3 [109]. Significant correlation discrepancies and high bias values have also been reported for 25(OH)D measurements by immunoassay, chromatography, and mass spectrometry [110-112].

The analytical measurement difficulties are much greater for 1,25(OH)2D than for 25(OH)D. 1,25(OH)2D circulates at very low concentrations (pmol/L) and is highly lipophilic. Furthermore, its structurally similar metabolic precursor 25(OH)D circulates at nmol/L concentrations, making assay specificity an analytical concern. Nevertheless, significant advances have been made in measuring 1,25(OH)2D. In 1974, a radioreceptor assay (RRA) was developed, utilizing the competitive binding of 1,25(OH)2D and a tritiated tracer to its nuclear receptor isolated from calf thymus [113]. The first RIA measuring 1,25(OH)2D was introduced in 1978 [114], and an RIA using a radioiodinated (125I) tracer followed [115]. That assay involves acetonitrile extraction and purification of endogenous 1,25(OH)2D by solid-phase chromatography, followed by quantification by RIA.
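For illustration, the short Python sketch below checks a set of replicate assay results against the VDSP performance limits quoted above (CV ≤10 % and mean bias ≤5 %); the replicate values and the 50 nmol/L target are invented for the example.

from statistics import mean, stdev

def vdsp_check(measured, reference):
    """Check replicate assay results against the VDSP criteria quoted above:
    CV <= 10 % and mean bias <= 5 %. 'measured' are replicate results for a
    sample whose target value is 'reference'. Illustrative sketch only."""
    cv = 100 * stdev(measured) / mean(measured)
    bias = 100 * (mean(measured) - reference) / reference
    return {"cv_%": round(cv, 1), "bias_%": round(bias, 1),
            "meets_vdsp": cv <= 10 and abs(bias) <= 5}

# Hypothetical replicates (nmol/L) for a sample with a 50 nmol/L target
print(vdsp_check([52.1, 49.8, 51.5, 48.9, 50.6], 50.0))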
In this study, our laboratory results are particularly important. In a very large patient group measured on the same automated systems, the correlation between 1,25(OH)2D and 25(OH)D was significantly higher in healthy individuals than in the renal disease and other disease groups, with correlation coefficients of 0.50 (0.42-0.58), 0.26 (0.16-0.36), and 0.26 (0.23-0.29), respectively; that is, a moderately significant correlation was observed only in the healthy group. Because the same systems were used across all groups and the amount of data was large, we believe these results are the best currently available demonstration of this relationship.

Exacerbating the problem of low concentration is the poor ionization of the analyte, together with the potential complications arising from the derivatization required to reach sufficient analytical sensitivity. Poor sample preparation techniques can also have a significant adverse impact on the clinical performance of LC-MS/MS-based methods [125]. It is worth noting that LC-MS/MS is a complex and specialized technique that requires advanced equipment.

Under standardized conditions, a recently introduced automated immunoassay demonstrates strong agreement with measurements obtained using a liquid chromatography-tandem mass spectrometry (LC-MS/MS) reference method [126]. Nonetheless, recent findings from DEQAS reveal that coefficients of variation within specific tests, and mean 1,25(OH)2D levels between different test procedures, can fluctuate by more than 20 % [127].

In a study by Zittermann and colleagues, circulating 1,25(OH)2D was measured with two different methods: an LC-MS/MS method from Immundiagnostik and an automated immunoassay from DiaSorin. The study found a correlation of r=0.534 and an agreement of 62 % between the two methods, highlighting the need for additional standardization studies [128]. A recent meta-analysis has likewise shown that the measurement procedure can significantly affect reported circulating 1,25(OH)2D levels. These measurement differences make it challenging to compare results between laboratories and to establish consistent reference values for circulating 1,25(OH)2D. Automation and standardization are therefore crucial for improving the reliability of testing procedures [129]. Notably, 80.8 % of the 1,25(OH)2D assays included in that meta-analysis were RIA or radioreceptor assays.

On closer examination, the correlation between 25(OH)D and 1,25(OH)2D is, in general, statistically insignificant or low. However, as our results show, the highest correlation in all three clinical groups is found in HQMG. The results in the renal disease group are notable; this may reflect vitamin D supplementation in this patient population. The lower correlation in other diseases, in turn, suggests that the relationship is highly complex and is disrupted by different mechanisms in different diseases.
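To show how two such correlation coefficients can be compared formally, the following Python sketch applies the standard Fisher z test for the difference between two independent correlations; the group sizes used in the example are hypothetical, since the per-group counts are not restated here.

import math
from statistics import NormalDist

def compare_correlations(r1, n1, r2, n2):
    """Two-sided z test for the difference between two independent
    correlations via the Fisher z transformation. Sketch only."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# e.g. healthy r=0.50 vs renal r=0.26, assuming 1000 subjects per group
print(compare_correlations(0.50, 1000, 0.26, 1000))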
Even though the HQMG shows the highest correlation values in the healthy group, this is still a weak correlation, and the results are widely divergent and heterogeneous. The most significant factors here are methodological challenges, the short half-life of 1,25(OH)2D, and its complex regulation. Although 25(OH)D is directly converted enzymatically into 1,25(OH)2D, only a limited relationship between their serum levels should be expected. 1,25(OH)2D can suppress the production of 1α-hydroxylase directly and, indirectly, by reducing PTH levels and promoting FGF23 production; this feedback mechanism is crucial for preventing hypercalcemia. As a result, the level of 1,25(OH)2D is not determined simply by the circulating amount of 25(OH)D [130,131]. We anticipate that, with improved methodologies, the correlation value could increase in the future.

This study has notable limitations. In the search, the term 'correlation' was explicitly sought in the title, keywords, and abstract; studies that do not mention the term but discuss correlation within the text may have been missed. Some studies may have used sound measurement procedures that were not explicitly stated or were inadequately presented. Another significant limitation is that we did not consider age and gender differences in our analysis; we refrained from making such distinctions because they would have reduced the number of studies in each group. The included studies also used different correlation analyses (Pearson correlation with or without log transformation, Spearman correlation, or regression analysis), all of which we accepted.

Taking all the results together, this study is the first meta-analysis of the relationship between 25(OH)D and 1,25(OH)2D that accounts for differences in both methodology and health status. In examining vitamin D metabolism and the relationship between 25(OH)D and 1,25(OH)2D, these methodological and clinical differences are crucial and must be considered.

Figure 1: Vitamin D metabolism and regulation.

Figure 2: The flowchart of the study, based on the PRISMA statement (preferred reporting items for systematic reviews and meta-analyses). From: Page et al. [29].

Figure 3: Forest graph of the correlation between 25(OH)D and 1,25(OH)2D in the healthy group.

Figure 4: Forest graph of the correlation between 25(OH)D and 1,25(OH)2D in the renal diseases group.

Figure 5: Forest graph of the correlation between 25(OH)D and 1,25(OH)2D in other disease groups.

Figure 6: Funnel plots, Egger's and Begg's test results of all groups.
Table 1: Detailed data from the 63 articles and our laboratory results included in this meta-analysis. The data comprise the year of the research, place where it was conducted, study design, sample size, gender, age, diseases, analytical measurement procedures, correlation results between 1,25(OH)2D and 25(OH)D, and analytical quality performance.
Multiple Indolent Asymptomatic Yellow-Orange Patches and Plaques

An 83-year-old Caucasian male presented with a history of asymptomatic yellow-orange macules and plaques concentrated on his trunk and proximal extremities that had been slowly progressing for the past three years. A punch biopsy revealed eosinophilic amorphous and fissured material within the superficial and interstitial dermis, consistent with nodular amyloidosis. Given the absence of concurrent systemic symptoms and a negative systemic laboratory workup, the patient was diagnosed with disseminated primary localized cutaneous nodular amyloidosis (PLCNA). Because systemic progression remains possible, serial monitoring was recommended. This case highlights an under-reported and unusual presentation of a widely distributed form of PLCNA, in contrast to the more common localized nodular and plaque variants.

Introduction

There are several subtypes of primary cutaneous amyloidosis (PCA): macular, lichen, and nodular [1]. Primary localized cutaneous nodular amyloidosis (PLCNA) is the rarest form of cutaneous amyloidosis, in which the dermis, subcutis, and blood vessel walls are diffusely infiltrated with amyloid [2]. While the precise pathogenesis of PCA is not well understood, nodular amyloidosis is believed to consist of immunoglobulin λ light chains and β2-microglobulin produced by nearby plasma cells [3]. We present a unique case that highlights an unusually widespread distribution of localized cutaneous nodular amyloidosis compared with the more common localized nodular and plaque variants.

Case Presentation

An 83-year-old Caucasian male with a past medical history of hypertension, hyperlipidemia, gastroesophageal reflux disease, and two facial basal cell carcinomas presented with asymptomatic, scattered, yellow-orange-hued, non-scaly macules and thin plaques located on the chest, upper arms, and back, coalescing into a reticular pattern (Figures 1A-1B). He reported that the lesions began on his chest and gradually spread to cover a significant portion of his trunk and proximal extremities over the past three years. The patient denied any history of illicit substance use and had a family history of breast cancer in his sister. He recalled no preceding trauma, and his age-appropriate cancer screenings were up to date. Complete blood counts, a comprehensive metabolic panel, erythrocyte sedimentation rate, and urinalysis were all within acceptable ranges. Serum protein electrophoresis, serum immunofixation, and urine protein electrophoresis were also within normal limits. Histopathology from two separate skin biopsies showed a homogeneous collection of eosinophilic material within the superficial dermis, along with a similar-appearing substance encompassing dermal blood vessels and the surrounding interstitium (Figures 2A-2B). The diagnosis of disseminated primary localized cutaneous nodular amyloidosis (DPLCNA) was confirmed with crystal violet staining, which further accentuated the purple amorphous material (Figures 3A-3B). Since there is no standard of care for patients diagnosed with DPLCNA, surveillance on a semi-annual basis for signs of morphologic or laboratory progression was advised.

Discussion

There are several subtypes of primary cutaneous amyloidosis (PCA): macular, lichen, and nodular amyloidosis [1]. PCA is characterized by extracellular deposition of fibrillar proteinaceous material in the skin without systemic involvement.
PLCNA is the rarest form of cutaneous amyloidosis, in which the dermis, subcutis, and blood vessel walls are diffusely infiltrated with amyloid [2]. The precise pathogenesis of PCA is not well understood. Cutaneous macular and lichen amyloidosis originate from degenerated keratinocyte intermediate filaments, whereas nodular amyloidosis consists of immunoglobulin λ light chains and β2-microglobulin, which are thought to be produced by plasma cells in the region of the cutaneous deposits [3]. Clinically, PLCNA presents as single or multiple yellow-brown waxy nodules or plaques with well-defined borders. These lesions may occur focally, most commonly on the trunk and extremities, with subtle areas of purpura [4]. As highlighted by our case, PLCNA can also present as yellow-orange, coalescing, reticulated macules and thin plaques in a disseminated distribution. The clinical differential diagnoses include cutaneous lymphoma, leukemia cutis, pseudolymphoma, sarcoidosis, and a xanthomatous process. There is no clear gender predominance, and the age of presentation has ranged from 20 to 87 years, with the sixth decade of life being the most common [5]. Some cases may be associated with Sjögren syndrome, CREST syndrome (calcinosis, Raynaud phenomenon, esophageal dysmotility, sclerodactyly, and telangiectasia), dermatomyositis, and diabetes mellitus [4]. Routine histopathology of PLCNA with hematoxylin and eosin staining typically shows a pale pink, sometimes fissured, amorphous material in the superficial and deep dermis, which can also be found among dermal blood vessels and the surrounding interstitium [4,5]. Within the infiltrate, one can typically find prominent plasma cells [4,5]. Congo red, thioflavin T, pagoda red, and crystal violet staining can all be used to highlight the material [5]. Treatment for PLCNA is required only if systemic manifestations develop. If the lesions are cosmetically disfiguring or symptomatic, they can be treated with surgical excision, resurfacing lasers, or other destructive methods [2]. All patients with PLCNA should have a systemic evaluation and undergo long-term clinical follow-up to help identify progression to systemic amyloidosis or plasma cell dyscrasias [4]. Fatigue, weight loss, paresthesia, dyspnea, and syncopal attacks due to orthostatic hypotension are all symptoms of systemic progression. Extracutaneous findings associated with systemic amyloidosis include macroglossia, carpal tunnel syndrome, and restrictive cardiomyopathy, none of which were observed in our patient. Fortunately, long-term follow-up of patients with PLCNA has demonstrated that this localized disease progresses to systemic amyloidosis in only about 7% of cases [1,2,4,6].

Conclusions

We present an under-reported subtype of primary cutaneous nodular amyloidosis with a very distinct disseminated appearance. Currently, there is no standard of care for patients diagnosed with DPLCNA. Because of the innumerable lesions, a destructive method was neither reasonable nor desired by our patient. In the absence of systemic manifestations, active surveillance on a semi-annual basis was advised. To monitor for systemic progression, we propose a semi-annual physical evaluation for abrupt morphologic progression of the disease, with assessment for macroglossia, carpal tunnel syndrome, or signs of restrictive cardiomyopathy.
In addition, we suggest a yearly laboratory evaluation to monitor for alterations in his complete blood counts, comprehensive metabolic panel, urinalysis, serum and urine protein electrophoresis, and serum immunofixation. Further research should be conducted given the rarity of DPLCNA, the limited knowledge about this variant, and potential differences in progression to systemic disease.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Leading Light: The Impact of Advanced Lighting Technologies on Indonesia's Office Industry

Addressing concerns over resource scarcity and environmental sustainability necessitates a global shift towards sustainable energy, notably facilitated by the adoption of Light-Emitting Diode (LED) lamps. This transition is pivotal for ensuring global energy security and aligning with sustainability goals. This study provides a comprehensive analysis of the energy savings achievable by replacing Fluorescent (FL) lamps with LED lamps in industrial offices, with emphasis on the central role of energy efficiency. Using false color rendering as a visual guide, the study systematically identifies areas where FL lamps provide inadequate illumination; these findings prompt recalculation of the optimal room illumination achievable with LED lamps. Lux calculations show that LED lamps provide superior illumination while yielding consistent monthly cost savings of 35%, particularly when harmonized with Building Management System (BMS) control in industrial office buildings. Lamps contribute modestly (21-30%) to overall energy consumption, whereas air conditioning commands a substantial 60%, underscoring the critical need for advanced lighting technology, particularly in combination with solar photovoltaics (Solar PV) as a sustainable energy source. Understanding technological developments, especially in BMS, is crucial for optimizing energy efficiency in industrial offices. The implementation of LED lighting technology is therefore a critical response to resource scarcity and environmental concerns in industrial offices, and the results confirm the efficacy of LED lamps in achieving significant energy savings, especially when coupled with advanced systems such as BMS and complemented by renewable energy sources such as Solar PV. The conclusion stresses the importance of staying abreast of technological advancements to foster sustained progress towards energy-efficient and environmentally conscious practices within industrial environments.

Introduction

Facing resource scarcity and the environmental harm associated with fossil energy use, a shift to sustainable energy is crucial: it safeguards global energy security and aligns with sustainability goals [1-3]. In Indonesia's evolving office landscape, a transformative wave of advanced lighting technologies is illuminating workspaces while ushering in energy efficiency. Smart sensors, responsive controls, and sustainable LED solutions are redefining office aesthetics and contributing significantly to reduced energy consumption [4]. As Indonesia's offices embrace this evolution, the luminous future promises innovation, efficiency, and a greener tomorrow.

Energy efficiency, especially through measures such as replacing Fluorescent (FL) lamps with light-emitting diode (LED) lamps, is pivotal for wiser and more economical energy use and a positive environmental impact [5]. Switching from conventional to LED lighting is therefore a pertinent choice for enhancing energy efficiency in industrial offices: it improves workspace quality while yielding significant energy savings, and it is a crucial step in optimizing resource use and strategically reducing environmental impact.
While Indonesia has yet to fully embrace LED lighting, global case studies highlight its effectiveness. Rapid advancements in LED technology, offering lower energy consumption and extended service life, support the quest for greater efficiency. LED lights, with optimal brightness and customizable designs, are increasingly favored for their energy-saving potential. These advancements represent a proactive stride toward achieving energy efficiency in industrial offices, laying the groundwork for a sustainable future [6,7].

LED lights outshine other lamp types thanks to their extended lifespan of approximately 30,000 to 50,000 hours, in stark contrast to fluorescent lights' roughly 15,000 hours; indeed, many fluorescent lamps last only 5,000 to 10,000 hours and are more susceptible to damage [8]. The light-emitting diodes in LED lamps sustain their brightness without burning out. LED lights also benefit the environment by avoiding the hazardous mercury found in fluorescent lamps; mercury, present in various artificial sources such as batteries and paints, poses a threat to human health and ecosystems [9]. Switching from fluorescent to LED lighting is therefore a commendable step for environmental improvement.

Numerous prior studies have explored implementing LED lights in industrial offices to save electrical energy, covering key concepts such as energy efficiency, system design, simulation, and optimization for maximum efficiency. Zou et al. [10] developed a Wi-Fi-based lighting control system for smart buildings to improve energy efficiency. Manolis et al. [11] studied lamps available on the market with a view to reducing energy use in office buildings, offering valuable insight into the performance of LED lighting across diverse situational scenarios in industrial office spaces. Several studies highlight the successful implementation of LED lighting in achieving significant energy savings in various types of industrial offices: Montoya et al. [12] and Gnana et al. [13] concluded that LED lamps have a substantial impact on energy efficiency in the office sector. Vandenbogaerde et al. [14] explored the incorporation of lighting technology through Building Management Systems (BMS) to enhance energy efficiency in office buildings. This literature review considers these diverse dimensions to provide a holistic perspective on the influence of LED lighting on optimizing electrical energy use within industrial office environments, and it examines strategies for curbing energy consumption in office settings.

Based on this review, many studies have discussed the energy-saving performance of lamps in industrial offices, focusing on knowledge, comparisons, and frameworks for promoting sustainable energy efficiency with LED lighting technology [15,16]. However, no study in the office industry discusses in detail how to measure energy savings, in particular by comparing energy consumption and cost. Although Hemmerling et al. [17] and Vandenbogaerde et al.
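A quick arithmetic sketch of what the lifespans quoted above imply for replacement counts over a common service horizon is shown below; the horizon and the midpoint lifetimes are assumptions for illustration only.

import math

horizon_h = 60000  # assumed planning horizon in operating hours
# Midpoint-ish lifetimes from the ranges quoted above (assumed values)
for name, life_h in [("LED", 40000), ("fluorescent", 10000)]:
    print(name, "units needed over the horizon:", math.ceil(horizon_h / life_h))
# -> LED: 2 units, fluorescent: 6 units over the same horizon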
[14] have discussed energy-saving calculations using the DIALux software and the integration of lighting technologies with BMS, respectively, those discussions addressed each process in isolation. In the office industry especially, the literature has lacked a program of expenditure cost comparison.

To address these gaps, this paper discusses sustainable energy-savings measurement for LED lighting in office industries, providing a comprehensive view of the potential energy savings of transitioning from FL lamps to LED lamps in industrial offices and thereby contributing to energy efficiency in these buildings. The aim of the study is to bridge a noticeable gap in the existing literature: the lack of detailed methodologies for accurately measuring energy savings, particularly when transitioning from FL lamps to LED lamps in industrial office environments. This research introduces a sustainable energy-savings measurement framework tailored to LED lighting in office industries, aiming to provide a comprehensive understanding of the potential energy savings achievable through such a transition.

Material

In this research, the author compared the lamp types used in the design simulations for an industrial office: the first type uses TL5 Essential 28W/840 lamps and the second uses RC100B LED54S 840 lamps; sample materials are shown in Figure 1. In FL T5-type lamps, the armature and the tube lamp panel are manufactured as separate parts, whereas in LED lamps the armature and the light panel are seamlessly integrated into a single unit. This design underscores a pivotal divergence between the two lighting technologies.

Table 1 presents the datasheet comparison of the two lamps, highlighting that LED lamps maintain their performance over time longer than FL lamps. The power consumption of each luminaire is also listed in the table: 44 W for the LED and 55.6 W for the FL (a quick check of the saving these values imply follows this subsection).

Layout: Result Assessment

As shown in Figure 2, a comprehensive layout plan is presented for the simulated industrial office, delineating the architectural features and configuration of the structure. The building measures 24.3 meters in length and 12 meters in width, with a height of 3 meters. Notably, the design is uniform across each of the building's three floors, ensuring a consistent and cohesive layout throughout the entire structure. The design was produced using the simulation capabilities of the DIALux software.

Based on the international standard "IEC 60598" and the Indonesian National Standard (SNI) "SNI 6197:2011", the prescribed illuminance for office spaces falls within the range of 300 to 750 lux, with the precise lux requirement depending on the nature of the room for which the simulation calculations are performed. A detailed breakdown of the average lighting levels required for accurate emulation of illumination in industrial office settings is provided in Table 2.
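As a quick check on the Table 1 values, the following short Python sketch computes the per-luminaire power saving implied by replacing a 55.6 W FL luminaire with a 44 W LED panel; it reproduces the roughly 21% figure reported later in the results.

# Luminaire power draws from Table 1
fl_watts, led_watts = 55.6, 44.0
saving_pct = 100 * (fl_watts - led_watts) / fl_watts
print(f"Power saving per luminaire: {saving_pct:.1f}%")  # -> 20.9%, i.e. ~21%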
Simulation

DIALux, a freely accessible and user-friendly computer program, is widely employed for simulating and visualizing lighting in various environments [20,21]. The tool facilitates the creation of virtual spaces under both daylight and artificial light conditions and offers versatile simulations for buildings, rooms, and outdoor areas [22-24]. Users can analyze lighting system performance for specific locations and object environments, with results presented as numerical data, graphs, and images [25,26]. For example, Moadab et al. [27] compared smart lighting with conventional lighting in apartments using DIALux, demonstrating the software's value for assessing lighting quality and achieving energy cost savings. In summary, DIALux serves as an effective tool for comprehensive lighting evaluations.

Method

The number of light points required within a given space can be determined with the lumen-method formula in Equation 1:

N = (E × L × W) / (Φ × UF)   (1)

where N is the number of light points (lamps) essential to achieving the stipulated lighting conditions, E [lux] is the desired lighting intensity, L and W [meter] are the length and width of the room, Φ [lumen] is the total lumen output of the lamps under consideration (a parameter readily available in Table 3), and UF is the utilization factor, which plays a pivotal role and fluctuates between 50% and 65%. This formula integrates these elements to determine the optimal lighting configuration for the specified room; a worked sketch of the calculation appears at the end of this subsection. As shown in Figure 3, the calculated distribution of light points for the various rooms in DIALux is 2 points each for Offices n1, n2, and n3; 3 points each for Offices n4, n5, n7, n8, n9, and n10; 2 points for Office n6; and 3 points for the corridor.

Table 4 presents the simulation results for the RC100B LED luminaire in DIALux, with reference to Figure 3. The outcomes cover the various office spaces, with average lux values of 479 for Office n5, 400 for Office n4, 495 for Office n3, 440 for Office n2, 500 for Office n1, 462 for Office n6, 470 for Office n7, and 472 for Offices n8, n9, and n10. Notably, these calculated average lux values consistently adhere to the stipulated SNI criteria.

Figure 4a depicts the false color rendering results for the industrial office building using FL T5 lamps, elucidating the distribution of lighting within the space by grouping it on an illumination (luminance) scale and presenting it visually through an array of colors. Figure 4b presents the false color rendering results for the building using RC100B LED lamps; these results underscore a lighting distribution of superior quality compared with the FL T5 lamps.

The false color rendering in Figures 4a and 4b serves as a visual indicator of how effectively the lighting spectrum achieves optimal distribution within a room: black hues signify areas where the lamps fail to adequately illuminate corner points. In such instances, recalculation becomes imperative to ensure that each room receives adequate illumination from the lamps.
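A worked Python sketch of Equation 1 follows; the luminaire flux, room dimensions, and utilization factor in the example are illustrative and are not taken from the paper's Table 3.

import math

def light_points(e_lux, length_m, width_m, lumen_per_luminaire, uf=0.65):
    """Number of luminaires for a room per the lumen-method formula above:
    N = (E * L * W) / (phi * UF), with UF the utilization factor (50-65%
    in the paper). Example values only; rounded up to whole luminaires."""
    n = (e_lux * length_m * width_m) / (lumen_per_luminaire * uf)
    return math.ceil(n)

# Example: a 6 m x 4 m office at 350 lux with assumed 5200 lm luminaires
print(light_points(350, 6, 4, 5200))  # -> 3 luminaires at UF = 0.65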
Smart buildings employ a myriad of automated processes that collaborate seamlessly to oversee and manage essential building systems, encompassing HVAC, lighting, electricity, utility consumption, and closed-circuit cameras. At the core of this technology are interconnected devices and a BMS capable of effective communication and of executing intricate instructions tailored to the real-time conditions of diverse subsystems. Facilitating this dynamic communication network requires advanced software and hardware solutions that enable the coordination, monitoring, and control of the various building elements over the interconnected network. Figure 5 illustrates the technological advancement in the industrial office building: an upgraded lighting control system implemented through the BMS. Notably, the BMS enables dynamic, real-time control of the lighting system, facilitating adjustments based on a predetermined schedule mutually agreed upon with the building owner.

Employing the BMS for lighting control, the author presents a simulation imposing temporal constraints that restrict the operation of the lights to a 12-hour window from 8:00 am to 8:00 pm.

Results

This study conducts a comparative analysis between FL lamps and LED lamps, as illustrated in Figure 1 for a sample of the lamps. The research also presents a cost comparison between FL lamps and LED lamps over a one-year duration, as well as a cost comparison of LED lamps with and without BMS control.

Figure 6a delineates the lux calculations for FL lamps versus LED lamps, revealing that LED lamps offer superior illumination; importantly, the lux values for each lamp in this comparison consistently adhere to the national and international standard criteria. Figure 6b compares the load consumption of FL lamps and LED lamps and shows that the latter demonstrate commendable energy efficiency: LED lamps exhibit superior energy consumption values, with an average improvement of 21%. Given this potential for energy savings, especially in lamp load utilization, adopting LED lamps is recommended for enhanced energy efficiency within the building.

The subsequent phase involves a detailed analysis of monthly cost expenditures, focusing on the comparison between FL lamps and LED lamps, illustrated in Figure 6c. The yearly cost projection reflects an average daily light usage of approximately 18 hours, spanning from 6:00 am to 1:00 am, with the lighting controlled conventionally by a simple ON/OFF switching mechanism. This operational approach translates into an average monthly cost saving of 21%.

Table 5 presents the outcomes of a detailed monthly cost comparison between FL lamps and LED lamps, referencing the light point data outlined in Figure 3. The results demonstrate a consistent pattern: the monthly cost associated with FL lamps surpasses that of LED lamps. This discrepancy can be attributed to the higher power draw of FL lamps compared with LED lamps, a distinction further elucidated in Table 1. A rough sketch of this monthly-cost arithmetic is given below.
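The following Python sketch approximates the monthly-cost arithmetic behind these comparisons. The 29 light points follow from the Figure 3 counts, but the electricity tariff is an assumed placeholder, so only the relative saving is meaningful; scheduling alone yields about 33%, in the neighborhood of the 35% average reported in Table 6.

led_watts = 44.0     # LED luminaire draw from Table 1
light_points = 29    # total per floor plan, summed from the Figure 3 counts
tariff = 0.10        # assumed cost per kWh (placeholder currency)
days = 30

def monthly_cost(hours_per_day):
    """Monthly lighting energy cost for the given daily operating hours."""
    kwh = light_points * led_watts / 1000 * hours_per_day * days
    return kwh * tariff

conventional = monthly_cost(18)   # ~18 h/day under conventional ON/OFF control
with_bms = monthly_cost(12)       # 12 h/day under the 8:00 am - 8:00 pm BMS schedule
saving = 100 * (conventional - with_bms) / conventional
print(f"conventional: {conventional:.2f}, BMS: {with_bms:.2f}, saving: {saving:.0f}%")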
Figure 6d shows the comparison graph for advanced lighting control using BMS operation versus conventional ON/OFF switching, with notably substantial outcomes: the cost efficiency of LED lamps integrated with BMS surpasses that of LED lamps under conventional control.

Examining Table 6 reveals a noteworthy average cost saving of 35% when comparing LED lights under BMS control with LED lights under conventional control. This approach proves highly advantageous for achieving cost reductions, particularly in the context of implementing LED lighting technology in industrial office buildings.

Discussion

The strategic selection of LED technology aligned with needs can be coupled with technological advancements in BMS integrated with the Internet of Things (IoT). As shown in [28], energy conservation is achievable by leveraging IoT to create smart offices that assist in monitoring electricity remotely. Consequently, studying technological advancements is imperative for optimizing energy efficiency. Various emerging technologies also integrate smart lighting with the IoT, enabling the monitoring of electricity usage to facilitate energy conservation [29].

A strategic emphasis on energy efficiency within industrial offices must extend beyond the lighting system to other integral systems such as HVAC [30]. This emphasis stems from the recognition that the lighting system, although significant, does not constitute a substantial portion of energy consumption compared with the predominant load imposed by air conditioning in industrial office buildings. The rationale is the overarching goal of attaining optimal energy efficiency throughout the entire building; it is therefore crucial to optimize other energy-consuming systems, such as air conditioning, as well [30].

It is also imperative to discuss strategic technological advancements in lighting technologies seamlessly integrated with Solar PV as the primary power source. This discussion assumes paramount importance in the contemporary epoch marked by the transition from reliance on fossil energy to the adoption of renewable energy sources [31]. Furthermore, advancements in the use of solar photovoltaic (PV) technology within buildings not only contribute to energy cost savings but also yield environmental benefits, promoting a greener impact on the surrounding environment [32].

The research outcomes offer a comprehensive perspective, shedding light on the intricacies of energy efficiency endeavors. The initiative to transition from FL lamps to LED lamps, particularly within industrial offices and across various industries, yields profoundly positive impacts: this shift towards environmentally friendly and energy-efficient lighting contributes positively to the environment while enhancing economic sustainability [28].
The practical application of this research is not limited to the office industry; it can also be applied to other industries and building management with adjustments as necessary. This research contributes to achieving energy savings through a proper energy management control system (EMCS) and energy goal setting. Related examples include the concept of information control systems for green manufacturing industries with IoT-based energy efficiency and productivity [33]; reducing energy and water consumption in the textile dyeing industry through wastewater reuse [34]; potential energy efficiency and solar energy applications in a small industrial setting [35]; and energy efficiency in aluminum parts industries with an EMCS, as well as energy efficiency using exhaust hot air for the scrap process [36,37].

Conclusions

The development of energy-saving baselines for measurements in industrial offices has been comprehensively discussed. Developing the basic data requires extensive analysis of process flows and data for better understanding; the baseline uses the ratio of energy consumed per month and per year for comparison. The main findings can be summarized as follows. First, based on comparative calculations of the energy consumption of TL lamps and LED lamps, LED lamps have a better efficiency value of 21%. However, the impact of lamps on energy efficiency is relatively modest, constituting only 21% to 30% of energy consumption on average; air conditioning exerts the most significant influence, accounting for 60% of energy consumption, with elevators and other systems making up 6% and 4%, respectively. Beyond these considerations, there is a pressing need for the advancement of lighting technology in industrial office buildings, particularly in conjunction with Solar PV as an energy source.

In addition, technological developments in industrial office spaces are very important, with BMS as the main example. By efficiently monitoring and controlling various systems through the building network, a BMS enables precise regulation of time utilization within the electrical infrastructure of office industries, notably optimizing energy consumption in vital components such as the AC and lighting systems. This system will help achieve energy efficiency in an industrial office building.

Figure 3. Plan of light points in an industrial office.

Figure 6. Comparison bar graphs. (a) Lux comparison for the FL lamp and LED lamp; (b) load consumption comparison for the FL lamp and LED lamp; (c) cost comparison of the FL lamp and LED lamp; (d) cost comparison of the LED lamp without BMS and the LED lamp with the BMS system.

Table 1. Material datasheet comparison of TL lamps and LED lamps.

Table 2. Required average lighting level (Eavg, minimum lux) and minimum color rendering by room function.

Table 5. Cost comparison of FL T5 and LED RC100.

Table 6. Cost comparison of LED lighting without BMS and LED with the BMS system.
SYNTHESIS, X-RAY CRYSTAL STRUCTURE, SPECTROSCOPIC CHARACTERIZATION AND HIRSHFELD SURFACE ANALYSIS OF DICHLORO-BIS(3,5-DIMETHYL-4-AMINO-1H-PYRAZOLE) COBALT(II)

The synthesis and characterization of a mononuclear Co(II) complex based on 3,5-dimethyl-4-amino-1H-pyrazole are reported, together with its IR and UV/Vis spectroscopic characterization. The synthesis, IR and NMR spectroscopy, and elemental analysis of 3,5-dimethyl-4-amino-1H-pyrazole are also reported. X-ray analysis of the [Co(C5H9N3)2Cl2] complex reveals that the cobalt atom has a tetrahedral coordination environment formed by two nitrogen atoms belonging to the two 3,5-dimethyl-4-amino-1H-pyrazole ligands [Co1–N1 = 2.005(3) and Co1–N5 = 2.006(3) Å] and two chlorine atoms [Co1–Cl2 = 2.2400(11) and Co1–Cl1 = 2.2863(12) Å]. In the crystal structure, the molecules are linked through intermolecular (N–H···N, N–H···Cl) and intramolecular non-classical (C–H···Cl) hydrogen bonds. Hirshfeld surface analysis of the intermolecular contacts reveals that the most important contributions to the crystal packing are from H···H (47.1%) and H···Cl/Cl···H (28.5%) contacts.

INTRODUCTION. Pyrazole ligands are widely used in different areas of coordination, bioinorganic, and supramolecular chemistry and in molecular electronics because of their marked tendency to form high-nuclearity species exhibiting specific magnetic properties, which can also be used to develop structural and functional models of the active sites of some metalloenzymes [1,2]. Owing to the N-N bridging function in the pyrazole ring, these ligands can form polynuclear complexes with specific molecular topology; among these, triangular azametallacrowns have been a much more common structure type [6-10]. Neutral pyrazole ligands usually bind to metal ions via the pyridine-type nitrogen atom and thus form mononuclear complexes [11-14]. Despite the considerable amount of available data on 3,5-substituted pyrazole compounds, 3,4,5-substituted pyrazole-based complexes have been studied very unevenly. We have been interested in how steric factors, namely the presence of an NH2 substituent in the fourth position of 3,5-dimethyl-1H-pyrazole, can influence the final crystal structure. Accordingly, in this work we synthesized 3,5-dimethyl-4-amino-1H-pyrazole and studied its Co(II) complex. Although the X-ray structure of the title compound was previously reported [15], we used another solvent for the preparation of [Co(C5H9N3)2Cl2]. Additionally, IR and UV/Vis spectroscopic characterization of the complex is described, together with the synthesis, IR and NMR spectroscopy, and elemental analysis of 3,5-dimethyl-4-amino-1H-pyrazole. Hirshfeld surface analysis of the intermolecular contacts of the title compound was also performed.

EXPERIMENT AND DISCUSSION OF THE RESULTS. All chemicals and solvents were commercial products of reagent grade and were used without further purification. Microanalyses were performed with a Perkin-Elmer 2400 CHN analyzer. IR spectra (KBr pellets) were recorded with a Perkin-Elmer Spectrum BX FT-IR in the range of 400-4000 cm-1. Absorbance UV/Vis spectra were recorded with a Varian Cary 50 spectrophotometer in the range of 200-1000 nm at room temperature. 1H NMR spectra were recorded on a Bruker AC-400 spectrometer (400 MHz) at room temperature.
For X-ray structure determination, single crystals of [Co(C5H9N3)2Cl2] were mounted on a Nonius four-circle diffractometer equipped with a CCD camera and a graphite-monochromated Mo Kα radiation source (λ = 0.71073 Å). Data were collected at 293 K. An effective absorption correction was performed (SCALEPACK [16]). The structure of the complex was solved by the direct method using the SIR-97 [17] software and was refined by the full-matrix least-squares method on F2 using the SHELXL-97 program [18]. H atoms were treated by a riding model. Crystallographic data are summarized in Table 1. The synthesis of 3,5-dimethyl-4-amino-1H-pyrazole (L) was carried out according to the following method. 50 g (0.50 mol) of freshly distilled acetylacetone, 45 mL of concentrated HCl and 250 mL of water were placed into a four-neck 1 L round-bottom flask equipped with a stirrer, reflux condenser, thermometer, and addition funnel. The mixture was cooled with stirring in an ice bath to 5 °C, and a solution of NaNO2 (36 g; 0.52 mol) in 100 mL of water was added dropwise to it over 10–15 minutes. After stirring for a further 20 min, 27.5 g (0.55 mol) of 85% hydrazine hydrate was added to the reaction mixture. After that, 85 g of NaCl was added to the reaction flask to precipitate the product, and the mixture was stirred for 1.5 hours at r.t. The obtained blue crystals were filtered off and air-dried. Subsequent treatment of the isolated product by refluxing its mixture with anhydrous benzene (800 mL) for 5–7 min allowed 3,5-dimethyl-4-nitroso-1H-pyrazole (1, Scheme 1) to dissolve selectively, whereas NaCl was insoluble under these conditions and was filtered off from the hot solution. Slow cooling of the filtrate to 10 °C led to the crystallization of 3,5-dimethyl-4-nitroso-1H-pyrazole (m.p. 128–129 °C), which turned green in the air. To increase the yield of the product, the solid filtered off in the first purification step was mixed with the filtrate, the obtained mixture was refluxed and filtered, and the new filtrate was cooled to 10 °C, which gave a second portion of the product. Yield: 62.5 g; 90%. Hydrazine hydrate (90%, 93 mL; 0.5 mol) was added to 3,5-dimethyl-4-nitroso-1H-pyrazole (62.5 g; 0.5 mol) in ethanol (650 mL) placed in a three-neck round-bottom flask equipped with a stirrer, reflux condenser, and thermometer. During the addition, the temperature of the reaction mixture rose to 60–65 °C. The mixture was refluxed under constant stirring for 3 hours, which was accompanied by a color change from green to yellow-brown. The solvent was completely distilled off, and the resulting crystalline residue was washed with cold ethanol (20–25 mL) and air-dried. The yield of the crystalline product (2, Scheme 1) was 52.0 g (93%). The preparation of the title compound was carried out according to the following method. Cobalt(II) chloride hexahydrate (0.0071 g, 3×10⁻⁵ mol) and 3,5-dimethyl-4-amino-1H-pyrazole (0.0033 g, 3×10⁻⁵ mol) were dissolved in methanol (1 mL). To confirm coordination of the ligand to the metal ion, the IR spectrum of the obtained complex [Co(C5H9N3)2Cl2] was compared with that of the free ligand (Figure 1). In this comparative analysis of the IR spectra of the synthesized complex and the ligand, attention was focused on the absorption bands corresponding to the vibrations of the functional groups of the ligand.
Accordingly, the IR spectra of [Co(C5H9N3)2Cl2] and 3,5-dimethyl-4-amino-1H-pyrazole show an intense sharp ν(NH) stretching band in the region of 3350 cm⁻¹ (for the complex) and 3347 cm⁻¹ (for L). Both spectra have several bands in the region of 2880–3280 cm⁻¹ corresponding to the asymmetric and symmetric ν(NH2) stretches; the NH2 scissoring vibrations appear at 1561 cm⁻¹ for the complex and at 1537 cm⁻¹ for 3,5-dimethyl-4-amino-1H-pyrazole. There are several bands in the region of 2700–2900 cm⁻¹ corresponding to the asymmetric and symmetric ν(CH3) stretches, and in the region of 1386–1427 cm⁻¹ to the asymmetric and symmetric ρ(CH3) vibrations. A rather intense and sharp peak at 1232 cm⁻¹ (for the complex) and 1299 cm⁻¹ (for L) indicates the presence of ν(C–N) vibrations. A band is present in the absorption region of the ν(C=N) vibrations: 1609 cm⁻¹ (for the complex) and 1607 cm⁻¹ (for L) [19]. In view of the above, it can be concluded that the ligand is coordinated but not deprotonated. The electronic spectra of the title compound (Figure 2, B) and CoCl2·6H2O (Figure 2, A) in the same solvent (methanol) show absorption maxima at 600 and 670 nm for CoCl2·6H2O, and at 600 and 650 nm for the complex. The first of these bands in each case appears as a shoulder and may be attributed to charge transfer from the ligand to cobalt, and the second can be assigned to the d–d transitions of cobalt(II) ions. Upon complex formation, the most intense band is shifted to the short-wavelength region by 20 nm in comparison with CoCl2·6H2O. Since octahedral cobalt complexes (CoO6–CoN6 chromophores) are expected to absorb at 400–500 nm and tetrahedral complexes (CoN4 chromophore) at 600–700 nm, the results suggest a tetrahedral structure for the obtained complex [20]. Owing to this organization of the crystal packing, the intermolecular N–H···N hydrogen bonds take part in forming sixteen- and twenty-membered rings, which alternate in the structure. Four molecules of the ligand with two Co atoms are connected by N4–H4···N3 hydrogen bonds and take part in forming the sixteen-membered ring. The alternating hydrogen bonds N2–H2···N6 and N4–H4···N3 from four molecules of the ligand take part in the formation of the twenty-membered ring. In the crystal packing, the shortest intrachain Co···Co separations are 7.407(27) Å. The Hirshfeld surface analysis and the associated two-dimensional fingerprint plots were performed using the CrystalExplorer 17.5 software [23], with the three-dimensional d_norm surfaces plotted at standard resolution over a fixed color scale of −0.6232 (red) to 1.6217 (blue) a.u. There are 12 red spots on the d_norm surface (Figure 5). The dark-red spots arise as a result of short interatomic contacts and represent negative d_norm values on the surface, while the other, weaker intermolecular interactions appear as light-red spots. The Hirshfeld surfaces mapped over d_norm are shown for the H···H, H···Cl/Cl···H, H···N/N···H and H···C/C···H contacts; the overall two-dimensional fingerprint plot and the decomposed two-dimensional fingerprint plots are given in Figure 6. All short interatomic contacts are in the range of 1.829–2.782 Å. The shortest contacts are N–H···N and the longest are N–H···Cl. The most significant contributions to the overall crystal packing are from the H···H (47.1%), H···Cl/Cl···H (28.5%), H···N/N···H (12.9%) and H···C/C···H contacts.
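For reference, the quantity mapped on these surfaces, the normalized contact distance, is the standard definition used by CrystalExplorer:

$$d_{\mathrm{norm}} = \frac{d_i - r_i^{\mathrm{vdW}}}{r_i^{\mathrm{vdW}}} + \frac{d_e - r_e^{\mathrm{vdW}}}{r_e^{\mathrm{vdW}}},$$

where $d_i$ and $d_e$ are the distances from a point on the surface to the nearest atom inside and outside it, and $r^{\mathrm{vdW}}$ are the corresponding van der Waals radii. Contacts shorter than the sum of the van der Waals radii give negative $d_{\mathrm{norm}}$ values and appear as the red spots discussed above.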
CONCLUSIONS. The present work describes the synthesis and characterization of the mononuclear coordination compound [Co(C5H9N3)2Cl2]. X-ray analysis of the complex reveals that the cobalt atom has a tetrahedral coordination environment formed by two nitrogen atoms belonging to the two 3,5-dimethyl-4-amino-1H-pyrazole ligands and two chlorine atoms. In the crystal structure the molecules are linked through intermolecular hydrogen bonds (N–H···N, N–H···Cl) and non-classical intramolecular hydrogen bonds (C–H···Cl). It was found that the introduction of an NH2 group at the 4-position of 3,5-dimethyl-1H-pyrazole does not lead to the participation of this group in coordination. The reason for this may be the weak σ-donor capacity of the NH2 group in the pyrazole ring. Hirshfeld surface analysis was used to study the intermolecular interactions in the crystal. The 2D fingerprint plot calculations showed that the H···H and H···Cl/Cl···H contacts make the most significant contributions to the Hirshfeld surface.
Author Correction: Analysis of human metabolism by reducing the complexity of the genome-scale models using redHUMAN An altered metabolism is a hallmark of several human diseases, such as cancer, diabetes, obesity, Alzheimer's, and cardiovascular disorders 1,2 . Understanding the metabolic mechanisms that underlie this reprogramming guides the discovery of new drug targets and the design of new therapies. To this effect, tremendous efforts are now being made to use the large amounts of now-available multi-omics experimental data to gain insight into the metabolic alterations occurring in different phenotypes. Unfortunately, current mathematical models can be too complex for this analysis, rendering them too cumbersome to employ for many systems biology studies. In the field of systems biology, genome-scale metabolic models (GEMs) integrate available omics data with genome sequences to provide an improved mechanistic understanding of the intracellular metabolism of an organism. GEMs have been reconstructed for a large diversity of organisms spanning from bacteria to mammals [3][4][5] and are valuable tools for studying metabolism 6,7 . The mathematical representation of GEMs through the stoichiometric matrix 7 is amenable to methods such as flux balance analysis (FBA) 8 and thermodynamics-based flux balance analysis (TFA) [9][10][11][12][13] , which ensure that the modeled metabolic reactions retain feasible concentrations and directionalities obeying the rules of thermodynamics, to predict reaction rates and metabolite concentrations when optimizing for a cellular function, such as growth, energy maintenance, or a specific metabolic task. Additionally, GEMs can be used for gene essentiality 14 , drug off-target analysis 15 , metabolic engineering [16][17][18] , and the derivation of kinetic models [19][20][21][22] . The first human GEM was reconstructed in 2007 23,24 . Since then, the scientific community has been working to develop high-quality human GEMs, including HMR 2.0 25 , Recon 2 26 , Recon 2.2 27 , and Recon 3D 28 . The human GEMs used for the analysis in this article are Recon 2 and Recon 3D. Recon 2 is composed of 7440 reactions, with 4821 of them associated with 2140 genes, and 2499 unique metabolites across seven compartments: cytosol, mitochondria, peroxisome, Golgi apparatus, endoplasmic reticulum, nucleus, and lysosome. Recon 3D is the latest consensus human GEM. It is an improved, more comprehensive version of the previous GEMs consisting of 10,600 reactions, with 5938 of them associated with 2248 genes, and 2797 unique metabolites compartmentalized as in Recon 2 with an additional compartment for the mitochondrial intermembrane space. Human GEMs reconstruct the metabolic reactions occurring in several human cell types. However, a given cell type only leverages a portion of these reactions. This motivates the development of methods to generate context-specific metabolic models that can be used to study the differences in metabolism for different cell types 29 , for healthy and diseased cells 30,31 , and for cells growing under diverse extracellular conditions.
Some examples of such methods are (1) GIMME 32 , mCADRE 33 , and tINIT 34 to reconstruct tissue-specific models based on omics data and a set of tasks or a specific objective function; (2) redGEM-lumpGEM 35,36 to reconstruct models around a specific set of subsystems of interest for the study; and (3) iMM 37,38 to characterize the extracellular medium and the metabolites that are essential for growth under each condition. Context-specific metabolic models have been extensively used to understand the differences in metabolism between cancer cells and their healthy counterparts [39][40][41][42][43][44][45] . In this article, we present redHUMAN, a workflow to reconstruct thermodynamic-curated reductions of the human GEMs Recon 2 and Recon 3D. We integrate the thermodynamic properties of the metabolites and reactions into the GEMs and use redGEM-lumpGEM to reconstruct reduced models around specific subsystems. Furthermore, we introduce redGEMX, a method to identify the pathways required to connect the extracellular compounds to a core network. redGEMX guarantees that the reduced models have all the feasible pathways that consume and produce the components of the extracellular environment of the cell. Finally, we use metabolic data for leukemia as an example of how to integrate experimental data to derive disease- and tissue-specific metabolic models. Results Overall workflow. In order to generate reduced models from human GEMs, we developed redHUMAN, a six-step workflow that can be applied to any GEM or desired model system. The overall workflow is briefly described here and shown in Fig. 1, and the details of each step in its application to the human GEMs Recon 2 and Recon 3D to generate thermodynamic-curated reductions are provided in the subsequent sections. For the workflow, the thermodynamic information for compounds and reactions, which is assembled from earlier studies or estimated using established group contribution methods, is first integrated into the GEM. Second, the subsystems, or families of pathways with a specific functional role for a biological process, are selected based on the objectives of the specific study. These pathways are explicitly represented and constitute the core of the reduced model. For example, when studying cancer metabolism, this can include reported subsystems that are deregulated in cancer cells in addition to the standard central carbon pathways. Third, these subsystems are expanded using reactions from the GEM to create a connected core network. In this step, we include every reaction that connects core metabolites and that is not a member of the formal definition of the selected subsystems in the core model. In steps four and five, we include the shortest pathways to connect the extracellular metabolites from the defined medium as well as the shortest pathways to generate the biomass components from the core network. These steps guarantee that the model has all pathways that are essential for survival and growth of the cells based on the availability of nutrients. In the sixth step, experimental data for a specific physiological state is integrated in the model, and the final model is verified through checks that ensure the consistency of the reduced model with the original GEM. Thermodynamic curation of the human GEMs (Step 1). We first determine the directionality of the chemical reactions of the network, which is directly associated with their corresponding Gibbs free energy.
The Gibbs free energy of a reaction can be estimated from the thermodynamic properties of its reactants and products. Therefore, we curated the GEMs Recon 2 and Recon 3D (see "Methods") and integrated the thermodynamic properties for 52.4% of the 2499 unique metabolites from Recon 2 and 67.5% of the 2797 unique metabolites from Recon 3D (Fig. 2a and Supplementary Data 1). Three main reasons prevented the estimation of the thermodynamic properties of the metabolites: (1) an unknown molecular structure (SMILES), (2) an incomplete elemental description (for example, an R in the structure), and (3) groups in the structure for which an estimated free energy does not exist (for example, the >N− group). We observed that as the number of metabolites increases from Recon 2 to Recon 3D, the percentage of thermodynamic coverage increases as well. This is due to the improved annotation of the metabolite structures in Recon 3D. Using the thermodynamic properties of the compounds as constraints (see "Methods"), we estimated the Gibbs free energy for 51.3% of the 7440 reactions present in Recon 2 and 61.6% of the 10,600 reactions in Recon 3D. These constraints ensured that the reactions in the computed flux distributions operated in thermodynamically feasible directions. Subsystem selection to build the core (Step 2). A proper metabolic model contains the pathways that are essential for the survival of the cell as well as the pathways that are informative of a specific metabolic behavior. In this work, we were interested in the metabolism of cancer cells. Thus, we selected as core subsystems: (a) the central carbon pathways that provide the energy, redox potential, and biomass precursors, and (b) the subsystems that have been reported to be altered in cancer cells [46][47][48][49] . Consequently, the core subsystems for our models were glycolysis, pentose phosphate pathway, citric acid cycle, oxidative phosphorylation, glutamate metabolism, serine metabolism, urea cycle, and reactive oxygen species detoxification. We have estimated the thermodynamic properties for the metabolites and the reactions in these initial subsystems. In the case of Recon 2, we provide an estimate for the Gibbs free energy of formation for 236 metabolites (94.4% of the total in the initial subsystems) and the Gibbs free energy of reaction for 143 reactions (83.1% of the reactions in the initial subsystems). Network expansion (Step 3). Subsequently, to reconstruct the core network, we pairwise connected the chosen subsystems using redGEM (see "Methods"). The algorithm first performed an intra-expansion of the initial subsystems. In this process, each initial subsystem was expanded to include additional reactions from the GEM whose reactants and products belong to that subsystem. In the GEM, these reactions can be assigned to subsystems other than the initial ones, so the core network would miss them if only the formal definition of the initial subsystems were considered. The initial core subsystems of Recon 2 contained a total of 180 reactions. After the intra-expansion, 135 reactions from 21 subsystems were added. Examples of these added reactions included three from pyruvate metabolism that interconvert acetyl-CoA, acetate, malate, and pyruvate, which are all metabolites that participate in the citric acid cycle subsystem. For Recon 3D, 171 reactions from 24 subsystems were added to the 211 reactions from the initial core subsystems.
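As an illustration of this intra-expansion step, a minimal sketch is given below; it assumes the GEM is loaded as a COBRApy model, and the function name and cofactor handling are simplifications of the published implementation rather than its actual code.

```python
def intra_expand(model, core_subsystems, cofactors):
    """Collect GEM reactions whose non-cofactor participants all belong to
    the metabolites of the chosen core subsystems (sketch of redGEM's
    intra-expansion); 'model' is assumed to be a cobra.Model."""
    core_mets = {met.id
                 for rxn in model.reactions if rxn.subsystem in core_subsystems
                 for met in rxn.metabolites}
    core_mets -= cofactors
    added = []
    for rxn in model.reactions:
        if rxn.subsystem in core_subsystems:
            continue  # already covered by the formal subsystem definition
        participants = {met.id for met in rxn.metabolites} - cofactors
        # keep the reaction if every non-cofactor participant is a core metabolite
        if participants and participants <= core_mets:
            added.append(rxn)
    return added
```

Reactions collected this way carry a different subsystem label in the GEM but operate entirely on core metabolites, which is exactly why they would be missed if only the formal subsystem definitions were used.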
Next, the algorithm performed a directed graph search to find the reactions from the GEM that connected the subsystems for different degrees D (Fig. 2b and Supplementary Table 1), wherein D represents the distance (in number of reactions) between pairs of metabolites from the subsystems. Our final models included the connections for degree D = 1, that is, all the reactions that in one step connect two metabolites (excluding cofactors) belonging to any of the initial subsystems. A degree D = 1 was enough to pairwise connect all the initial subsystems (Fig. 2c). This resulted in a Recon 2 core network of 356 metabolites and 617 reactions and a Recon 3D core network of 440 metabolites and 796 reactions. Extracellular medium connection (Step 4). Cells adapt their metabolism to the available nutrients in their extracellular environment. Consequently, a correct definition of the medium in the metabolic model is fundamental for an adequate representation of the intracellular metabolism. Given the complexity of the extracellular medium, it is particularly important to identify and classify the essentiality of the medium components and the pathways used for their metabolism. To this end, we curated the representation of the interactions of the cell with its environment in the human GEMs. First, we did not allow the exchange of intracellular metabolites lacking associated transport reactions or of transport molecules containing P, CoA, or ACP (acyl carrier protein). Secondly, we allowed the synthesis of generic fatty acids from palmitate, with reactions from Recon 2 and Recon 3D (Supplementary Note 1). We next characterized the in silico minimal medium composition required for growth in the human GEMs by applying iMM (see "Methods"), which identifies the minimal set of metabolites that need to be taken up to simulate growth. The results showed that Recon 2 required a medium with glucose, the nine essential amino acids, and some inorganics (PO4, NH4, SO4, O2), and Recon 3D simulated growth in a medium with glucose, the nine essential amino acids, the same inorganics as Recon 2, and one of the two essential fatty acids (alpha-linolenic acid and linoleic acid). The presence of the two essential fatty acids in the iMM of Recon 3D is a consequence of the improvement of the lipid metabolism 28 , where the essential fatty acids participate in the synthesis of phospholipids. This demonstrates how the algorithms and workflow can be used to compare and validate updated model reconstructions for the same organism or between different organisms. Seeking to identify the pathways that human cells use to uptake and secrete extracellular metabolites, we next developed the method redGEMX (see "Methods"). This algorithm finds the pathways from the GEM that are needed to connect the extracellular metabolites to the core network defined by redGEM. In this work, we considered a complex medium composition of 34 metabolites (Fig. 3a), and redGEMX found the corresponding GEM reactions that connected 26 of these extracellular metabolites (we excluded the inorganics and the fatty acids) to the core network. An example of one of these connected metabolites is the essential amino acid L-histidine, which affects many aspects of human physiology, including cognitive functions and allergic reactions. The classical pathway to metabolize L-histidine consists of four steps that sequentially convert it into urocanate, 4-imidazolone-5-propanoate, N-formimidoyl-L-glutamate, and, ultimately, L-glutamate 50 .
Fig. 3 Extracellular medium utilization. a Extracellular medium composition defined in the models. b Graph of the subnetwork from Recon 2 for the uptake of L-histidine and the medium components required for its metabolism. Green represents the metabolites from the subnetwork, and orange represents the metabolites of the core network where the subnetwork is connected. In blue, the medium metabolite under study (L-histidine) and in pink, the extracellular metabolites co-utilized to metabolize L-histidine. The pathway starts with the transport of L-histidine from the extracellular space to the cytosol, where it is sequentially transformed into urocanate (urcan_c), 4-imidazolone-5-propanoate (4izp_c), N-formimidoyl-L-glutamate (forglu_c), L-glutamate (glu_L_c), 5-formiminotetrahydrofolate (5forthf_c), 5,10-methenyltetrahydrofolate (methf_c), and 10-formyltetrahydrofolate (10thf_c). 4-Aminobutanoate (4abut_c) is converted to L-glutamate through a reaction from the subsystem glutamate metabolism, and finally, L-glutamate is connected to the TCA cycle.
Interestingly, the resulting redGEMX subnetwork for L-histidine uses this classical pathway to connect it to the Recon 2 core metabolites L-glutamate and 4-aminobutanoate, both from the subsystem glutamate metabolism. The subnetwork is composed of 22 reactions, and it contains not only the classical pathway but also all the additional reactions required to balance the cofactors and by-products (Fig. 3b). These additional reactions are essential for an active main pathway, as they include the utilization of NH4, the sources of water and tetrahydrofolate, and the conversion of the by-product 5-formiminotetrahydrofolate to 10-formyltetrahydrofolate, which regenerates tetrahydrofolate. Cellular metabolism has evolved to give cells the flexibility to survive and function under different conditions. This flexibility is captured in the metabolic networks by the existence of alternative pathways. For this reason, using redGEMX we found three alternative pathways of minimum size (22 reactions) to connect L-histidine to the core network of Recon 2. The alternatives emerge from the existence of different transport reactions for the extracellular metabolites. In the case of Recon 3D, L-histidine is connected to the core network using 20 reactions, and there exist two pathways of minimum size. The subnetworks connect L-histidine to the Recon 3D core metabolites L-glutamate, 5,10-methylenetetrahydrofolate, 2-oxoglutarate, and pyruvate using the classical pathway to metabolize L-histidine. The different topology of the Recon 2 and Recon 3D networks manifests in differences in the pathways used to metabolize and synthesize the compounds; thus, it is important to characterize which pathways the models use. Following this approach, we added the reactions that compose all the alternative subnetworks of minimum size to the core networks to connect the 26 extracellular metabolites (Supplementary Table 2 and Supplementary Data 2). The subnetworks generated with redGEMX provide a new perspective on the current understanding of metabolic pathways, as they not only contain the main pathway but also include other reactions necessary to supply and consume all the cofactors and by-products. Moreover, the alternatives can be used to hypothesize which pathways cells use when growing under different conditions, such as when different nutrients are present in the environment, or under different intracellular regulation when different enzymes are operational.
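The enumeration of all alternative subnetworks of minimum size, used here and again in the lumpGEM step below, can be realized with standard integer cuts: after each solve, a constraint is added that excludes the subnetwork just found. The sketch below illustrates the idea on a generic MILP with the PuLP package; it is a simplified stand-in for the published Matlab/CPLEX implementation, and `toy_builder` is a made-up example problem, not a model from this study.

```python
import pulp

def enumerate_min_subnetworks(build_milp, max_alternatives=20):
    """Enumerate alternative optima of a subnetwork-selection MILP.
    build_milp() returns (problem, use) where use[i] is a binary variable
    equal to 1 when 'reaction' i is part of the subnetwork."""
    prob, use = build_milp()
    solutions, best_size = [], None
    for _ in range(max_alternatives):
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        if pulp.LpStatus[prob.status] != "Optimal":
            break                      # all alternatives exhausted
        active = [i for i, v in use.items() if v.value() > 0.5]
        if best_size is None:
            best_size = len(active)    # size of the minimal subnetwork
        elif len(active) > best_size:
            break                      # keep only minimum-size alternatives
        solutions.append(active)
        # integer cut: no later solution may contain this exact reaction set
        prob += pulp.lpSum(use[i] for i in active) <= best_size - 1
    return solutions

def toy_builder():
    """Made-up example: metabolite A is producible by reactions 0 or 1,
    metabolite B by reactions 2 or 3; pick a minimal covering set."""
    prob = pulp.LpProblem("min_subnetwork", pulp.LpMinimize)
    use = {i: pulp.LpVariable(f"z{i}", cat="Binary") for i in range(4)}
    prob += pulp.lpSum(use.values())   # minimize the number of used reactions
    prob += use[0] + use[1] >= 1       # produce metabolite A
    prob += use[2] + use[3] >= 1       # produce metabolite B
    return prob, use

print(enumerate_min_subnetworks(toy_builder))
# four alternatives of size two, e.g. [0, 2], [0, 3], [1, 2], [1, 3] (order may vary)
```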
If metabolomics data are available, the subnetworks generated with redGEMX can be classified based on pathway favorability, as has recently been done in refs. 9,51,52 . Biosynthetic reactions generation (Step 5). Cellular metabolic functions, such as growth, structure maintenance, and reproduction, require the synthesis of several metabolites. In metabolic models, this is represented using the biomass reaction 53 , whose reactants, named biomass building blocks or BBBs, are the metabolites that the cell needs to survive and perform its functions. Therefore, the last step necessary for reconstructing the reduced models is the integration of the pathways necessary to synthesize the 37 BBBs that compose the defined biomass in Recon 2 and Recon 3D. Among them, 19 are taken up directly from the extracellular medium or produced within the core network. To find the minimum number of reactions from the GEM that we need to add to the core network for the synthesis of the remaining 18 BBBs, we used lumpGEM (see "Methods"). Similarly to redGEMX, lumpGEM generates subnetworks that account for the synthesis, degradation, and balancing of all the by-products and cofactors required by the main pathway. The alternative subnetworks generated with lumpGEM can assess the flexibility of the cells to use alternative pathways to produce the BBBs, which can lead to survival under different conditions and to drug resistance. Using lumpGEM, we calculated all the alternative subnetworks (sets of reactions) of minimum size to capture the flexibility of the network for the biosynthesis of the BBBs (Fig. 4a, Supplementary Table 3, and Supplementary Data 3). The reactions that compose each of these subnetworks were summed up together to form an overall reaction that represented the subnetwork. These lumped reactions were then added to the core network. The subnetworks generated with lumpGEM have the same size and number of alternatives in both Recon models for most of the BBBs, indicating that both models have the same level of flexibility for synthesizing the BBBs, with the exception of L-cysteine, dTTP and the purine nucleotides (ATP, GTP and their deoxy equivalents), cholesterol, and the phospholipids and sphingolipids. The core network of Recon 2 contains a reaction that produces L-cysteine; however, the core network of Recon 3D requires two reactions to produce it. The subnetworks that produce dTTP have the same size in both models, but a different number of alternatives. The subnetworks to produce the purine nucleotides have one more reaction and more alternatives in Recon 3D. Cholesterol is another BBB whose subnetworks agree in size for both models, but Recon 3D has more alternatives than Recon 2. The explosion of alternatives in Recon 3D is due to the parallel description of the synthesis of cholesterol in three compartments, namely the cytosol, peroxisome, and endoplasmic reticulum. The differences in the lumped reactions for the phospholipids and sphingolipids between the two models are due to the introduction of the essential fatty acids in their synthesis in Recon 3D. As an example of the subnetworks that produce the BBBs, we show the synthesis of the phospholipid phosphatidylserine (Fig. 4b, c). The standard KEGG pathway 54 for the synthesis of phosphatidylserine comprises four steps, wherein glycerol 3-phosphate is converted to lysophosphatidic acid, phosphatidic acid, CDP-diacylglycerol, and phosphatidylserine.
In Recon 2, the subnetwork generated with lumpGEM for the synthesis of phosphatidylserine was composed of eight reactions. It included the KEGG pathway with the exception of the CDP-diacylglycerol intermediate, which was not connected to phosphatidylserine in the GEMs. Instead, phosphatidylserine was produced directly from phosphatidic acid by attaching serine. Additionally, the subnetwork contained the reactions required to generate from acetyl-CoA the fatty acids that attach to glycerol 3-phosphate and to lysophosphatidic acid, which are important to consider for the final synthesis of phosphatidylserine. All the reactions involved in the synthesis of phosphatidylserine were lumped together into one reaction. For Recon 3D, the phosphatidylserine synthesis subnetwork was generated with the same eight reactions, but in this case four alternative subnetworks existed (Fig. 4c and Supplementary Table 4), indicating that Recon 3D has a higher flexibility in producing this BBB. The alternatives emerged from the presence of two reactions in Recon 3D that could be substituted by two other reactions in the subnetwork. One of these reactions arose from the participation of the essential fatty acid linoleate in phospholipid generation, resulting in an alternative way of synthesizing one of the tails of phosphatidic acid. Specifically, the reaction ARTPLM2, which converts palmitoyl-CoA into a generic fatty acid, is not required; instead, the essential fatty acid linoleate is transported from the extracellular medium, transformed into linoleoyl-CoA, and attached to the lysophosphatidic acid to form phosphatidic acid. Because the core network of Recon 3D included a reaction that transforms phosphatidylcholine into phosphatidylserine, the other substitution occurred in the last step, where serine was replaced by choline and phosphatidylcholine was synthesized. The lumped reactions can be classified based on the thermodynamic favorability of their subnetworks, if metabolomics data are available, as in refs. 9,51,52 .
Fig. 4 Biosynthesis of biomass building blocks. a Size of lumped reactions for Recon 2 and Recon 3D, and the corresponding number of alternatives to synthesize the BBBs that cannot be produced by the core nor taken up from the extracellular medium. b, c Subnetwork for the synthesis of phosphatidylserine. Orange represents the metabolites from the core network. Blue represents the metabolites from the subnetwork for phosphatidylserine synthesis. Pink represents the extracellular metabolites. Phosphatidylserine synthesis starts from the core metabolites glycerol 3-phosphate (glyc3p_c), from glycolysis, and acetyl-CoA (accoa_c), from the TCA cycle. In the first reaction, acetyl-CoA is transformed into malonyl-CoA (malcoa_c). The next reaction (KAS8) represents the synthesis of palmitate (hdca_c) in the elongation cycle 74 . A CoA molecule is attached to palmitate to form palmitoyl-CoA (pmtcoa_c), from which the two generic fatty acids are derived. These two generic fatty acids are attached to glycerol 3-phosphate to form lysophosphatidic acid (alpa_hs_c) and phosphatidic acid (pa_hs_c). Finally, serine (ser_L_c) is attached to phosphatidic acid to form phosphatidylserine (ps_hs_c). b Subnetwork from Recon 2 and the corresponding lumped reaction. c The four alternative subnetworks of minimum size from Recon 3D. Phosphatidic acid can be produced with two generic fatty acids or with one generic fatty acid and the essential fatty acid linoleic acid (lnlc_e) (light blue reactions). Phosphatidylserine can be directly produced from phosphatidic acid by attaching serine (green reaction) or through the formation of phosphatidylcholine (red reaction) and then exchanging choline (chol_c) for serine (orange reaction).
The analysis performed with lumpGEM allows the characterization and classification of the metabolic pathways and their alternatives, leading to an in-depth understanding of the flexibility of metabolism. In the context of GEMs, such detailed analysis of the subnetworks is often a difficult task due to their large size and interconnectivity. By applying the redHUMAN workflow, we reconstructed four reduced metabolic models for human metabolism (Table 1). Two of them have Recon 2 as the parent GEM, and the other two are generated from the Recon 3D GEM. For both GEMs, we generated one model with the minimum set of pathways required to simulate growth, that is, one lumped reaction per BBB with subnetworks of minimum size, and another model with higher flexibility containing all the alternative pathways of minimum size required to simulate growth. The reduced models have a thermodynamic coverage of more than 92% of the compounds and more than 61% of the reactions. Data integration and metabolic tasks (Step 6). Once the reduced models were generated, we investigated the metabolic tasks captured by the reduced models and identified how the models should be curated to recover the tasks that they could not perform. First, we sequentially tested in the generated reduced models the thermodynamic feasibility of 57 metabolic tasks defined by Agren et al. 34 . The four models captured 45 of the 57 tasks, including rephosphorylation of nucleoside triphosphates, uptake of essential amino acids, de novo synthesis of nucleotides, key intermediates and cholesterol, oxidative phosphorylation, oxidative decarboxylation, and growth (Fig. 5a). The tasks not captured by the models encompassed the synthesis of protein from amino acids, beta oxidation of fatty acids, inositol uptake, and vitamin and co-factor metabolism. We classified the causes behind their limitation into two categories: (1) the model reconstruction, specifically the definition of the biomass, or (2) the reduction properties, that is, the subsystems included in the reduction and the representation of parts of the network as lumped reactions. To recover these tasks such that they are captured by the model, the following actions should be performed: the synthesis of proteins from amino acids and vitamin and co-factor metabolism can be recovered by modifying the biomass to account for their synthesis and utilization; the inclusion of lipid metabolism subsystems can recover the beta oxidation of fatty acids; and finally, the utilization of inositol can be recovered by adding the explicit reactions that compose the subnetworks, as it was found to be hidden in the lumped reactions of phosphatidylinositol. This demonstrates that redHUMAN allows the building of reduced models consistent not only with the GEM but also with the metabolic tasks, and these models are suitable for targeted modifications and expansions. We next demonstrated how the generic reduced models were used to integrate data to study disease physiology. We first integrated experimental data from the NCI60 cell lines in the reduced models to define the physiology of leukemia cells.
In particular, we considered the exometabolomics of the cell lines HL-60, K-562, MOLT-4, CCRF-CEM, RPMI-8226, and SR, which correspond to leukemia 40,55 . Additionally, we limited the maximal growth rate to 0.035 h⁻¹, consistent with the doubling time reported for leukemia cells, and we constrained, according to literature values, the maximum uptake rate of oxygen to 2 mmol·gDW⁻¹·h⁻¹ 40 and the ATP maintenance to 1.07 mmol·gDW⁻¹·h⁻¹ 56 (Supplementary Tables 5 and 6). We verified that all the models achieved the maximum growth when maximizing for the biomass reaction using TFA. Next, to analyze the impact that the deletion of each gene had on the network, we performed in silico gene knockouts by artificially removing a gene and measuring how the network was affected. The genes whose knockout prevented the synthesis of biomass could then be investigated as potential targets for limiting cell proliferation. The consistency of the workflow used to generate the reduced models ensures that they capture the essentiality from the GEM; that is, genes that are part of the reduced models and are essential in the GEM are also essential in the reduced models (Fig. 5b and Supplementary Tables 7 and 8). Furthermore, the reduced models allow the assignment of functionality to the essential genes using the lumped reactions. For example, the gene GART is associated with the enzymes phosphoribosylglycinamide formyltransferase, phosphoribosylglycinamide synthetase, and phosphoribosylaminoimidazole synthetase, which are all part of the subnetworks for the synthesis of the nucleotides ATP, GTP, dATP, and dGTP. Silencing this gene prevents the synthesis of these BBBs, and consequently, the models cannot synthesize biomass. Finally, because the model reduction affects the flexibility of the network with respect to the GEM, we performed thermodynamic flux variability analysis (TVA) on the common reactions between the GEM and the reduced model. The top 20 reactions whose rate ranges changed the most in absolute value included reactions from glycolysis, the pentose phosphate pathway, folate metabolism, and nucleotide interconversion, among others (Fig. 5c). For reactions such as phosphoglycerate kinase (PGK), transaldolase (TALA), and methenyltetrahydrofolate cyclohydrolase (MTHFC), the ranges of reaction rates in the reduced model decreased with respect to the corresponding reaction rates in the GEM. Some reactions, such as nucleoside-diphosphate kinase (NDPK9), were bidirectional in the GEM and became unidirectional in the reduced models. On the other hand, there were also reactions such as fumarase (FUM), lactate dehydrogenase (LDHL), or ribose-5-phosphate isomerase (RPI) whose flux ranges fully agreed between the reduced model and the GEM. Interestingly, if we look at the percentage of rate flexibility change, the reactions from the initial subsystems did not experience a large relative change in their rates, with the exception of the reactions whose participants are precursors for the lumped reactions of the BBBs, as their reaction rates are now constrained closer to the physiological state. A final calibration of the models is done using the transcriptomics data from the NCI data repository (https://www.ncbi.nlm.nih.gov/sites/GDSbrowser?acc=GDS4296) for the corresponding leukemia cell lines. We have identified that, in the four models presented in this study, over 99% of the enzymes with gene associations (more than 75% of the total enzymes) are expressed in the NCI60 leukemia cell lines (Supplementary Table 9).
This suggests that the pathways selected for initializing and expanding the metabolic core network are highly relevant for the specific physiology and are also consistent with the important pathways identified in experimental and medical studies 46,48,57 . Physiology analysis. redHUMAN helps to navigate large human genome-scale metabolic models to explore and classify the metabolic pathways that cells use to function and survive under specific conditions. The thermodynamic curation performed in the genome-scale models guarantees that the reactions obey the laws of thermodynamics, discarding possible pathways that would not be compatible with the bioenergetics of the cell. As an example of how thermodynamics reduces the space of solutions to the thermodynamically feasible pathways, we analyzed the flux variability with and without thermodynamic constraints in the Recon 3D reduced model that has all the alternative lumped reactions of minimum size (Smin). The reactions L-glutamate 5-semialdehyde dehydratase (from arginine metabolism) and L-glutamate 5-semialdehyde:NAD+ oxidoreductase (from the urea cycle) are bidirectional when flux variability is performed without thermodynamics and become unidirectional when their thermodynamic information is taken into account. Therefore, integrating thermodynamic information reduces the space of reaction directionality and the physiological solution space, and it eliminates thermodynamically infeasible reactions, excluding some pathways. The leukemia-specific models generated in this study are powerful tools to analyze how the metabolic pathways are altered with respect to other cancer cells or normal cells. In particular, we can analyze how leukemia cells utilize the nutrients available in the microenvironment to biosynthesize the precursors required for growth and cellular functionality. As an example, we identified the minimal number of reactions that are required for the synthesis of phosphatidylserine in the reduced Recon 3D model with all the alternative lumped reactions of minimum size. We found that at least 76 reactions should be active for the production of phosphatidylserine, including the interactions with the extracellular medium, i.e., for some alternatives the uptake of glucose, histidine, linoleic acid, oxygen, and phosphate, and the secretion of succinate, ammonia, carbon dioxide, and water. The main pathways active within the subnetwork of 76 reactions are glycolysis, the citric acid cycle, serine metabolism, and the electron transport chain. This type of analysis will deepen our understanding of how cells adapt their metabolism to the microenvironment, allowing researchers to hypothesize how and why cancer cells change their expression profiles to adapt and survive. Discussion For a better understanding of the altered metabolism that accompanies many human diseases, we have herein presented a workflow to generate reduced models for common human GEMs that can reduce the complexity of these systems to the relevant processes to be studied, making detailed in silico analyses of metabolic changes possible. In recent years, increasing amounts of metabolomics data have been generated, which capture the physiology of cell metabolism more directly than other omics data. This has created a need to expand the classical constraint-based modeling methods to include metabolomics information.
Our thermodynamic formulation and application of TFA 12,51,58,59 in redHUMAN allows the integration of endo- and exo-metabolomics into the models, constraining the concentrations of the metabolites according to physiological data. The size of the model is directly related to the percentage of metabolites that need to be measured. Therefore, the continuous expansion in the size of genome-scale models increases the demand for larger metabolomics datasets, and such data are not always available. In addition, there is a community effort to expand constraint-based models to include information on enzyme abundance, relating the metabolic fluxes to enzymatic data and allowing transcriptomics and proteomics data to be integrated into the models. These data are currently limited, but they can be continuously updated and integrated as they become available 60,61 . Moreover, most of the existing methods to build context-specific models are data-driven; that is, the reduced models are extracted from a GEM by considering only the enzymes associated with highly expressed genes, or literature-based pathways. Then, they include additional reactions that are required to simulate growth and cellular functions 33,34,62 . The main difficulty with these methods is the large amount of data required to fully characterize the initial set of reactions, or core reactions. The lack of data could lead to unconnected network parts and to the omission of reactions that could be important for the specific physiology, affecting the final model and the predictions. redHUMAN reconstructs reduced models considering only the pathways of interest and their stoichiometric connectivity. The reduced models are built independently of the data, guaranteeing thermodynamic feasibility and consistency with the GEM and the metabolic tasks. The reduced models can then be used to construct context-specific models by integrating omics data, and they can accommodate partial data without sacrificing reactions from the network. Overall, the reduced size of the new models and their conceptual organization overcome some of the main challenges in building genome-scale context-specific models, such as the barrier of data coverage of the network. The reduced models generated with redHUMAN are powerful representations of the specific parts of the network, and they have promising applications as they are suitable for use with existing methods including MBA 62 and tINIT 34 . Based on our results, we propose the following approach to using these models as tools to explain and compare phenotypes. First, generate a reduced model around a desired set of subsystems and for a defined extracellular medium, and check that the model captures the metabolic tasks. Subsequently, build physiology-specific models by integrating experimental data into the reduced models. Then, test the consistency of the reduced network with respect to its parent GEM. Finally, integrate different sets of omics data, including expression, to compare different physiologies, such as diseased vs healthy or within several types of cancers. This approach will help to better investigate the alterations in metabolism that occur as diseases develop and progress. Moreover, the same procedure can be used to systematically and consistently analyze metabolic models of the same organism and to compare metabolic models of different organisms, enhancing our understanding of their similarities and differences.
Throughout this paper, we have considered a specific set of subsystems, a specific medium, and the biomass definition from the GEMs. In the future, the reduced models could be further expanded to include other pathways, a more complex medium, or more biomass components. To introduce new subsystems or pathways into the core network, redGEM should be run to find the pairwise connections between the added pathways and the rest of the core. For an expansion of the medium, redGEMX would find the connections necessary for using the new extracellular metabolites. In a similar manner, a further curation of the biomass reaction could increase the number of BBBs, requiring lumpGEM to be run to find the biosynthesis pathways for those compounds. If higher consistency were required between the GEM and the corresponding reduction, we could find the reactions missing from the reduced model to satisfy that condition. Moreover, we have selected a set of metabolic tasks to test the generated reduced model based on the definition within the original GEM. However, these sets of tasks can be expanded or redefined according to the needs of the specific studies, which can be based on expert knowledge or experimental data, as done in ref. 68 . Furthermore, in this study, we have used metabolomics, proteomics, and growth data from the NCI60 cell lines to define a generic physiology for leukemia cells. The core networks of the reduced models are structurally the same across growth conditions and depend only on the structure of the corresponding GEMs. Therefore, these generic models are robust to variations in growth or data for the same physiology, and thus data for individual leukemia cell lines can be used without changing the workflow. However, if there are important differences in the data, for example across different physiological conditions, the authors suggest running the lumpGEM workflow with data integration and generating alternative subnetworks and lumped reactions, which in turn will capture the different flux profiles for each physiological state. Overall, our analysis demonstrates how redHUMAN facilitates the characterization of differences in metabolic pathways across models and phenotypes. Methods Thermodynamic curation of the genome-scale models (GEMs). The thermodynamic curation of the human GEMs Recon 2 and Recon 3D aims to include thermodynamic information, i.e., the Gibbs free energy of formation for the compounds and the corresponding error of the estimation, into the model. The workflow to obtain this information is as follows. We first used MetaNetX (http://www.metanetx.org) 69 to annotate the compounds of the GEMs with identifiers from SEED 70 , KEGG 54 , CHEBI 71 , and HMDB 72 . We then used Marvin (version 18.1, 2018, ChemAxon http://www.chemaxon.com) to transform the compound structures (canonical SMILES) into their major protonation states at pH 7 and to generate MDL Molfiles. We used the MDL Molfiles and the Group Contribution Method to estimate the standard Gibbs free energy of formation of the compounds as well as the error of the estimation 59 . Since the model for Recon 3D already incorporates the structure for 82% of the metabolites in the form of SMILES, we used those SMILES and followed the previous workflow from the point of obtaining the major forms at pH 7 using Marvin.
Furthermore, we have integrated in the models the thermodynamic properties for the compartments of human cells, including pH, ionic strength, membrane potentials, and generic compartment concentration ranges from 10 pM to 0.1 M (Supplementary Table 10). Thermodynamics-based flux analysis (TFA). TFA estimates the feasible flux and concentration space according to the laws of thermodynamics [11][12][13] . TFA is formulated as a mixed-integer linear programming (MILP) problem that incorporates the thermodynamic constraints into the original FBA problem. The Gibbs free energy of the elementally and charge-balanced reactions is calculated as a function of the standard transformed Gibbs free energy of formation (depending on pH and ionic strength) and the concentrations of the products and reactants. Considering a network with m metabolites and n reactions, the Gibbs free energy, $\Delta_r G'_i$, for reaction i is

$$\Delta_r G'_i = \sum_{j=1}^{m} n_{i,j}\left(\Delta_f G'^{\circ}_j + RT\,\ln x_j\right), \qquad i = 1, \ldots, n,$$

where $n_{i,j}$ is the stoichiometric coefficient of compound j in reaction i; $\Delta_f G'^{\circ}_j$ is the standard Gibbs free energy of formation of compound j; $x_j$ is the concentration of compound j; R is the ideal gas constant, $R = 8.31 \times 10^{-3}$ kJ K⁻¹ mol⁻¹; and T is the temperature, in this case T = 298 K. The value of the Gibbs free energy determines the directionality of the corresponding reaction and the thermodynamically feasible pathways. With this formulation, we included the concentrations of the metabolites as variables in the mathematical formulation. TFA allows the integration of metabolomics data into the model. Characterizing the extracellular in silico minimal media (iMM). iMM is formulated as a MILP problem that introduces new variables and constraints into the TFA problem to find the minimum set of extracellular metabolites necessary to simulate growth or a specific metabolic task with the GEM 37,38 . iMM identifies the minimum number of boundary reactions (uptakes and secretions) that need to be active. The method defines new binary variables in the TFA problem that represent the state of each boundary reaction, active or inactive. New constraints link the new binary variables to the corresponding reaction rates such that if a reaction is inactive, then it does not carry flux. The objective of the problem is to maximize the number of inactive reactions. Assuming a network with m metabolites and n reactions, the mathematical formulation of the iMM problem is the following:

$$\max \sum_{k=1}^{n_b} z_k$$

subject to

$$S\,v = 0, \qquad v_i = v^F_i - v^R_i, \qquad v^L_i \le v_i \le v^U_i,$$
$$v^F_i \le M\,b^F_i, \qquad v^R_i \le M\,b^R_i, \qquad b^F_i + b^R_i \le 1,$$
$$\Delta_r G'_i + M\,b^F_i \le M, \qquad -\Delta_r G'_i + M\,b^R_i \le M,$$
$$v^F_k + v^R_k \le C\,(1 - z_k), \qquad k = 1, \ldots, n_b,$$

for $i = 1, \ldots, n$, where $n_b$ is the total number of boundary reactions in the model; $z_k$ are new binary variables for all the boundary reactions; S is the stoichiometric matrix; v are the net fluxes for all the reactions, and $v^F_i$, $v^R_i$ are the corresponding net-forward and net-reverse fluxes, so that $v_i = v^F_i - v^R_i$, $i = 1, \ldots, n$; $v^L$ and $v^U$ are the lower and upper bounds, respectively, for all the reactions in the network; $\Delta_r G'$ is the Gibbs free energy of the reactions defined in TFA; and $b^F$ and $b^R$ are the binary variables for the forward and reverse fluxes of all the reactions (coupled to TFA). M is a big constant (bigger than all upper bounds) and C is an arbitrarily large number. In this case, if $z_k = 0$, then reaction k is active. redGEM, redGEMX, and lumpGEM. The redGEM, redGEMX, and lumpGEM algorithms seek to generate systematic reductions of the GEMs starting from chosen subsystems (or lists of reactions and metabolites, such as the synthesis pathway of a target metabolite), based on the studied physiology and the specific parts of the metabolism that are of interest.
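Before detailing these algorithms, it is worth making the $\Delta_r G'$ expression from the TFA formulation above concrete. The following minimal sketch evaluates the transformed Gibbs energy of a single hypothetical reaction from assumed formation energies and concentrations; all numbers are illustrative only, and the sign of the result is what TFA uses to fix reaction directionality.

```python
import math

R = 8.31e-3   # ideal gas constant, kJ K^-1 mol^-1, as in the text
T = 298.0     # temperature, K

def delta_r_g(stoich, dfg0, conc):
    """Transformed Gibbs free energy of one reaction (kJ/mol).

    stoich: {metabolite: n_ij} signed stoichiometric coefficients
    dfg0:   {metabolite: standard transformed Gibbs energy of formation}
    conc:   {metabolite: concentration in M}
    """
    return sum(n * (dfg0[m] + R * T * math.log(conc[m]))
               for m, n in stoich.items())

# hypothetical reaction A -> B with made-up formation energies and concentrations
rxn = {"A": -1, "B": 1}
print(delta_r_g(rxn, {"A": -100.0, "B": -110.0}, {"A": 1e-3, "B": 1e-5}))
# about -21 kJ/mol: negative, so the forward direction is thermodynamically allowed
```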
redGEM is a published algorithm 35 that extracts the reactions that pairwise-connect the initial subsystems from the GEM, generating a connected network named the core network. The inputs for redGEM are (i) the GEM, (ii) the starting subsystems or an initial set of reactions, (iii) the extracellular medium metabolites, (iv) a list with the GEM cofactor pairs, and (v) the desired degree of connectivity. The algorithm then performs an expansion (by graph search) of the starting subsystems by finding the reactions that pairwise-connect the subsystems up to the selected degree (see ref. 35 for further details). For example, for a degree equal to 2, it will connect the metabolites from the starting subsystems that are one and two reactions away in the GEM. redGEMX is an algorithm formulated in this work that finds the pathways in the GEM that connect the extracellular medium to the core network generated with redGEM (Fig. 6).
Fig. 6 redGEMX method. a Classification of the reactions from the GEM into core (green) and non-core reactions (orange), and classification of the extracellular metabolites from the GEM into those that are part of the medium that we want to connect (blue), those that are present in the core (pink), and the others (gray). The algorithm will block the non-core reactions that involve only extracellular metabolites as well as the boundary and transport reactions of the metabolites that are not part of the medium (gray). b The algorithm finds the minimal set of reactions that are required to connect each of the medium metabolites (blue) to the core network, uses the core network to balance the reactions, and secretes metabolites from the medium (blue or pink).
These pathways are added to the core network. The redGEMX method involves five steps: (1) Classify the extracellular metabolites of the GEM into 3 classes: (a) Those that are part of the medium that we want to connect,
(iv) Build the following MILP problem for each extracellular medium metabolite (1a) max X R nc i¼1 z i ð4Þ where v eM,j is the flux of the jth extracellular medium metabolite (1a), and c is a small number. lumpGEM is a published algorithm 36 that generates elementally balanced lumped reactions for the synthesis of the biomass building blocks (BBBs). Using a MILP formulation, lumpGEM identifies the smallest subnetwork (minimum number of reactions from the GEM) required to produce each BBB from metabolites that belong to the core network using reactions from the GEM that are not part of the core. With this formulation, we can identify all the alternative subnetworks (of minimal size or larger) for the synthesis of each BBB (one by one). lumpGEM generates, for each BBB, an overall lumped reaction by adding all the reactions that constitute each subnetwork (see ref. 36 for further details). Note here, different subnetworks can give rise to the same overall lumped reaction. This implies that although we produce all the alternative subnetworks with their associated lumped reactions, only the unique lumped reactions will be added to the final reduction. Software. The simulations of this article have been done with Matlab 2017b and CPLEX 12.7.1. Escher 73 has been used to draw the subnetworks in the figures. Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The models generated in this work and the data integrated in the models to define the physiology of leukemia cells are available under the APACHE 2.0 license at https:// github.com/EPFL-LCSB/redhuman. Code availability The scripts to generate the results for this paper are available under the APACHE 2.0 license at https://github.com/EPFL-LCSB/redhuman. The code for TFA is available at https://github.com/EPFL-LCSB/mattfa. The code to reduce the human GEMs (redGEM), to connect the extracellular medium to the core (redGEMX), and to generate the biosynthetic lumped reactions (lumpGEM) are available at https://github.com/EPFL-LCSB/redgem.
Characteristics and difference of respiratory diseases in Korean adults aged ≥40 years: A cross-sectional study

Abstract

Purpose: National big data pertaining to the status of common respiratory diseases are essential to devising appropriate policies to promote proper treatment and prevention of respiratory diseases amid the prolonged coronavirus disease 2019 (COVID-19) pandemic. The aim of this study is to investigate the prevalence of common respiratory diseases and their association with sociodemographic characteristics, comorbidities, and medical history using 11 years (2008-2018) of the Korea National Health and Nutrition Examination Survey (KNHANES) data, ultimately to present foundational data for policy decision making and disease prevention measures.

Methods: Among the participants of the KNHANES survey (2008-2018), 93 028 adults aged ≥40 years who underwent a lung function test were included in this cross-sectional study. The participants were divided into four groups: asthma, chronic obstructive pulmonary disease (COPD), asthma + COPD, and no respiratory disease. Their data were analyzed for demographic factors, health behavior, and disease-related factors. Multiple logistic regression was used to calculate the odds ratio (OR) adjusted for sex, age, household income, educational level, occupation, body mass index (BMI), smoking status, alcohol consumption, physical activity, and comorbidities.

Results: Of all participants, 1.83%, 12.63%, and 1.27% had only asthma, only COPD, and asthma + COPD, respectively. Among patients with asthma, the OR of having asthma + COPD was 5.272 in underweight patients and 6.479 in patients aged ≥70 years. Meanwhile, a high association between COPD and asthma + COPD was found in female patients, whereas asthma was more highly associated with asthma + COPD in male patients.

Conclusion: The study confirmed that old age, sex, smoking status, BMI, previous history of atopic dermatitis, and lung cancer were independent risk factors for asthma, COPD, and asthma + COPD. The present study demonstrated the need for a multidisciplinary integrative approach to respiratory diseases, and the findings could be used for developing policies for the treatment of COVID-19 and respiratory diseases and the prevention of infectious diseases.

| INTRODUCTION

The prevalence of respiratory diseases is gradually increasing owing to worsening air pollution in recent years and the aging of society. The Global Burden of Disease Study 2020 reported that deaths associated with lower respiratory tract infections have decreased over the past 20 years, and yet many deaths are still caused by respiratory diseases.1,2 Asthma is a heterogeneous chronic inflammatory disease with several etiologies, and its symptoms include breathing difficulty and coughing.3 In 2015, asthma was estimated to be one of the most common chronic respiratory diseases, affecting 358 million people worldwide. The prevalence rate of asthma has been reported to range between 0.7% and 11.9% in Asian countries, including South Korea.4 Chronic obstructive pulmonary disease (COPD) is a lung disease characterized by irreversible airflow limitation that has become a public health issue with a high prevalence and mortality rate. According to the World Health Organization (WHO), COPD will become the third leading cause of death by 2030. The socio-economic burden associated with COPD is substantial in Korea.
The direct medical cost per COPD patient was $2800 in 2009, which was slightly higher than that of Canada and the United States.5 COPD has a direct correlation with smoking status, but in some countries, air pollution, occupational exposure, and indoor air pollution due to the use of biomass are also key causes.6

The definition of asthma + COPD remains controversial, with no commonly accepted definition. In clinical practice, a diagnosis of asthma or COPD is suggested if three or more of the symptoms corresponding to asthma or COPD are present. On the other hand, a diagnosis of asthma + COPD should be considered when the patient presents symptoms of asthma and COPD alike.7 In general, asthma + COPD is diagnosed when asthma features are accompanied by irreversible airflow limitation.8 Asthma + COPD progresses faster and is associated with poorer health-related quality of life, a higher frequency of exacerbation, more comorbidities, and a higher health care utilization rate than asthma or COPD alone. The high economic burden of asthma + COPD, as compared with asthma and COPD, is well documented.9 However, recommendations for managing asthma + COPD are ambiguous and are inferred from guidelines for asthma or COPD alone.10

Previous studies have used the Korea National Health and Nutrition Examination Survey (KNHANES) data to investigate respiratory diseases. A study used data from the 2007-2012 KNHANES to divide patients with asthma + COPD into asthma-dominant and COPD-dominant groups to investigate their socio-economic status and quality of life. The results showed that the asthma-dominant asthma + COPD group had lower socio-economic status and poorer quality of life than the COPD-dominant asthma + COPD group.11 Another study used data from the 2007-2011 KNHANES to assess the risk factors for COPD among Korean nonsmokers. The results showed that older age, male sex, low body mass index (BMI), asthma, and tuberculosis were risk factors for COPD among nonsmokers.12

The COVID-19 pandemic has remained active since the index case in late 2019. COVID-19 is known to have a greater impact on individuals with preexisting respiratory conditions, such as asthma and COPD.13 This study investigated the status of respiratory diseases in South Korea until 2018, before the advent of COVID-19. These data can be used in subsequent studies to comparatively analyze the status of respiratory diseases prior to and after the advent of COVID-19.

As described, typical respiratory diseases include asthma, COPD, and asthma + COPD, but each disease has various risk factors according to its own characteristics. Previous studies have focused on the association of a single factor, but no study has identified which factor, among the multitude of complex factors, shows the strongest association. Moreover, while previous studies have investigated asthma, COPD, and asthma + COPD separately, and some have investigated two of these diseases, no study has compared all three diseases during the period between 2008 and 2018. Accordingly, the present study aims to analyze the risk factors that influence asthma, COPD, and asthma + COPD and compare the risk factors using data from Korean adults aged ≥40 years from the KNHANES 2008-2018.

| Study design

KNHANES is a nationwide survey conducted by the Korea Centers for Disease Control and Prevention (KCDC), and the present study used data from the fourth to seventh KNHANES surveys (KNHANES IV-VII). Starting from 1998, KNHANES has been conducted in 3-year waves.
It is a nationwide health and nutrition survey with representativeness and reliability that has been conducted annually since 2007. The survey collects information on the general characteristics, health behavior, chronic disease status, and nutritional intake status of the Korean population. The findings are used as basic data for establishing health care policies.

| Study participants

The present study used data from the KNHANES IV-VII, collected between January 2008 and December 2018. In the present study, 93 028 individuals were surveyed, of whom 35 235 adults aged ≥40 years who took part in a lung function test were included in the final analysis. Of the 93 028 surveyed individuals, 42 619 individuals aged <40 years and 15 174 individuals with missing data regarding asthma and COPD were excluded. Consequently, the analysis of the present study included a total of 35 235 individuals (Figure 1).

Figure 1. Flow diagram showing the number of participants who were excluded and the number of data that were analyzed. COPD, chronic obstructive pulmonary disease.

| General characteristics

Among the health questionnaire survey items in the KNHANES IV-VII, the present study analyzed sex, age, household income, educational level, occupation, BMI, smoking status, alcohol consumption, physical activity, previous history, lung function test, and the Global Initiative for Chronic Obstructive Lung Disease (GOLD) criteria. Sex, age, household income, educational level, and occupation were selected as demographic characteristics. Age was divided into 40-49, 50-59, 60-69, and ≥70 years, whereas the educational level was divided into elementary school graduate or lower, middle school graduate, high school graduate, and college graduate or higher for individuals aged ≥40 years. For household income, the household equivalent income quartiles from the KNHANES ("low," "middle-low," "middle-high," and "high") were used. Occupation was divided into unemployed; professional; office work; sales and services; agriculture, forestry, and fishery; machine fitting and simple labor; and others.

BMI, smoking status, alcohol consumption, physical activity, previous history, lung function test, and GOLD criteria were analyzed as health-related characteristics. BMI, calculated from anthropometric results as body weight (kg) divided by height squared (m2), was divided into underweight (BMI < 18.5), normal (18.5 ≤ BMI < 25), and overweight (BMI ≥ 25), in accordance with the WHO standards.12 Smoking status was divided into current smoker, former smoker, and nonsmoker, whereas alcohol consumption was divided into none, ≤1, 2-3, and ≥4 drinks based on the monthly frequency of consumption. Physical activity was divided based on the frequency of walking during 1 week, with "No" for not walking at all and "Yes" for walking at least once (one to seven times) per week. Excluding asthma and COPD, we identified individuals who indicated a previous diagnosis of atopic dermatitis and lung cancer, which are typically associated with lung disease. Lung function test results were divided into four parameters: predicted percentage of forced vital capacity [FVC (% pred)], predicted percentage of forced expiratory volume in 1 second [FEV1 (% pred)], FEV1/FVC, and FEV1/FVC < 0.7. The four stages of the GOLD criteria were used as follows: stage 1 (mild) = FEV1 ≥ 80%; stage 2 (moderate) = 50% ≤ FEV1 < 80%; stage 3 (severe) = 30% ≤ FEV1 < 50%; and stage 4 (very severe) = FEV1 < 30%.14
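These cutoffs translate directly into simple classification rules. The sketch below implements them in Python; the function and variable names are our own illustrative choices (the study itself was analyzed in SAS), not code from the survey.

```python
# Minimal sketch of the BMI and GOLD classifications described above.
def bmi_category(weight_kg: float, height_m: float) -> str:
    """WHO BMI bands used in the study: <18.5, 18.5-24.9, >=25."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    return "overweight"

def gold_stage(fev1_pct_pred: float) -> int:
    """GOLD stage from FEV1 as a percentage of the predicted value."""
    if fev1_pct_pred >= 80:
        return 1  # mild
    if fev1_pct_pred >= 50:
        return 2  # moderate
    if fev1_pct_pred >= 30:
        return 3  # severe
    return 4      # very severe

print(bmi_category(70, 1.75))  # -> "normal" (BMI ~22.9)
print(gold_stage(62))          # -> 2 (moderate)
```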
| Definition of asthma group

The asthma group was defined as patients answering "Yes" to whether they had been diagnosed with asthma by a doctor.

| Definition of COPD group

The COPD group was defined based on the results of the lung function test. FVC (% pred; mean ± SE) and FEV1 (% pred; mean ± SE) were determined and used to calculate FEV1/FVC (mean ± SE). Patients with FEV1/FVC < 0.7 were defined as having COPD.15,16

| Definition of asthma + COPD group

Individuals with both asthma and COPD were defined as the asthma + COPD group. Hence, this group comprised those who responded "Yes" to the question of whether they had been diagnosed with asthma by a doctor and who had an FEV1/FVC < 0.7.17

| Statistical analysis

KNHANES is a nationwide sample survey that used complex sample analysis considering weights, stratification variables, and cluster variables. The distribution of the characteristics of the study participants was expressed as frequency and percentage, while continuous data were expressed as mean and 95% confidence interval (95% CI). To identify the risk factors for asthma + COPD among patients with respiratory diseases, we performed logistic regression analysis with asthma + COPD as a binary dependent variable. Additionally, the risk factors for each respiratory disease were also analyzed using each respiratory disease (asthma, COPD, asthma + COPD) as the dependent variable. Sex, age, household income, education, occupation, BMI, smoking status, alcohol consumption, physical activity, and type of comorbidity were included as covariates in the logistic regression analysis. Furthermore, the annual status of asthma, COPD, and asthma + COPD from 2008 to 2018 was investigated. The logistic regression results were presented as odds ratio (OR) and 95% CI. The data used in the present study were statistically analyzed using SAS V9.4 (SAS Institute Inc, Cary, NC, USA) with the significance level set to <0.05 (two-tailed).
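As a concrete, simplified illustration of this analysis, the Python sketch below builds the asthma + COPD outcome from the definitions above, fits a multiple logistic regression, and exponentiates the coefficients into adjusted ORs with 95% CIs. It uses synthetic data and made-up column names, and it ignores the survey weights, stratification, and clustering that the actual SAS complex-sample analysis accounted for; reference levels default to alphabetical order rather than the paper's choices.

```python
# Hedged sketch of an adjusted odds-ratio analysis (synthetic data only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "fev1": rng.normal(2.4, 0.6, n).clip(0.5),       # liters, illustrative
    "fvc": rng.normal(3.4, 0.7, n).clip(1.0),
    "asthma_dx": rng.integers(0, 2, n),               # doctor-diagnosed asthma
    "age_group": rng.choice(["40-49", "50-59", "60-69", "70+"], n),
    "sex": rng.choice(["male", "female"], n),
    "smoking": rng.choice(["never", "former", "current"], n),
})

# Outcome: asthma + COPD = doctor-diagnosed asthma AND FEV1/FVC < 0.7.
df["aco"] = ((df["asthma_dx"] == 1) & (df["fev1"] / df["fvc"] < 0.7)).astype(int)

# Multiple logistic regression; exp(coefficient) is the adjusted OR.
model = smf.logit("aco ~ C(sex) + C(age_group) + C(smoking)", data=df).fit(disp=0)
ci = model.conf_int()
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
})
print(or_table.round(3))
```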
| RESULTS

Among all Korean adults aged ≥40 years who underwent a lung function test, 1.83%, 12.63%, and 1.27% had asthma, COPD, and asthma + COPD, respectively. In the asthma group, the prevalence was three times higher in women than in men, whereas the prevalence among men was 3.2 and 1.2 times higher than that in women in the COPD and asthma + COPD groups, respectively.

Table 1 shows the basic characteristics of the participants. Among the demographic characteristics, the age brackets with the highest number of participants were 40-49 years (31.28%), 60-69 years (32.58%), and ≥70 years (34.86%) in the asthma, COPD, and asthma + COPD groups, respectively. Regarding household income, asthma showed the highest prevalence in the high-income group (26.15%), whereas COPD and asthma + COPD showed the highest prevalence in the low-income group (29.25% and 39.48%, respectively). Whereas the most common education level was high school in the no respiratory disease (NRD) group (35.77%), that in each of the three respiratory disease groups was elementary school (asthma: 36.81%, COPD: 37.45%, asthma + COPD: 49.99%). In all groups, unemployed individuals accounted for the greatest percentage, and most patients had normal weight. While many individuals in the NRD and asthma groups were nonsmokers, the percentage of smokers was relatively high in the COPD group. Similarly, many individuals in the NRD, asthma, and asthma + COPD groups were nondrinkers (NRD: 26.88%, asthma: 41.74%, asthma + COPD: 40.9%), whereas the COPD group showed a higher proportion of current drinkers (36.15%). Concerning physical activity, all four groups showed a higher proportion of subjects who regularly participated in physical activity (NRD: 82.69%, asthma: 82.54%, COPD: 79.43%, asthma + COPD: 78.86%). Among the disease-related factors, the lung function test results showed a decreasing mean FEV1/FVC value in the order of the NRD (0.80), asthma (0.78), COPD (0.64), and asthma + COPD (0.58) groups. With respect to the GOLD criteria, the NRD and asthma groups showed the highest prevalence in stage 1; the COPD group, in stages 1 and 2; and the asthma + COPD group, in stage 2.

Table S1 shows the logistic regression analysis results on the association between asthma and asthma + COPD, as well as the association between COPD and asthma + COPD. The results were adjusted for all demographic factors, health behavior, and disease-related factors. The OR of having asthma + COPD among females with asthma, as compared with males with asthma, was 0.234 (95% CI 0.140-0.392), indicating a significantly negative association of asthma + COPD among females with asthma. With respect to age, the association between asthma and asthma + COPD increased with increasing age compared with those aged 40-49 years. In particular, the OR of having asthma + COPD among those aged ≥70 years was 6.479 (95% CI 3.327-12.615) in the asthma group, which showed the highest association. Regarding BMI, the OR of having asthma + COPD was high, with a value of 5.272 (95% CI 1.444-19.246), in underweight patients with asthma as compared with those with normal BMI, while the association was lower in overweight subjects (OR: 0.627; 95% CI 0.445-0.884). The OR of having asthma + COPD among females with COPD, as compared with males with COPD, was 2.235 (95% CI 1.397-3.577), indicating a significant association between asthma + COPD and females with COPD. In contrast to asthma, the association with asthma + COPD appeared higher among women with COPD than among men with COPD. With respect to age, the association between COPD and asthma + COPD decreased with increasing age as compared with those aged 40-49 years. Concerning BMI, the OR of having asthma + COPD in patients with COPD who were underweight was 1.804 (95% CI 1.024-3.177), whereas patients who were overweight were also thought to have a higher risk of asthma + COPD (OR: 1.348, 95% CI 1.035-1.757).

Table 2 shows the results of the logistic regression analysis (adjusted for all demographic factors, health behavior, and disease-related factors) on the associations among the asthma, COPD, and asthma + COPD groups. Among the demographic factors, the female sex in the asthma group showed a higher association than the male sex, with an OR of 2.550 (95% CI 1.764-3.687), whereas for females in the COPD group, the association was significantly lower, with an OR of having COPD of 0.294 (95% CI 0.255-0.338). In the asthma group, there was no significant difference according to age. In the COPD group, the OR of having COPD increased with age: 50-59 years (2.592, 95% CI 2.205-3.046), 60-69 years (6.676, 95% CI 5.678-7.848), and ≥70 years (12.327, 95% CI 10.383-14.634), relative to the 40-49 years age bracket.
In the asthma + COPD group, the OR of having asthma + COPD also increased with age: 60-69 years (3.720, 95% CI 2.304-6.008) and ≥70 years (6.399, 95% CI 3.824-10.709), relative to those aged 40-49 years. With respect to educational level, the association with asthma was higher among elementary school graduates (1.482, 95% CI 1.043-2.106) than among college graduates, whereas the association with COPD was higher among elementary school graduates (1.478, 95% CI 1.267-1.723), middle school graduates (1.303, 95% CI 1.107-1.534), and high school graduates (1.166, 95% CI 1.012-1.343) than among college graduates. In the asthma + COPD group, there was no statistically significant association. Based on occupation, machine fitting and simple labor (0.684, 95% CI 0.472-0.991) showed a lower association than being unemployed in the asthma + COPD group.

Among health behavior and disease-related factors, overweight subjects (1.255, 95% CI 1.038-1.518) showed a higher association than those with normal weight in the asthma group, whereas underweight subjects (1.405, 95% CI 1.042-1.896) showed a high association with COPD, and overweight subjects (0.693, 95% CI 0.633-0.759) showed a low association with COPD. In the asthma + COPD group, underweight subjects (2.463, 95% CI 1.426-4.253) showed a high association with asthma + COPD. In the asthma group, there was no significant association with smoking status, whereas, in the COPD group, former smokers (1.546, 95% CI 1.336-1.790) and current smokers (2.336, 95% CI 2.029-2.689) showed a higher association than nonsmokers. A similar pattern was also found in the asthma + COPD group (former smokers: 1.728, 95% CI 1.095-2.726; current smokers: 2.145, 95% CI 1.320-3.484). With respect to alcohol consumption, individuals who consume any alcohol showed significantly lower associations with asthma and COPD than those who do not drink at all. In the asthma + COPD group, individuals who consume ≤1 drink per month showed a lower association. Individuals with a previous history of atopic dermatitis showed a higher association with asthma (3.175, 95% CI 1.792-5.624) and asthma + COPD (2.093, 95% CI 1.050-4.171), whereas individuals with a previous history of lung cancer showed a higher association with COPD (3.122, 95% CI 1.650-5.907).

| DISCUSSION

The present study used reliable and nationally representative data to analyze the demographic characteristics, health behavior, disease-related factors, and risk factors associated with respiratory diseases among 35 235 Korean adults aged ≥40 years. In this study, women showed a higher association with asthma than men. Such findings supported the results from a previous study reporting that 71% of adult asthma patients are women.3 Although the exact role of sex hormones in the regulation of asthma has not been identified, it has been reported that ovarian hormones exacerbate and testosterone alleviates asthmatic airway inflammation.18 Being overweight showed a higher association with asthma than being normal weight. This finding supports the results from a previous study reporting that obesity increased the risk of adulthood asthma by approximately 50% in both men and women.19 People with a previous history of atopic dermatitis showed a significantly higher association with asthma (3.175, 95% CI 1.792-5.624), which appears to be related to the atopic march, a phenomenon involving atopic dermatitis progressing to allergic asthma and allergic rhinitis.20 Furthermore, women showed a lower association with COPD than men.
Such findings support the fact that COPD is more frequent in men than in women, COPD being historically linked to smoking status and occupational exposure.21 This association increases with age, as compared with those aged 40-49 years; in particular, the OR of COPD in individuals aged ≥70 years was approximately 12 times higher. These findings support results from previous studies indicating that the risk of COPD increases with age.5,12 Concerning income quartiles, the association with COPD was lower in high-income groups than in the low-income group. With respect to BMI, underweight individuals showed a high association with COPD, whereas individuals who are overweight actually showed a lower risk. The findings were similar to those of a study in 2011 reporting that low BMI is associated with COPD.5 Similar to previous studies, former and current smokers showed a higher association with COPD than nonsmokers.22 In particular, individuals with a previous history of lung cancer showed a higher association with COPD, which had also been reported in previous studies.23

Finally, individuals aged 60-69 and ≥70 years showed a higher association with asthma + COPD than those aged 40-49 years. These findings support the results from previous studies indicating that the prevalence of asthma + COPD increases significantly with age and that age is associated with asthma + COPD.24,25 With respect to income quartiles and educational level, the risk factors of asthma + COPD showed largely similar characteristics to the risk factors of COPD, which supports previous studies reporting that many patients with COPD also have asthma + COPD and that patients with asthma + COPD share the same demographic characteristics and exhibit similar lung function test results as patients with COPD alone.26 The results also revealed that underweight individuals had a high association with asthma + COPD, which contradicts previous studies reporting that high BMI is associated with asthma + COPD.27 In the present study, the subjects were limited to adults aged ≥40 years. As a result, the association between asthma + COPD and COPD was even more pronounced. Consistent with COPD, former and current smokers showed a higher association with asthma + COPD than nonsmokers. Such findings could be considered similar to the results from previous studies showing no significant differences between the COPD and asthma + COPD groups in relation to smoking habits (former, current, and nonsmokers).28 Lastly, individuals with a previous history of atopic dermatitis showed a high association with asthma + COPD.26

Studies in the US have also used data from the NHANES (the US counterpart of the KNHANES) to investigate the status of respiratory diseases. The results revealed that the prevalence of asthma + COPD had increased from 0.96% to 1.05% between 2007 and 2012. The age-standardized prevalence of asthma + COPD was higher among individuals with low socioeconomic status and a previous history of myocardial infarction or stroke. Old age and smoking status showed even higher associations with the prevalence of asthma + COPD. Among the participants with COPD, a higher prevalence was correlated with being non-Hispanic Black, being obese, and having a history of myocardial infarction or stroke. Asthma + COPD is associated with the use of oxygen therapy for severe asthma and COPD. The participants with asthma + COPD showed a poorer FEV1 in the lung function test results than those with asthma or COPD alone.
29 With respect to risk factors, old age, smoking status, and obesity showed results similar to those of the present study. Participants with a low socio-economic status are highly likely to present severe COPD together with frequent and uncontrolled asthma for a long time.9

| Strength and limitations

The present study had some limitations. First, because it was a cross-sectional study using KNHANES data, caution should be taken when interpreting the causal relationships between the variables. In other words, whether the associated factors are independent causative factors or epiphenomenal factors cannot be established. Second, asthma characteristically has a higher incidence at younger ages, but because the study population included only adults aged ≥40 years, the identification of factors may have been influenced by the fact that more COPD cases than asthma cases were included. Third, asthma was diagnosed by a doctor and thus may have been subject to underdiagnosis, overdiagnosis, or poor accessibility to health care services.30 On the other hand, we included patients diagnosed based on the pulmonary function test (laboratory diagnostic criteria) for COPD. Because the diagnosis of asthma is based on a physician's subjective judgment, it may not be as objective as that of COPD. There is the additional confounder that the physician's diagnosis of asthma would have been based on a number of different factors in that age group, such as gender and smoking history. Lastly, KNHANES was not created for the purpose of investigating chronic airway diseases. Therefore, the total participation rate was limited, and only participants with vital capacity measurements were included. Because lung function was not measured in all individuals who participated in KNHANES, selection bias cannot be ruled out. To overcome such shortcomings, large-scale prospective cohort studies are needed to identify the associations of suspected risk factors and actual causes with outcomes of respiratory diseases.

Despite these limitations, the present study also had the following strengths. First, it used data from surveys conducted by a Korean government agency. The surveys were nationwide and were conducted on sample populations from the same region using the same questions during the same survey period. Because the surveys targeted the general population, the obtained data have high reliability. Second, this was the first study to analyze the risk factors of three different respiratory diseases over multiple years. While previous studies have analyzed the risk factors of respiratory diseases, no study has analyzed such factors over multiple years. Additionally, the present study compared the asthma-asthma + COPD and COPD-asthma + COPD associations to investigate which risk factors would increase the likelihood of asthma or COPD progressing to asthma + COPD. Finally, the present study measured the ORs of each respiratory disease relative to a group of healthy controls to identify the risk factors that influence specific respiratory diseases.

This study analyzed the correlation between respiratory diseases and associated factors from 2008 to 2018. As a result of the global COVID-19 pandemic, there may have been changes to the status of respiratory diseases since 2019. Subsequent studies can comparatively analyze the status of respiratory diseases prior to and after the advent of the COVID-19 pandemic by examining respiratory diseases after 2019.
Therefore, this study provides useful data shedding light on the characteristics of respiratory diseases in the Korean population prior to the advent of COVID-19. In the future, studies should analyze respiratory diseases and their associated factors in Korea using data from 2019 onward as well.

| CONCLUSION

The present study identified the associations between respiratory diseases and demographic factors, health behavior, and disease-related factors. In particular, the independent risk factors influencing specific diseases were being a woman and having a previous history of atopic dermatitis for asthma; older age, being a current smoker, and having a previous history of lung cancer for COPD; and older age, being underweight, being a current smoker, and having a previous history of atopic dermatitis for asthma + COPD. Our findings could be used to develop policies for the treatment of COVID-19 and respiratory diseases and for the prevention of infectious diseases.
Use of Patient Health Records to Quantify Drug-Related Pro-arrhythmic Risk

Summary

There is an increasing expectation that computational approaches may supplement existing human decision-making. Frontloading of models for cardiac safety prediction is no exception to this trend, and ongoing regulatory initiatives propose use of high-throughput in vitro data combined with computational models for calculating proarrhythmic risk. Evaluation of these models requires robust assessment of the outcomes. Using FDA Adverse Event Reporting System reports and electronic healthcare claims data from the Truven-MarketScan US claims database, we quantify the incidence rate of arrhythmia in patients and how this changes depending on patient characteristics. First, we propose that such datasets are a complementary resource for determining relative drug risk and assessing the performance of cardiac safety models for regulatory use. Second, the results suggest important determinants for appropriate stratification of patients and evaluation of additional drug risk in prescribing and clinical support algorithms and for precision health.

In Brief

Davies et al. analyze patient health records and FDA Adverse Event Reporting System reports to demonstrate how patient subtypes affect the incidence of drug-related arrhythmia. Using such real-world data to understand background arrhythmia can further validate cardiac risk models for regulatory use and help stratify patients when evaluating drug risk.

INTRODUCTION

Over the past 10 years there has been an emphasis on use of in silico approaches for cardiac risk assessment. Initially, these computational tools were used to aid pharmaceutical industry decision-making1-3 and, more recently, to offer an interpretation of in vitro assay data for regulatory purposes.4 There are good reasons for doing so: an increasing amount (in quality and throughput) of in vitro data,5,6 a growing set of in silico tools,2,3,5,7-15 supporting research activities,4,16-18 and pressures to adapt an imperfect but apparently successful pair of International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) guidance documents all motivate these efforts. These guidance documents were introduced in response to a number of drugs being removed from the market in the 1990s and 2000s19 and were implemented to require testing of compounds for their ability to modulate the human Ether-à-go-go-Related Gene (hERG) potassium channel currents (ICH S7B) and to test compound effects on the QT interval measured from the clinical body surface electrocardiogram (ECG) (ICH E14). Although perceived to be successful in reducing arrhythmia-related (specifically torsades de pointes) drug withdrawal, there was concern that discarding promising therapies on the basis of a perceived hERG risk negatively affected novel drug development because these screens result in false positives. To counter this and to incorporate the improved understanding of the mechanisms of proarrhythmia, the Comprehensive In Vitro Proarrhythmia Assay (CiPA) initiative was tasked with defining a new paradigm for cardiac risk assessment using a combination of in vitro screening, stem cell-derived cardiomyocyte tests, and in silico predictions.20,21 Some of the earlier in silico studies focused on supplementing pre-clinical decisions; for instance, by replacing the need for isolated animal-derived cardiomyocyte experiments.
2,22 Over time, the output of in silico studies has been challenged to address increasingly ambitious goals; namely, correlation of simulated cellular action potential biomarkers with the measure between Q wave and T wave (the QT interval) in the body-surface ECG from the clinical thorough QT (TQT) study3 and with proarrhythmia.23,24 It is important to note that the underlying models have not fundamentally changed in that time, but novel metrics that integrate predictions from single-cell simulations are being considered as surrogate indicators for proarrhythmia.23,25 The ambition to extend single-cell simulations to a population-level risk therefore necessitates a thorough evaluation of these in silico tools as a key step toward understanding their utility to predict arrhythmic risk. In a recent study, we showed how a different selection of compounds can have a profound effect on the evaluation score of these models;26 therefore, a more rigorous effort to establish a fixed and balanced compound set for model evaluation should be considered. Two ongoing initiatives, CiPA and the Japanese induced Pluripotent Stem (iPS) cells Cardiac Safety Assessment (JiCSA) initiative, are attempting to establish a set of in vitro data for model evaluation. Typically, selected evaluation compounds are scored using the CredibleMeds evaluation27 or, in the case of CiPA, an interpretation of the CredibleMeds score, including expert assessment that also accounts for clinical experience.

The classification schemes described above and others relevant within the field (such as the Redfern category28) are designed to simplify risk information, which is a quantitative continuous measure, into a set of qualitative categories. Although this is a valuable (and sometimes necessary) exercise for supporting decision-making, it comes at the cost of losing information and introducing subjectivity, particularly when new information or new compounds are required to be evaluated. This concern is well recognized in medicine, where a desire to dichotomize continuous scales is also prevalent, such as "low" or "high" cholesterol. It has been argued that such dichotomization leads to reduced statistical power in detecting cause and effect.29 A recent review by Wisniowska and Polak30 discusses a number of issues that occur when attempting to compare cardiac risk across different classification schemes. One such limitation is how a ranking could be applied, e.g., to previously uncharacterized drugs. The ability to rank compounds in terms of putative risk would be advantageous for ongoing and continual model performance assessment beyond the immediate needs of the CiPA initiative.

To date, consideration of these regulation-led efforts for proarrhythmic risk prediction has prioritized focus on reproducibility and variability of the in vitro (i.e., input) data for the models. In this study, we aim to complement those activities by focusing more on the risk classification (i.e., output) scores in the evaluation datasets, and we set out to take advantage of the considerable post-marketing medical use of a broader set of evaluation drugs to establish the frequency of adverse cardiac events. Use of such post-market (i.e., real-world) data sources not only provides an estimate of the rate of adverse events that are observed in a real-life population but, we hypothesize, will also provide a more quantitative and continuous metric for assessing proarrhythmic risk.
However, although post-market observational data sources may be a valuable way of gaining insights into routine healthcare practice, they are not without complexity and show variability in patients and in the reporting practices inherent in the real world. One limitation of the data from adverse event databases is that the number of events is not normalized to the number of prescriptions (what we call the denominator problem). In the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS), a high incidence of adverse events for a given drug may simply reflect highly prescribed drugs; therefore, statistical methods to identify clinically important adverse events (i.e., when particular adverse events are seen more often than expected) are invaluable for pharmacovigilance.31 For this study, we used a disproportionality metric, the empirical Bayes geometric mean (EBGM),32 with a threshold of EB05 > 2 as a positive signal, as commonly used in pharmacovigilance.

To additionally account for the denominator problem, the Truven Health MarketScan® Research Databases were used, which contain individual-level, de-identified healthcare claims information from employers, health plans, hospitals, and Medicare and Medicaid programs. Since their creation in the early 1990s, the MarketScan Databases have grown into one of the largest collections of de-identified patient-level data in the United States. These databases reflect real-world treatment patterns and costs by tracking millions of patients as they travel through the healthcare system, offering detailed information about all aspects of care. Data about individual patients are integrated from all providers of care, maintaining all healthcare utilization and cost record connections at the patient level. Used primarily for research, these databases are fully compliant with United States privacy laws and regulations (e.g., the Health Insurance Portability and Accountability Act (HIPAA)).

Until now, many of the existing in silico models were designed and developed to give insights into cardiomyocyte electrophysiology and cellular-level outcomes. Extrapolation to population effects was never the primary design goal, and although approaches have been developed to allow surrogate markers to be evaluated, validation of such markers needs careful consideration. Blinded studies for in silico risk assessment, as performed recently by Zhou et al.,33 are significantly more difficult when the performance or outcomes of the drug effects are defined up front, such as the CiPA classification or CredibleMeds, and a more objective performance metric based on observational data could be used instead. We set out to test the utility of these so-called real-world datasets to provide insights into the categorization of compounds for proarrhythmic potential to support or refute the clinician-led understanding of risk. Coinciding with the recent General Principles for the Validation of Proarrhythmia Risk Prediction Models,34 the work was not intended to establish new cardiac safety metrics. Instead, the work was motivated to be complementary and to highlight datasets that should prove to be helpful when appraising the existing metrics, assays, and computational models that have been developed to allow early assessment of cardiac risk potential, particularly in cases of discordance between metrics, and also to stratify individual drug risk in patient subsets.
Cardiac Adverse Events per Year Analysis and Its Regulatory Effect

It is perceived that the regulations in ICH documents S7B and E14 mean that no new drugs have been associated with increased risk of torsades de pointes (TdP) arrhythmias. This study set out to query whether this statement is equivalent to there being no new reports of TdP events. Indeed, it would be intuitive to expect that TdP (and other related ventricular conditions) might be observed to have decreased since introduction of these regulations. Therefore, an early aim was to assess TdP incidence and to update and extend a previous analysis by Stockbridge et al.,19 who reviewed the annual number of reports received by the FAERS. The Pharmapendium (Elsevier) tool provides access for querying FAERS reports of TdP events. To recognize that arrhythmia events may be recorded differently clinically than TdP, we selected other heart-related adverse events in addition to TdP, some as sibling Medical Dictionary for Regulatory Activities (MedDRA) terms to TdP (Figure S1), for the years 2000-2015. It is worth noting that FAERS data provide outcomes for each adverse event and that, for cases where "ventricular fibrillation" is reported, approximately 45% result in a fatal outcome, whereas fatality is associated with approximately 12% of reports of TdP. A further significant finding is a reporting delay, observed as a discrepancy between the occurrence date of the adverse event and the date reported to the FDA. Of 56,682 unique cases, 15,931 do not report the event date, and of the remaining cases, only 16,376 (i.e., ~40%) are reported in the same year as the event date, with 2,771 (i.e., ~7%) showing a delay of 5 years or more. Consequently, examining the incidence of cardiac adverse events on a year-by-year basis (Figure 1A) reflects the observation as a drop in the most recent years. For this reason, we also present the data as events per submission year (Figure 1B), where the perceived drop in events is not observed.

The FAERS data, together with pharmacovigilance analysis, enable the user to spot drug safety signals in a timely manner. However, the database is not without limitations; FAERS does not explicitly account for whether (or how) the drug caused the adverse events or for the volume of prescriptions, nor is it exhaustive in covering all possible adverse events. In other words, drugs that are more highly prescribed would be expected to show higher total numbers of events than drugs with the same level of risk that are prescribed less frequently. To partially account for this limitation, a disproportionality metric using the EBGM analysis was used to assess whether a cardiac adverse event rate is disproportionately higher than these background rates. The EB05 is the lower bound of the 95% confidence interval of the EBGM;31 EB05 values greater than 2 are considered to show a signal and, therefore, a drug-induced risk increase.35 Table 1 shows the CiPA reference drugs ranked by EB05 value and the corresponding CiPA and CredibleMeds classifications, together with the frequently used safety margin built on the ratio of the hERG half-maximal inhibitory concentration (IC50) to the free highest concentration of the drug in the blood (Cmax). It is important to recognize that only 6 of 28 CiPA compounds have an EB05 value below the positive pharmacovigilance "signal" threshold of 2, which indicates a set of drugs showing a higher propensity for cardiac disorders than other adverse events.
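For intuition about how such a disproportionality signal is computed, the sketch below derives the plain reporting odds ratio (ROR) and its 95% confidence interval lower bound from a 2x2 report-count table. This is a deliberately simplified stand-in: the paper's EBGM/EB05 additionally applies empirical Bayes shrinkage (the MGPS approach), and the counts here are invented for illustration.

```python
# Simplified disproportionality sketch (ROR, not the shrunk EBGM/EB05).
import math

def ror_signal(a, b, c, d, threshold=2.0):
    """a: target-event reports for the drug; b: other reports for the drug;
    c: target-event reports for all other drugs; d: all other reports."""
    ror = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower95 = math.exp(math.log(ror) - 1.96 * se_log)
    # Signal logic analogous in spirit to "EB05 > 2".
    return ror, lower95, lower95 > threshold

# Made-up counts for illustration only.
ror, lo, signal = ror_signal(a=120, b=4_000, c=9_000, d=2_000_000)
print(f"ROR={ror:.1f}, 95% CI lower bound={lo:.1f}, signal={signal}")
# -> ROR=6.7, 95% CI lower bound=5.6, signal=True
```

Unlike EBGM, this unshrunk estimate can be unstable for drugs with very few reports, which is one reason empirical Bayes methods are preferred in practice.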
Interestingly, ranking of compounds based on their EB05 score (Table 1) shows some discordance between different classification systems. For instance, vandetanib has a low EB05 value but is classified as high risk by CredibleMeds and CiPA. The inverse is also seen with the anti-arrhythmic drug ranolazine, which has a high EB05 value but is ranked "very low" by CiPA. Of the CiPA compounds, it is striking that 8 of the drugs are indicated primarily as anti-anginal or anti-arrhythmic drugs, where it might reasonably be expected to see a higher proportion of cardiac adverse events because of patients' comorbidities. For this reason, we chose to investigate whether an expanded set of drugs (beyond the CiPA list) would provide more drugs with a low EB05 value and cover a more diverse range of drug classes, because representing negative drugs is also important for model evaluation.

Expanded Compound Set for Data Visualizations

To ensure consistency and overlap with previous work, a search was conducted for studies that had already compiled lists of compounds relevant for cardiac risk assessment and model validation.2,3,5,6 The motivation was to minimize introduction of novel compounds, consolidate prior work, and promote consistency across studies, as discussed recently.26 Ideally, compounds that have information on ion channel effects, cardiomyocyte action potentials, and ECG effects are most suited for understanding the predictive capacity of pro-arrhythmia models to most reasonably assess their translational capacity. We composed an initial list of 149 drugs that have a broad range of molecular and in vivo effects. The drugs in our set comprise those under study by the JiCSA and CiPA initiatives,21,40 those in a recent in vitro assay study6 and in three other in vitro/in silico combination studies,2,3,5 and, finally, an unpublished list of 66 reference drugs we judged to give a balance of positive and, critically, negative effects in cardiac ion channel assays. The full list of drugs is given in Data S3, but a number of interesting findings were uncovered in this exercise. Most notably, the overlap between the different studies was low, with no drugs being studied in all of the prior studies; only 4 drugs (quinidine, dofetilide, cisapride, and terfenadine) were studied in 6 of the 7 studies. Furthermore, 89 of the total list of 149 drugs are unique to a single study, meaning that a cross-comparison of different in silico tools is currently difficult to interpret when different sets of compounds are used for evaluation; see, for example, Figure 4 from Davies et al.26 Therefore, the consensus list of 149 compounds was used as the basis for onward analysis, recognizing that not all of the compounds on this list are approved for clinical use and so would not be identifiable in post-market observational databases.

We now examine how the propensity of cardiac disorders in FAERS reports is distributed in this expanded set of compounds. Figure 2 shows the distribution of EB05 values for TdP and ventricular tachycardia (a sibling MedDRA term for TdP). The 28 CiPA compounds are highlighted on the plot according to their risk classification. Again, many of them are presented in the top right quadrant of the EB05 plot, indicating that this set of compounds is unevenly distributed toward more active compounds.
We propose that including additional compounds (shown in Figure 2 as non-colored compounds) will facilitate improved evaluation of positive and (equally important) negative signals. In Figure 1, we can see that ventricular tachycardia (VT) is more frequently reported than TdP. Because we see a strong correlation between TdP and VT, VT and similar adverse events (i.e., MedDRA sibling terms to TdP) could potentially be included as part of the overall cardiac risk assessment of a given drug. Broadening the range of terms considered (as done for CredibleMeds) would improve risk sensitivity. This is exemplified by mexiletine, which is classified as low risk by CiPA, a classification supported by its marginal EB05 value (EB05 = 2.6), and yet it appears to be of higher risk for VT (EB05 = 10.1) or ventricular arrhythmia (EB05 = 4.0). Recognizing that this correlation may simply be representative of co-reporting of the adverse event, we investigated the underlying co-occurrence rate. It was found that the number of VT reports that also co-reported TdP was only approximately 10% (i.e., 1,525 of 15,041). This demonstrates that, typically, cardiac adverse events are reported as one term or another and emphasizes a need to consider a broader scope of adverse outcomes beyond TdP; e.g., VT and ventricular tachyarrhythmia.41 We analyzed each drug for FAERS reports and also used the MarketScan database (data were collected for the period of January 1, 2009, through December 31, 2014). Because data from healthcare claims are recorded longitudinally along with prescription use, it is possible to normalize events based on drug use (i.e., providing an incidence rate).

Using Electronic Claims Data to Inform Different Outcomes

An optimal strategy for evaluating safety model performance would be to compare against a continuous and objective metric that can be readily calculated for an extended set of compounds. For this purpose, we queried how translation of prior metrics (e.g., the hERG IC50/free Cmax ratio and a prior categorization [the CiPA risk category]) compares with results from insurance claims records. The claims data in Figure 3 show a clear trend between total exposure (in patient-years) and the incidence of cardiac dysrhythmia, indicating a previously unreported underlying background rate of cardiac dysrhythmia (in Figure 3, color indicates the CiPA score and the hERG IC50/free Cmax ratio). Although some higher-classification drugs (e.g., a CiPA value of high or a ratio < 30) appear to stand out above the main cluster, others cannot be readily differentiated from the group. To examine whether measured hERG IC50/free Cmax ratios are concordant with the safety risk, as indicated by the EB05 parameter (from the FAERS database) or the normalized incidence rates (gauged from the MarketScan database), we combined two of these parameters at a time in a conjoint visualization (Figure 4). We use the log-transformed hERG IC50/free Cmax ratio in this case to achieve the effect that higher numerical values represent a higher risk for TdP, which is our targeted endpoint.
Based on these graphs, it becomes clear that the hERG measurements coarsely reflect the trend in safety risks signaled by either of the other data sources (FAERS EB05 or the MarketScan incidence rate), and although the overall correlation is not very strong (the coefficient of determination R2 = 0.1155 for EB05 and R2 = 0.0573 for the incidence rate), the trends are still significant because of the large number of observations (**p = 0.0016 for EB05 and *p = 0.04 for the incidence rate). The prediction interval from a line of best fit shows how hERG measurements actually scatter very widely around this overall trend, which raises concerns regarding use of fixed thresholds on hERG IC50/free Cmax values to stratify compounds with regard to their expected risk of causing TdP events.

The Importance of Patient Sub-grouping

The striking correlation of exposure to incidence motivated a need to investigate whether drugs with higher incidence are observed in all patient types or whether incidence is skewed by only a few subtypes. A further use of the aggregated data, and a benefit of working with observational claims data, is therefore the ability to explore how patient subtypes affect the rate of cardiac dysrhythmia. For this purpose, we separated each drug into up to 32 individual subtypes based on gender, age (less than 18, between 18 and 44, 45-64, and older than 65 years), and degree of comorbidities. Comorbidities were evaluated using the Charlson index, which accounts for a patient's pre-existing conditions and, accordingly, provides a weighted analysis, and were binned into 4 groups (score = 0, 1, 2, or ≥3).43 It is worthwhile to note that not all drugs showed the full range of these combinations, reflecting that not all drugs are prescribed for all subtypes; e.g., vandetanib, an anti-cancer agent, is unlikely to have been prescribed for lower Charlson index patient subgroups.

This rich dataset provides the previously unexplored ability to query our pre-existing assumptions about the correlation between drug risk classification and observed levels of pro-arrhythmia. This is critical to ensure that we allow unknown influences in addition to ion channel inhibition as factors predicting pro-arrhythmic potential. Identified factors such as age and comorbidities could be subsequently incorporated more explicitly into mathematical models or implicitly via a population-type approach, as suggested previously.2,10,44

[Figure legend fragment: concordance (Con), discordance (Dis), or unknown (Un) between, e.g., the hERG IC50/free Cmax ratio, CiPA, and the EBGM score (also presented in Table 1). A full list of drugs labeled in the order of Data S3 is presented in Figure S2. See also Data S1.]

Exploring the different subsets also enabled us to make an estimate of the background rate of cardiac dysrhythmia within each of the different subgroups. This is important for understanding the patient context of intended drug risk because not all drugs elicit an adverse response in all patient subtypes. Therefore, an understanding of the expected rate of cardiac dysrhythmia (CD) in each different patient subtype should offer an alternative mechanism for categorizing drug risk, given the variable baseline of incidence, and, hence, allow more stratified treatment options. To carry out this analysis, we excluded drugs where total use was less than 100 patient-years (as low exposure tends to skew the incidence rate and is not sufficiently representative).
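The following pandas sketch illustrates this subgroup incidence-rate calculation. The column names and counts are invented for illustration; MarketScan itself is licensed, patient-level data and is not reproduced here.

```python
# Hedged sketch of the patient-subtype incidence-rate calculation.
import pandas as pd

claims = pd.DataFrame({
    "drug": ["flecainide"] * 4 + ["desvenlafaxine"] * 4,
    "sex": ["F", "M"] * 4,
    "age_group": ["<18", "18-44", "45-64", "65+"] * 2,
    "charlson": [0, 1, 2, 3] * 2,                      # comorbidity bins
    "patient_years": [30.0, 250.0, 400.0, 500.0, 90.0, 800.0, 600.0, 300.0],
    "cd_events": [1, 30, 90, 150, 0, 8, 10, 9],        # cardiac dysrhythmia
})

# Exclude drugs with fewer than 100 total patient-years of exposure, as in
# the text, then compute a rate per 1,000 patient-years within each subtype.
totals = claims.groupby("drug")["patient_years"].transform("sum")
subtypes = claims[totals >= 100].copy()
subtypes["rate_per_1000py"] = (
    1000 * subtypes["cd_events"] / subtypes["patient_years"]
)

print(subtypes[["drug", "sex", "age_group", "charlson", "rate_per_1000py"]])
```

Grouping the resulting subtype rates by age band or Charlson bin then yields the per-subgroup averages summarized in the next paragraph.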
From this, the average incidence rate across drugs for each age group and comorbidity group was calculated (Table S2). In general, we observe that older patient subgroups and those in which the Charlson comorbidity score was 3 or greater tend to show the highest incidence rates compared with subgroups where no comorbidities were identified. Drugs could broadly be categorized into 3 distinct types of profiles: those that showed an elevated incidence of proarrhythmia regardless of patient subgroup, those showing a normal (or lower) incidence of CD regardless of subgroup, and those that show a differential response between patient subgroups. Three exemplar drugs (the antiarrhythmic flecainide, the antibiotic moxifloxacin, and the antidepressant desvenlafaxine) are shown in Figure 5. In the case of flecainide, for each patient subgroup, a higher rate of CD incidence was observed than the aggregated value of 23.0. For moxifloxacin, the subgroups are highly variable in incidence rate, whereas for desvenlafaxine, the majority of subgroups are near or below this background rate. It is interesting to note that the EB05 values for these drugs (flecainide, 23.36; moxifloxacin, 6.6; desvenlafaxine, 0.13) correlate well with the observed claims data and indicate that EB05 may have merit as a useful metric for quantifying proarrhythmia, particularly when other classification schemes are missing, as in the case of desvenlafaxine.

A further observation with moxifloxacin and flecainide was how the subgroup incidence rate was highly correlated with the age of the patient (inversely for flecainide), and we chose to investigate whether this was related to isolated drugs or was a more general finding. Interestingly, for other antiarrhythmics (amiodarone, disopyramide, dofetilide, dronedarone, quinidine, and sotalol) and antibiotics (azithromycin, ciprofloxacin, clarithromycin, erythromycin, metronidazole, and pentamidine) in the evaluation set, a very similar pattern of age dependency was observed. This observation indicates that it could be related to the class of drugs or even the underlying medical condition45 for which the drugs are being prescribed rather than a specific action of the drug. This could have implications for how drugs are classified for cardiac risk; patient age could be a strong predictor for risk classification. This also suggests how appropriate stratification of patient subsets could be useful in prescribing and clinical support algorithms (i.e., to avoid prescribing to subtypes most at risk).

Future Metrics for Classifying Drug Risk

In this study, we considered how post-market datasets may complement and augment our current assumptions regarding drug-induced cardiac risk. When considering a far wider selection of drugs than previous studies, together with a wider portfolio of complementary data sources, we can challenge or confirm our empirical assessment of cardiac risk, which can potentially lead to an improvement in our evaluation of in silico and/or in vitro models. However, it is apparent that no single marker (i.e., the hERG IC50/free Cmax ratio) will successfully categorize each drug. Table 2 shows a selection of drugs for which the different classifications, overlaid with claims data from MarketScan, demonstrate concordance or discordance between classification systems and also where opportunities for classifying unknown drugs arise.
This is well recognized by the Arizona Center for Education and Research on Therapeutics (AZCERT) group, which has developed a method (adverse drug event causality analysis [ADECA]) for stratifying risk based on multiple inputs, including FAERS, clinical evidence of TdP and hERG inhibition, and the QTDrugs list. The ADECA process performs this well by considering multiple data points from 4 different sources, including biomedical literature, drug labels, and adverse event reports, when classifying a risk score.46 However, the list is limited in its utility for validation and benchmarking because the lack of categorization of a drug cannot be used as an equivalent to "no risk," and many drugs remain uncategorized, partially because of incomplete data or a lag in the report times of FAERS reports or literature evidence. There remains a need for a systematic, transparent, and (preferably) automated approach to quantify cardiac risk for a chemical. This would ideally build on and develop work already done to provide transparent and available models for cardiac risk assessment; e.g., by the FDA (https://github.com/FDA/CiPA) and also the open-source platform AP-Portal, a cardiac electrophysiology simulator based on the published interface developed by Williams and Mirams.8 We propose that electronic health care records should be considered together with other risk factors, such as patient comorbidities, co-medications, and lifestyle factors (among others), in line with the current healthcare digitalization trend within the next decade.

DISCUSSION

The purpose of our study was to highlight that cardiac risk decision-making requires us not only to use empirical knowledge of drug use but also to augment it with larger observational post-market data (e.g., FAERS, health insurance claims, and electronic patient healthcare records) that are able to support or refute the clinician-led understanding of risk. To the same extent that high-quality input data are a necessity for meaningful training of in silico models (e.g., the model parameters), so too must high-quality outcome data be considered for the models' credibility or for model validation exercises. Consideration of the outcome data is critical for the model validation exercise to ensure a model that is best in class for arrhythmia prediction and compound stratification.47 Similar challenges have been reported before; for instance, for classification of hepatotoxicity48 or prediction of cancer driver genes, where the gold standard or truth is unknown.49 A potential consequence of failing to consider outcomes is that false confidence can be attributed to the selected model and, therefore, to subsequent predictions for novel compounds. In this study, we chose to supplement and review the existing standard approaches (e.g., the hERG IC50/free Cmax safety margin ratio and the CiPA classification ranking) by considering how datasets that account for the incidence of proarrhythmia derived from the real-world setting can be used to support ongoing evaluation of proarrhythmic risk and offer an opportunity to test our prior assumptions regarding cardiac safety outcomes in patients. An important motivation for this study was to better understand the possible limitations of the current models to help shape the direction of future development.
Whether this means including additional biological details to better represent patient variability or using more empirical models should be an ongoing challenge for the computational biology community, who are likely to be beneficiaries of the extensive datasets being generated within the CiPA and JiCSA initiatives to support these efforts. An important aspect of CiPA and similar initiatives is to consider how to perform an ongoing evaluation of models as new data emerge; e.g., post-market safety signals. In this study, we suggest the types of datasets and possible metrics that would support this effort.

Figure 4 legend (continued). (B) Logarithmic plots of hERG IC50/free Cmax and normalized incidence rate for CD (obtained from MarketScan). Compounds were sorted by normalized incidence rate from large to small. (A and B) Points are color coded by CiPA classification; compounds not included in the CiPA list are colored gray. Note that the -log function was applied to the safety margin ratio to account for the compound sorting. A linear regression of hERG IC50/free Cmax values by compound rank (as ordered by the respective other variable) is indicated with a red line, and the corresponding 95% prediction interval with a shaded area.

Figure 5. Stratification of Patient Subtypes Shows Differences in the Incidence of CD. A scatterplot of CD rates shows incidence rate differences between patient subgroups (split into up to 32 individual subtypes based on age, gender, and comorbidity index) for 3 exemplar drugs of high, mixed, and low incidence, derived from the MarketScan claims database. Colors represent patient age groups: green, younger than 18 years; yellow, 18-45 years; orange, 45-65 years; red, over 65 years. Dashed lines represent the mean incidence rates across all drugs for these age groups, together with the overall mean incidence rate (black dashed line), as seen in Table S2.

It was therefore important to carefully consider each data source for its appropriateness for validation of in silico predictions. It is equally essential that we recognize that lack of a strong signal in the post-market and insurance claims data for drugs with a previously identified proarrhythmic potential should challenge us to re-evaluate our risk categorizations. Observational claims data sources offer great potential for supplementing our existing data resources, such as biomedical literature or clinical trial data repositories (e.g., https://clinicalstudydatarequest.com/). However, there are still a number of limitations of these data sources that should be overcome to improve their relevance; these are discussed briefly here. For instance, in this study we can only include an incidence rate for "drug-burdened" patients; i.e., we can only include patients who have visited their medical professional, and the data needed to calculate a background rate in healthy individuals are typically not collected. However, mobile health (e.g., the AliveCor device 50) may allow improved understanding of the true background rate in an otherwise healthy population. In a recent study, Hingorani et al. 51 estimate that 13 healthy volunteers in 1,000 (1.3%) would be expected to show non-sustained VT (NSVT) over a 24-h ECG recording period.
Solomon et al. 52 look at arrhythmia detection beyond 24 h and conclude that the incidence of background arrhythmia could be higher still, with an 18.3% incidence of NSVT in 128,401 continuously monitored patients over 14 days. Interoperability across the different post-market datasets (e.g., between FAERS and claims data) is hampered by the different clinical coding dictionaries that are used to identify a medical event. For FAERS, events are represented by the MedDRA dictionary, whereas claims data use the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) vocabulary. This means that, although a given event of TdP can be represented as such in FAERS, an equivalent term is not available in the ICD-9-CM vocabulary and, hence, the event would be recorded under a different code. For the purpose of this study, we made the assumption that CD in ICD-9-CM would be an approximation of cases of TdP (and related arrhythmias) but would also encompass other events. However, because focusing solely on whether a drug has caused a TdP event might limit our understanding of the more complete cardiac safety concern, the broader term CD may be a more suitable outcome measure. 53 Comedications (sometimes referred to as polypharmacy) are a frequent issue and another confounding factor in adverse event reports. Recently, a study investigated the role of multiple-ion-channel testing in determining the mechanistic reasons for loperamide's proarrhythmic potential in overdose situations. 54 Many of the reported overdose cases, however, also exhibited polypharmacy; in many of the subjects in which loperamide was implicated, this included drugs classified by CredibleMeds. 55 This polypharmacy observation is further supported by our analysis of FAERS reports, in which loperamide is rarely a primary causative drug taken in isolation. This makes it particularly challenging to identify the primary drug and/or underlying genetic mutations responsible for the adverse event and to identify the contributing effects of these comedications (and comorbidities). Because this is the case, improvements in how we assess drug risk in the context of typical comedications (particularly, e.g., cytochrome P450 (CYP) inhibitors and other ion channel inhibitors) would be worthwhile and further support a need for in silico or clinical decision support systems such as CredibleMeds. Ideally, an understanding of drug-induced proarrhythmia cases rather than drug-associated cases would provide the best calibration for computational modeling based on ion channel screening data and in silico predictions. This has been advocated previously by others; for example, Mason 56 recently proposed a need for formal validation with patient outcomes to move away from the current "surrogate" (e.g., hERG inhibition or QTc prolongation) model of cardiac risk. However, studies tackling the epidemiology of drug-induced arrhythmia, such as that of the Berlin Pharmacovigilance Center (PVZ-FAKOS) 57, are limited in the number of patients and cases studied. Nevertheless, there is useful understanding resulting from these studies, notably the identification of drugs with no previously classified risk of QTc prolongation or TdP, such as propafenone. This observation clearly shows how existing classifications (CredibleMeds in this case) can be misleading for our assumptions regarding proarrhythmic potential; a case of "the unknown unknowns" (i.e., a missing CredibleMeds classification) is not equivalent to no risk.
These studies point the way to further improving our view of drug-induced arrhythmias. However, such studies are difficult and costly to conduct; the observational datasets (e.g., based on claims data) therefore offer an excellent bridge. Reporting dynamics and quality should also be considered. A pharmacovigilance signal that partly informs the CredibleMeds classification can and does change over time, particularly for newer-to-market drugs, as novel observations are made with increasing clinical use. Hence, the stability and appropriateness of these rankings will affect in silico model selection and validation exercises; i.e., a different model may appear optimal at a later time for no reason other than a change in the risk evaluation of one or more of the validation study drugs. The FAERS datasets, for instance, are predominantly based on United States reports (approximately 70% in 2014), and underreporting of adverse events (e.g., 80% underreporting of serious adverse drug reactions) has been reported previously. 59 The reporters to FAERS are also highly mixed. When we considered 6,470 individual TdP events, 34% did not give a primary reporter occupation, and only 28% were from a physician. This implies that more than two-thirds of TdP reports are not documented as coming from a physician; this motivated us to consider insurance claims data to reduce reporter-related bias. In addition, FAERS reports, perhaps linked to the reporter, can be influenced significantly by external events, such as safety alerts and labeling of the product with indications of cardiac events. In our sample set, we identified 55 drugs with a product label containing a cardiac warning (data obtained from CredibleMeds). The median EB05 value (for TdP) for drugs with a label warning for TdP was 7.84, whereas for drugs whose labels did not specifically mention TdP it was 1.48. Although a product label can result in both overreporting and underreporting of events, this is nevertheless consistent with the hypothesis that drug warning labels for TdP can create a tendency in the healthcare community to overreport events. A number of drugs are highly reported for cardiac adverse events within a short period of time and can potentially skew the data. 60 We therefore recognized a need to augment reporter-led datasets (these biases apply equally to FAERS, World Health Organization, and European Medicines Agency adverse event data) with insurance claims datasets. Incomplete coverage across datasets (e.g., missing data for hERG IC50, drug Cmax, or CredibleMeds analysis) or missing prior classifications makes comprehensive cross-comparison more difficult and limits the number of drugs for which comparisons can be made. However, even with these limitations, this study captures a number of drugs for which data across the different categories are present; 57 drugs, for example, have information from claims (MarketScan) data, hERG IC50 data, or EB05 (FAERS pharmacovigilance) data, of which only 36 have a corresponding CredibleMeds classification. We advocate for continual assessment and experiments that help improve this set of 57 drugs, and this should be a priority for further studies and developments in this area. One outcome of the ongoing regulatory initiatives is that multiple experimental values, rather than single IC50 records, will be generated and, therefore, will provide an understanding of experimental variation that can be subsequently modeled to better represent experimental uncertainty.
61 It has to be noted that it is not possible at this stage to gauge the biases that are present in either data source, so a weak correlation between different measures simply reiterates a general concern regarding blindly trusting the available data. The finding does not challenge any specific parameter, so in practice, it would be up to the prior assumptions of the researcher to properly weight the sources of evidence. One could, for example, assume that a set of hERG channel binding values obtained under constant conditions in one lab is much harder to question than any observational dataset that comes with plenty of potential biases. From Figure 4, it can also be inferred that the CiPA classification of compounds is backed by other measures, mostly for the high-risk category, whereas separation between the medium- and low-risk classes is much harder to justify, especially when looking at the reported incidence rates. If real, this finding would have notable implications for the construction of mathematical prediction models hinging on those labels. An emergent outcome of this study is the potential for more general utility of post-market datasets in modeling and simulation, as data access and availability improve, to support systems pharmacology/biology model calibration and evaluation. Finally, the data from post-market sources offer an opportunity to attribute drug risk to many of the drugs uncategorized by CiPA, CredibleMeds, or Redfern. As an example, propafenone (indicated in Figure 4) has recently been described as causing 3 proarrhythmia cases; 58 the drug was subsequently added (March 1, 2018) to the CredibleMeds listing as having a conditional risk for TdP. The disproportionality index calculated on FAERS data shows a value of more than 2.0, and the incidence data from MarketScan in Figure 4 also indicate that the drug resides on the upper portion of the scatterplot, consistent with the signal from FAERS. We anticipate that this work can also be valuable for drug repurposing and repositioning, particularly when the benefit/risk balance changes significantly for the newly proposed indication. As a method for providing quantitative, transparent proarrhythmic risk assessment, these datasets are additional tools to support clinical decision-making and risk/benefit analysis. These datasets are still somewhat nascent in their utility to support the field of quantitative systems pharmacology, but by developing methods to show how they can be used, we also show how future collection of real-world health datasets can be aligned with supporting risk management. We hope this will encourage experimentalists, data scientists, and clinicians to work together to develop a transparent, model-driven approach based on FAIR (findable, accessible, interoperable, and reusable) data standards. The framework should enable scientists, sponsors, and decision-makers to quantitatively evaluate the probability of success of new medicines in a computer-augmented and human-rendered way that can support more nuanced and patient-specific prescribing.

Limitations of Study

As discussed above, this work is not without limitations, the most significant being the distinction between correlation and causation for drug-induced proarrhythmia. Being able to definitively state that an arrhythmic event is the sole result of a prescribed drug is hard, and we typically use surrogates such as prolonged QTc.
Two recent studies, PVZ-FAKOS 57 and DARE, 58 have successfully addressed this issue but are limited in the size of their patient populations. In our study, we looked at a fixed time period of patient health records following commencement of a new drug prescription to minimize the risk of confounders. Additionally, there was a lack of consistency across the different post-market datasets (i.e., between the FAERS and MarketScan data) for arrhythmia events because of differences in coding dictionaries (Table S1). We therefore used CD from ICD-9 as a surrogate for the MedDRA-coded events in FAERS (e.g., VT or TdP). The intent of this study is to demonstrate what can be achieved with current datasets.

STAR+METHODS

Detailed methods are provided in the online version of this paper and include the following:

DECLARATION OF INTERESTS

The authors declare no competing interests.

Lead Contact
Further information and requests for resources should be directed to and will be fulfilled by the Lead Contact, Liudmila Polonchuk (liudmila.polonchuk@roche.com).

Materials Availability
This study did not generate any unique reagents/materials.

Data and Code Availability
The published article includes all datasets generated during this study.

EXPERIMENTAL MODEL AND SUBJECT DETAILS

hERG testing
To improve consistency and minimize lab-to-lab variance, we chose to profile the electrophysiological effects of compounds against hERG ourselves and to collect free Cmax concentrations of drugs using, where possible, a single primary source. Assessment of proarrhythmia algorithms will be most efficient if the compound set includes both positive- and negative-response compounds in order to ensure an adequate assessment of a model's positive and negative predictive values.

Compounds
Reference drugs were purchased from commercial vendors. Selection of test concentrations for each compound was based on the hERG potency data and the solubility in the extracellular solution. Stock solutions of compounds were freshly prepared in DMSO. Test solutions were made such that the solvent concentration was kept constant throughout the experiment (0.1%).

Cell culture
The CHO crelox hERG cell line (ATCC reference no. PTA-6812, female Chinese hamster cells) was generated and validated at Roche. 62 Ready-to-use frozen instant CHO-hERG cells are cryopreserved at Evotec (Germany). For experimental use, the vials with cryopreserved cells are thawed at 37 °C, washed with pre-warmed IMDM cell culture medium (GIBCO Life Technologies, USA), and re-suspended in the extracellular solution.

Solutions

Electrophysiology
The hERG test is performed using the automated patch clamp system SynchroPatch® 384 (Nanion Technologies GmbH, Germany) at 35-37 °C, following the experimental procedure described previously. 63

Subjects
Patients were selected by exposure to any of a list of drug compounds (identified from NDC codes) used for this study from 2009 to 2014. In total, the cohort included 49,421,340 patients, of which 43.6% were male (mean age 36.74 years) and 56.4% female (mean age 38.05 years). All enrolment records and inpatient, outpatient, ancillary, and drug claims were collected.

Datasets used in this study
The two post-market datasets used in this study have different strengths and limitations and hence were both necessary for the purpose of the included work; a summary of the major differences is provided in Table S1.

FAERS
The FDA Adverse Event Reporting System database (FAERS) is based upon voluntary reports of post-market drug safety events.
It is a useful resource for pharmacovigilance and for monitoring potential signals that can become apparent only when larger numbers of patients are exposed to a drug, particularly for rare events such as ventricular arrhythmias. Data for this study were from FAERS, covering November 1997 up to March 31, 2015. EB05 values were calculated from the FAERS data using Empirica Signal version 8.1 from Oracle. The cumulative gamma distribution function can be used to obtain percentiles of the posterior distribution of λ. The equation was as follows: EB05_ij is the solution to Prob(λ < EB05 | N_ij, q) = 0.05, where i and j represent the drug and event under study. Duplicate reports as identified by Oracle were excluded from the analysis. MedDRA version 18.0 was used for the purpose of this study.

Truven Health MarketScan® Commercial and Medicare Supplemental Database
Data used for the analysis were derived from the Truven Health MarketScan® Commercial Claims and Encounters and Medicare Supplemental and Coordination of Benefits research databases (Truven Health Analytics, Ann Arbor, Mich.) for the period January 1, 2009, through December 31, 2014. These databases represent the health services of approximately 170 million employees, dependents, and retirees in the United States with primary or Medicare supplemental coverage through privately insured fee-for-service, point-of-service, or capitated health plans.

Index Date
The index date for patients was the date they met the criteria of exposure to the selected treatments according to the inclusion criteria.

Exposure period (time at risk)
Claims supply days were used to determine exposure; if a claim had a missing or zero day supply, the median day supply corresponding to the drug name and route of administration was assigned. Exposure was defined as the time from the first treatment claim until the last treatment claim plus the median supply within the enrolment period. If two consecutive treatment claims in the exposure period were more than two times the median supply days apart, this was considered a gap, and treatment exposure was stopped at the last treatment claim prior to the gap plus the median supply days.

Outcomes
The present study assessed the incidence of cardiac dysrhythmia (CD) from inpatient and outpatient claims using ICD-9 diagnosis codes.
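The exposure rule described above is mechanical enough to state in code; the sketch below is an illustrative reading of that rule (the function name and date handling are ours, not the study's implementation).

```python
from datetime import date, timedelta

def exposure_days(claim_dates, median_supply_days):
    """Time at risk under the rule above: exposure runs from the first claim
    to the last claim plus the median supply, but whenever two consecutive
    claims are more than 2x the median supply apart, that is a gap and
    exposure stops at the claim before the gap plus the median supply."""
    claims = sorted(claim_dates)
    start = claims[0]
    end = claims[0] + timedelta(days=median_supply_days)
    for prev, nxt in zip(claims, claims[1:]):
        if (nxt - prev).days > 2 * median_supply_days:
            break  # gap found: exposure already ended at prev + median supply
        end = nxt + timedelta(days=median_supply_days)
    return (end - start).days

# Example: 30-day median supply, with a gap after the second claim.
claims = [date(2012, 1, 1), date(2012, 2, 1), date(2012, 6, 1)]
print(exposure_days(claims, 30))  # 61 days: exposure stops at the gap
```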
Sparse Signal Reconstruction Based on Multiparameter Approximation Function with Smoothed ℓ0 Norm

The smoothed ℓ0 norm algorithm is a reconstruction algorithm in compressive sensing based on an approximate, smoothed ℓ0 norm. It introduces a sequence of smoothed functions to approximate the ℓ0 norm and approaches the solution through a specific iterative process based on the steepest descent method. In order to choose an appropriate sequence of smoothed functions and solve the optimization problem effectively, we employ an approximate hyperbolic tangent multiparameter function to capture the steep behavior of the ℓ0 norm. Simultaneously, we propose an algorithm based on minimizing a reweighted approximate ℓ0 norm in the null space of the measurement matrix. The unconstrained optimization involved is performed using a modified quasi-Newton algorithm. The numerical simulation results show that the proposed algorithms yield improved signal reconstruction quality and performance.

Introduction

Sparse representation of signals has been extensively studied for decades. The concept of signal sparsity and ℓ1-norm-based recovery techniques can be traced back to the work of Logan [1] in 1965, Santosa and Symes [2] in 1986, and Donoho and Stark in 1989 [3]. It is generally agreed that the foundation of the current state of the art in compressive sensing (CS) theory was laid by Candés et al. [4], Donoho [5], and Candés and Tao [6] in 2006. Mathematically, under sparsity assumptions, one would want to recover a signal s ∈ R^N, for example, the coefficient sequence of the signal in an appropriate basis, by solving the combinatorial optimization problem

min_s ‖s‖_0 subject to y = Ψs, (1)

where y is an observed signal, Ψ is the dictionary matrix, s is the coefficient vector for the linear combination, and ‖s‖_0 is the ℓ0 norm of the vector s, indicating the number of its nonzero elements.

Exactly solving (1) is reported to be NP-hard [5, 7]; thus a relaxed approach [8-10] was proposed to address this challenge. By replacing the nonconvex ℓ0 norm with the convex ℓ1 norm, this method finds an optimal solution using convex optimization [8, 11-13]. It was also shown [6, 14-17] that, for many problems, minimizing the ℓ1 norm is equivalent to minimizing the ℓ0 norm under certain conditions. However, experimental results in [13] posed a strong question about the equivalence of minimizing the two norms in practical problems. A sparse signal reconstruction (SSR) algorithm based on the optimization of a smoothed approximate ℓ0 norm is studied in [11], where simulation results are compared with corresponding results obtained with several existing SSR algorithms with respect to reconstruction performance and computational complexity. These results favor the use of the approximate ℓ0 norm.
In this paper, we present a new signal reconstruction algorithm for CS based on the minimization of a smoothed approximate ℓ0 norm. It differs from previous algorithms in several aspects. First, we use an approximate hyperbolic tangent function to approximate the ℓ0 norm, which is found to have better approximation quality than the standard Gaussian function. Second, the ℓ0 norm minimization in this paper is carried out in the null space of the measurement matrix, so that the measurement constraints are eliminated and the problem becomes unconstrained. Third, we use a reweighting technique in the null space of the measurement matrix to force the algorithm to reach the desired sparse solution faster. The rest of the paper is organized as follows. In Section 2, the smoothed ℓ0 norm algorithm is briefly presented. Section 3 covers signal reconstruction based on the reweighted approximate smoothed ℓ0 norm algorithm. Section 4 provides the experimental results. The paper is concluded in Section 5.

Smoothed ℓ0 Norm Algorithm (SL0)

The smoothed ℓ0 norm approach attempts to solve the problem in (1) by approximating the ℓ0 norm with a continuous function. Consider the continuous Gaussian function f_σ(s) with the parameter σ:

f_σ(s) = exp(−s²/(2σ²)). (2)

The parameter σ determines the quality of the approximation, as it does for any family of functions f_σ(·) which approximates the Kronecker delta function.

Define the continuous multivariate function F_σ(s) as

F_σ(s) = Σ_{i=1}^{N} f_σ(s_i). (3)

It follows from (2) and (3) that f_σ(s_i) → 1 as s_i → 0 and

lim_{σ→0} F_σ(s) = N − ‖s‖_0. (4)

Since the number of entries in s is N and the function F_σ(s) is an indicator of the number of zero entries in s, the ℓ0 norm of the reconstructed vector is approximated by

‖s‖_0 ≈ N − F_σ(s). (5)

Substituting this approximation into (1) yields the problem

max_s F_σ(s) subject to y = Ψs. (6)

The approach is then to solve the problem (6) for a decreasing sequence of σ's. The underlying idea is to select a σ which ensures that the initial solution lies in the subset of R^{N×1} over which the approximation is convex, and then to gradually increase the accuracy of the approximation. The resulting smoothed ℓ0 algorithm is proved to be faster, with the possibility of recovering a sparser solution [11].

Signal Reconstruction Based on the Approximation Function

3.1. Approximating the ℓ0 Norm with an Approximate Hyperbolic Tangent Sequence. The key to the SL0 algorithm is to select a high-quality smoothed continuous function to approximate the ℓ0 norm. The optimal value of the ℓ0 norm is then obtained through the minimization of the approximate smoothed continuous function. In general, approximating the ℓ0 norm with the Gaussian function cannot produce the desired result. To approximate the ℓ0 norm effectively, we analyzed the inverse tangent function [18], the hyperbolic tangent function, and the approximate hyperbolic tangent sequence, given in (9), (10), and (11), respectively, where σ is a parameter and s_i is a component of the vector s.
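Before turning to the properties of these functions, a brief numerical illustration of how a smoothed family estimates ‖s‖_0 as σ decreases; this sketch uses the Gaussian choice from Section 2, f_σ(s) = exp(−s²/2σ²), rather than the hyperbolic tangent family of (9)-(11), and the σ schedule shown is arbitrary.

```python
import numpy as np

def l0_estimate(s, sigma):
    """Smoothed-L0 estimate of ||s||_0: N - F_sigma(s) with the Gaussian f_sigma."""
    return s.size - np.exp(-s**2 / (2.0 * sigma**2)).sum()

s = np.zeros(256)
idx = np.random.default_rng(0).choice(256, size=10, replace=False)
s[idx] = 1.0                              # ||s||_0 = 10 exactly

for sigma in (1.0, 0.5, 0.1, 0.01):
    print(sigma, l0_estimate(s, sigma))   # estimates approach 10 as sigma shrinks
```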
The following properties can be obtained.

(1) For formula (11), let F_σ(s) = Σ_{i=1}^{N} f_σ(s_i). According to the definition of the ℓ0 norm, as σ goes to zero, the value of F_σ(s) tends to the number of nonzero elements of the vector s, that is, the ℓ0 norm of the vector s. Therefore, the smoothed function satisfies ‖s‖_0 = lim_{σ→0} F_σ(s). Obviously, formulas (9) and (10) have the same ℓ0-norm approximation property as (11). The approximation performance of each formula, however, is different. To further illustrate the advantages of the approximate hyperbolic tangent function as the smoothed continuous function, we compared the distribution of each approximation function on the interval [−1, 1] with σ = 0.1. Figure 1 compares the approximation quality of the smoothed ℓ0 norm for the standard Gaussian function, the inverse tangent function, the hyperbolic tangent function, and the approximate hyperbolic tangent function. The estimate of the ℓ0 norm produced by the approximate hyperbolic tangent function outperformed the other functions: the approximate hyperbolic tangent function is steeper on the interval [−0.5, 0.5], so its estimate of the ℓ0 norm is more accurate.

(2) Given σ ≥ 0, the approximate hyperbolic tangent function f_σ(·) has better convergence than the other approximate functions.

From (9), (10), and (11), it is clear that the smaller σ is, the more local extrema the objective function has, and the more difficult it is to obtain its global optimum. σ determines the smoothness of the objective function: the larger σ is, the smoother the objective function, and the less accurate the estimate of the ℓ0 norm; conversely, the smaller σ is, the better the approximation of the ℓ0 norm. In the process, we therefore build a decreasing sequence σ_1, σ_2, …, optimizing the corresponding objective function at each value until σ is small enough, so as to eliminate the influence of local extrema and obtain the global optimum of the smoothed function. We thus use the approximate hyperbolic tangent sequence for the approximation of the ℓ0 norm.

Problem (1) can then be approximated by the following model:

min_s F_σ(s) subject to y = Ψs. (14)

(3) There are many algorithms for solving problem (14), among which the most representative is the steepest descent method. But it has two drawbacks: (a) the "notched effect" (zigzagging) of the search direction strongly hinders the convergence speed; (b) the step size cannot be estimated a priori, and in practice it is usually chosen from experience, which lacks theoretical support.

We therefore propose a modified quasi-Newton algorithm to solve the above problem. First, calculate the Newton direction of the approximate hyperbolic tangent objective, d = −[∇²F_σ(s)]^{−1}∇F_σ(s). The Hessian matrix ∇²F_σ(s) may fail to be positive definite, in which case the Newton direction is not guaranteed to be a descent direction, so the Hessian must be modified. We therefore set up a new matrix by adding suitable positive numbers to the diagonal of the Hessian (via a scaled identity matrix), so that the diagonal elements of the modified matrix are all positive; choosing the modification coefficients appropriately ensures that the resulting direction is a descent direction.
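The Hessian modification just described can be illustrated with a simple diagonal-loading scheme; the loading constants below are illustrative stand-ins for the paper's own modification coefficients.

```python
import numpy as np

def modified_newton_direction(grad, hess, mu=1e-3):
    """Descent direction from a possibly indefinite Hessian: load the
    diagonal until the matrix is positive definite, then solve B d = -grad.
    (Illustrative; the paper defines its own modification coefficients.)"""
    n = hess.shape[0]
    B = hess + mu * np.eye(n)
    while np.min(np.linalg.eigvalsh(B)) <= 0:  # enforce positive definiteness
        mu *= 10.0
        B = hess + mu * np.eye(n)
    return np.linalg.solve(B, -grad)

# Tiny example with an indefinite Hessian.
H = np.array([[1.0, 0.0], [0.0, -2.0]])
g = np.array([1.0, 1.0])
d = modified_newton_direction(g, H)
print(g @ d)  # negative: d is a descent direction
```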
(4) Usually, the parameter is chosen as a decreasing sequence σ_k = c·σ_{k−1}, k = 1, 2, …, with c ∈ (0.5, 1). The initial value σ_1 and the stopping value are chosen as follows. For faster algorithm convergence, the parameter should match the scale of the data: choosing σ_1 = max_i |s_i^{(0)}|/√2 and letting σ → 0, we obtain F_σ(s) → ‖s‖_0, so the smaller σ is, the better F_σ(s) reflects the sparsity of the vector s. Meanwhile, however, a small σ is more sensitive to noise, so the value of σ should not be too small. In summary: initialize as above; then calculate the modified Newton direction d with (23) and apply the modified Newton update s ← s + d.

3.2. Reweighting the Approximate ℓ0 Norm Minimization. It is well known that all solutions of Ψs = y can be parameterized as

s = s̄ + Vξ, (25)

where s̄ is a solution of Ψs̄ = y and V is a matrix whose columns constitute an orthonormal basis of the null space of Ψ [19]. The vector s̄ and the matrix V can be evaluated by using the singular-value decomposition (SVD) or the QR decomposition of the matrix Ψ [20]. Using (25), the problem in (14) is reduced to the unconstrained problem

min_ξ F_σ(s̄ + Vξ). (26)

Signal reconstruction based on the solution of the problem in (26) works well, but the technique can be considerably enhanced by incorporating a reweighting strategy, and the reweighted unconstrained problem can be formulated as follows [21]:

min_ξ Σ_{i=1}^{N} w_i f_σ((s̄ + Vξ)_i), (28)

where the w_i are positive scalars from a weight vector w. Starting with an initial w^(0) = e, where e is the all-one vector of dimension N, in the (k+1)th iteration the weight vector is updated to w^(k+1) with its ith component given by

w_i^(k+1) = 1/(|s_i^(k)| + ε), (29)

where s_i^(k) denotes the ith component of the vector s^(k) obtained in the kth iteration as s^(k) = s̄ + Vξ^(k), and ε is a small positive scalar used to prevent numerical instability when |s_i^(k)| approaches zero. As long as ε > 0, the objective function in (28) remains differentiable and its gradient can be obtained in closed form. For a fixed value of σ, the problem in (26) is solved by using a quasi-Newton algorithm in which an approximation of the inverse of the Hessian is obtained with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update formula. It can be shown that the function in (28) remains convex in the region where the largest magnitude of the components of s = s_2 + Vξ is less than σ, where the vector s_2 is chosen to be the least-squares solution of y = Ψs; namely, s̄ = s_2 = Ψ^T(ΨΨ^T)^{−1}y. Based on this, a reasonable initial value of σ can be chosen as σ_0 = max_i |(s_2)_i| + δ, where δ is a small positive scalar. As the algorithm starts at the origin, ξ^(0) = 0, the above choice of σ_0 ensures that the optimization starts in a convex region. This greatly facilitates the convergence of the proposed algorithm. The overall procedure is then: (1) initialize σ as above; (2) perform the decomposition of Ψ (SVD or QR) and construct V using the last columns of the orthogonal factor; (3) with σ = σ^(k) and ξ^(0) as an initial point, apply the BFGS algorithm to solve the problem in (28), with reweighting via (29) applied in each iteration, and denote the solution by ξ^(k).
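To make Section 3.2 concrete, here is a compact sketch of the null-space reweighting machinery. As assumptions: the Gaussian surrogate from Section 2 replaces the approximate hyperbolic tangent function, plain σ²-scaled gradient steps (with normalized weights, for stability) replace BFGS, and the σ schedule and step size are arbitrary.

```python
import numpy as np

def reweighted_nullspace_sl0(Phi, y, sigmas=(1.0, 0.5, 0.2, 0.05, 0.02),
                             eps=0.01, inner_iters=60, step=1.0):
    # Least-squares particular solution s_bar and orthonormal null-space
    # basis V, both obtained from the SVD of Phi (cf. (25)).
    s_bar = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)
    _, _, Vt = np.linalg.svd(Phi)
    V = Vt[Phi.shape[0]:].T                 # columns span null(Phi)
    xi = np.zeros(V.shape[1])               # start at the origin, s = s_bar
    for sigma in sigmas:                    # decreasing-sigma outer loop
        for _ in range(inner_iters):
            s = s_bar + V @ xi
            w = 1.0 / (np.abs(s) + eps)     # reweighting, cf. (29)
            w /= w.max()                    # normalize so the update contracts
            g = w * s * np.exp(-s**2 / (2 * sigma**2))  # sigma^2-scaled gradient
            xi -= step * (V.T @ g)
    return s_bar + V @ xi

rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 100))
s_true = np.zeros(100)
s_true[rng.choice(100, 8, replace=False)] = rng.standard_normal(8)
s_hat = reweighted_nullspace_sl0(Phi, Phi @ s_true)
print(np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))  # typically small
```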
Simulation Results

This section describes experiments testing the performance of the proposed algorithm. All experiments were performed under Windows 7 and MATLAB V8.0 (R2012a) running on a Lenovo workstation with an Intel(R) Core(TM) i7 CPU at 2.40 GHz and 6 GB of memory.

Simulation 1: Algorithm Performance Comparison on One-Dimensional Signals. In this experiment, the signal length and number of measurements were set to N = 256 and M = 100, respectively. A total of 16 sparse signals with sparsity K = 4k − 3, k = 1, 2, …, 16, were used. A K-sparse signal was constructed as follows. (1) Set s to the zero vector of length N. (2) Generate a vector v of length K, each component being a random value drawn from the normal distribution N(0, 1). (3) Randomly select K indices i_1, …, i_K from the set {1, 2, …, N}, and set s_{i_1} = v_1, s_{i_2} = v_2, …, s_{i_K} = v_K. The measurement matrix was assumed to be of size M × N and was generated by drawing its elements from N(0, 1), followed by a normalization step to ensure that the ℓ2 norm of each column is unity.

The performance of the iteratively reweighted (IR) algorithm [22, 23] (with its parameter set to 0.1) and of the SL0, ASL0, and RASL0 algorithms (with parameter values 10^−4, 1/3, 0.01, and 0.08) was measured in terms of the number of perfect reconstructions over 100 runs. The results obtained are plotted in Figure 2. It can be observed that the RASL0 algorithm outperforms the IR algorithm. On comparing the SL0 algorithm and the ASL0 algorithm, we note that the two algorithms are comparable for K smaller than 45, but the RASL0 algorithm performs better for K larger than 45.

The computational complexity of the SL0, IR, ASL0, and RASL0 algorithms was measured in terms of average CPU time over 100 runs for typical instances with M = N/2 and K = round(N/2.5), where N varies in the range between 128 and 512. In Figure 3, it is seen that the RASL0 and SL0 algorithms were more efficient than the IR algorithm, and the complexity of the RASL0 algorithm was slightly higher than that of the SL0 algorithm. The moderate increase in the computational complexity of the RASL0 algorithm is primarily due to the fact that the objective function in (26) must be modified in each iteration using (29). Typically the RASL0 algorithm converges in a small number of iterations. Figure 4 shows how the objective function in (26) converges in 24 iterations, where the parameters were set to N = 256, M = 100, K = 40, and σ = 0.0212.

Simulation 2: Algorithm Performance Comparison on Two-Dimensional Signals. To examine the algorithms' performance on different images, the reconstruction performance of each method is evaluated in terms of two indicators: the reconstruction relative error (RE) and the peak signal-to-noise ratio (PSNR). The relative error of a reconstructed image is defined as

RE = ‖X̂ − X‖ / ‖X‖,

where X and X̂ are the original and the reconstructed image, respectively. Clearly, the lower the relative error, the better the reconstruction. The PSNR of a reconstructed image is defined by

PSNR = 10 log10(255² × M × N / ‖X̂ − X‖²),

where M, N are the size of the image.

In the experiments, the DCT basis is selected as the sparse representation dictionary, and the compression ratio is M/N = 0.25. Figure 5 shows that the quality of the reconstruction with the RASL0 algorithm is improved and details of the images are better reconstructed. Table 1 shows three different 256 × 256 images, with the same compression ratio M/N = 0.5. From the comparison of the PSNR, the RE, the signal-to-noise ratio (SNR), and the matching degree (MD) among the RASL0, SL0, and ASL0 algorithms, it is clear that the relative errors with the RASL0 algorithm are 0.02% smaller than with the SL0 algorithm, the SNR is 3 dB higher, the PSNR is improved by 2 dB, and the matching degree is also improved. Figure 6 compares the reconstructions of one image (Lena) with different algorithms; it shows that RASL0 outperforms the OMP, GP, GPSR, and NSL0 algorithms. Table 2 gives the quality comparison of the different algorithms, indicating that the RASL0 algorithm is better in all aspects.
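The two image-quality metrics used above are simple to compute; a sketch follows (assuming 8-bit images with peak value 255).

```python
import numpy as np

def relative_error(x, x_hat):
    """RE = ||x_hat - x|| / ||x|| (Frobenius norm for images)."""
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x)

def psnr(x, x_hat, peak=255.0):
    """PSNR in dB: 10 log10(peak^2 / MSE), MSE taken over all M*N pixels."""
    mse = np.mean((x_hat - x) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

x = np.full((256, 256), 128.0)
x_hat = x + np.random.default_rng(0).normal(0.0, 2.0, x.shape)
print(relative_error(x, x_hat), psnr(x, x_hat))
```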
Conclusion and Future Work

In this paper, we use an approximate hyperbolic tangent function to estimate the ℓ0 norm and design the RASL0 algorithm based on minimizing an approximate ℓ0 norm of the signal in the null space of the measurement matrix, where a reweighting technique is used to promote the sparsity of the solution and a quasi-Newton algorithm is used to accelerate the optimization. Simulation results are presented which demonstrate that the proposed algorithm yields improved signal reconstruction performance and requires a reduced amount of computation relative to iteratively reweighted algorithms based on the ℓp norm with p < 1. When compared with a known algorithm based on a smoothed ℓ0 norm, improved signal reconstruction is achieved, although the amount of computation increases somewhat. How to choose the decreasing parameter sequence so as to prevent local minima from disturbing the search for the global minimum remains future work.

Figure 2: Number of perfect reconstructions by each algorithm over 100 runs with N = 256 and M = 100.

Table 1: Reconstruction quality comparison with different images and different approximate smoothed ℓ0 algorithms.

Table 2: Reconstruction quality comparison with four other algorithms.
Two Characterizations of the Maximal Tensor Product of Operator Systems

In this paper we provide two characterizations of the maximal tensor product structure for the category of operator systems. The first one is via the Schur tensor product; the second one employs the idea of the completely positive approximation property (CPAP).

Introduction

An operator system is a self-adjoint and unital subspace of the space B(H) of all bounded operators on a Hilbert space H. In recent years, the theory of tensor products of operator systems has been developed systematically; see, e.g., [3, 4, 5]. Given operator systems S and T, their maximal tensor product, denoted by S ⊗_max T, is equipped with the smallest family of cones for which the algebraic tensor product S ⊗ T forms an operator system. In the category of operator systems, the maximal tensor product is the natural analogue of the projective tensor norm of operator spaces, as well as a generalization of the maximal tensor norm of C*-algebras. In fact, given any operator space V, there is an operator system S_V containing V completely isometrically; see [7, Chp 8]. In [4], it is shown that for operator spaces V and W, the projective tensor product V ⊗^ W is completely isometrically included in S_V ⊗_max S_W. Similarly, any unital C*-algebras A and B are also operator systems; in the same paper it is proved that their C*-maximal tensor product A ⊗_{C*-max} B is completely order isomorphic to A ⊗_max B. Thus it is natural to seek characterizations of the maximal tensor product.

In this paper we give two characterizations of the maximal tensor product from different approaches. The first approach, given in Section 3, examines the Schur tensor product from [9] in the category of operator systems. It provides a different view of the matricial cones of S ⊗_max T. In Section 4, we employ factorization and the idea of the completely positive approximation property (CPAP) from [2] to characterize these matricial cones. We show that this characterization possesses connections to results on (min, max)-nuclearity found in [3].

Preliminaries

We outline a few basic facts about the maximal tensor product and refer readers to [1, 3, 4, 5] for the details. Given a pair of operator systems (S, {P_n}_{n=1}^∞, 1_S) and (T, {Q_n}_{n=1}^∞, 1_T), by an operator system structure on S ⊗ T we mean a family τ = {C_n}_{n=1}^∞ of cones, where C_n ⊂ M_n(S ⊗ T), satisfying:

(T1) (S ⊗ T, {C_n}_{n=1}^∞, 1_S ⊗ 1_T) is an operator system;
(T2) P ⊗ Q ∈ C_{nm} for all P ∈ M_n(S)^+ and Q ∈ M_m(T)^+;
(T3) if φ : S → M_n and ψ : T → M_m are unital completely positive maps, then φ ⊗ ψ : S ⊗ T → M_{nm} is completely positive.

By an operator system tensor product, we mean a mapping τ taking any pair of operator systems S and T to an operator system structure τ(S, T), denoted by S ⊗_τ T. We say τ is functorial provided that, in addition, it satisfies the following property:

(T4) Given operator systems S_i and T_i and unital completely positive maps φ_i : S_i → T_i (i = 1, 2), the map φ_1 ⊗ φ_2 : S_1 ⊗_τ S_2 → T_1 ⊗_τ T_2 is completely positive.

If for all operator systems S and T the map θ : x ⊗ y ↦ y ⊗ x is a unital complete order isomorphism from S ⊗_τ T onto T ⊗_τ S, then τ is called symmetric.

We now recall the construction of the maximal tensor product. Given operator systems S and T, we first define the family of cones

D_n^max(S, T) = { A(P ⊗ Q)A^* : P ∈ M_k(S)^+, Q ∈ M_m(T)^+, A ∈ M_{n,km}, k, m ∈ N }.

For short we denote D_n^max(S, T) = D_n^max. We remark on the following useful representation of D_1^max.

Lemma 1. Every u ∈ D_1^max can be represented as u = Σ_{i,j=1}^{n} p_ij ⊗ q_ij for some [p_ij] ∈ M_n(S)^+ and [q_ij] ∈ M_n(T)^+.

Proof. If u = A(P ⊗ Q)A^* as above with A ∈ M_{1,km}, let D = diag(A) ∈ M_{km}; note that u is then the sum of the entries of D(P ⊗ Q)D^*, each entry of which is an elementary tensor. The matrix of left-hand tensor factors is, up to the canonical shuffle, obtained by conjugating with D the operator matrix each of whose m × m blocks equals P, and this operator matrix is in M_m(M_k(S))^+.
Since we can replace Q by [Q 0; 0 0] of an appropriate size, and likewise for the first operator matrix, we deduce the claimed representation.

This matricial cone structure {D_n^max}_{n=1}^∞ is then a compatible family with matrix order unit 1_S ⊗ 1_T. Yet it is not Archimedean, so we complete the cones through the Archimedeanization process (see [8]), essentially by taking the closure of D_n^max:

C_n^max(S, T) = { U ∈ M_n(S ⊗ T) : ε(1_S ⊗ 1_T)_n + U ∈ D_n^max(S, T) for all ε > 0 }.

Likewise we denote C_n^max(S, T) = C_n^max. Now S ⊗ T equipped with this family {C_n^max}_{n=1}^∞ satisfies Properties (T1) to (T4), and it defines a symmetric and associative operator system structure. We call it the maximal tensor product of S and T and denote it S ⊗_max T.

The maximal tensor product is projective in the following sense. Let τ be an operator system tensor product. We say that τ is left projective provided that whenever q : S → R is a complete quotient map ([1, 3]), then for any operator system T the map q ⊗ id is a complete quotient map from S ⊗_τ T onto R ⊗_τ T. It is equivalent to require that for every n ∈ N, every u ∈ M_n(R ⊗_τ T)^+, and every ε > 0, there is ũ_ε ∈ M_n(S ⊗_τ T)^+ so that (q ⊗ id)(ũ_ε) = u + ε(I_n ⊗ 1_R ⊗ 1_T). Right projectivity is defined similarly, and we say τ is projective if it is both left and right projective.

The maximal tensor product has the following universal property: whenever φ : S → B(H) and ψ : T → B(H) are completely positive maps with commuting ranges, the map φ · ψ : S ⊗_max T → B(H) determined by (φ · ψ)(x ⊗ y) = φ(x)ψ(y) is completely positive. If we take B(H) = C, we obtain the following representation of the maximal tensor product: the positive linear functionals on S ⊗_max T correspond to the cone of all completely positive maps from S to T^d. This statement is precisely the operator system analogue of a result by Lance in [6].

The following lemma is from [4] and will be used in the next section. We include the proof for completeness.

Lemma 3. Let {C_n}_{n=1}^∞ be an operator system structure on S ⊗ T. Then D_n^max ⊆ C_n for every n ∈ N.

Proof. If P ∈ M_n(S)^+ and Q ∈ M_m(T)^+, then Property (T2) implies P ⊗ Q ∈ C_{nm}. By compatibility of the family {C_n}, A(P ⊗ Q)A^* ∈ C_n for every A ∈ M_{n,km}, and hence D_n^max ⊆ C_n.

The Schur Tensor Product

In this section we examine the Schur tensor product from [9] in the category of operator systems. It turns out that in the operator system setting, the matricial cones of the Schur tensor product coincide with those of the maximal tensor product, providing a different description of the maximal tensor product.

Definition 4. Given operator systems S and T, X = [x_ij] ∈ M_n(S)^+, and Y = [y_ij] ∈ M_n(T)^+, we define the Schur tensor product X • Y to be

X • Y = [x_ij ⊗ y_ij] ∈ M_n(S ⊗ T).

Lemma 5. If X ∈ M_n(S) and Y ∈ M_n(T), then X • Y = A(X ⊗ Y)A^* for some A ∈ M_{n,n²}; in particular, if X and Y are positive, then so is X • Y.

Proof. Let {E_ij}_{i,j=1}^n denote the standard matrix units of M_n(C) and regard X ⊗ Y as the Kronecker tensor product. In the case when n = 2, the first row of X ⊗ Y reads (x_11 ⊗ y_11, x_11 ⊗ y_12, x_12 ⊗ y_11, x_12 ⊗ y_12), and compressing X ⊗ Y to the rows and columns indexed by the pairs (1,1) and (2,2) yields X • Y. In general, we may view X • Y as a pre- and post-multiplication of X ⊗ Y by a special n × n² matrix: X • Y = A(X ⊗ Y)A^*, where A = Σ_{i=1}^n e_i(e_i ⊗ e_i)^* and e_1, …, e_n is the standard basis of C^n.

Lemma 6. Every P ∈ M_n(S ⊗ T) can be written as P = A(X • Y)A^* for some X ∈ M_{nm}(S), Y ∈ M_{nm}(T), and A ∈ M_{n,nm}.

Proof. Write P as a sum of m matrices whose entries are elementary tensors, the l-th of which is X_l • Y_l for some X_l ∈ M_n(S) and Y_l ∈ M_n(T); set X = X_1 ⊕ ⋯ ⊕ X_m ∈ M_{nm}(S) and Y = Y_1 ⊕ ⋯ ⊕ Y_m ∈ M_{nm}(T). Now let A = [I_n I_n ⋯ I_n] ∈ M_{n,nm} with m copies of I_n. Then P = A(X • Y)A^*. Lemma 5 shows that the Schur tensor product is in fact of the form of elements in D_n^max, except for positivity.

Motivated by Lemma 6 and the construction of the maximal tensor product, we define the following family of cones.

Definition 7. Given operator systems S and T, set

C_n^s(S ⊗ T) = { A(X • Y)A^* : X ∈ M_k(S)^+, Y ∈ M_k(T)^+, A ∈ M_{n,k}, k ∈ N }.

For short we denote C_n^s(S ⊗ T) = C_n^s.

Proposition 8. The family {C_n^s}_{n=1}^∞ defines a matrix ordering on S ⊗ T with matrix order unit 1 ⊗ 1.

Proof. We first check that C_n^s is a cone in M_n(S ⊗ T). It is evident from the definition that C_n^s is closed under sums and positive scalar multiples and that {C_n^s}_{n=1}^∞ is a compatible family of cones on S ⊗ T. Finally, to see that the cones are proper, we claim that in fact C_n^s ⊂ D_n^max. Indeed, let A(X • Y)A^* ∈ C_n^s for some X ∈ M_k(S)^+, Y ∈ M_k(T)^+, and A ∈ M_{n,k}.
Then by the previous lemma, A(X • Y)A^* = (AB)(X ⊗ Y)(AB)^* for the matrix B ∈ M_{k,k²} of Lemma 5, which is in D_n^max(S, T) by definition. Since the latter cone is proper, −C_n^s ∩ C_n^s = {0}. The fact that 1 ⊗ 1 is a matrix order unit with respect to {C_n^s} follows from the inclusion C_n^s ⊂ D_n^max and the fact that 1 ⊗ 1 is a matrix order unit with respect to D_n^max. Consequently, {C_n^s}_{n=1}^∞ defines a matrix ordering on S ⊗ T.

From the last paragraph of the proof, we see that C_n^s ⊂ D_n^max. In fact, one can further deduce that C_n^s = D_n^max by Lemma 3, after proving that this family satisfies Property (T2).

Lemma 9. The family {C_n^s}_{n=1}^∞ satisfies Property (T2). That is, given X ∈ M_n(S)^+ and Y ∈ M_m(T)^+, X ⊗ Y ∈ C_{nm}^s.

Proof. Let X and Y be as above, and note that we may view X ⊗ Y as the Schur product of two matrices in M_{nm}(S) and M_{nm}(T), where J_k ∈ M_k(C) denotes the matrix all of whose entries are 1. It is easy to see that the second matrix is Y ⊗ J_n. A straightforward calculation shows that for each k ∈ N, J_k has eigenvalues 0 and k, so Y ⊗ J_n ∈ M_{nm}(T)^+. On the other hand, after the "canonical shuffle" [7, Chp 3], the first matrix is unitarily equivalent to X ⊗ J_m, which is also positive in M_{nm}(S). Therefore, X ⊗ Y = (X ⊗ J_m) • (Y ⊗ J_n) ∈ C_{nm}^s, and the family {C_n^s}_{n=1}^∞ satisfies Property (T2).

Remark 10. Now by Lemma 3 we have the reverse inclusion D_n^max ⊂ C_n^s, so the two families of cones are the same. In particular, Lemma 1 follows easily: every u ∈ D_1^max = C_1^s can be represented as u = A(P • Q)A^* for some A ∈ M_{1,n}, P ∈ M_n(S)^+, and Q ∈ M_n(T)^+; writing D = diag(A), this is the sum of the entries of (DPD^*) • Q, which is precisely the representation of Lemma 1.

If we Archimedeanize the cones {C_n^s}_{n=1}^∞, we obtain the Schur tensor product of operator systems, denoted S ⊗_s T; it is unitally completely order isomorphic to S ⊗_max T.

Theorem 11. C_n^s = D_n^max for every n ∈ N. Consequently, for operator systems, the Schur tensor product is the maximal tensor product; i.e., S ⊗_s T = S ⊗_max T.

Given operator systems S and T, S ⊗_max T, when viewed as an operator space, possesses a natural operator space matrix norm ‖·‖_{osy-max}; that is, for U ∈ M_n(S ⊗_max T),

‖U‖_{osy-max} = inf{ r > 0 : [ r(1 ⊗ 1)_n, U ; U^*, r(1 ⊗ 1)_n ] ∈ C_{2n}^max }.

In particular, since A ⊗_{C*-max} B = A ⊗_max B for C*-algebras, the C*-maximal tensor norm ‖·‖_{C*-max} is precisely ‖·‖_{osy-max} for C*-algebras. The following proposition is a slightly generalized version of ‖·‖_{C*-max} ≤ ‖·‖_s in [9].

Proposition 12. Let S and T be operator systems. Then the identity map φ : S ⊗_s T → S ⊗_max T is a complete contraction.

Proof. Let ‖U‖_s < 1; then by scaling, there exist scalar contractions A, B and X ∈ M_n(S), Y ∈ M_n(T) with ‖X‖, ‖Y‖ ≤ 1 such that U = A(X • Y)B. Hence the matrices P = [I, X; X^*, I] ∈ M_{2n}(S)^+ and Q = [I, Y; Y^*, I] ∈ M_{2n}(T)^+. Note that

[A, 0; 0, B^*] (P • Q) [A, 0; 0, B^*]^* = [AA^*, U; U^*, B^*B] ∈ M_{2n}(S ⊗_max T)^+.

On the other hand, since A and B are scalar contractions, I − AA^* and I − B^*B are positive in M_n. Thus the operator matrix [I − AA^*, 0; 0, I − B^*B] is positive in M_{2n}(S ⊗_max T). By adding the two matrices, we obtain [I, U; U^*, I] ∈ M_{2n}(S ⊗_max T)^+, which implies that ‖U‖_{osy-max} ≤ 1.

Factorization Through the Matrix Algebras M_n

We now turn to the study of the maximal tensor product using factorization. Recall that every u = Σ_{i=1}^n x_i ⊗ y_i ∈ S ⊗ T may be regarded as a linear map û : S^d → T defined by û(f) = Σ_{i=1}^n f(x_i) y_i. The map û is independent of the representation of u, and u ↦ û is a one-to-one correspondence between S ⊗ T and L(S^d, T), where the latter is the space of linear maps from the linear dual S^d to T. In this section, we use the duality results from [1]. Henceforth, to ensure S^d is an operator system, we assume S and T to be finite dimensional.
Fix a basis {y_1 = 1_T, …, y_m} for T, where y_i = y_i^* and ‖y_i‖ = 1, so that every u ∈ S ⊗ T has a unique representation u = Σ_{i=1}^m x_i ⊗ y_i for some x_i ∈ S. To obtain the main result in this section, we introduce a temporary norm on S ⊗ T by setting |||u||| = Σ_{i=1}^m ‖x_i‖. Hence, for each f ∈ S^d and i ∈ {1, …, m}, |f(x_i^λ)| → 0. The latter condition is equivalent to x_i^λ → 0 in the weak topology, which coincides with the norm topology because S is finite dimensional.

Definition 15. A linear map θ : S → T factors through M_n approximately provided there exist nets of completely positive maps φ_λ : S → M_{n_λ} and ψ_λ : M_{n_λ} → T such that ψ_λ ∘ φ_λ converges to θ in the point-norm topology. An operator system S is said to have the completely positive approximation property (CPAP) if the identity map factors through M_n approximately. In [2] it is shown that S is (min, max)-nuclear if and only if S has the CPAP.

We now establish the main theorem of the section.

Theorem 16. Let S and T be finite dimensional operator systems and u ∈ S ⊗ T. The following are equivalent: (1) u is positive in S ⊗_max T; (2) the map û : S^d → T factors through M_n approximately.

Proof. Suppose u is positive in S ⊗_max T. Then for each ε > 0, u_ε = u + ε(1_S ⊗ 1_T) lies in D_1^max. By Lemma 1 it can be written as u_ε = Σ_{i,j=1}^{n_ε} p_ij^ε ⊗ q_ij^ε with [p_ij^ε] ∈ M_{n_ε}(S)^+ and [q_ij^ε] ∈ M_{n_ε}(T)^+. Define φ_ε : S^d → M_{n_ε} by φ_ε(f) = [f(p_ij^ε)] and ψ_ε : M_{n_ε} → T by ψ_ε([a_ij]) = Σ_{i,j} a_ij q_ij^ε. Note that φ_ε is completely positive by the definition of S^d. For ψ_ε, first consider the completely positive map [a_ij] ↦ [a_ij] ⊗ Q^ε. We regard [a_ij] ⊗ Q^ε as the matrix [q_ij^ε [a_kl]]_{i,j}^{n_ε} and pre- and post-multiply it by [E_11, E_12, …, E_{1n_ε}]; we then obtain the matrix [q_ij^ε a_ij]_{i,j=1}^{n_ε} ∈ M_{n_ε}(T)^+. Now pre- and post-multiply this by the row vector of length n_ε whose entries are all 1; this yields Σ_{i,j} a_ij q_ij^ε, and so ψ_ε is completely positive. It follows that û_ε = ψ_ε ∘ φ_ε, and û_ε converges to û as ε → 0 in the point-norm topology.

By the identification M_m(S ⊗_max T) ≅ S ⊗_max M_m(T), we establish the corresponding characterization of the matricial cone structure of the maximal tensor product. Finally, again by identifying M_n(R ⊗_max T) with R ⊗_max M_n(T), and likewise for S ⊗_max M_n(T), we prove that the maximal tensor product is left projective. By symmetry, it is right projective and hence projective.

Lastly, we remark that this characterization of the maximal tensor product indeed coincides with the (min, max)-nuclearity results in [2, 3].

Corollary 19. Let δ_i be the dual basis of y_i for T^d. Then u = Σ_{i=1}^m δ_i ⊗ y_i ∈ T^d ⊗_max T is positive if and only if T is (min, max)-nuclear.

Proof. Let S = T^d and note that û is the identity map on T. Moreover, u ∈ (T^d ⊗_max T)^+ if and only if û factors through M_n approximately, which, by [2, Theorem 3.2], holds if and only if T is (min, max)-nuclear.

Acknowledgment

The author would like to thank Vern I. Paulsen for his valuable advice and inspiration in writing this paper, and Prof. Vandana Rajpal for introducing the Schur tensor product to the author.
FIBROTIC SEQUELAE IN PULMONARY PARACOCCIDIOIDOMYCOSIS: HISTOPATHOLOGICAL ASPECTS IN BALB/c MICE INFECTED WITH VIABLE AND NON-VIABLE Paracoccidioides brasiliensis PROPAGULES

(1) Laboratorio de Patología, Clínica de las Vegas, Medellín, Colombia. (2) Corporación para Investigaciones Biológicas (CIB), Medellín, Colombia. (3) Facultad de Medicina, Universidad Pontificia Bolivariana, Medellín, Colombia. Correspondence to: Angela Restrepo M., Ph.D., Corporación para Investigaciones Biológicas (CIB), Carrera 72A No 78B-141, Medellín, Colombia. Fax (57-4) 441 5514; Telephone (57-4) 441 0855. Email: angelares@epm.net.co

INTRODUCTION

Paracoccidioidomycosis is a deep-seated systemic mycosis of importance in Latin America, including Colombia 1,12,26. The disease, caused by the dimorphic fungus Paracoccidioides brasiliensis, is usually a chronic, progressive illness that involves several organs and systems, with predominance of the lungs 18,19. The latter are presently considered the primary site of the infection 1,18. Effective treatment regimens are available 25 to control the infectious process; nonetheless, most patients (60-80%) develop fibrotic sequelae that may severely hamper respiratory function and limit the patient's well-being 24,25,34.

In spite of the severe repercussions of fibrosis in this and in a number of other infectious and non-infectious disorders, the mechanisms of tissue damage are incompletely understood 11,21. Fibrosis appears to begin simultaneously with both the inflammatory process and the appearance of leukocyte infiltrates; it then progresses and consolidates at the time of granuloma formation 10,11,15,16,24. In the late stages of the inflammatory response, an increase in the production of certain cytokines, especially those capable of promoting accumulation of connective tissue, is noticed 13,14,20. This results in structural and functional changes in the tissues, as observed in other entities causing fibrosis 13. In vitro experiments have demonstrated that tumor necrosis factor alpha (TNF-α), transforming growth factor beta (TGF-ß), gamma interferon (IFN-γ), and the interleukins IL-1 and IL-6 contribute towards the formation of tissue scarring leading to fibrosis 8,14.

Our group has developed an experimental model of paracoccidioidomycosis by the intranasal inoculation of P. brasiliensis conidia, which reproduces several aspects of human disease, including changes in the reticular stroma of the lung's interstitium with formation of fibrotic scars 22,29. In a more recent study with this model, the sustained increase in lung tissue levels of TNF-α and TGF-ß was found to be associated with the production of granulomas and the development of fibrosis 11.

The above studies 11,29, plus those conducted by KERR et al. 15,16, LENZI et al. 17, BURGER et al. 2, SILVA 31, SILVA & FAZIOLI 32, and SILVA et al. 33, have described in detail not only the characteristics of the inflammatory response evoked by P. brasiliensis but also the constitutive elements of fibrosis (collagen fibers I and III, altered extracellular matrix components).
Similar findings have been described in the lungs in autopsy cases 34. Furthermore, experimental studies have shown that the inoculation of yeast cell walls or derivatives of the same gives rise to an intense inflammatory response 5,7,10. In spite of these findings, the role played by the various types of inocula in the immune response is far from clear.

This study explored the pulmonary response of mice inoculated intranasally with either viable P. brasiliensis conidia, propagules that had already been shown to induce fibrosis 11,29, or fragmented yeast cells. The roles played by an active infection versus the continuous stimulus exerted by cell wall fragments in the development of the fibrotic sequelae were investigated.

Animals. Isogenic BALB/c male mice, 4-6 weeks old and weighing 18-20 g, were used. Care was taken to use animals only within these categories, as previous experience (unreported) indicated their importance in the establishment of a progressive illness. The animals came from the CIB's breeding colony and were kept and fed under the conditions previously indicated 29. They were inoculated intranasally with either viable conidia (Group I) or disintegrated yeast cells (Group II), suspended in PBS. A control group (III), represented by animals inoculated only with PBS, was also included. Experiments required a total of 270 animals, inoculated as described below and sacrificed at various time intervals: early, at 24, 48, and 72 h, and late, at 1, 2, 4, 8, 12, and 16 weeks. At each period, 6 mice from each experimental group, as well as 4 non-inoculated control animals, were sacrificed for histopathological studies.

Isolate and Inocula. The P. brasiliensis isolate (ATCC #60855) was passed through animals to restore virulence and then used for production of both conidia and yeast cells 4. In group I, each mouse received an inoculum consisting of 4 × 10^6 viable conidia suspended in 60 µL of PBS. For group II, the inoculum consisted of 6.5 × 10^6 non-viable fragmented yeast cells suspended in the same quantity of PBS. Control animals (Group III) received only PBS passed through glass wool (Pyrex fiber glass slivers, 8 µm, Corning Glass Works, Corning, NY), as was done for separation of the conidia employed in group I mice 28.
The inocula were prepared as described previously 27,28. Briefly, conidia were obtained from mycelial cultures and filtered through glass wool. Ultrasonically fragmented yeast cells were obtained by growing the fungus in tubes with the modified MacVeigh and Morton broth medium 27, kept at 36 ºC with 5% CO2 (Incubator NAPCO series 5400, Chicago, USA) under continuous shaking (Gyratory Shaker, model G-2, New Brunswick Scientific Co, New Brunswick, NJ). When growth ensued, the contents of a tube were transferred to an Erlenmeyer flask containing 75 mL of fresh medium and kept for 7 days under the same conditions as the tubes. Sterility was checked by microscopy and by brain heart infusion (BHI) agar cultures; once confirmed, a suspension containing 1.0 × 10^8 yeast cells per mL was subjected to ultrasonic vibration (Branson Sonifier Cell Disruptor # 185, Branson Sonic Power Co, Smithkline Co, Swedesboro, NJ) for 20 cycles of 15 min each. Precautions were taken to avoid heating of the yeast suspension, with the vial kept on ice. Release of proteases during sonication was controlled by adding a cocktail of inhibitors (pepstatin, leupeptin, phenylmethylsulfonyl fluoride, N-tosyl-L-phenylalanine chloromethyl ketone, Nα-tosyl-L-lysine chloromethyl ketone), prepared at 0.1-0.2 mM concentrations (Sigma Chemical Co., St. Louis, USA). After sonication, viability testing was carried out with the ethidium bromide-fluorescein stain to confirm loss of viability 3; microscopic observations were also done in order to ascertain that at least 90% of the yeast cells had been broken. Samples were then taken for dry weight determination in tared vials kept at 50 ºC until dryness. The number of yeast cells subjected to sonic vibration was found to correspond to 56.17 mg/60 µL of dry powder (the dose arithmetic is illustrated in the sketch at the end of this section). This inoculum was distributed in small vials and kept at 4 ºC until use.

Inoculation and Sacrifice

Before proceeding to the inoculation, each animal was given 50 µL of a mixture of ketamine (100 mg) (Parke-Davis & Co., Ecuador) and xylazine (20 mg) (Bayer S.A., Brasil) intramuscularly in order to achieve deep anesthesia 29. The inoculum (60 µL), either viable conidia, fragmented yeast cells or PBS, was administered intranasally in two steps 8 min apart from each other, in order to facilitate complete inhalation of the inoculum.

At the chosen periods, animals were sacrificed by the intraperitoneal injection of 1.0 mL of 2.5% sodium pentothal.
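For readers who want to verify the inoculum arithmetic, the following minimal Python sketch reproduces it. The constants are taken from the text above; the function name and the per-cell dry-mass figure it derives are illustrative assumptions, not part of the original protocol.

```python
# Hypothetical worked example of the inoculum arithmetic described above.
# Constants come from the text; everything else is illustrative only.

STOCK_CELLS_PER_ML = 1.0e8      # yeast suspension subjected to sonication
DOSE_CELLS = 6.5e6              # fragmented yeast cells per mouse (group II)
DOSE_VOLUME_UL = 60.0           # inoculum volume per mouse
DRY_MASS_PER_DOSE_MG = 56.17    # dry weight reported for one 60-uL dose

def stock_volume_for_dose(dose_cells: float, stock_conc_per_ml: float) -> float:
    """Volume of stock suspension (in uL) containing the desired cell dose."""
    return dose_cells / stock_conc_per_ml * 1000.0  # mL -> uL

vol_ul = stock_volume_for_dose(DOSE_CELLS, STOCK_CELLS_PER_ML)
print(f"Stock volume per dose: {vol_ul:.0f} uL")  # 65 uL
print(f"Dry mass per cell: {DRY_MASS_PER_DOSE_MG / DOSE_CELLS * 1e6:.2f} ng")
```

Running the sketch shows that about 65 µL of the 1.0 × 10^8 cells/mL stock contains one 6.5 × 10^6-cell dose, close to the 60 µL actually instilled per mouse.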
Samples and processing

Upon death, the thoracic cavity of animals in groups I, II and III was opened and the right auricle sectioned; 10 mL of PBS were then injected directly into the heart in order to perfuse the lungs and wash out any remaining blood. The left lung was removed, weighed and homogenized in 3.0 mL of PBS containing protease inhibitors; it was then used for CFU determination. The right lung was processed for histopathologic examination. For CFU, the process was identical to the one described above. The number of CFU per organ was calculated as follows:

Log10 CFU per organ = Log10 [(weight of lung + macerate volume) × CFU/mL]

(a minimal worked example of this calculation is sketched at the end of this section).

Histopathological Studies

Once the right lung was resected, it was insufflated, fixed in 10% neutral formaldehyde in PBS, embedded in paraffin and cut into 5-µm-thick sections. Several staining procedures were employed to analyze the lung tissues, as follows: hematoxylin and eosin, to determine the type and intensity of the inflammatory response; silver methenamine, to detect the mycotic structures (conidia, yeast and fungal fragments); Gomori (silver reticulin), to visualize the presence and organization of reticulin-type fibers; and Masson's trichrome stain, to detect type I collagen fibers. In each case, the whole lung was divided into two sections, from apex to base, and the extent of the area corresponding to the inflammatory reaction was examined. The tissues were examined independently by two pathologists. Lungs of control animals served to establish absence of inflammation (0%); in infected animals, the extent of the inflammatory reaction was estimated according to the percentage of the tissue involved.

In this study we adopted Mitsuhashi's definition of fibrosis 23, namely, a lung disorder associated with the progressive loss of pulmonary volume and of the functionality of the alveolar-capillary junction. Morphologically, fibrosis is characterized by an abnormal increase or an altered disposition of the connective tissue (collagens I, III). Our interpretations were based on comparisons between the infected and the control animals, with the latter taken to represent the normal composition of the lung parenchyma.

The severity of the involvement was classified as previously suggested 29, with a slight modification consisting in studying the whole lung instead of a single region. Severity was graded as follows:

• Minor: Presence of isolated collagen I fibers and of fragmented reticulin fibers, abnormally organized and with a condensed aspect. Both fiber types were localized in the center of and around the inflammatory foci.

• Severe: Abundant, thick collagen I fibers. Additionally, an increased number of reticulin fibers with altered localization and organization would be present. As in the above case, these fibers would appear around and in the interior of the granulomas.

The granulomas were classified as compact or loose, according to MARIANO 21 and MONTENEGRO & FRANCO 24. The former consist of concentric rings of monocytes, lymphocytes and large mononuclear (epithelioid) cells arranged in palisade formation that occupy the center of the lesion; giant cells are also present. Loose granulomas have undefined borders and are characterized by the presence of small groups of dispersed epithelioid cells, as well as by a variable number of giant cells. Granulomas also had other components, namely the inflammatory exudate composed of PMNs, and fungal cells in active multiplication.
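The CFU formula above is compact, so a minimal Python sketch of one plausible reading of it follows. It assumes the homogenate volume is the lung weight (taking 1 g of tissue as roughly 1 mL) plus the 3.0 mL of PBS; the function name and example numbers are hypothetical, not measured values.

```python
import math

def log10_cfu_per_organ(lung_weight_g: float, pbs_volume_ml: float,
                        cfu_per_ml: float) -> float:
    """Log10 CFU per organ, assuming 1 g of lung tissue ~ 1 mL of volume.

    Total homogenate volume = lung weight (as mL) + PBS volume; the plate
    count (CFU/mL) is scaled to the whole organ before taking log10.
    """
    total_volume_ml = lung_weight_g + pbs_volume_ml
    return math.log10(total_volume_ml * cfu_per_ml)

# Hypothetical example: a 0.2-g lung homogenized in 3.0 mL of PBS,
# with a plate count of 5.0e4 CFU/mL -> about 5.20 log10 CFU/organ.
print(f"{log10_cfu_per_organ(0.2, 3.0, 5.0e4):.2f} log10 CFU/organ")
```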
Statistical Analysis

Statistical analyses were performed with GraphPad Prism software (version 2.01, 1994). Statistical significance was determined using Student's t test, with p < 0.05 considered significant.

RESULTS

Control (Group III) animals

Mice inoculated intranasally with PBS had no alterations in their lungs. With the exception of a mild transitory inflammation (24-72 h), infiltrates were absent and no changes were detected in the aspect or number of the collagen or reticulin fibers.

Presence of Fungal Cells in the Lungs

In the lungs of group I mice, quantification of 10 microscopic fields with a 40X objective revealed that conidia, as well as transforming propagules, were present during the early periods (24-72 h) (Fig. 1). Later on, after one week, yeast cells were observed and increased in number with time post-challenge (Fig. 2). CFU corroborated fungal viability and showed increasing numbers, ranging from 4.2 to 6.5 log10 per lung up to week 12 (data not shown). No intact fungal elements were observed in group II mice; however, in the Gomori-stained sections, the presence of fungal dust inside foamy histiocytes (Fig. 3) was observed regularly during the early stages post-challenge.

Extension of the inflammatory reaction in the lungs of Groups I and II mice

In group I animals, during the early stages post-infection (24, 48, 72 h) the inflammatory reaction occupied between 26.6% and 35.5% of the lung's area. From the 1st to the 8th week, there was a decrease in the above figures (8.8%-14.2%). However, the infectious process reactivated later on and, by the 12th week, inflammation had again extended to cover 30.8% of the organ (Table 1).

In group II mice, the extent of the inflammatory reaction was somewhat higher during the early post-inoculation periods (40%-48.3%), but with no significant differences among the groups (Table 1). From the 1st week on, these animals exhibited a progressive decrease in the extension of the inflammatory area, and at the end of the experiment (weeks 12-16) only 3.3% and 3.0% of the pulmonary parenchyma, respectively, were involved.

A comparison between the 2 groups revealed significant differences at 24 hours (p=0.01) and during the first 2 weeks post-challenge (p=0.001 and 0.02), as well as by the 12th and 16th weeks (p=0.001 and p=0.002, respectively).

Type of cells present in the infiltrate

As shown in Table 2, in group I mice PMNs predominated during the first hours: 98.5% at 24 h and 75.8% at 72 h. They decreased progressively in the following weeks; however, they did not disappear completely. Conversely, the mononuclear leukocytes were few (1-24%) during the early periods but increased and reached high, stable levels, above 82%, in the following weeks post-infection (data not shown).

In group II mice, cell distribution was similar to that of group I, with only one difference concerning the presence of PMNs: in group II these cells disappeared after the 4th week post-inoculation, while in group I they remained present throughout the experiments. Significant differences were seen between the groups at 72 hours (p=0.01) and at 2, 4, 8, 12 and 16 weeks post-inoculation, with maximal significance at the 12th week (p=0.0004) (Table 2).

Granuloma formation

Despite the similarities observed in the inflammatory response and the cell composition of the infiltrate, granulomas were formed only in group I animals. These structures became apparent only during the 1st week, and their number increased with time post-infection. During the following weeks, the number of granulomas in the whole lung varied from 8 to 16, with a peak at week 12.
Granulomas were compact when first formed but, as shown in Fig. 4, they became loose with the progression of the infection.

Histopathological findings

In both groups of propagule-inoculated mice, the initial post-challenge periods showed a bronchopneumonic, acute type of response. There were PMN accumulations that fused with each other and constituted extensive, ill-defined masses. This reaction was more marked at the peribronchiolar level.

Group II animals showed a decrease of the acute inflammatory reaction during the 1st week, when it changed and became a lympho-histio-plasmocytic infiltrate. At this period, there were few yeast cells located inside intra-alveolar macrophages. Group II mice also had macrophages characterized by a broad, foamy cytoplasm (Fig. 5); these cells contained ingested yeast fragments. At week 2 post-inoculation, differences became more pronounced, as animals in group I, but not those in group II, revealed granuloma formation. The granulomas had a concentric arrangement and exhibited epithelioid cells, lympho-histio-plasmocytic cells and Langhans giant cells. At this moment, the inflammatory reaction had diminished in group II animals, although the foamy macrophages continued to be present.

During the following weeks (4th-16th), the inflammatory process progressed in the conidia-infected mice (group I), with granuloma formation, especially at the perivascular level, and with persistence of the lympho-plasmocytic infiltrates. At week 12, when the granulomas were most abundant, they had lost their compact appearance and had become loose, fusing with each other. Yeast cells were numerous, in active multiplication, and appeared intermingled with PMNs, plasmocytes and fibroblasts. On the other hand, in animals of group II only isolated, residual inflammatory foci could be found; macrophages exhibiting phagocytosed fungal residues were seen rarely. Eosinophils were not observed in either group.

Collagen type I fibers

In group I mice (Table 3), thin fibers were seen in 4/6 (66.6%) of the animals after the first week post-infection, with a tendency to diminish at later stages. Thick fibers became apparent at 4 weeks in 1/6 (16.6%) of the animals, and their frequency increased thereafter, so that by the 8th and 12th weeks they were observed in 66.6% and 83.3% of the mice, respectively. At the end of the experiment, a third of the animals had severe involvement, represented by thick collagen I fibers in the lung parenchyma, usually surrounding the granulomas. Fig. 6 illustrates the presence of these fibers in a mouse inoculated with viable P. brasiliensis conidia.

The presence of thin collagen I fibers was also noticed at week 1 post-inoculation in 5/6 (83.3%) of the group II animals; however, these fibers diminished with time and disappeared after the 8th week post-infection. No thick collagen I fibers were detected in this group (Table 3).

Statistically significant differences were observed between the groups concerning both the absence of fibrosis (9 vs. 26, p<0.0006) and the presence of thick collagen fibers (12 vs. 0, p≤0.021), the latter heralding severe fibrosis.
Reticulin fibers

In both groups of mice, minor abnormalities in the number and arrangement of reticulin fibers were noticed only after the 1st week post-infection. During the 1-4 week period, thin, disarranged elements were noticed in 1 (17%) to 3 (50%) of the animals; later on, however, they disappeared. The thicker and grossly disarranged reticulin fibers corresponding to severe lung damage were observed only in group I animals (p≤0.05). They became apparent in 1/6 (17%) of the animals at 4 weeks and increased up to the 12th week, when 83.3% of the mice exhibited major alterations. Fig. 7 illustrates the corresponding histopathological aspect.

DISCUSSION

In this study, we observed the sequential histopathological changes that occurred as a result of the intranasal inoculation of either viable P. brasiliensis conidia or disintegrated yeast cells. As expected, progressive paracoccidioidomycosis became established with the former inoculum, as shown by the isolation of the fungus throughout the experimental period. The inflammatory process was similar in both groups, except that it was transient and acute with the fragmented yeast cells. Consequently, the progressive histopathological changes observed in group I mice can be attributed to the infection and confirm previous findings concerning the tissue responses elicited by the fungal process 11,29.

Other authors who have previously induced experimental infections with viable yeast cells have found a pattern of response similar to the one observed with conidia 2,5,6: initially there is a PMN influx, which then changes towards a lympho-histio-plasmocytic infiltrate, ending with granuloma formation. On the other hand, use of purified fungal components, such as lipids and polysaccharides, has been shown to induce only a limited, non-progressive tissue response 10,31,32,33.

In this study, the histopathological comparison between conidia-infected mice and animals inoculated with fragmented yeast cells revealed two important differences. One was the persistence, during the experimental period, of low but constant numbers of PMNs in the former group and their tendency to disappear in the latter. It has been shown that in tissues PMNs tend to surround yeast cells and probably destroy them by liberation of their granules' contents 2. SILVA et al. 33 have suggested that the massive infiltration by PMNs during the early stages of the host-parasite interaction, as well as their presence around the suppurative region of the granulomas during the later phase, indicates that these cells have an important role in defense and that they kill the fungus. This has been demonstrated in tuberculosis, a disease in which PMNs play a significant role in liquefying the granulomas 21.

Whether persistence of PMNs around the inflammatory foci could be correlated with continued tissue damage in chronic forms of paracoccidioidomycosis remains to be shown.

The second important difference between the two groups lay in the formation of granulomas in all conidia-infected mice and the absence of these structures in animals receiving fragmented yeast cells. The latter finding, however, does not agree with the experiences of SILVA 31, SILVA & FAZIOLI 32, SILVA et al. 33, FIGUEIREDO et al. 10 and DEFAVERI et al.
6,7, who observed granulomas in mice and rats inoculated by the intraperitoneal route with a variety of cell wall fragments and cellular products (glucan, lipids, polysaccharides). None of these authors, however, used airborne models or viable conidia, nor did they report on the presence of the most essential element of the granuloma, the epithelioid cell 21. It has been noticed that different routes of inoculation may, at times, give rise to different host responses, and this could be another explanation for the observed differences 6. Our data indicate that fibrosis is the result of an active, progressive pulmonary infection in which the inflammatory response is strong and granuloma formation takes place.

Fungal cell wall components tend to remain in the host's tissues for some time and, consequently, they continue to stimulate the tissue's immune responses, producing the characteristic chronic inflammatory reaction 2,10,11,31,32. Nonetheless, it has been shown that the presence of α-glucan in the yeast cells contributes to the virulence of the fungus, as this component is only slowly degraded by the host 30. The in situ stimulus exerted by fungal components was clearly illustrated in chromoblastomycosis, a disorder in which persistence of fungal cells in affected tissues was the most important pathological factor involved in the progressive inflammatory reaction, as well as in the subsequent fibrosis 9.

The key element in the development of fibrosis appears to be the granuloma. Although this formation is recognized as the most potent defense mechanism against P. brasiliensis 12,24, it also appears to damage the host, because it is in its surroundings that fibrosis becomes organized 2,17,21,24. This dual function could be related to the type of granuloma being formed: when granulomas are compact, as is the case when they first appear, they are capable of restraining fungal multiplication, as shown by the presence of only a few yeast cells in these experiments. On the other hand, when granulomas become loose, their protective function ceases to operate and yeast cells increase in number. At this stage of the host-parasite interaction, granulomas may only serve to perpetuate the tissue's inflammatory responses, with fibrosis representing their final consequence. As indicated by KERR et al. 16, the co-occurrence of recent granulomas with older ones implies that this process does not eliminate the fungus, nor is it capable of limiting fungal multiplication.

As shown, group II animals had no granulomas, nor did they present thick reticulin or type I collagen fibers. On the contrary, most (83%) mice infected with viable conidia exhibited both elements. These findings tend to indicate that alterations in the number and arrangement of collagen I and reticulin fibers, both of which centered around the loose granulomas, constitute essential elements in the transformation of the lung's tissues into a consolidated structure, unable to perform the normally free gaseous exchange 15,16,17,34.

Actually, granulomatous inflammation is the pathological substrate of many infectious agents (tuberculosis, leprosy) as well as of non-infectious irritating agents (talc, beryllium). In tissues, all these agents are capable of evoking this type of reaction, which, in turn, prompts excess accumulation of connective tissue, resulting in structural and functional alterations of the parenchyma 13,21.
Much remains to be done before the genesis of the fibrotic process in paracoccidioidomycosis is understood; however, the possibilities offered by the mouse model reported here may contribute to designing new, more far-reaching experiments.

Table 1. Extent of the inflammatory reaction according to inoculum and time post-challenge (pulmonary involvement, percent ± SD). Each experimental group consisted of 6 animals. NS: non-significant.

Table 2. Inflammatory cells in the lungs of animals (H&E): PMNs according to time post-challenge (percent ± SD). Each experimental group consisted of 6 animals. NS: non-significant.
2017-07-12T09:42:30.598Z
2000-04-01T00:00:00.000
{ "year": 2000, "sha1": "313fbcf9d10136e9581c722716c52639263a038f", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/rimtsp/a/qfBdLXKGVzgkGPYpbJBT8Pp/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "313fbcf9d10136e9581c722716c52639263a038f", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
259773347
pes2o/s2orc
v3-fos-license
The impact of vocabulary selection ability on EFL students' communication skills

ABSTRACT

Through vocabulary selection ability, students are able to express feelings, ideas, or information in various ways without changing the meaning. For example, students may say "fresher" instead of "new graduate" in a job interview context. In addition, in phoneme selection, the sounds of "fresher" and "fresh air" refer to different things. Therefore, vocabulary awareness is the basis of all language use: it provides the raw building blocks for expressing thoughts, ideas, information, and personal relationships. Even in a worst-case situation of learning English, understanding is still possible with little knowledge of grammar. In addition, Agustiawati (2022) also claims that vocabulary is essential in foreign language learning, as the meanings of new words are very often emphasized, whether in books or in classrooms. To put it simply, the more vocabulary students know, the more easily they can improve their language skills. However, communication skills are more complex, because they involve the students' soft skills in managing a well-flowing conversation (Wardana et al., 2022). As a result, vocabulary selection ability will help them understand words from their context, naturally expanding their vocabulary and improving their language skills without needing to spend more time looking words up in a dictionary or asking someone for an explanation. There have been many studies investigating the vocabulary and language skills of EFL students (Soliman, 2014; Alshumaimeri & Alhumud, 2021; Miralpeix & Muñoz, 2018). However, the present study explores whether the tenth-grade students of a private vocational high school in Badung, Bali perform well in vocabulary selection and whether their ability is statistically correlated with communication skills. This study investigates how English communication skills might depend on the students' ability to recognize and use the correct words. A study by He and Godfroid (2019) introduces a step-by-step approach for materials writers, curriculum designers, and teaching professionals to identify word groupings in a potential list of target words, using a combination of objective and subjective data, with the prospect of creating more effective and efficient vocabulary learning materials. In addition, another study, by Yorkston et al. (1989), reveals that standard vocabulary lists may be considered a necessary but not sufficient aspect of vocabulary selection. Therefore, this study addresses the significant impact of English vocabulary selection ability on communication skills. From having a correct selection of vocabulary, students are ...

Communication skill

Communication is a process conducted among humans as they interact with one another, and it is an important aspect of human life. In this regard, human communication connects society in the process of relaying information or news and broadcasting important announcements. According to Dale et al. (2013), human communication is a subtle set of processes through which people interact, control one another and gain understanding. Communication in society is basically needed in face-to-face interaction or direct conversation, which requires both speaking and listening skills. Communication skills are the abilities you use when giving and receiving different kinds of information.
Some examples include communicating new ideas, feelings, or even an update on your project. According to Crichton (2009), students need to actively use the language to give them confidence and to feel its communicative value. University students preparing to start their chosen careers should take the opportunity offered by any activities that develop communication skills in a broader and more complete way, so that these skills can be fully developed. Iksan et al. (2012) claim that in our globalized world, university students need to master communication skills in different cultural contexts. Communication skills involve listening, speaking, observing, and empathizing. It is also helpful to understand the differences in how to communicate through face-to-face interactions, phone conversations, and digital communications like email and social media. Communication skill also means having the ability to convey information and ideas effectively. Moreover, Trosborg (1987) states that people frequently fail to communicate effectively because they do not express themselves clearly enough. Indeed, in every conversation, each party needs to express clearly what they are talking about. Mastering communication skills will greatly help students conduct effective conversations. Communication is effective when the people involved understand each other. That being said, communication skills are necessary for students to master.

Listening skill

Listening can be considered the skill fundamental to speaking, because without understanding the input at the right level, learning cannot begin. Along with speaking skill, listening skill also allows people to communicate effectively (Tilwani et al., 2022). Listening is not a passive skill but an active process of constructing meaning from a stream of sounds. Good listening comprehension makes people able to decipher and interpret the meaning of a certain context while communicating. It has been noted that listening receives little attention in language teaching and learning, because teaching methods emphasize productive skills and listening has been characterized as a passive activity. In the teaching and learning process, however, the view of listening has changed, turning the role of the listener from someone thought to passively receive the spoken message into an active participant in the act. People can merely perceive what another person says by directing their attention toward the speaker, which is called "hearing"; listening, in contrast, is not only about hearing, as people also need to understand the context well. Brownell (2002) states that listening is the process of receiving, constructing meaning from, and responding to spoken and/or non-verbal messages.

Method

The present study describes the condition of a phenomenon with a correlation research design to compare two or more variables in a single group. The population of the study was 80 tenth-grade students at a vocational high school in Badung, Bali, Indonesia; of these, 40 students were selected as the sample. The steps in selecting the sample included (1) preparing four pieces of paper and writing a class name on each, (2) folding them and putting them into a glass, (3) shaking the glass, (4) determining the sample of this research from the drawn papers, and (5) conducting the research. In the present study, the researcher used a note-taking test that focused on students' vocabulary selection ability and listening skills.
In this case, the researcher prepared a voice recording that contained several vocabulary items. The recording, spoken by the researcher, presented 10 vocabulary items: five words belonging to core vocabulary, which comprises the most commonly used words and consists mostly of verbs, pronouns, descriptors, prepositions and a few nouns; and five words belonging to fringe vocabulary, which refers to topic-, environment-, or person-specific vocabulary that is unlikely to be used across environments. The students needed to listen to the words carefully and then write down, in note form, the vocabulary they heard. To collect the vocabulary selection ability test results, the researcher provided a platform in the form of a WhatsApp group. The researcher gave the students 10 minutes to do their work. Then, the students took a photo of their work and sent it via the WhatsApp group. In addition, the researcher used oral proficiency scoring categories adapted from Brown (2014), consisting of three aspects: fluency, comprehension, and grammar. Brown's full oral proficiency scoring scheme consists of six aspects: vocabulary, pronunciation, fluency, comprehension, grammar, and task; the scoring rubric was adapted in the present study to only three aspects, based on the students' level and condition in the teaching and learning process. The data was collected by administering several topics as themes for the students' role play, which served as the research instrument. The test was constructed by giving a topic that students needed to discuss with their partner to make a dialogue, inserting some vocabulary that had been determined beforehand. The test was constructed with attention to the crucial criteria of validity and reliability. The 40 students were divided into pairs (2 students each) and given 20 minutes to construct their dialogues with their partners. They were required to construct a simple conversation using the determined vocabulary as the theme of their conversation. Finally, students performed their conversations in front of the class and were then scored with a scoring rubric. Data analysis, in turn, is the process of modeling the data to obtain specific information that can be applied in formulating conclusions, predictions, and scientific and social knowledge. In this study, the researcher used authentic listening tasks in the form of notes and role-play tests to measure students' English vocabulary selection ability and communication skills. The normality test aimed to determine whether the sample data came from a normally distributed population. In the present study, the Kolmogorov-Smirnov (KS-Z) normality test with Lilliefors significance correction was run in SPSS 25.0. Table 1 shows the result of the normality test: the significance value for English vocabulary proficiency was 0.10 and for communication ability was 0.076. This means that the data were normal, as both values were greater than 0.05 (0.10 > 0.05 and 0.076 > 0.05). The homogeneity test is performed to determine whether two or more sample data sets derive from the same population variance, that is, whether the data have homogeneous variance or not.
In this homogeneity test, the minimum standard of 0.05 is the same as in the normality test. The result of the homogeneity test is presented in Table 2. From the calculation in Table 2, the significance value for the students' English vocabulary proficiency and communication skills is 0.11 > 0.05, which means that the variances were homogeneous and not different.

The hypothesis tests were calculated using SPSS 25.0 for Windows and consisted of a Pearson product-moment correlation and a t-test; both analyses were essential. The hypotheses can be formulated as follows. 1. Alternative hypothesis (Ha): There is a positive and significant correlation between proficiency in English vocabulary and the communication skills of the tenth-grade vocational high school students in Badung, Bali. 2. Null hypothesis (H0): There is no significant correlation between the mastery of English vocabulary and the communication skills of the tenth-grade vocational high school students in Badung, Bali.

The first analysis performed in the present study applied Pearson's product moment, which measures a linear relationship. The method was used to find the correlation between the study variables, students' proficiency in English vocabulary and their communication skills. The significance of the correlation coefficient was determined by comparing its significance (p) value with 0.05: data are classified as significantly correlated if the significance value is less than 0.05, and as not significantly correlated if it is greater than 0.05. In summary, Pearson's product-moment correlation coefficient is a simple way to assess the correlation between two variables. The product-moment correlation index can be seen in Table 3.

The t-test is performed once the data are normally distributed and homogeneous, and it was used to test the hypothesis. The t-test assumes that both groups are normally distributed and have relatively equal variances; the t-statistic is distributed on a curve based on the number of degrees of freedom. The alternative hypothesis is accepted if the computed t value exceeds the critical value at the significance level (0.05); conversely, if it does not, the alternative hypothesis is rejected. Additionally, the investigator used SPSS 25.0 to compare the means of the two variables to determine whether there was statistical evidence of significant differences in the associated population.

Findings and discussion

Students' proficiency in English vocabulary was the independent variable, referred to as variable X, and students' ability to communicate was the dependent variable, referred to as variable Y. To determine the correlation between these two variables, the researcher conducted a vocabulary test in which an audio recording contained 10 vocabulary words that students had to hear and write down. In addition, the researcher conducted a communication skills test, which was a role play that required students to incorporate the vocabulary they had previously heard into their conversation. A descriptive analysis of students' English vocabulary selection ability and communication skills is presented in Table 4.
Table 4 presents the results of the English vocabulary proficiency test and the communication ability test; the mean for communication ability is 70.80. The standard deviation is a numeric index that expresses the average variability of the scores, in other words, their distance from the mean. From the table, the standard deviation for students' proficiency in English vocabulary is 13.679, while that for students' ability to communicate is 12.237. The smallest value of each variable is the minimum score: 50 for students' proficiency in English vocabulary and 47 for students' communication ability. On the other hand, the largest value is the maximum score: 90 for proficiency in English vocabulary and 93 for communication ability.

The scores for word selection ability and communication skills were compared by Pearson's product-moment test, presented in Table 5. As shown in Table 5, the correlation coefficient (r) was 0.589, meaning that there was a positive correlation between word selection ability and the ability to communicate. Furthermore, based on Table 3, the correlation between them was moderate, as the r value fell between 0.40 and 0.60. The critical r (product-moment) value for 40 samples at the 5% level is 0.312. The r-count was greater than the r-table value (0.589 > 0.312), which means that the correlation between word selection ability and the ability to communicate is significant.

Further, the t-test was the final analysis of the hypothesis test, as the last step in the correlative research design. In calculating the t-test, the researcher also used SPSS 25.0, together with the r product-moment correlation, when testing the study hypothesis. The result of the calculation is presented in Table 6. Table 6 reveals the result of the t-test, where the t-count was -2.294. Furthermore, the df was 39, and the t-table value for df 39 at α = 0.05 (5%) was 2.023. In absolute value, the t-count was bigger than the t-table value (|-2.294| > 2.023). This means that the correlation between students' English vocabulary selection ability and students' communication skills was significant and the hypothesis was accepted. To sum up, the correlation coefficient (r-count) of 0.589 could be used to represent the whole population from which the 40 samples were drawn.

The aim of this study was to examine whether or not there was a significant impact of students' vocabulary selection ability, observed through listening, on their communication skills. To find this impact, the researcher constructed several steps to collect the data. As a first step, the researcher administered the English vocabulary selection ability test and then proceeded to the communication ability test. This test aimed to determine the correlation between students' proficiency in English vocabulary and students' ability to communicate. In addition, based on the data analysis performed with the SPSS 25.0 program, calculations were made at the 0.05 level. The normality score for proficiency in English vocabulary was 0.10 and that for communication skills was 0.076; both data sets were normal, as the values were greater than 0.05. (A minimal sketch of this analysis pipeline is given below.)
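To make the sequence of tests concrete, here is a minimal Python sketch of an analogous pipeline using scipy and statsmodels in place of SPSS. The score arrays are simulated placeholders seeded to roughly match the reported means and standard deviations, not the study's data, and the library routines are standard implementations rather than the exact SPSS procedures.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

rng = np.random.default_rng(0)
# Placeholder scores standing in for the two variables (n = 40 each).
vocabulary = rng.normal(loc=70.0, scale=13.7, size=40)      # variable X
communication = rng.normal(loc=70.8, scale=12.2, size=40)   # variable Y

# 1) Normality: Kolmogorov-Smirnov test with Lilliefors correction.
for name, data in [("vocabulary", vocabulary), ("communication", communication)]:
    ksstat, p = lilliefors(data, dist="norm")
    print(f"{name}: Lilliefors p = {p:.3f} (normal if p > 0.05)")

# 2) Homogeneity of variances: Levene's test.
stat, p = stats.levene(vocabulary, communication)
print(f"Levene p = {p:.3f} (homogeneous if p > 0.05)")

# 3) Pearson product-moment correlation.
r, p = stats.pearsonr(vocabulary, communication)
print(f"Pearson r = {r:.3f}, p = {p:.3f} (moderate if 0.40 < |r| < 0.60)")

# 4) Paired-sample t-test, compared against the critical value for df = 39.
t, p = stats.ttest_rel(vocabulary, communication)
t_critical = stats.t.ppf(1 - 0.025, df=39)  # two-tailed, alpha = 0.05
print(f"t = {t:.3f}, critical = {t_critical:.3f}, reject H0: {abs(t) > t_critical}")
```

The decision rule in step 4 mirrors the paper's critical-value comparison: the null hypothesis is rejected when the absolute t statistic exceeds the tabled value for the given degrees of freedom.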
The next step was the calculation of the homogeneity test. The significance value for students' English vocabulary and communication skills was 0.11 > 0.05, showing that the variances were homogeneous and not different, or, in short, had the same variance. Based on the results of these analyses, the researcher also performed hypothesis testing using Pearson's product moment. From the data on students' knowledge of English vocabulary and communication skills, it was found that r = 0.589: there was a positive and moderate association between students' proficiency in English vocabulary and their ability to communicate. In summary, there was a positive correlation between variable X (knowledge of English vocabulary) and variable Y (communicative ability). After performing Pearson's product-moment test, the researcher used the paired-sample t-test to test the hypothesis. The calculations show that the alternative hypothesis (Ha) was accepted, since the Pearson test yielded a correlation of r = 0.589, indicating a moderate correlation. The result of the t-test, |-2.294| > 2.023, indicates that the hypothesis was clearly supported. In other words, the correlation between students' proficiency in English vocabulary and students' ability to communicate was moderate, and the hypothesis was accepted. The results support the study conducted by Franscy (2016), which aimed to determine the English proficiency of students and whether there is a connection between the mastery of vocabulary and pronunciation. Based on the discussion of that study, there was a positive and significant correlation between vocabulary selection ability and pronunciation ability, on the one hand, and the ability to speak English, on the other. However, that research did not give clear information about its population and sample, which are confusing to identify; hence the unequivocal specification of the students' grade for the population and sample in the present study. This study is likewise consistent with the findings of research conducted by Agustiawati (2022): there is a correlation between vocabulary selection ability and speaking ability when describing people. In addition, the study data analysis showed that there is a significant ...
2023-07-12T16:59:18.554Z
2023-05-31T00:00:00.000
{ "year": 2023, "sha1": "1cd85e44fa849d87c5f31358a089dfae1cc56f4d", "oa_license": "CCBYSA", "oa_url": "https://e-journal.uingusdur.ac.id/index.php/ERUDITA/article/download/6980/2867", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "51c69f78f835b731ba66fc425c735bc7edcca833", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [] }
221824195
pes2o/s2orc
v3-fos-license
The conversion of formate into purines stimulates mTORC1 leading to CAD-dependent activation of pyrimidine synthesis

Background: Mitochondrial serine catabolism to formate induces a metabolic switch to a hypermetabolic state with high rates of glycolysis, purine synthesis and pyrimidine synthesis. While formate is a purine precursor, it is not clear how formate induces pyrimidine synthesis.

Methods: Here we combine phospho-proteome and metabolic profiling to determine how formate induces pyrimidine synthesis.

Results: We discover that formate induces phosphorylation of carbamoyl phosphate synthetase (CAD), which is known to increase CAD enzymatic activity. Mechanistically, formate induces mechanistic target of rapamycin complex 1 (mTORC1) activity, as quantified by phosphorylation of its targets S6, 4E-BP1, S6K1 and CAD. Treatment with the allosteric mTORC1 inhibitor rapamycin abrogates CAD phosphorylation and pyrimidine synthesis induced by formate. Furthermore, we show that the formate-dependent induction of mTOR signalling and CAD phosphorylation is dependent on an increase in purine synthesis.

Conclusions: We conclude that formate activates mTORC1 and induces pyrimidine synthesis via the mTORC1-dependent phosphorylation of CAD.

Background

Cells activate metabolic pathways during differentiation and de-differentiation. The activation of one or more biochemical reactions can, in turn, trigger a cascade of metabolic changes leading to a distinct metabolic state. We have recently shown that the mitochondrial-dependent oxidation of the third carbon of serine to formate triggers a metabolic switch [1]. This switch is characterized by an increase in metabolic fluxes associated with glycolysis, purine synthesis and pyrimidine synthesis. Nevertheless, how formate induces pyrimidine synthesis is as yet unclear.

The formate-dependent induction of pyrimidine synthesis is characterized by a dramatic increase of dihydroorotate levels [1]. Dihydroorotate is the product of carbamoyl phosphate synthetase (CAD), the first step of pyrimidine synthesis. We hypothesized that the formate-dependent increase in adenosine triphosphate (ATP) levels stimulates the enzymatic activity of cytosolic CAD. Mammalian CAD has a half-saturation constant for ATP in the mM range [2], which is the range of intracellular ATP levels. However, a theoretical analysis of the full range of behavior indicates that increased ATP is not sufficient to explain the dramatic increase in dihydroorotate levels. A necessary condition is that the maximum CAD activity exceeds the maximum enzymatic activity of a downstream reaction in the pyrimidine synthesis pathway.

CAD maximum activity is regulated by a mechanistic target of rapamycin complex 1 (mTORC1)-dependent phosphorylation [3,4]. In this way, mTORC1 increases the pyrimidine pool required for cell growth and proliferation. Mechanistically, the mTORC1 kinase complex phosphorylates and activates the ribosomal protein S6 kinase 1 (S6K1), which in turn phosphorylates CAD at Ser1859. Finally, CAD-Ser1859 phosphorylation increases the maximum CAD activity and consequently pyrimidine synthesis. Here, we demonstrate that formate induces pyrimidine synthesis through the activation of mTORC1 signalling.

Chemicals

Cell culture medium and supplements were obtained from Life Technologies, and all chemicals were purchased from Sigma-Aldrich unless specified otherwise.

Cell lines and cultures

HAP1-WT cells were cultured in IMDM medium supplemented with 10% FBS.
HAP1-ΔSHMT2 cells were cultured in the same medium supplemented with a mix of 16 μM thymidine and 100 μM hypoxanthine (HT). MDA-MB-231-ΔSHMT2 cells [1] were cultured in DMEM medium supplemented with 10% FBS, glutamine and HT. HT was omitted when seeding cells for experiments. Cell counts and volumes were assessed using the Casy Technology (Innovatis).

Phosphoproteomics sample preparation

HAP1-ΔSHMT2 cells were seeded in 15 cm dishes and stimulated the following day with 1 mM formate, in the absence or presence of 50 nM rapamycin as indicated. Cells were washed twice in PBS, scraped and lysed in thiourea/urea buffer (6 M urea, 2 M thiourea, 50 mM TRIS pH 8.5, 75 mM NaCl) containing complete phosphatase and protease inhibitors. Lysates were sonicated and centrifuged at 16,000 g for 5 min at room temperature. Sample supernatants were collected, and the protein concentration was measured using the Bradford method. Samples were then snap-frozen in liquid nitrogen and stored at -80°C until further analysis.

Phosphoproteome analysis

Cell lysates were reduced with 10 mM DTT and subsequently alkylated with 55 mM iodoacetamide. Alkylated proteins were then digested, first using endoproteinase Lys-C (Alpha Laboratories) for 1 h, followed by trypsin (Promega) overnight. Peptides were then desalted using C18 SepPak (Waters) filtration and resuspended in a buffer containing 80% acetonitrile (ACN) and 6% trifluoroacetic acid (TFA). Enrichment of phosphorylated peptides was performed using TiO2 beads (GL Sciences), and peptides were eluted from the beads with a buffer of 15% NH4OH and 40% ACN. Peptides were separated by nanoscale C18 reverse-phase liquid chromatography (20 cm fused silica emitter (New Objective) packed in-house with ReproSil-Pur C18-AQ, 1.9 μm resin (Dr. Maisch)) performed on an EASY-nLC 1200 (Thermo Scientific) coupled online to an Orbitrap Fusion Lumos mass spectrometer (Thermo Scientific). MS data were acquired using the XCalibur software (Thermo Scientific).

The MS raw files were analysed with the MaxQuant computational platform [5] and searched against the human UniProt database using the Andromeda search engine [6]. Trypsin with full enzyme specificity and a maximum of two missed cleavages was allowed. Only peptides longer than six amino acids were analysed. Oxidation (Met) and N-acetylation were set as variable modifications, as well as phospho(STY). Carbamidomethylation (Cys) was set as a fixed modification. A 1% false discovery rate (FDR) was used for peptide, protein and phosphopeptide identification. The MaxQuant output Phospho(STY)site.txt file was processed with the Perseus software [7]. Reverse and potential-contaminant-flagged proteins were removed, and only class I sites (sites accurately localized, with localization probability > 0.75 and score difference > 5) were used for the analysis. Intensity values were used for phosphorylation site quantification.

Metabolite extraction

Metabolite extraction and analysis were performed as previously described [1]. Briefly, HAP1-ΔSHMT2 cells were seeded into 12-well plates at 100,000 cells/well and treated the following day as indicated. For 15N tracing, [15N-amide]-glutamine was added to the media at 4 mM. To harvest, wells were washed with ice-cold PBS, scraped and extracted with freezer-cold extraction solvent (acetonitrile/MeOH/H2O (30/50/20)). The extracts were transferred into tubes and centrifuged for 5 min at 18,000 g.
Supernatants were transferred to LC-MS glass vials and kept at -80°C until LC-MS analysis using pHILIC chromatography and a Q-Exactive mass spectrometer (Thermo Fisher Scientific). Compounds were identified using Tracefinder 4.1 (Thermo Scientific), comparing the exact mass and the retention time against an in-house compound database created with authentic standards.

Statistics

Technical replicates were used to calculate statistics in the phosphoproteomics analysis. Biological replicates were used to calculate statistics in the immunoblot and metabolomics analyses. Statistical significances were calculated using a two-tailed, unequal-variance t test unless otherwise indicated.

Theoretical hypothesis

To investigate how formate increases pyrimidine synthesis, we developed a hypothesis based on formate increasing ATP levels [1]. We analysed a theoretical model accounting for the synthesis and turnover of the pyrimidine precursor dihydroorotate (Fig. 1a). The synthesis of dihydroorotate has been simplified, for the purpose of illustration, to a reaction following a Michaelis-Menten dependence of the reaction rate on the concentration of ATP, together with a constant rate of turnover (a numerical sketch of this bottleneck model is given below). When the maximum rate of dihydroorotate synthesis is less than that of turnover, the concentration of dihydroorotate changes very little with increasing ATP concentration (Fig. 1b). In contrast, when the maximum rate of dihydroorotate synthesis is higher than that of turnover, the concentration of dihydroorotate exhibits a dramatic increase with increasing ATP concentration. In order to observe dramatic changes in the concentration of dihydroorotate, it is necessary that the maximum activity of synthesis exceeds that of consumption. Therefore, the increase of dihydroorotate levels upon supplementation of formate can be explained by two hypotheses: (i) either the synthesis of dihydroorotate is higher than its turnover before the addition of formate, or (ii) the addition of formate induces an increase in dihydroorotate synthesis relative to turnover.

Fig. 1 Theoretical model. a Schematic representation of the ATP-dependent synthesis of dihydroorotate and its turnover. b Expected dependency of the dihydroorotate concentration as a function of the ATP concentration for two different scenarios.

Phosphoproteomics reveals CAD phosphorylation

Protein phosphorylation can change the activity of metabolic enzymes, providing a biochemical mechanism underpinning the second hypothesis (ii), that the addition of formate induces an increase in dihydroorotate synthesis relative to turnover. We previously reported that formate induces changes in phosphorylation levels of AMPK and its downstream targets [1]. Here we used an unbiased approach to identify additional phosphorylation events induced by formate. The first step in the mitochondrial serine catabolism to formate is catalysed by the mitochondrial serine hydroxymethyltransferase (SHMT2). HAP1 cells genetically engineered to inactivate SHMT2 (HAP1-ΔSHMT2) exhibit a reduction in formate production relative to the parental cell line (HAP1-WT) [1]. Consequently, HAP1-ΔSHMT2 cells have lower levels of purines and pyrimidines than HAP1-WT cells, a deficiency that can be rescued by supplementation of 1 mM formate in the culture medium [1]. Therefore, HAP1-ΔSHMT2 cells provide a model to investigate phosphorylation events induced by formate supplementation.
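To make the bottleneck argument concrete, the short Python sketch below computes the steady-state dihydroorotate level for the two scenarios of Fig. 1b. It assumes, purely for illustration, Michaelis-Menten synthesis as a function of ATP and a saturable consuming reaction; all parameter values and function names are arbitrary and not fitted to any data.

```python
def synthesis_rate(atp: float, v_syn_max: float, km_atp: float) -> float:
    """Michaelis-Menten rate of dihydroorotate (DHO) synthesis as a function of ATP."""
    return v_syn_max * atp / (km_atp + atp)

def steady_state_dho(atp: float, v_syn_max: float, km_atp: float,
                     v_turn_max: float, km_dho: float) -> float:
    """Steady-state DHO solving v_syn = v_turn_max * D / (km_dho + D).

    Returns float('inf') when synthesis reaches the maximal turnover
    capacity, i.e. the downstream reaction becomes a bottleneck and
    DHO accumulates without bound.
    """
    v = synthesis_rate(atp, v_syn_max, km_atp)
    if v >= v_turn_max:
        return float("inf")
    return km_dho * v / (v_turn_max - v)

# Arbitrary illustrative parameters (not fitted to any measurement).
KM_ATP, KM_DHO, V_TURN = 1.0, 1.0, 1.0
for label, v_syn in [("synthesis max < turnover max", 0.8),
                     ("synthesis max > turnover max", 1.5)]:
    levels = [steady_state_dho(a, v_syn, KM_ATP, V_TURN, KM_DHO)
              for a in (0.5, 1.0, 2.0, 5.0)]
    print(label, ["%.2f" % x for x in levels])
```

With maximal synthesis below maximal turnover, the steady-state pool rises only modestly as ATP increases; once maximal synthesis exceeds maximal turnover, the balance can no longer be met and the pool grows without bound, mirroring the dramatic increase described in the text.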
To investigate the impact of formate supplementation on protein phosphorylation in an unbiased manner, we performed a phospho-proteome analysis of HAP1-ΔSHMT2 cells and HAP1-ΔSHMT2 cells supplemented with formate (Additional file 1). We identified phosphorylation of CAD at Ser1859 as one of the phosphorylation sites that was significantly increased in HAP1-ΔSHMT2 cells supplemented with formate relative to control cells (Fig. 2a). Phosphorylation of CAD at Ser1859 is known to be regulated by the mTORC1/S6K1 pathway [3,4]. To investigate whether the formate-induced increase in CAD phosphorylation was mediated by mTORC1 signalling, we analysed the phospho-proteome of HAP1-ΔSHMT2 cells treated with formate in the presence of the allosteric mTORC1 inhibitor rapamycin (Additional file 2). Strikingly, more than half of the formate-induced changes in phosphorylation levels in HAP1-ΔSHMT2 cells were abrogated upon rapamycin treatment (Fig. 2c), including the phosphorylation of CAD at Ser1859 (Fig. 2b). These results indicate that the mTORC1 pathway is a major facilitator of the intracellular signalling induced by formate.

Fig. 2 Phospho-proteomic analysis. a Volcano plot reporting the log-ratio of phosphopeptide quantifications in HAP1-ΔSHMT2 cells supplemented with formate (formate) relative to HAP1-ΔSHMT2 cells (Ctrl), together with the associated statistical significance. b Volcano plot reporting the log-ratio of phosphopeptide quantifications in HAP1-ΔSHMT2 cells supplemented with formate and treated with rapamycin (Rapa+For) relative to HAP1-ΔSHMT2 cells treated with rapamycin (Rapa), together with the associated statistical significance. Each symbol represents a phosphorylation site. The statistical analysis was conducted on 3 technical replicates from 1 experiment, and the significance was calculated using an unequal-variance two-tailed t test. c Venn diagram highlighting the overlap and non-overlap of significant changes in the absence of and under treatment with rapamycin.

CAD phosphorylation is mediated by mTORC1

To validate the phospho-proteomics profiling, we performed immunoblotting of CAD and of CAD phosphorylated at Ser1859 in HAP1-ΔSHMT2 cells. The immunoblots corroborated the significant induction of CAD phosphorylation at Ser1859 by formate supplementation (Fig. 3a, b). We also immunoblotted for other known targets of mTORC1. Formate supplementation induced ribosomal protein S6 kinase beta-1 (S6K1) phosphorylation at T389, ribosomal protein S6 phosphorylation at Ser235/236 and the mobility band shift of the eukaryotic translation initiation factor 4E-binding protein (eIF4EBP1) (Fig. 3a), canonical events associated with the activation of mTORC1 signalling. All of these events were reverted by treatment with the mTORC1 inhibitor rapamycin (Fig. 3a). Similar results were also observed in the breast cancer cell line MDA-MB-231 with genetic inactivation of SHMT2 (Fig. 3c). These data indicate that formate induces CAD phosphorylation through the activation of mTORC1.

Formate induction of mTORC1 is dependent on purines

It has been previously shown that formate availability modulates purine levels [1] and that purine levels modulate mTORC1 signalling [8,9]. From this evidence, we hypothesized that formate induces mTORC1 signalling and CAD phosphorylation via purine synthesis. In agreement with this hypothesis, the purine salvage metabolite hypoxanthine was able to induce an increase in mTORC1 signalling and CAD phosphorylation in HAP1-ΔSHMT2 cells (Fig. 3d, e).
Similarly, inhibition of endogenous formate production by treating HAP1 WT cells with the SHMT1/2 inhibitor SHIN1 [10] suppressed mTORC1 signalling, which could be reverted by both formate and hypoxanthine (Fig. 3f). In contrast, treatment with the purine synthesis inhibitor L-alanosine [11] abrogated the formate-dependent induction of mTORC1 signalling and CAD phosphorylation in HAP1-ΔSHMT2 cells (Fig. 3d, e). Furthermore, L-alanosine caused a reduction of mTORC1 signalling and CAD phosphorylation in HAP1 WT cells, which have endogenous formate production (Fig. 3f).

Metabolic profiling up to 6 h

To investigate the impact of these phosphorylation events upon pyrimidine synthesis, we performed [15N-amide]-glutamine tracing experiments. The incorporation of the 15N into intracellular metabolites was quantified using mass spectrometry. To test the dependencies on formate and mTORC1, we profiled HAP1-ΔSHMT2 cells untreated, treated with rapamycin, supplemented with formate, and supplemented with both formate and rapamycin. The levels of intracellular [15N-amide]-glutamine were not significantly different across these conditions (Fig. 4b).

First, we focused our attention on pyrimidine synthesis (Fig. 4a). In untreated cells, there was an evident increase in the amount of 15N-dihydroorotate from 1 to 6 h (Fig. 4c, Ctrl). However, this increase was significantly reduced by treatment with rapamycin. In cells treated with formate, there was a similar rate of incorporation of 15N into dihydroorotate as in untreated cells, which was also significantly reduced upon treatment with rapamycin (Fig. 4c). A similar pattern was also observed when comparing the levels of the downstream pyrimidine precursor orotate and the pyrimidines UMP and CTP in untreated cells and cells treated with rapamycin (Fig. 4d-f). Based on the bottleneck model in Fig. 1, these data indicate that the maximum activity of dihydroorotate synthesis does not exceed the maximum activity of dihydroorotate turnover within the first 6 h. In contrast, formate significantly increases the levels of 15N-orotate at 6 h, indicating that the maximum activity of orotate synthesis exceeds the maximum activity of orotate turnover within the first 6 h.

Next, we focused on the purine synthesis metabolites 5-aminoimidazole-4-carboxamide ribotide (AICAR), inosine monophosphate (IMP), adenosine monophosphate (AMP) and guanosine monophosphate (GMP) (Fig. 5a). In untreated cells, there was no significant change in the amount of intracellular 15N-AICAR, indicating that the rate of AICAR synthesis is rapid and does not change during this time window (Fig. 5b). In contrast, rapamycin caused a significant depletion of 15N-AICAR relative to untreated cells (Fig. 5b). Formate supplementation reduced 15N-AICAR to undetectable levels independently of rapamycin treatment (Fig. 5b). For IMP, we observed a similar picture (Fig. 5c), except for a significant time-dependent increase of 15N-IMP in untreated cells. Both rapamycin treatment and formate supplementation caused a significant decrease of 15N-IMP, as observed for 15N-AICAR. Finally, the levels of 15N-AMP and 15N-GMP were significantly increased by formate independently of rapamycin treatment (Fig. 5d, e, Ctrl vs formate). This result, together with the depletion of 15N-AICAR and 15N-IMP relative to untreated cells, suggests an increase in the turnover rate of purine precursors towards the synthesis of purines (a sketch of how such tracer data are summarized is given below).
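As an illustration of how such tracer measurements are typically summarized, the sketch below normalizes raw peak areas to the taurine peak of the same sample and compares two conditions with a two-tailed, unequal-variance (Welch) t test, mirroring the statistics used in this study. All peak areas are invented placeholders, not measured values.

```python
import numpy as np
from scipy import stats

def relative_abundance(metabolite_areas, taurine_areas):
    """Peak areas normalized to the taurine peak of the same sample."""
    return np.asarray(metabolite_areas) / np.asarray(taurine_areas)

# Invented peak areas for 15N-dihydroorotate in 3 replicates per condition.
ctrl = relative_abundance([1.2e6, 1.5e6, 1.1e6], [3.0e7, 3.2e7, 2.9e7])
rapa = relative_abundance([4.0e5, 3.5e5, 4.4e5], [3.1e7, 3.0e7, 3.3e7])

t, p = stats.ttest_ind(ctrl, rapa, equal_var=False)  # Welch's t test
print(f"mean ctrl = {ctrl.mean():.4f}, mean rapamycin = {rapa.mean():.4f}")
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```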
Metabolic profiling at 24 h

To investigate the effects of rapamycin and formate on HAP1-ΔSHMT2 cells after longer treatment, we analysed intracellular metabolite levels after 24 h of culture using mass spectrometry. As previously shown [1], most pyrimidines, purines and their precursors increased with increasing concentrations of formate (Fig. 6a-i). In the case of the pyrimidines UMP and CTP, the formate-dependent increase was not as pronounced, but there was no evidence of a decreased level at 24 h. In contrast, formate had no effect on the levels of most pyrimidines, purines and their precursors when cells were treated with rapamycin. This observation agrees with the rapamycin repression of 15N incorporation into dihydroorotate and orotate in the first 6 h, and with the repression of CAD Ser1859 phosphorylation by rapamycin (Fig. 6a-i, Fig. 2 and Fig. 3).

There is one notable exception. Formate induced a decrease rather than an increase of AICAR levels, particularly at the highest formate concentrations (Fig. 6e). The same behaviour was previously observed for HAP1 cells with inactivation of both the cytosolic and mitochondrial pathways of serine one-carbon metabolism [1]. Furthermore, these observations were not changed by rapamycin treatment (Fig. 6e).

Based on the bottleneck model of Fig. 1, the increase in both dihydroorotate and orotate at 24 h (Fig. 6a, b) indicates that at this time point the maximum activity of both dihydroorotate and orotate synthesis exceeds the maximum activity of their turnover. We noted that this was true for orotate but not for dihydroorotate at 6 h (Fig. 4c, d). This difference suggests that there may be an early increase of CAD activity that is further increased later on, finally leading to the accumulation of all pyrimidine precursors. One possibility for the later increase is the allosteric induction of CAD activity by ATP [2]. There is no 15N incorporation into ATP up to 6 h (Fig. 5f), and ATP levels are not changed by formate during the first 6 h (Fig. 5g). In contrast, ATP levels are significantly increased at 24 h (Fig. 6i). Further work is required to determine whether the allosteric regulation of CAD by ATP is indeed playing a role.

Fig. 5 15N purine precursors and purines in HAP1-ΔSHMT2 cells at different times and under different conditions. g Total ATP levels (14N+15N). Quantifications are done relative to the taurine peak area. Bars represent the mean and error bars the standard deviation from 3 independent experiments, each represented by the symbols. The p values shown were obtained using a two-tailed t test with unequal variance.

Discussion

Our work shows that the formate induction of pyrimidine synthesis is mediated by the mTORC1/S6K1-dependent phosphorylation of CAD at Ser1859. We also provide evidence that these phosphorylation events are just the tip of the iceberg. More than half of the protein phosphorylation changes induced by formate supplementation are suppressed by treatment with rapamycin. Formate induction of mTORC1 signalling is consistent with the formate-dependent increase of purine synthesis and the previously reported mTORC1 activation by purine nucleotides [8,9].
To close the loop, it has also been reported that mTORC1 induces mitochondrial serine catabolism to formate and purine synthesis through activation of the transcription factor ATF4 [12]. Mechanistically, mTORC1 activates ATF4, which then induces the expression of mitochondrial methylenetetrahydrofolate dehydrogenase (MTHFD2), the second step in the mitochondrial serine catabolism to formate. Therefore, our data unveil a positive feedback loop coupling formate metabolism and mTORC1 signalling.

The extrapolation of these observations to mammalian physiology requires further work. There are mammalian tissues with active purine synthesis where the formate-dependent regulation of mTORC1 signalling may play a role. The inactivation of mitochondrial formate production genes results in neurodevelopmental defects and embryonic lethality, which can be rescued by sodium formate supplementation [13,14]. mTORC1 activators with a specific activity in nervous system cells have recently been developed [15]. It will be interesting to see whether mTORC1 activators could rescue the embryonic lethality caused by genetic inactivation of mitochondrial formate production.

Tumour metabolism is also characterized by an induction of mitochondrial formate production genes and purine synthesis [1]. In turn, genetic inactivation of mitochondrial formate production genes reduces tumour growth in subcutaneous xenograft and leukaemia models [16,17]. However, it is not clear which specific downstream effect or combination of effects (purine synthesis, mTORC1 activation, pyrimidine synthesis) is responsible for the requirement of mitochondrial formate metabolism for enhanced cancerous growth.

Finally, the relationship between mTORC1 signalling and formate availability requires further investigation from the nutritional point of view. Currently, we have a very poor understanding of the nutritional demand for formate, or for one-carbon units in general [18,19]. Understanding the relationship between the availability of formate and mTORC1 signalling in normal physiology could have important implications for the management of optimal growth in humans and other mammals.

Fig. 6 Levels of pyrimidines and purines at 24 h. a-i Peak areas of the reported metabolites in HAP1-ΔSHMT2 cells supplemented with different concentrations of formate, without or with rapamycin treatment. Quantifications are done relative to the taurine peak area. Bars represent the mean and error bars the standard deviation from 3 independent experiments, each represented by the symbols. The p values shown were obtained using a two-tailed t test with unequal variance.

Conclusion

Formate activates mTORC1 and induces pyrimidine synthesis through mTORC1-dependent phosphorylation of CAD at Ser1859.

Additional file 2: Phosphoproteomics quantifications of HAP1-ΔSHMT2 cells supplemented with formate and controls, with rapamycin treatment.
OCCURRENCE, CRYSTALLOGRAPHY AND CHEMISTRY OF THE FLUOCERITE-BASTNAESITE-CERIANITE INTERGROWTH FROM THE FJÄLSKÄR GRANITE, SOUTHWESTERN FINLAND

LAHTI, SEPPO I. and SUOMINEN, VELI, 1988: Occurrence, crystallography and chemistry of the fluocerite-bastnaesite-cerianite intergrowth from the Fjälskär granite, southwestern Finland. Bull. Geol. Soc. Finland 60, Part 1, 45-53.

White crystals composed mainly of fluocerite were identified during separation of the accessory minerals of the Middle Proterozoic Fjälskär granite, southwestern Finland. Detailed X-ray diffraction and microprobe studies showed that the mineral is intergrown with bastnaesite and cerianite. X-ray single crystal studies reveal that fluocerite-(Ce) (a = 7.11 Å, c = 7.28 Å, space group P63/mcm) forms in the mineral a syntaxial intergrowth with bastnaesite-(Ce) (a = 7.11 Å, c = 9.74 Å, space group P-62c). The cerianite phase (a = 5.45 Å, space group Fm3m) is intergrown with fluocerite-bastnaesite in a more complex way. The (0001) plane of fluocerite-bastnaesite is parallel to the (111) plane of cerianite, and the (1000) plane of fluocerite-bastnaesite is parallel to the (110) or corresponding plane of cerianite. The fluocerite-bastnaesite-cerianite intergrowth is dominated by LREEs, especially Ce, La and Nd. Its composition varies slightly; pure separate phases cannot be distinguished with the microprobe. Zircon and fluorite associated with fluocerite have minor REEs (below 0.3 wt% RE2O3). The Fjälskär granite is exceptionally enriched in REEs, mainly incorporated in fluocerite and associated REE minerals. CO2-bearing fluids altered fluocerite to bastnaesite as crystallization drew to an end in the rock. Separation of the cerianite phase was due to later oxidation of Ce3+. Because of structural and chemical similarities, the crystallographic axes of the various phases are preferentially oriented in the intergrowth.

Introduction

During separation of the heavy accessory minerals of the Fjälskär granite, southwestern Finland, for age determination (sample no. A287, Fjärdskär, Houtskär, map 1041 07, Finnish national grid coordinates x = 6689800, y = 1520860), a pinkish white mineral was encountered that was tentatively identified as fluocerite, (Ce,La,Nd...)F3. However, the X-ray powder diffraction films of the mineral always showed several strong additional lines, indicating that the crystals are intergrowths composed of different mineral phases.

The Fjälskär granite forms a roundish stock about 5 km in diameter. The bulk of the intrusion is composed of porphyritic biotite-muscovite granite. K-feldspar phenocrysts, 1-3 cm long, appear in the rock, but near the contacts the grain size decreases and the granite is less porphyritic. The contact between the granite and the surrounding rocks is sharp, and the granite cuts the schistosity of the surrounding Svecokarelian gneisses. On the basis of the petrological evidence, Sederholm (1934) included the Fjälskär granite in the postorogenic rapakivi granite group of southwestern Finland. Later, Ehlers and Bergman (1984) and Bergman (1986) discussed the intrusion mechanism of the granite in detail, and also gave a short petrographic description of the rock. Recent studies assign a U-Pb zircon age of 1579 ± 13 Ma to the granite (Suominen 1988, in prep.) and thus support Sederholm's assumption.
Like the last intrusion phases of the Finnish rapakivi granites, the rock has fluorite and topaz as accessory minerals (cf. Vorma 1976; Haapala 1977). Thin, often irregular pegmatite dykes containing abundant blue-green topaz and fluorite, and muscovite-greisen dykes, appear in the marginal parts of the stock, indicating that the residual melt was enriched in volatiles, fluorine in particular. Chemical analyses of the granite show exceptionally high concentrations of REEs, mainly due to fluocerite and associated REE minerals.

The fluocerite intergrowth from the Fjälskär granite was studied in detail in a Buerger X-ray precession camera and with X-ray powder diffraction methods, with a view to establishing the different phases of the mineral and to confirming the identification of fluocerite. Because of the small amount of sample, the mineral was analysed only with the microprobe. Of the authors, S.I.L. is responsible for the X-ray diffraction and other mineralogical studies and V.S. for the description of the granite and its geology.

Occurrence of the fluocerite intergrowth

The fluocerite intergrowth was first encountered in the heavy mineral fraction of the rock as faint, pinkish white small grains with columbite, topaz, zircon, bastnaesite, fluorite, apatite, galena and molybdenite. After careful examination under the stereomicroscope, the mineral was observed in a polished slab of the granite. In this specimen the fluocerite intergrowth occurs as a white, angular grain about 0.5 mm in diameter replaced by light brown bastnaesite-(Ce). The surrounding minerals are zircon, fluorite, biotite, quartz, K-feldspar and plagioclase (Fig. 1).

Thin section and X-ray studies show that the reddish K-feldspar associated with the fluocerite intergrowth is cross-twinned micro- or cryptoperthitic intermediate microcline or orthoclase. Quartz occurs in the rock either as irregular, small crystals between feldspars or as large, separate, short-prismatic crystals that are readily recognizable on the surface of the rock on the basis of their bluish colour. The core of the plagioclase (oligoclase) crystal laths is often slightly sericitized and saussuritized, but the crystals are rimmed by fresh, unaltered plagioclase of the second generation. Brown iron-rich biotite predominates in the granite, but the rock also contains some muscovite.

X-ray single crystal studies

The Fjälskär fluocerite intergrowth was studied using a Buerger X-ray precession camera and a small crystal fragment (diameter 0.1-0.2 mm). The c-axis 0-level precession photographs show the spots of two separate mineral phases, but the a-axis 0-level photographs indicate that three different phases are present and that certain axes of the two phases are equal in length. The reciprocal lattice point rows are parallel or coalesce, indicating that the different phases are oriented in certain directions in the intergrowth.

Careful scrutiny of the photographs showed that the main phase of the intergrowth is fluocerite. First a-axis and later c-axis 0-, 1- and 2-level precession photographs of fluocerite were taken. Film shrinkage was determined on the spots of an oriented silicon crystal. Zr-filtered Mo radiation (λMoKα = 0.7107 Å) was used in the studies.
Fluocerite is syntaxially intergrown with another hexagonal phase, which corresponds well with bastnaesite in unit cell and symmetry. Both the a-axes and the c-axes of the minerals are parallel to each other. The spots of the bastnaesite phase cannot be detected in the c-axis 0-level photograph, because the length of the a-axis of both fluocerite and bastnaesite and the systematic extinctions in the c-axis photographs are the same. The unit cell dimensions of bastnaesite measured from the precession photographs are a = 7.11 Å and c = 9.74 Å; the space group is P-62c (no. 190; 000l: l = 2n, hh0l: l = 2n and hh2hl: l = 2n).

In addition to fluocerite and bastnaesite, the intergrowth has a third phase for which the X-ray powder pattern indicates cubic symmetry with a fluorite-derivative structure. Its strongest reflections correspond closely to those of cerianite. The computation of the unit cell dimension from the X-ray powder pattern on the basis of the five strongest reflections (indexed as 111, 200, 220, 311 and 331) gave a = 5.45 Å. Later interpretation of the precession photographs shows the same unit cell dimension, and the space group is in accordance with Fm3m (no. 225; hkl: h + k = 2n, k + l = 2n, l + h = 2n; hhl: l + h = 2n; 0kl: k, l = 2n).

The symmetry, the unit cell and the composition of the intergrowth indicate that the third phase is cerianite, CeO2, or a corresponding mixture of Ce and other REE oxides. The unit cell dimension of pure synthetic CeO2 is slightly shorter, i.e. a = 5.41 Å (see JCPDS card 34-394). Natural cerianites are usually mixtures of Ce and other REE oxides, and therefore their unit cell dimensions may vary considerably. Gurov et al. (1975) have reported a cubic cerianite phase with a = 5.45 Å, and Styles and Young (1983) a cerianite phase with a = 5.46 Å, both occurring in association with fluocerite. The unit cell dimension also agrees well with that of synthetic cerium neodymium oxide with a = 5.46 Å, as reported in JCPDS card 28-266.

The d-values of the cerianite reflections along the c*-axis of fluocerite-bastnaesite correspond closely with those of the fluocerite phase, so that the reflections of the phases partly coalesce.

X-ray powder diffraction studies

An X-ray powder pattern was subsequently recorded from the mineral in a Debye-Scherrer camera (diam. 114.3 mm) using Ni-filtered Cu radiation (λCuKα = 1.5418 Å). The powder diffraction pattern of the crystal is given in Table 1. All possible d-values of each of the three phases observed in the precession photographs were computed. The d-values measured from the powder photograph were indexed with the aid of the computed d-values. Table 1 shows that the reflections of the fluocerite intergrowth derive mainly from the fluocerite phase. Several cerianite lines, but only some reflections of the bastnaesite phase, can be conclusively identified in the X-ray powder pattern; most of the reflections of the three phases coalesce and form a sum line in the Debye-Scherrer camera film. The measured and computed d-values are close to each other, and the intensities of the reflections in the X-ray powder pattern and the single crystal photographs correspond well with each other.

In Table 1 the X-ray powder pattern of the Fjälskär fluocerite intergrowth is compared with the X-ray powder pattern given by Gurov et al. (1975) for fluocerite altered to cerianite. The chemical compositions of the two intergrowths are also close to each other. Some lines in the X-ray powder pattern reported by Gurov et al. (1975) might also be due to bastnaesite (see Table 1).
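For a cubic phase, the cell edge follows from each indexed line as a = d·sqrt(h² + k² + l²), with d obtained from the measured line position via Bragg's law, d = λ/(2 sin θ). A minimal sketch of the cerianite cell computation quoted above is given below; the d-values are back-calculated placeholders rather than the measured ones.

```python
import math

# Indexed strongest cerianite reflections (hkl) from the powder pattern
reflections = [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1), (3, 3, 1)]

# Placeholder d-spacings (Å), back-calculated from a = 5.45 Å; in practice
# these come from the measured line positions via d = lambda / (2 sin(theta))
d_obs = [5.45 / math.sqrt(h*h + k*k + l*l) for h, k, l in reflections]

# Each reflection gives an independent estimate of the cubic cell edge
estimates = [d * math.sqrt(h*h + k*k + l*l)
             for d, (h, k, l) in zip(d_obs, reflections)]
a = sum(estimates) / len(estimates)
print(f"a = {a:.2f} Å")  # -> 5.45 Å
```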
Chemistry of the fluocerite intergrowth and associated REE-bearing minerals

The Fjälskär fluocerite intergrowth, together with the associated bastnaesite, fluorite and zircon, was analysed at the Geological Survey of Finland using a JEOL JXA-733 Super Probe equipped with an energy dispersive spectrometer system (EDS). The specifications for the quantitative analyses were 15 kV accelerating voltage, 25 nA probe current and a beam diameter of about 1 μm. Synthetic (RE)F3 and monazite were used as standards. The data were processed with a standard ZAF (atomic number, absorption and fluorescence) correction program.

A fluocerite intergrowth partly surrounded and replaced by light brown bastnaesite was analysed from a polished section. First, a set of X-ray, secondary electron (SEI) and backscattered electron (BEI) images was produced to test the homogeneity of the mineral. The composition of the fluocerite intergrowth was observed to vary slightly between different parts of the crystal. The intergrowth is, however, so fine-grained that separate phases could not be analysed. Several partial chemical analyses from different parts of both the fluocerite and the bastnaesite phases were made. The results are shown in Table 2. Because detailed analytical determinations, including carbon dioxide and water, were not carried out, the chemical formulae of the minerals could not be calculated, and only the content of REEs is compared in the present context. Ce, La and Nd are the main elements in the fluocerite intergrowth, but the chemical composition shows slight variation. Ce is, however, dominant in all parts of the mineral. The Ce/La ratio ranges from 1.3 to 1.8 within the fluocerite crystal as a result of separation of the different alteration phases.

The bastnaesite surrounding the fluocerite intergrowth is similarly enriched in Ce and the other LREEs, La and Nd, but the compositional variation is usually not as high as in fluocerite. The mineral contains minute concentrations of Y, which is almost lacking in fluocerite. The X-ray images and chemical analyses indicate that a few per cent of CaO is locally present in bastnaesite. The REE ratios in the fluocerite intergrowth and the bastnaesite are close to each other. Comparison of the REE ratios (the lower part of Table 2) shows that the bastnaesite is somewhat richer in Nd and poorer in La than the fluocerite intergrowth. Owing to the different chemical formula, the total content of REEs is, of course, smaller in bastnaesite.

Discussion

Natural rare earth trifluorides of Ce, La and Nd that crystallize in the hexagonal system have been called fluocerite or tysonite in the literature. Steyn (1961) suggested that the original name fluocerite should apply to the uniaxial, optically positive material that appears to consist of oxyfluorides on the basis of old analyses; the name tysonite should refer to the optically negative REE trifluorides. Later studies have shown that these criteria are not quite correct. Thoma and Brunton (1966) have shown that the optic character of synthetic REE trifluorides varies substantially with the composition. Further crystallographic studies reveal that the structure of the known REE oxyfluorides differs from that of fluocerite (Baenziger et al. 1954; Wells 1975).
Nowadays the mineralogical literature prefers the name fluocerite, originally given to the mineral described by Berzelius from Sweden (from the granite pegmatites of Finnbo and Broddbo near Falun; see Geijer 1921) as early as 1845. To date, Ce- and La-dominant members of fluocerite are known in nature (see Chistyakova and Kazakova 1969, and Styles and Young 1983).

After the Fjälskär fluocerite had been found, the mineral was also encountered in the heavy mineral fraction of the even-grained Saltvik rapakivi granite, Åland Islands (see Lindqvist and Suominen 1988, this volume). According to the records of the Mineralogical Laboratory of the Geological Survey of Finland, cerianite has been identified with X-ray methods from the Väkkärä rapakivi granite, Eurajoki, western Finland, and fluocerite from the Parkkolansaari granite, Leppävirta, eastern Finland, but these observations have not been confirmed with chemical methods. Bastnaesite is known in Finland in several localities, including certain types of rapakivi granite.

Fluocerite altered to cerianite or to bastnaesite has been reported in several studies (Oftedal 1931; Steyn 1961; Gurov and Gurova 1974; Gurov et al. 1975). A fluocerite-bastnaesite-cerianite intergrowth similar to that in the Fjälskär granite has been described by Styles and Young (1983) from the altered eluvial monazite pebbles in the Afu Hills, Nigeria. The authors did not, however, describe the crystallography and chemistry of the intergrowth in detail.

Table 3 shows the analyses of the main elements and REEs from five specimens of the Fjälskär granite (1-5), from a cross-cutting pegmatitic granite dyke (6) and from a muscovite greisen dyke (7). The Fjälskär granite is a peraluminous syenogranite and its chemical composition resembles certain rapakivi varieties (cf. Vorma 1976; Haapala 1977). Like rapakivi granites in general, the rock has exceptionally high concentrations of REEs, especially Ce, La and Nd (Koljonen and Rosenberg 1974; Vorma 1976). The Ce content is slightly higher than in rapakivi granites in general, and it may be up to four times the average Ce content of granite (cf. Haskin et al. 1968).

The LREEs are enriched in the Fjälskär granite mainly in fluocerite, bastnaesite, cerianite and fluorite; the HREEs are enriched in zircon. The REE distribution pattern of the Fjälskär fluocerite intergrowth closely resembles those described by Semenov and Barinskii (1958) from northern Kirgiziya, USSR, and by Heinrich and Gross (1960) from the Black Cloud pegmatite, Colorado, USA. Ce, La and Nd dominate in fluocerites, and the sum of La + Ce + Pr always exceeds 70% of the total sum of REEs (Styles and Young 1983).

The alteration of fluocerite in the Fjälskär granite may be due to postmagmatic fluid activity. Hydrothermal solutions enriched in CO2 affected fluocerite and altered it to bastnaesite. The separation of a cerianite phase may be attributed to later oxidation of Ce3+ (in fluocerite and bastnaesite) to Ce4+ (in cerianite). Similar conclusions have previously been reached by Styles and Young (1983).

Bastnaesite and fluocerite are similar in structure, and therefore syntaxial intergrowth between them is possible. Similarly, bastnaesite may occur as an intergrowth with parisite and roentgenite, as described by Donnay and Donnay (1953). A photograph of a preferentially oriented fluocerite-REE fluorite intergrowth has been presented by Arkhangel'skaya (1970).
The results of the X-ray single crystal studies of the Fjälskär fluocerite are in agreement with those of Oftedal (1931), who published a Laue photograph of a fluocerite-bastnaesite intergrowth and examined the structural similarities more closely. He showed that the a-axes of the minerals are identical and that they have a similar cation level after a period of n × 14.6 Å along the c-axis (that is, 4 × 1/2c_fluocerite = 3 × 1/2c_bastnaesite). To our knowledge, detailed studies of the fluocerite-bastnaesite intergrowth have not been reported since the study of Oftedal, and the present one is the first detailed X-ray diffraction study of the cerianite-fluocerite-bastnaesite intergrowth.

Fig. 1. Secondary electron image of an altered fluocerite crystal (f) replaced by bastnaesite (b, dark grey). The angular crystal partly surrounded by the bastnaesite and fluocerite intergrowth is zircon (z). The black background is composed of other silicates.

Fig. 2. Interpretation of the b-axis 0-level precession photograph of fluocerite. The figure indicates that the mineral is an intergrowth of fluocerite (f), bastnaesite (b) and cerianite (c) phases. See details in text. Zr-filtered Mo radiation.

Table 1. X-ray powder diffractogram of a fluocerite intergrowth (fluocerite altered to cerianite and bastnaesite?) reported by Gurov et al. (1975, Table 3) compared with that of the Fjälskär fluocerite intergrowth. The computed d-values for the Fjälskär bastnaesite and the cerianite phase intergrown with fluocerite are given separately. B = broad line.

Table 3. Chemical analyses of the Fjälskär granite and associated rocks. The main constituents were analysed at the Research Centre of the Rautaruukki Company with XRF and the REEs at the Technical Research Centre of Finland with INAA methods.
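As a quick arithmetic check of Oftedal's commensurability relation quoted above, the unit cell dimensions measured here (c = 7.28 Å for fluocerite and c = 9.74 Å for bastnaesite) can be substituted directly:

```latex
% Check of Oftedal's relation with the cell dimensions measured in this study
4 \times \tfrac{1}{2} c_{\mathrm{fluocerite}} = 4 \times 3.64\ \text{Å} = 14.56\ \text{Å},
\qquad
3 \times \tfrac{1}{2} c_{\mathrm{bastnaesite}} = 3 \times 4.87\ \text{Å} = 14.61\ \text{Å}.
```

The two stacking periods thus agree to within about 0.3%, consistent with the n × 14.6 Å cation repeat reported by Oftedal and with the syntaxial relationship seen in the precession photographs.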
Potential impact of a nonavalent HPV vaccine on the occurrence of HPV-related diseases in France

Background: Human papillomavirus (HPV) infection is known to be associated with a number of conditions including cervical, vaginal, vulvar, penile and anal neoplasias and cancers, oropharyngeal cancers and genital warts (GW). Two prophylactic vaccines are currently available: a bivalent vaccine designed to prevent HPV type 16 and 18 infection and a quadrivalent vaccine targeting HPV 6, 11, 16, and 18. In France, HPV vaccination is recommended in 11-14-year-old girls with a catch-up for girls aged 15-19. The objective of this study was to assess the potential impact of an HPV 6/11/16/18/31/33/45/52/58 nonavalent vaccine on anogenital and oropharyngeal HPV-related diseases in France.

Methods: HPV genotype distributions from 6 multicentric retrospective studies (EDiTH I to VI) were analyzed, including 516 cases of invasive cervical cancer (ICC), 493 high-grade cervical neoplasias (CIN2/3), 397 low-grade squamous intraepithelial lesions (LSIL), 423 GW, 366 anal cancers and 314 oropharyngeal carcinomas. Low and high estimates of HPV vaccine impact were calculated as follows: low estimate: prevalence of HPV 6/11/16/18/31/33/45/52/58 genotypes alone or in association but excluding the presence of another HPV type; high estimate: prevalence of HPV 6/11/16/18/31/33/45/52/58 genotypes alone or in association, possibly in the presence of another HPV type.

Results: Estimates of potential impact varied from 85% (low estimate) to 92% (high estimate) for ICC, 77% to 90% for CIN2/3, 26% to 56% for LSIL, 69% to 90% for GW, 81% to 93% for anal cancer, and 41% to 44% for oropharyngeal carcinomas. Compared to the quadrivalent vaccine, the proportion of additional cases potentially prevented by the nonavalent vaccine was 9.9%-15.3% for ICC, 24.7%-33.3% for CIN2/3, 12.3%-22.7% for LSIL, 2.1%-5.4% for GW, 8.5%-10.4% for anal cancer, and 0.0%-1.6% for oropharyngeal carcinoma.

Conclusions: The nonavalent HPV vaccine showed a significantly increased potential impact compared to the HPV 6/11/16/18 quadrivalent vaccine for ICC, CIN2/3 and LSIL. Considering 100% vaccine efficacy and high vaccine coverage, about 90% of ICC, CIN2/3, GW or anal cancer cases could be prevented by a nonavalent HPV vaccine in France.

Electronic supplementary material: The online version of this article (doi:10.1186/s12889-015-1779-1) contains supplementary material, which is available to authorized users.

These studies have shown that a proportion of HPV-related lesions were not targeted by currently available vaccines. The objective of the present study was thus to assess the potential impact in France of a 6/11/16/18/31/33/45/52/58 nonavalent HPV vaccine on anogenital and oropharyngeal HPV-related diseases, and to compare this impact with that of the 6/11/16/18 quadrivalent vaccine.

For each condition, HPV genotype distributions were used to assess the potential impact of the quadrivalent and the nonavalent vaccines in France. A low and a high estimate of the vaccine impact were calculated as follows. For the quadrivalent vaccine, the low estimate was the prevalence of HPV 6/11/16/18 genotypes alone or in association but excluding the presence of another HPV type; the high estimate was the prevalence of HPV 6/11/16/18 genotypes alone or in association, possibly in the presence of another HPV type.
For the nonavalent vaccine, the low estimate was the prevalence of HPV 6/11/16/18/31/33/45/52/58 genotypes alone or in association but excluding the presence of another HPV type; the high estimate was the prevalence of HPV 6/11/16/18/31/33/45/52/58 genotypes alone or in association, possibly in the presence of another HPV type. Additional file 1: Table S1 presents the genotypes and combinations of genotypes used to define the low and high estimates for the quadrivalent and nonavalent vaccines. Estimates are presented with their 95% confidence intervals calculated based on the cumulative binomial distribution.

The absolute additional potential impact of the nonavalent vaccine, i.e. the proportion of additional cases potentially prevented by the nonavalent vaccine compared to the quadrivalent vaccine, was calculated as

absolute additional impact = (n_nonavalent - n_quadrivalent) / N,

with n being the number of cancer cases potentially prevented and N the total number of cancer cases. The relative additional potential impact of the nonavalent vaccine compared to the quadrivalent vaccine was calculated as

relative additional impact = (n_nonavalent - n_quadrivalent) / n_quadrivalent,

with n representing the number of potentially prevented cancer cases.

Results

HPV genotypes in the six EDiTH studies are described in detail in Table 1. The potential impact of the quadrivalent and nonavalent HPV vaccines assessed by low and high estimates is presented in Figure 1. The nonavalent HPV vaccine showed increased impact compared to the quadrivalent vaccine for invasive cervical cancers, high-grade cervical neoplasias and low-grade squamous intraepithelial lesions. The proportion of cases with genotypes targeted by the nonavalent vaccine varied between 84.5% (low estimate, 95% CI 81.0 to 87.8) and 92.1% (high estimate, 95% CI 89.5 to 94.3) for invasive cervical cancers, between 77% and 90% for high-grade cervical neoplasias, between 26% and 56% for low-grade squamous intraepithelial lesions, between 69% and 90% for genital warts, between 81% and 93% for anal cancer, and between 41% and 44% for oropharyngeal carcinomas.

The absolute additional impact of the nonavalent vaccine (i.e. the number of additional cases that could be prevented by the nonavalent vaccine) as well as the relative additional impact (i.e. compared to the quadrivalent vaccine) are presented in Table 2. Again, limited impact was found for the nonavalent vaccine compared to the quadrivalent one for external condylomata acuminata, anal cancer and oropharyngeal carcinomas. The absolute additional impact of the nonavalent vaccine lay between 9.9% and 15.3% for invasive cervical cancers, between 24.7% and 33.3% for high-grade cervical neoplasias, and between 12.3% and 22.7% for low-grade squamous intraepithelial lesions. The benefit of the nonavalent vaccine compared to the quadrivalent vaccine ranged between 12.0% and 22.1% for invasive cervical cancers, between 38.2% and 76.6% for high-grade cervical neoplasias, and between 67.7% and 90.7% for low-grade squamous intraepithelial lesions.

Discussion

The attribution of cases to HPV types is often complicated by the existence of multiple infections, characterized by the presence of several HPV types in the same tumor. Therefore, the potential benefit of an HPV vaccine is difficult to assess, especially when HPV types not targeted by the vaccine are present. In all EDiTH studies, and in the present study as well, we thus calculated a low and a high estimate of the vaccine impact based on the presence of single or multiple infections. It is possible that the high estimate overestimates the potential impact of the vaccine, since one assumes that HPV types targeted by the vaccine are causally related to the lesion in which they are found even in the presence of another HPV type. It is thus reasonable to believe that the "true" potential impact lies somewhere between the low and the high estimates.
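The estimate, confidence-interval and additional-impact calculations described above can be sketched as follows. The per-case genotype sets are invented placeholders, and the Clopper-Pearson construction is one standard way of deriving an exact interval from the cumulative binomial distribution, which is what the text specifies; none of this is taken from the authors' actual analysis code.

```python
from scipy.stats import beta

QUAD = {"6", "11", "16", "18"}
NONA = QUAD | {"31", "33", "45", "52", "58"}

# Hypothetical genotype calls for a handful of cases of one lesion type
cases = [{"16"}, {"16", "31"}, {"18", "56"}, {"35"}, {"52"}]

def low_high(cases, targets):
    # Low estimate: all detected genotypes are vaccine genotypes;
    # high estimate: at least one vaccine genotype is present.
    low = sum(1 for g in cases if g and g <= targets)
    high = sum(1 for g in cases if g & targets)
    return low, high

def clopper_pearson(k, n, alpha=0.05):
    # Exact binomial CI via the beta-distribution inversion
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

n = len(cases)
for name, targets in (("quadrivalent", QUAD), ("nonavalent", NONA)):
    low, high = low_high(cases, targets)
    print(name, low / n, high / n, clopper_pearson(low, n))

n4, _ = low_high(cases, QUAD)
n9, _ = low_high(cases, NONA)
print("absolute additional impact:", (n9 - n4) / n)
print("relative additional impact:", (n9 - n4) / n4 if n4 else float("nan"))
```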
If the low estimate calculations are considered, the present study indicates that the absolute additional impact of a nonavalent vaccine is highest for CIN2/3, with a 33% increase in the proportion of cases targeted by the nonavalent vaccine. This additional benefit is intermediate for ICC (15% increase), LSIL (12%) and anal cancer (10%), whereas almost no additional benefit is observed for genital warts (5%) and oropharyngeal carcinomas (0%).

The EDiTH II study showed that between 43% and 65% of CIN2/3 cases were associated with HPV 6/11/16/18 (low and high estimates) [15], whereas the present study indicates that a nonavalent vaccine could target 77% and up to 90% of all CIN2/3 cases. This benefit on the prevention of CIN2/3 cases could have a real public health impact by reducing the costs related to the management of these lesions. It is indeed estimated that 25,000 to 30,000 conizations are performed in France annually [21]. The benefit on ICC would also be substantial, with up to 92% of ICC cases targeted by a nonavalent vaccine. Even if the proportion of LSIL cases attributed to HPV 6/11/16/18 was rather low (14-34%), the proportion of cases associated with HPV types targeted by the nonavalent vaccine is increased by almost 90% (14% vs 26% considering the low estimates). However, for genital warts, only about 5% of cases are attributed to the additional HPV types found in the nonavalent vaccine (HPV 31/33/45/52/58), resulting in a low efficacy benefit. Similarly, no oropharyngeal carcinoma case was associated with these additional HPV types, suggesting no additional benefit of a nonavalent vaccine in this group. This limited additional benefit on oropharyngeal carcinoma and genital warts is mainly explained by the fact that these conditions are almost exclusively associated with HPV types already targeted by the bivalent or quadrivalent vaccines. Anal cancers are known to be mainly associated with HPV 16. A possible explanation for the observed additional benefit of the nonavalent vaccine (10%) is that we included some HIV-positive patients (14%) with a higher risk of multiple infection (50% vs 20% in HIV-negative patients).

It should be noted that a strong epidemiological impact, characterized by significant reductions in HPV-related precancerous lesions and cancers, may be achieved only if vaccination coverage reaches more than 80% [22]. However, by the end of 2013, the vaccination coverage with 3 doses reached only 20% of 16-year-old women in France [23]. By raising public awareness of the importance of HPV vaccination, general practitioners and gynecologists have an important role to play in increasing vaccination coverage.

A possible limitation of the present study is that HPV positivity in the EDiTH studies was based on HPV DNA detection only, which could have resulted in a slight overestimation of the proportion of cancers potentially attributed to HPV. This is particularly true for oropharyngeal carcinomas, for which HPV RNA detection would be a more accurate marker of those related to HPV infection. Moreover, other risk factors such as smoking or alcohol abuse are possibly involved. We calculated low and high estimates with the aim of taking into account the lack of knowledge regarding the causal relationship between each lesion and multiple HPV infection types.
Low estimates suppose that the vaccine only prevents cases in which all detected genotypes are targeted by the vaccine, while high estimates suppose that the vaccine prevents all cases in which at least one genotype targeted by the vaccine is present, even in the presence of another genotype; this assumes that the other genotype is not involved in the occurrence of the lesion. Of course, the true effects are in between, and it should be noted that alternative methods based on proportional (i.e., weighted) or hierarchical attributions of genotypes to disease categories have been proposed, providing intermediate estimates [10,24,25]. We nevertheless preferred to report estimate intervals rather than single estimates. However, our results are in accordance with previous results reporting that a nonavalent vaccine would increase the protection from 70% to almost 90% of the infections responsible for ICC [13]. Moreover, a model-based analysis showed that at the population level, the switch from a bivalent or a quadrivalent to a nonavalent vaccine would further reduce the occurrence of precancerous lesions and cervical cancer [26].

In France, both the bivalent and the quadrivalent vaccines are available, with a predominant use of the quadrivalent [27]. Gardasil is currently indicated for the prevention of premalignant genital lesions (cervical, vulvar and vaginal), premalignant anal lesions, cervical cancers, anal cancers, and genital warts (condylomata acuminata) causally related to specific HPV types [28]. It should thus be borne in mind that the potential vaccine impact we assessed is hypothetical and concerns some outcomes (e.g. oropharyngeal carcinoma) for which no specific indication exists yet.

Pap smear screening is and will remain a very efficient tool for the prevention of ICC. Even if screening of older women should still be highly recommended, HPV vaccination could reduce and hinder the spread of the virus and prevent HPV-related diseases and cancers for which no screening strategies are available.

Conclusion

The nonavalent HPV vaccine showed a significantly increased potential impact compared to the HPV 6/11/16/18 quadrivalent vaccine for ICC, CIN2/3 and LSIL. Nonavalent vaccination could thus be a cost-effective alternative [29], with almost 90% of ICC, CIN2/3, genital warts and anal cancer cases being potentially prevented.

Additional file 1: Table S1. Definition of low and high estimates.
Accurately Differentiating Between Patients With COVID-19, Patients With Other Viral Infections, and Healthy Individuals: Multimodal Late Fusion Learning Approach

Background: Effectively identifying patients with COVID-19 using non-polymerase chain reaction biomedical data is critical for achieving optimal clinical outcomes. Currently, there is a lack of comprehensive understanding of various biomedical features and appropriate analytical approaches for enabling the early detection and effective diagnosis of patients with COVID-19.

Objective: We aimed to combine low-dimensional clinical and lab testing data, as well as high-dimensional computed tomography (CT) imaging data, to accurately differentiate between healthy individuals, patients with COVID-19, and patients with non-COVID viral pneumonia, especially at the early stage of infection.

Methods: In this study, we recruited 214 patients with nonsevere COVID-19, 148 patients with severe COVID-19, 198 noninfected healthy participants, and 129 patients with non-COVID viral pneumonia. The participants' clinical information (ie, 23 features), lab testing results (ie, 10 features), and CT scans upon admission were acquired and used as 3 input feature modalities. To enable the late fusion of multimodal features, we constructed a deep learning model to extract a 10-feature high-level representation of the CT scans.

Introduction

COVID-19 is an emerging major biomedical challenge for the entire health care system [1]. Compared to severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS), COVID-19 has much higher infectivity, and it has spread much faster across the globe than other coronavirus diseases. Although COVID-19 has a relatively lower case fatality rate than SARS and MERS, the overwhelmingly large number of diagnosed COVID-19 cases, as well as the many more undiagnosed cases, has endangered health care systems and vulnerable populations during the COVID-19 pandemic. Therefore, early and accurate detection of and intervention in COVID-19 are key to effectively treating patients, protecting vulnerable populations, and containing the pandemic at large.

Currently, the gold standard for the confirmatory diagnosis of COVID-19 is based on molecular quantitative real-time polymerase chain reaction (qRT-PCR) and antigen testing for the disease-causing SARS-CoV-2 virus [2-4]. Although these tests are the gold standard for COVID-19 diagnosis, they suffer from various practical issues, including reliability, resource adequacy, reporting lag, and testing capacity across time and space [5]. To help frontline clinicians diagnose COVID-19 more effectively and efficiently, other diagnostic methods have also been explored and used, including medical imaging (eg, X-ray scans and computed tomography [CT] scans [6]), lab testing (eg, various blood biochemistry analyses [7-10]), and identifying common clinical symptoms [11]. However, these methods do not directly detect the disease-causing SARS-CoV-2 virus or the SARS-CoV-2 antigen; therefore, they do not have the same conclusive power as confirmatory molecular diagnostic methods. Nevertheless, these alternative methods help clinicians with inadequate resources detect COVID-19, differentiate patients with COVID-19 from patients without COVID-19 and noninfected individuals, and triage patients to optimize health care system resources [12,13].
When applied appropriately, these supplementary methods, which are based on alternative biomedical evidence, can help mitigate the COVID-19 pandemic by accurately identifying patients with COVID-19 as early as possible.

Currently, CT scans can be analyzed to differentiate patients with COVID-19, especially those in a severe clinical state, from healthy people or patients with non-COVID infections. Patients with COVID-19 usually present the typical ground-glass opacity (GGO) characteristic on CT images of the thoracic region. A recent study reported a 98% COVID-19 detection rate based on a 51-patient sample without a comparison group [14]. Detection rates between 60% and 93% were also reported in another study of 1014 participants with a comparison group [15]. Furthermore, recent advances in data-driven deep learning (DL) methods, such as convolutional neural networks (CNNs), have demonstrated the ability to detect COVID-19 in patients. In February 2020, Hubei, China, adopted CT scans as an official clinical COVID-19 diagnostic method in addition to molecular confirmatory diagnostic methods, in accordance with the nation's diagnosis and treatment guidance [2]. However, the effectiveness of using DL methods to further differentiate SARS-CoV-2 infection from clinically similar non-COVID viral infections still needs to be explored and evaluated.

In places where molecular confirmatory diagnoses are not immediately available, symptoms are often used for quickly evaluating presumed patients' conditions and supporting triage [13,16,17]. Checklists have been developed for self-evaluating the risk of developing COVID-19. These checklists are based on clinical information, including symptoms, preexisting comorbidities, and various demographic, behavioral, and epidemiological factors. However, these clinical data are generally used for qualitative purposes (eg, initial assessment) by both the public and clinicians [18]. Their effectiveness in providing accurate diagnostic decision support is largely underexplored and unknown.

In addition to biomedical imaging and clinical information, recent studies on COVID-19 have shown that laboratory testing, such as various blood biochemistry analyses, is also a feasible method for detecting COVID-19 in patients, with reasonably high accuracy [19,20]. The rationale is that the human body acts as a unity: when people are infected with SARS-CoV-2, the clinical consequences can be observed not only from apparent symptoms, but also from hematological biochemistry changes. Because of the challenge of asymptomatic SARS-CoV-2 infection, other types of biomedical information, such as lab testing results, can be used to provide alternative and complementary diagnostic decision support evidence. It is possible that our current definition and understanding of asymptomatic infection can be extended with more intrinsic, quantitative, and subtle medical features, such as blood biochemistry characteristics [21,22].

Despite the tremendous advances in obtaining alternative and complementary diagnostic evidence for COVID-19 (eg, CT scans, chest X-rays, clinical information, and various blood biochemistry characteristics), there are still substantial clinical knowledge gaps and technical challenges that hinder our efforts to harness the power of various biomedical data.
First, most recent studies have focused on only one of the multiple modalities of diagnostic data, without considering the potential interactions between, and the added interpretability of, these modalities. For example, can we use both CT scans and clinical information to develop a more accurate COVID-19 decision support system [23]? As stated earlier, the human body acts as a unity against SARS-CoV-2 infection. Biomedical imaging and clinical approaches can be used to evaluate different aspects of the clinical consequences of COVID-19. By combining the different modalities of biomedical information, a more comprehensive characterization of COVID-19 can be achieved. This is referred to as multimodal biomedical information research.

Second, while there are ample accurate DL algorithms, models, and tools, especially in biomedical imaging, most of them focus on differentiating patients with COVID-19 from noninfected healthy individuals. A moderately trained radiologist can differentiate the CT scans of patients with COVID-19 from those of healthy individuals with high accuracy, so sophisticated DL algorithms built solely for this binary classification problem are of limited clinical use [14]. The more critical and urgent clinical issue is not only differentiating patients with COVID-19 from noninfected healthy individuals, but also differentiating SARS-CoV-2 infection from non-COVID viral infections [24,25]. Patients with non-COVID viral infection present with GGO in their CT scans of the thoracic region as well; therefore, the specificity of GGO as a diagnostic criterion of COVID-19 is low [15]. In addition, patients with nonsevere COVID-19 and patients with non-COVID viral infection share several common symptoms, which are easy to confuse [26]. Therefore, for frontline clinicians, effectively differentiating nonsevere COVID-19 from non-COVID viral infection is a challenging task without readily available and reliable confirmatory molecular tests at admission. Incorrectly diagnosing severe COVID-19 as nonsevere COVID-19 may result in missing the critical window of intervention. Similarly, differentiating asymptomatic and presymptomatic patients, including those with nonsevere COVID-19, from noninfected healthy individuals is another major clinical challenge [27]. Incorrectly diagnosing patients without COVID-19 or healthy individuals and treating them alongside patients with COVID-19 will substantially increase their risk of exposure to the virus and result in health care-associated infections. There is an urgent need for a multinomial classification system that can detect patients with COVID-19, including patients with asymptomatic COVID-19, patients with non-COVID viral infection, and healthy individuals all at once, rather than a system that analyzes several independent binary classifiers in parallel [28].

The third major challenge concerns the computational aspect of harnessing the power of various biomedical data. Due to the novelty of the COVID-19 pandemic, human clinicians have varying degrees of understanding of and experience with COVID-19, which can lead to inconsistencies in clinical decision making. Harnessing the power of multimodal biomedical information from combined imaging, clinical, and lab testing data can be the basis of a more objective, data-driven analytical framework.
In theory, such a framework can provide a more comprehensive understanding of COVID-19 and a more accurate decision support system that can differentiate between patients with severe or nonsevere COVID-19, patients with non-COVID viral infection, and healthy individuals all at once. However, biomedical imaging data, such as CT data, with a high-dimensional feature space do not integrate well with low-dimensional clinical and lab testing data. Current studies have usually only described the association between biomedical imaging and clinical features [15,29-33], and the potential power of an accurate decision support tool has not been reported. Technically, CT scans are usually processed with DL methods, including the CNN method, independently from other types of biomedical data processing methods. Low-dimensional clinical and lab testing data are usually analyzed with traditional hypothesis-driven methods (eg, binary logistic regression or multinomial classification) or other non-DL machine learning (ML) methods, such as the random forest (RF), support vector machine (SVM), and k-nearest neighbor (kNN) methods. The huge discrepancy in feature space dimensionality between CT scan data and clinical/lab testing data makes multimodal fusion (ie, the direct combination of the different aspects of biomedical information) especially challenging [34].

To fill these knowledge gaps and overcome the technical challenge of effectively analyzing multimodal biomedical information, we propose the following study objective: to clinically and accurately differentiate between patients with nonsevere COVID-19, patients with severe COVID-19, patients with non-COVID viral pneumonia, and healthy individuals all at once. To successfully fulfill this much-demanded clinical objective, we developed a novel hybrid DL-ML framework that harnesses the power of a wide array of complex multimodal data via feature late fusion. The clinical objective and technical approach of this study synergistically complement each other to form the basis of an accurate COVID-19 diagnostic decision support system.

Participant Recruitment

We recruited a total of 362 patients with confirmed COVID-19 from Wuhan Union Hospital between January 2020 and March 2020 in Wuhan, Hubei Province, China. COVID-19 was confirmed based on 2 independent qRT-PCR tests. For this study, we did not aggregate all patients with COVID-19 under the same class, because the clinical characteristics of nonsevere and severe COVID-19 are distinct. Patients' COVID-19 status was confirmed upon admission. The recruited patients were further categorized as being in severe (n=148) or nonsevere (n=214) clinical states based on their prognosis at 7-14 days after initial admission. This step ensured that the system was designed for early detection, ie, for identifying patients who would progress to a severe state even when their initial condition upon admission was not severe. Patients in the severe state group were identified by having 1 of the following 3 clinical features: (1) respiratory rate >30 breaths per minute, (2) oxygen saturation <93% at rest, and (3) arterial oxygen partial pressure/fraction of inspired oxygen <300 mmHg (ie, 40 kPa). These clinical features are based on the official COVID-19 Diagnosis and Treatment Plan from the National Health Commission of China [2], as well as guidelines from the American Thoracic Society [35].
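The 3 severity criteria listed above amount to a simple disjunctive rule. A minimal sketch follows; the thresholds are taken from the text, while the function and argument names are illustrative.

```python
def is_severe(resp_rate: float, spo2_rest: float, pao2_fio2_mmhg: float) -> bool:
    """Severe COVID-19 if ANY of the 3 criteria from the text is met:
    respiratory rate > 30 breaths/min, resting SpO2 < 93%,
    or PaO2/FiO2 < 300 mmHg (ie, 40 kPa)."""
    return (resp_rate > 30) or (spo2_rest < 93) or (pao2_fio2_mmhg < 300)

# Example: normal respiratory rate, but low resting oxygen saturation
print(is_severe(resp_rate=22, spo2_rest=91, pao2_fio2_mmhg=320))  # True
```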
The noninfected group included 198 healthy individuals without any infections. These participants were from the 2019 Hubei Provincial Centers for Disease Control and Prevention regular annual physical examination cohort. This group represented a baseline healthy group, and it was mainly used as a comparison group for patients with nonsevere COVID-19, especially those who presented with inconspicuous clinical symptoms.

In order to differentiate patients with COVID-19, especially those with nonsevere COVID-19, from patients with clinically similar non-COVID viral infection, we also included a group of 129 patients diagnosed with non-COVID viral pneumonia in this study. It should be noted that the term "viral pneumonia" is an umbrella term that includes diseases caused by more than 1 type of virus, such as the influenza virus and adenovirus. However, in clinical practice, it would be adequate to detect and differentiate between SARS-CoV-2 infection and non-COVID viral infections for initial triaging. Therefore, we recruited 129 participants with confirmed non-COVID viral infection from Kunshan Hospital, Suzhou, China. The reality was that most health care resources were optimized for COVID-19, and some patients who presented with COVID-19-like symptoms or GGOs were clinically diagnosed with COVID-19 without the use of confirmatory qRT-PCR tests in Hubei, especially during February 2020. Therefore, it was not possible to recruit participants with non-COVID viral infection in Hubei during the same period that we recruited patients with COVID-19.

In summary, the entire study sample comprised the following 4 mutually exclusive multinomial participant classes: severe COVID-19 (n=148), nonsevere COVID-19 (n=214), non-COVID viral infection (n=129), and noninfected healthy (n=198).

This study was conducted in full compliance with the Declaration of Helsinki. It was rigorously evaluated and approved by the institutional review board committees of the Jiangsu Provincial Center for Disease Control and Prevention (approval number JSJK2020-8003-01). All participants were comprehensively informed about the details of the study, and all signed a written informed consent form before being admitted.

Medical Feature Selection and Description

Patient participants, including those in the severe COVID-19, nonsevere COVID-19, and non-COVID viral infection classes, were screened upon initial admission to hospital. Their clinical information, including preexisting comorbidities, symptoms, demographic characteristics, epidemiological characteristics, and other clinical data, was recorded. For the noninfected healthy class, participants' clinical data were extracted from the Hubei Provincial Centers for Disease Control and Prevention physical examination record system. Patient-level sensitive information, including name and exact residency, was completely deidentified. After comparing the different classes, the following 23 clinical features were selected for this study: smoking history, hypertension, type-2 diabetes, cardiovascular disease (ie, any type), chronic obstructive pulmonary disease, fever, low fever, medium fever, high fever, sore throat, coughing, phlegm production, headache, feeling chill, muscle ache, feelings of fatigue, chest congestion, diarrhea, loss of appetite, vomiting, old age (ie, >50 years; dichotomized and encoded as old), and gender. These clinical data were dichotomized as either having the condition (score=1) or not having the condition (score=0) (Figure 1).
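A sketch of the clinical-modality encoding described above is given below; the feature names, their ordering, and the example record are illustrative assumptions rather than the authors' actual schema.

```python
# Illustrative fixed ordering for the binary clinical features named in the text
CLINICAL_FEATURES = [
    "smoking_history", "hypertension", "type2_diabetes", "cardiovascular_disease",
    "copd", "fever", "low_fever", "medium_fever", "high_fever", "sore_throat",
    "coughing", "phlegm", "headache", "chill", "muscle_ache", "fatigue",
    "chest_congestion", "diarrhea", "loss_of_appetite", "vomiting",
    "old_age", "gender",
]

def encode_clinical(record: dict) -> list[int]:
    """Map a patient's clinical record to a fixed-order binary vector
    (1 = condition present, 0 = absent or unreported)."""
    return [int(bool(record.get(name, 0))) for name in CLINICAL_FEATURES]

patient = {"fever": 1, "high_fever": 1, "coughing": 1, "old_age": 1}
print(encode_clinical(patient))
```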
It should be noted that several clinical features, especially symptoms, were self-reported by the patients. A more comprehensive definition and description of the clinical features are provided in Multimedia Appendix 1. The prevalence (ie, the number of participants with a given feature over the total number of participants in the class) of each clinical feature was computed across the 4 classes. For the 0-1 binary clinical features, a pairwise z-test was applied to detect any substantial differences in the prevalence (ie, proportion) of these features between classes.

The lab testing features were extracted from participants' electronic health records (Figure 1). Only features from lab tests that were performed at the time of admission were included. Noninfected healthy participants' blood samples were taken during their annual physical examination. We selected lab testing features that were present in at least 90% of participants in any of the 4 classes (ie, severe COVID-19, nonsevere COVID-19, non-COVID viral infection, and noninfected healthy). After screening, the following 10 features were included: white blood cell count, hemoglobin level, platelet count, neutrophil count, neutrophil percent, lymphocyte count, lymphocyte percent, C-reactive protein level, total bilirubin level, and creatinine level. Features in the lab testing modality all had continuous numeric values, in contrast with the 0-1 binary values in the clinical feature modality. The distributions of these lab testing features were compared across the 4 classes by using a 2-sided Kolmogorov-Smirnov test. In addition, we applied the Kruskal-Wallis test for multiple comparisons across the 4 classes for the top 3 most differentiating features, which were identified later by an ML workflow. The Kolmogorov-Smirnov test was applied during initial screening to investigate whether the values of the same biomedical feature were distributed differently between 2 classes. The nonparametric Kruskal-Wallis test was chosen because it rigorously compares classes, provides robust results for nonnormal data, and accommodates more than 2 classes (ie, multinomial classes), as required in this study.

Each participant underwent CT scanning of the thoracic region in the radiology department. Toshiba Activion 16 multislice CT scanners were used to perform the scanning at around 120 kVp with a tube current of 50 mA. Each participant's scan yielded an average of 50 CT images, and each image had the following characteristics: slice thickness=2 mm, voxel size=1.5 mm, and image resolution=512×512 pixels. The total number of CT images obtained in this study was over 30,000. CT images were archived and presented as DICOM (Digital Imaging and Communications in Medicine) images for DL.

Figure 1. Multimodal feature late fusion and multinomial classification workflow. A deep learning convolutional neural network was applied to computed tomography images for representation learning and for extracting 10 features from a customized fully connected layer. These 10 features were merged with the other modality data through feature late fusion. In the machine learning stage of the workflow, each of the 3 machine learning models (ie, the support vector machine, k-nearest neighbor, and random forest models) worked independently to provide its respective outputs. kNN: k-nearest neighbor; ML: machine learning; RF: random forest; SVM: support vector machine.
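The statistical comparisons described above (a pairwise z test on binary feature prevalence, a 2-sided Kolmogorov-Smirnov test on continuous lab features, and a Kruskal-Wallis test across all 4 classes) map onto standard SciPy and statsmodels calls. The sketch below uses simulated values purely for illustration; none of the numbers correspond to the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(0)

# Pairwise z test on the prevalence of a binary clinical feature (eg, fever)
# between two classes; positive counts and class sizes are hypothetical.
z, p_z = proportions_ztest(count=[120, 30], nobs=[214, 198])
print("two-proportion z test:", p_z)

# Two-sided Kolmogorov-Smirnov test on a continuous lab feature (eg, CRP)
crp_severe = rng.gamma(4.0, 10.0, size=148)
crp_healthy = rng.gamma(2.0, 2.0, size=198)
ks, p_ks = stats.ks_2samp(crp_severe, crp_healthy)
print("KS test:", p_ks)

# Kruskal-Wallis test across all four classes for the same feature
crp_nonsevere = rng.gamma(3.0, 5.0, size=214)
crp_viral = rng.gamma(3.0, 4.0, size=129)
h, p_kw = stats.kruskal(crp_severe, crp_nonsevere, crp_viral, crp_healthy)
print("Kruskal-Wallis:", p_kw)
```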
The Multinomial Classification Objective

The main research goal of this study was to accurately differentiate between patients with severe COVID-19, patients with nonsevere COVID-19, patients with non-COVID viral infection, and noninfected healthy individuals from a total of N participants all at once. Therefore, a formula was developed to address the multinomial output classification problem. The classifier assigns each individual (ie, i) exactly 1 of the 4 mutually exclusive output classes (ie, H=noninfected healthy, V=non-COVID viral pneumonia, NS=nonsevere COVID-19, and S=severe COVID-19), as follows:

y_i = f(X_c^i, X_l^i, X_m^i), where y_i ∈ {H, V, NS, S}

In this equation, the inputs were individuals' (ie, i) multimodal features of binary clinical information (ie, X_c), continuous lab test results (ie, X_l), and CT imaging (ie, X_m). The major advantage of our study was that we were able to classify 4 classes all at once, instead of developing several binary classifiers in parallel.

The Hybrid DL-ML Approach: Feature Late Fusion

As stated earlier, the voxel level of CT imaging data does not integrate well with low-dimensional clinical and lab testing features. In this study, we proposed a feature late fusion approach via the use of hybrid DL and ML models. Technically, DL is a type of ML that uses deep neural networks, of which CNNs are one example. In this study, we colloquially used the term "machine learning" to refer to the more traditional, non-DL types of ML (eg, RF), in contrast with DL, which focuses on deep neural networks. An important consideration in the successful late fusion of multimodality features is the representation learning of the high-dimensional CT features. For each CT scan of each participant, we constructed a customized residual neural network (ResNet) [36-39], which is a specific architecture for DL CNNs. A ResNet is considered a mature CNN architecture with relatively high performance across different tasks. Although other CNN architectures exist (eg, EfficientNet, VGG-16, etc), the focus of this study was not to compare different architectures. Instead, we wanted to deliver the best performance possible with a commonly used CNN architecture (ie, ResNet) for image analysis. By constructing a ResNet, we were able to transform the voxel-level imaging data into a high-level representation with significantly fewer features. After several convolution and max pooling layers, the ResNet reached a fully connected (FC; ie, FC1) layer before the final output layer that delivers the actual classifications. In the commonly used ResNet architecture, the FC layer is a 1×512 vector, which is relatively closer in dimensionality to the clinical information (ie, 1×23 vector) and lab testing (ie, 1×10 vector) feature modalities. However, the original FC layer of the ResNet was still much larger than the other 2 modalities. Therefore, we added another FC layer (ie, the FC2 layer) after the FC1 layer, but before the final output layer. In this study, the FC2 layer was set to have a 1×10 vector dimension (ie, 10 elements in the vector) to be comparable in dimensionality to the other 2 feature modalities. Computationally, the FC2 layer served as the low-dimensional, high-level representation of the original CT scan data. The distributions of the 10 features extracted from the ResNet in the FC2 layer were compared across the 4 classes with the Kolmogorov-Smirnov test. The technical details of this customized ResNet architecture are provided in Multimedia Appendix 2.
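A minimal PyTorch sketch of this idea is shown below. It assumes a ResNet-18 backbone for concreteness (the study's exact depth and configuration are given in Multimedia Appendix 2) and shows how a 1×10 FC2 layer can be inserted after the 512-dimensional FC1 representation and exposed for later fusion; it is an illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class ResNetFC2(nn.Module):
    """ResNet backbone with an extra low-dimensional FC2 layer for late fusion."""
    def __init__(self, n_classes: int = 4, fc2_dim: int = 10):
        super().__init__()
        backbone = models.resnet18(weights=None)  # depth is an assumption
        # CT slices are single-channel; adapt the first convolution accordingly
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Identity()          # expose the 512-d FC1 representation
        self.backbone = backbone
        self.fc2 = nn.Linear(512, fc2_dim)   # the added 1x10 FC2 layer
        self.out = nn.Linear(fc2_dim, n_classes)

    def forward(self, x: torch.Tensor, return_features: bool = False):
        h = self.backbone(x)   # FC1: high-level 1x512 representation
        f = self.fc2(h)        # FC2: low-dimensional 1x10 representation
        return f if return_features else self.out(f)

model = ResNetFC2()
ct_batch = torch.randn(2, 1, 512, 512)                 # placeholder CT slices
fc2_features = model(ct_batch, return_features=True)   # shape (2, 10)
```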
Once low-dimensional high-level features were extracted from the CT data via the ResNet CNN, we performed multimodal feature fusion. The clinical information, lab testing, and FC2 layer features of each participant (ie, i) were combined into a single 1×43 (ie, 1×[23+10+10]) row vector. The output labels were the observed classes of the participants, which the models were trained to predict as accurately as possible.

The Hybrid DL-ML Approach: Modeling

After deriving the feature matrix, we applied ML models for the multinomial classification task. In this study, 3 different types of commonly used ML models were considered, as follows: the RF, SVM, and kNN models. An RF model is a decision-tree-based ML model; the number-of-trees hyperparameter was set at 10, which is relatively small compared to the number of input features and was chosen to avoid potential model overfitting. Other RF hyperparameters in this study included the Gini impurity criterion to determine tree splits, a minimum of 2 samples to split an internal node, and a minimum of 1 sample at a leaf node. All remaining default hyperparameter settings, for the RF, SVM, and kNN models alike, were those of the scikit-learn library in Python. An SVM is a maximum-margin hyperplane model with an L2 penalty; radial basis function kernels and a gamma value of 1/43 (ie, the inverse of the total number of features) were used as hyperparameter values in this study. kNN is a nonparametric instance-based model; the following hyperparameter values were used in this study: k=5, uniform weights, tree leaf size=30, and p=2. These 3 models are technically distinct types of ML models. We aimed to investigate whether specific types of ML models and multimodal feature fusion would contribute to developing an accurate COVID-19 classifier for clinical decision support.

We evaluated each respective ML model with 100 independent runs. Each run used a different randomly selected dataset comprising 80% of the original data for training, and the remaining 20% of the data were used to test and validate the model. Performing multiple runs instead of a single run revealed how robust the model was despite system stochasticity. The 80%-20% split of the original data into separate training and testing sets also helped avoid potential model overfitting and improve model generalizability. In addition, RF models use bagging, which provides internal validation based on out-of-bag errors. After each run, important ML performance metrics, including accuracy, sensitivity, precision, and F1 score, were computed for the test set. We reported the overall performance of the ML models first. These different metrics evaluated the ML models from different aspects. In this study, we also considered 3 different approaches for calculating the overall performance of multinomial outputs, as follows: a micro approach (ie, the one-vs-all approach), a macro approach (ie, unweighted averages; each of the 4 classes was given the same 25% weight), and a weighted average approach based on the percentage of each class in the entire sample. In addition, because the output in this study was multinomial instead of binary, each class had its own performance metrics. We aggregated these performance metrics across the 100 independent runs, determined each metric's distribution, and evaluated model robustness based on these distributions.
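The late fusion and evaluation protocol described above can be summarized in a short scikit-learn sketch. The arrays below are random placeholders standing in for the real feature matrices, and any setting not listed in the text is left at the library default; this is an outline of the procedure, not the study's code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
N = 689  # 148 + 214 + 129 + 198 participants
X_clin = rng.integers(0, 2, (N, 23))   # binary clinical features (placeholder)
X_lab = rng.normal(size=(N, 10))       # continuous lab features (placeholder)
X_ct = rng.normal(size=(N, 10))        # FC2 CT features (placeholder)
y = rng.integers(0, 4, N)              # 4 multinomial classes (placeholder)

X = np.hstack([X_clin, X_lab, X_ct])   # late fusion -> one 1x43 row per person

models = {
    "RF": RandomForestClassifier(n_estimators=10, criterion="gini",
                                 min_samples_split=2, min_samples_leaf=1),
    "SVM": SVC(kernel="rbf", gamma=1 / 43),  # gamma = 1 / number of features
    "kNN": KNeighborsClassifier(n_neighbors=5, weights="uniform",
                                leaf_size=30, p=2),
}

scores = {name: [] for name in models}
for run in range(100):                 # 100 independent random 80%-20% splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=run)
    for name, m in models.items():
        m.fit(X_tr, y_tr)
        scores[name].append(accuracy_score(y_te, m.predict(X_te)))

# Gini-based feature importance, computed on a training set only
rf = RandomForestClassifier(n_estimators=10).fit(X_tr, y_tr)
top3 = np.argsort(rf.feature_importances_)[::-1][:3]  # top 3 feature indices
```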
If the ML performance metrics in the testing set had small variation (ie, small standard errors), the model was considered robust against changes in the model inputs and thus able to reveal the intrinsic pattern of the data, because in each run a different randomly selected dataset (ie, 80% of the original data) was used to train the model. An advantage that the RF model had over the SVM and kNN models was its relatively clearer interpretability, especially when interpreting feature importance. After developing the RF model based on the training set, we were able to rank the importance of the input features based on their corresponding Gini impurity scores from the RF model [40,41]. It should be noted that only the training set was used to compute Gini impurity, not the test set. We then assessed the top contributing features' clinical relevance to COVID-19. We also developed and evaluated the performance of single-modality (ie, using clinical information, lab testing, and CT features individually) ML models. These performance results were used as baseline conditions and were compared to the multimodal classifications to demonstrate the potential performance gain from the feature fusion of different modalities. In this study, each individual ML model (ie, the RF, SVM, and kNN models) was independently evaluated, and the respective results were reported, without combining the predictions of the final output class. The deep learning CNN and late fusion machine learning codes were developed in Python with various supporting packages, such as scikit-learn.

Clinical Characterization of the 4 Classes

Detailed demographic, clinical, and lab testing results for the 4 classes are provided in supplementary Table S1. Based on the ML RF analysis, the top 3 differentiating clinical features were fever, coughing, and old age (ie, >50 years). For fever and coughing, we used the non-COVID viral infection class as the reference and constructed 2×2 contingency tables for the nonsevere COVID-19 and non-COVID viral infection classes, and for the severe COVID-19 and non-COVID viral infection classes. The odds ratios and 95% confidence intervals for the forest plot are shown in Figure 2. Compared to patients in the non-COVID viral infection class, patients in both the nonsevere and severe COVID-19 classes were more likely to develop fever (ie, >37°C). In addition, based on the forest plot, patients with severe COVID-19 experienced fever more often than patients with nonsevere COVID-19. Therefore, fever was one of the major determining factors for differentiating between the classes. Furthermore, patients with nonsevere COVID-19 (P<.001) and patients with severe COVID-19 (P<.001) reported significantly less coughing than patients with non-COVID viral infection (Figure 2). There were no statistically significant differences between the nonsevere and severe COVID-19 classes in terms of clinical features. With regard to the old age feature, we included the severe COVID-19, nonsevere COVID-19, and noninfected healthy classes in the analysis because the prevalence of old age in the noninfected healthy class was not 0. The forest plot for this analysis is shown in Figure 2. Patients with severe COVID-19 were significantly older than patients with non-COVID viral infection, while patients with nonsevere COVID-19 and noninfected healthy individuals were younger than patients with non-COVID viral infection.
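The odds ratios and 95% confidence intervals in the forest plots can be computed directly from each 2×2 contingency table. A minimal sketch with Wald-type intervals and placeholder counts is shown below; the actual tables come from the class comparisons described above.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = with/without the feature in the case class,
    c/d = with/without the feature in the reference class."""
    or_ = (a * d) / (b * c)
    se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = np.exp(np.log(or_) - z * se_log_or)
    hi = np.exp(np.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Placeholder counts: fever in severe COVID-19 vs the non-COVID viral class
print(odds_ratio_ci(a=120, b=28, c=45, d=84))
```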
These differences in clinical features between the 4 classes could pave the way toward a data-driven ML model. Based on the RF model, the 3 most influential differentiating lab testing features were C-reactive protein level, hemoglobin level, and neutrophil count. The distribution of C-reactive protein level among the 4 classes is provided in the boxplot in Figure 3. In addition to the Kolmogorov-Smirnov test, which did not account for multiple comparisons between classes, further pairwise comparisons were performed with the nonparametric Kruskal-Wallis H test. Each of the 6 pairwise Kruskal-Wallis H tests, as well as the overall Kruskal-Wallis test, showed significant differences between the classes. The distribution of hemoglobin levels is shown in Figure 3. Although the noninfected healthy class differed significantly from the nonsevere COVID-19, severe COVID-19, and non-COVID viral infection classes in terms of hemoglobin level, the other 3 pairs did not show statistically significant differences. The distribution of neutrophil count is shown in Figure 3. All pairwise comparisons and the overall Kruskal-Wallis test showed significant differences between the classes in terms of neutrophil count.

CT Differences Between the 4 Classes Based on High-Level CNN Features

We analyzed the FC2 layer features from the ResNet CNN in relation to the 4 classes. The corresponding boxplot is shown in Multimedia Appendix 5. The 2-sided Kolmogorov-Smirnov tests showed significant differences between every pair of classes in almost all 10 CT features in the FC2 layer. The only exceptions were feature 6 (ie, CNN6) between the severe COVID-19 and non-COVID viral infection classes and features 1, 4, and 5 between the noninfected healthy and non-COVID viral infection classes (Multimedia Appendix 6). Based on the RF model results, features 1, 6, and 10 were the 3 most critical features in the FC2 layer with regard to multinomial classification. Further Kruskal-Wallis tests were performed for these 3 features, and the results are shown in Figure 4. These results showed that developing an accurate classifier based on the CNN representation of high-level features is possible.

Accurate Multimodal Model for COVID-19 Multinomial Classification

We developed and validated 3 different types of ML models, as follows: the kNN, RF, and SVM models. With regard to the training data, the average overall multimodal classification accuracy of the kNN, RF, and SVM models was 96.2% (SE 0.5%), 99.8% (SE 0.3%), and 99.2% (SE 0.2%), respectively. With regard to the test data, the average overall multimodal classification accuracy of the 3 models was 95.4% (SE 0.2%), 96.9% (SE 0.2%), and 97.7% (SE 0.1%), respectively (Figure 5). These 3 models also achieved consistent and high performance across all 4 classes based on the different approaches for calculating the overall performance, including the micro approach (ie, the one-vs-all approach), the macro approach (ie, unweighted averages across all 4 classes), and the weighted average approach (ie, based on the percentage of each class in the entire sample). It should be noted that overall accuracy is a single aggregate measure that does not depend on class weighting, so there was only 1 approach for calculating accuracy. The F1 score, sensitivity, and precision were quantified via each approach (ie, the micro, macro, and weighted average approaches). The F1 scores that were calculated using the macro approach were 95.9% (SE 0.1%), 98.8% (SE<0.1%), and 99.1% (SE<0.1%) for the kNN, RF, and SVM models, respectively.
The F1 scores that were calculated using the micro approach were 96.2% (SE<0.1%), 98.8% (SE<0.1%), and 99.2% (SE<0.1%) for the kNN, RF, and SVM models, respectively. The F1 scores calculated using the weighted average approach were 96.2% (SE<0.1%), 98.9% (SE<0.1%), and 99.2% (SE<0.1%) for the kNN, RF, and SVM models, respectively. The differences in F1 scores based on the different approaches (ie, the micro, macro, and weighted average approaches) were minimal (Figure 5). In addition, the differences in F1 scores across the different ML models (Figure 5) were also not significant. Similarly, model sensitivity and precision were all >95% for all ML model types and all approaches for calculating the performance metrics. The complete overall performance metrics for the 3 different evaluation approaches and 3 ML models are presented in Multimedia Appendix 7.

Figure 5. The overall performance of machine learning models across the 4 classes. Model performance was based on the prediction of unseen testing data (ie, the 20% of the original data), not on the 80% of the original data that were used to develop the model. kNN: k-nearest neighbor; RF: random forest; SVM: support vector machine.

After examining the performance metrics across the 3 different types of ML models, it was clear that the SVM model consistently had the best performance with regard to all metrics, followed by the RF model, though the difference was almost indistinguishable. The kNN model had about a 1%-3% deficiency in performance compared to the other 2 models. It should be noted that the kNN model also had an accuracy, F1 score, sensitivity, and precision of at least 95%; it was simply bested by 2 even more competitive models. Furthermore, the relatively small standard errors demonstrated that the ML models were robust against different randomly sampled inputs (Multimedia Appendix 7). With regard to each individual class, the noninfected healthy class had a 95.2%-99.9% prediction accuracy, 95.5%-98.4% F1 score, 91.4%-97.3% sensitivity, and 97.5%-99.9% precision in the testing set, depending on the specific ML model used. It should be noted that these are ranges, not standard errors, as shown in Figure 6. The approach to computing class-specific model performance was the one-vs-all approach. With regard to the nonsevere COVID-19 class, the ML models achieved a 95.8%-97.4% accuracy, 97.8%-98.6% F1 score, 99.8%-99.9% sensitivity, and 95.8%-97.4% precision. With regard to the severe COVID-19 class, the ML models achieved a 92.4%-99.0% accuracy, 93.4%-96.6% F1 score, 94.3%-94.7% sensitivity, and 92.4%-99.0% precision. With regard to the non-COVID viral infection class, the ML models achieved a 90.6%-95.0% accuracy, 92.9%-96.8% F1 score, 95.4%-98.8% sensitivity, and 90.6%-95.0% precision. The non-COVID viral infection class was relatively more challenging to differentiate from the other 3 classes, but the difference was not substantial. Therefore, the potential clinical use of the ML models is still justified. Similar to the results of the overall model performance (Figure 5), the class-specific performance metrics also had relatively small standard errors, indicating that the training of the models was consistent and robust against randomly selected inputs. Except for a few classes and model performance metrics, the SVM model performed slightly better than the RF and kNN models. The complete class-specific results are shown in Figure 6.
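The micro, macro, and weighted averages, as well as the class-specific one-vs-all metrics reported above, correspond directly to the `average` argument in scikit-learn's metric functions. A minimal sketch with placeholder labels:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# Placeholder labels for one test split (0=healthy, 1=non-COVID viral,
# 2=nonsevere COVID-19, 3=severe COVID-19)
y_true = np.array([0, 1, 2, 3, 2, 1, 0, 3])
y_pred = np.array([0, 1, 2, 2, 2, 1, 0, 3])

print("accuracy:", accuracy_score(y_true, y_pred))  # single aggregate value
for avg in ("micro", "macro", "weighted"):
    print(avg,
          "F1:", f1_score(y_true, y_pred, average=avg),
          "sensitivity:", recall_score(y_true, y_pred, average=avg),
          "precision:", precision_score(y_true, y_pred, average=avg))

# Class-specific (one-vs-all) metrics: average=None gives one value per class
print("per-class F1:", f1_score(y_true, y_pred, average=None))
```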
The complete class-specific performance metrics across the 3 ML models are shown in Multimedia Appendix 8. All 3 ML multinomial classification models, which were based on different computational techniques, had consistently high overall performance (Figure 5, Table S3) and high performance for each specific class (Figure 6, Multimedia Appendix 8). Of the 3 types of ML models developed and evaluated, the SVM model was marginally better than the RF and kNN models. As a result, the ML multinomial classification models were able to accurately differentiate between the 4 classes all at once, provide accurate and detailed class-specific predictions, and act as reliable decision-making tools for clinical diagnostic support and the triaging of patients with suspected COVID-19, who might or might not be infected with a clinically similar type of virus other than SARS-CoV-2.

In addition to the multimodal classification that incorporated all 3 different feature sets (ie, binary clinical, continuous lab testing, and ResNet CNN CT features; Figure 1), we also tested how each specific feature modality performed without feature fusion (ie, as a single modality). By using the 23 clinical features alone, the RF, kNN, and SVM models achieved an average accuracy of 74.5% (SE 0.3%), 73.3% (SE 0.3%), and 75.5% (SE 0.3%) with the testing set, respectively. By using the 10 lab testing features alone, the RF, kNN, and SVM models achieved an average accuracy of 67.7% (SE 0.4%), 56.2% (SE 0.4%), and 59.5% (SE 0.3%) with the testing set, respectively. The overall accuracy of the CNN with CT scan data alone was 90.8% (SE 0.3%) across the 4 classes. With regard to each pair of classes, the CNN was able to accurately differentiate between the severe COVID-19 and noninfected healthy classes with 99.9% (SE<0.1%) accuracy, the non-COVID viral infection and noninfected healthy classes with 99.2% (SE 0.1%) accuracy, the severe COVID-19 and nonsevere COVID-19 classes with 95.4% (SE 0.1%) accuracy, and the nonsevere COVID-19 and noninfected healthy classes with 90.3% (SE 0.2%) accuracy. However, by using CT features alone (ie, without feature late fusion), the CNN could only differentiate between the non-COVID viral infection and nonsevere COVID-19 classes with 84.9% (SE 0.2%) accuracy, and between the non-COVID viral infection and severe COVID-19 classes with 74.2% (SE 0.2%) accuracy in the testing set. Substantial performance boosts were gained by combining input features from the different feature modalities and performing multimodal classification instead of using a single feature modality alone: a 15%-42% increase in prediction accuracy with the testing set was achieved compared to the single-modality models. It should be noted that the RF, SVM, and kNN models are technically distinct ML models, yet the performance differences between them were marginal when the multimodal features were used. Therefore, we concluded that the high performance in COVID-19 classification in this study (Figures 5 and 6) was largely due to multimodal feature late fusion, not to the specific type of ML model.

Gini impurity scores derived from the RF model identified the major contributing factors that differentiated the 4 classes. With regard to the clinical feature modality, the top 3 most influential features were fever, coughing, and old age (ie, >50 years). The forest plots of odds ratios for these features are provided in Figure 2, which shows the exact influence that these features had across classes.
With regard to lab testing features, the top 3 most influential features, in descending order, were high-sensitivity C-reactive protein level, hemoglobin level, and absolute neutrophil count. The distributions of these 3 features across the 4 classes and the results of the multiple comparisons are shown in Figure 3. Although high-sensitivity C-reactive protein level is a known factor for COVID-19 severity and prognosis [42], we showed that it could also differentiate patients with COVID-19 from patients with non-COVID viral pneumonia and healthy individuals. In addition, hemoglobin levels and neutrophil counts emerged as novel features for accurately distinguishing between patients with clinical COVID-19, patients with non-COVID viral pneumonia, and healthy individuals. These results shed light on which clinical and lab testing features are the most critical for identifying COVID-19, which will help guide clinical practice. With regard to the CT features extracted from the CNN, the RF models identified the top 3 influential features as CT features 6, 10, and 1 in the 10-element FC2 layer (Figure 4). Although the clinical interpretation of these CT features was not clear at the time of this study, owing to the black-box nature of DL models such as the ResNet CNN applied here, the features showed promise for accurately differentiating between the multinomial classes all at once via CT scans, instead of training several CNNs for binary classifications between each class pair. Future research might reveal the clinical relevance of these features in a more interpretable way with COVID-19 pathology data.

Principal Findings

In this study, we provided a more holistic perspective on characterizing COVID-19 and accurately differentiating COVID-19, especially nonsevere COVID-19, from other clinically similar viral pneumonias and from noninfection. The human body is an integrated and systemic entity. When the body is infected by pathogens, the clinical consequences can be detected not only with biomedical imaging features (eg, CT scan features), but also with other features, such as lab testing results for blood biochemistry [20,43]. A single feature modality might not reveal the full clinical consequences or provide the best predictive power for COVID-19 detection and classification, but the synergy of multiple modalities exceeds the power of any single modality. Currently, multimodality medical data can be effectively stored, transferred, and exchanged with electronic health record systems. The economic cost of acquiring clinical and lab testing modality data is lower than that of acquiring current confirmatory qRT-PCR data. Availability and readiness are also advantages that these modalities have over qRT-PCR, which currently has a long turnaround time. This study harnessed the power of multimodal medical information for an emerging pandemic, for which confirmatory molecular tests have reliability and availability issues across time and space. This study's analytical framework can be used to prepare for incoming waves of disease epidemics in the future, when clinicians' experience and understanding of the disease may vary substantially. Upon further examination of comprehensive patient symptom data, we believe that the current understanding and definition of asymptomatic COVID-19 may be inadequate.
Of the 214 patients with nonsevere COVID-19, 60 (28%) had no fever (ie, <37°C), 78 (36.4%) did not experience coughing, 141 (65.9%) did not feel chest congestion or pain, and 172 (80.4%) did not report having a sore throat upon admission. Additionally, there were 10 (4.7%) patients with confirmed COVID-19 in the nonsevere COVID-19 class who did not present with any of these common symptoms and could be considered patients with asymptomatic COVID-19. Even after considering headache, muscle pain, and fatigue, there were still 4 (1.9%) patients who did not show symptoms related to typical respiratory diseases. Of these 4 patients, 1 (25%) had diarrhea upon admission. Therefore, using symptom features alone is not sufficient for detecting and differentiating patients with asymptomatic COVID-19. Nevertheless, all asymptomatic patients were successfully detected by our model, and no false negatives were observed. This finding shows the incompleteness of the current definition and understanding of asymptomatic COVID-19, and the potential power that nontraditional analytical tools have for identifying these patients.

Based on this perspective, we developed a comprehensive end-to-end analytical framework that integrated both high-dimensional biomedical imaging data and low-dimensional clinical and lab testing data. CT scans were first processed with DL CNNs. We developed a customized ResNet CNN architecture with 2 FC layers before the final output layer and used the second FC layer as the low-dimensional representation of the original high-dimensional CT data. In other words, the CNN was applied first for dimensionality reduction. The feature fusion of the CT (ie, represented by the FC2 layer), clinical, and lab testing feature modalities demonstrated feasibility and high accuracy in differentiating between the nonsevere COVID-19, severe COVID-19, non-COVID viral pneumonia, and noninfected healthy classes all at once. The consistently high performance across the 3 different types of ML models (ie, the RF, SVM, and kNN models), as well as the substantial performance boost over using a single modality, further demonstrated the power of feature fusion for different biomedical feature modalities. Compared to the accuracy of using any single feature modality alone (roughly 56%-91%, depending on the modality and model), the feature fusion of multimodal biomedical data substantially boosted prediction accuracy (>97%) in the testing set.

We compared the performance of our model, which was based on the multimodal biomedical data of 689 participants, against the performance of state-of-the-art benchmarks in COVID-19 classification studies. A DL study that involved thoracic CT scans for 87 participants claimed to have >99% accuracy [37], and another study with 200 participants claimed 86%-99% accuracy in differentiating between individuals with and without COVID-19 [36]. Another study reported a 95% area under the curve for differentiating between COVID-19 and other community-acquired pneumonia diseases in 3322 participants [39]. Furthermore, a 92% area under the curve was achieved in a study of 905 participants with and without COVID-19 that used multimodal CT, clinical, and lab testing information [44]. A study that used CT scans to differentiate between 3 multinomial classes (ie, COVID-19 without clinical state information, non-COVID viral pneumonia, and healthy) achieved 89%-96% accuracy based on a total of 230 participants [38].
In addition, professionally trained human radiologists have achieved a 60%-83% accuracy in differentiating COVID-19 from other types of community-acquired pneumonia [45]. Therefore, the performance of our model is on par with, or superior to, the performance of these benchmark models and exceeds the performance of human radiologists. Moreover, previous studies have generally focused on differentiating patients with COVID-19 from individuals without COVID-19 or from patients with other types of pneumonia; in other words, the current COVID-19 classification models are mostly binary classifiers. Our study not only distinguished patients with COVID-19 from healthy individuals, but also addressed the more important clinical issue of differentiating COVID-19 from other viral infections, and it further distinguished between different COVID-19 clinical states (ie, severe vs nonsevere). Therefore, our study provides a novel and effective breakthrough for clinical applications, not just incremental improvements over existing ML models.

The success of this study sheds light on many other disease systems that use multimodal biomedical data inputs. Specifically, the feature fusion of high- and low-dimensional biomedical data modalities can be extended to further feature modalities, such as individual-level high-dimensional "-omics" data. For example, genome-wide association studies of individual single nucleotide polymorphisms and COVID-19 susceptibility have revealed several target loci that are involved in COVID-19 pathology. Following a similar approach, one could first carry out dimensionality reduction of the "-omics" data and then perform data fusion with other low-dimensional modalities [46-48]. With regard to classification, this study adopted a hybrid of DL (ie, CNN) and ML (ie, RF, SVM, and kNN) models via feature late fusion. By using various data-driven methods, we avoided the potential cause-effect pitfall and focused directly on the more important clinical question. For instance, many comorbidities, such as diabetes [49,50] and cardiovascular diseases [51,52], are strongly associated with the occurrence of severe COVID-19. It is still unclear whether diabetes or reduced kidney function causes severe COVID-19, whether SARS-CoV-2 infection worsens existing diabetes, or whether diabetes and COVID-19 mutually influence each other and result in undesirable clinical prognoses. Future studies can use data-driven methods to further investigate the causality of comorbidities and COVID-19.

There are some limitations in this study and potential improvements for future research. For instance, to perform multinomial classification across the 4 classes, we had to discard many features, especially those in the lab testing modality. The non-COVID viral pneumonia class used a different electronic health record system that collected different lab testing features than the system used for the participants in Wuhan (ie, the severe COVID-19, nonsevere COVID-19, and noninfected healthy classes). Many lab testing features, such as high-sensitivity troponin I level, D-dimer level, and lactate dehydrogenase level, were able to accurately differentiate between severe and nonsevere COVID-19 in our preliminary study. However, these features were not present, or were largely missing, in the non-COVID viral infection class.
Eventually, only 10 lab testing features were included, which is a small number compared to the 20-30 features usually available in different electronic health record systems. This is probably the reason why the lab testing feature modality alone was not able to provide accurate classifications (ie, the highest accuracy achieved was 67.7%, with the RF model) across all 4 classes in this study. In addition, although we had a reasonably large participant pool of 689 individuals, more participants are needed to further validate the findings of this study. Another potential practical pitfall is that not all feature modalities may be readily available at the same time for feature fusion and multimodal classification. Among the single-modality features, CT had the best performance in generating accurate predictions. However, CT is usually performed in the radiology department, lab testing may be outsourced, and obtaining lab test results takes time. Consequently, there might be lags in data availability among the different feature modalities. We believe that when multimodal features are not available all at once, single-modality features can be used to perform first-round triaging, while multimodal features are needed when accuracy is a must. It should also be noted that although the participants in this study came from different health care facilities, the majority of them were of Chinese Han ethnicity. The biomedical features of the different COVID-19 and non-COVID classes may differ in people of other races and ethnicities, or in people with other confounding factors. Cross-validation of these findings in other ethnic groups and with larger sample sizes is needed in future research. This study used a common CNN architecture (ie, a ResNet). The 10 CT features extracted from the FC2 layer of the ResNet were used to match the dimensionality of the other 2 low-dimensional feature modalities. Future research on different disease systems can explore and compare other architectures that use different biomedical imaging data (eg, CT, X-ray, and histology data). The actual dimensionality of the FC2 layer can also be optimized to deliver better performance. Finally, this study presented the results of individual classification models; to achieve even higher performance, the ensembling of multiple models can be explored in future studies.

Conclusion

In summary, different biomedical information across different modalities, such as clinical information, lab testing results, and CT scans, works synergistically to reveal the multifaceted nature of COVID-19 pathology. Our ML and DL models provided a feasible technical method for working directly with multimodal biomedical data and differentiating between patients with severe COVID-19, patients with nonsevere COVID-19, patients with non-COVID viral infection, and noninfected healthy individuals at the same time, with >97% accuracy.
2020-11-12T09:06:26.003Z
2020-11-05T00:00:00.000
{ "year": 2021, "sha1": "b25d33f1d81f247239ef4b643d448010ffdf6371", "oa_license": "CCBY", "oa_url": "https://www.jmir.org/2021/1/e25535/PDF", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bf0c8b7aef6ca4bb9e1ef9bde1398a80de797844", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
246385011
pes2o/s2orc
v3-fos-license
Salviae miltiorrhizae Liguspyragine Hydrochloride and Glucose Injection Protects against Myocardial Ischemia-Reperfusion Injury and Heart Failure

Purpose. Myocardial ischemia-reperfusion (MIR) injury is a common stimulus for cardiac diseases like cardiac arrhythmias and heart failure and may cause high mortality rates. Salviae miltiorrhizae liguspyragine hydrochloride and glucose injection (SGI) has been widely used to treat myocardial and cerebral infarctions in China even though its pharmacological mechanisms are not completely clear. Methods. The protective effect and mechanism of SGI on MIR injury and heart failure were investigated through an H9c2 cell model induced by hypoxia/reoxygenation (H/R) and rapamycin, zebrafish models induced by H/R and isoprenaline, and a rat MIR model. Results. SGI significantly reduced the infarct size and alleviated the impairment of cardiac functions in the MIR rat model and the H/R zebrafish model and promoted the viability of cardiomyocyte-like H9c2 cells under the H/R condition. Consistently, SGI significantly downregulated the serum levels of biomarkers of cardiac damage and attenuated the oxidative damage in the MIR and H/R models. We also found that SGI downregulated the increased autophagy level in those MIR and H/R models; since autophagy can contribute to the injurious effects of ischemia-reperfusion in the heart, this suggests that SGI may alleviate MIR injury via regulating the autophagy pathway. In addition, we demonstrated that SGI also played a protective role in the isoproterenol-induced zebrafish heart failure model, in which SGI significantly downregulated the increased autophagy and SP1/GATA4 pathways. Conclusion. SGI may exert anti-MIR injury and anti-heart failure effects by inhibiting activated autophagy and the SP1/GATA4 pathway.

Introduction

Cardiovascular diseases, mostly ischemic heart disease and stroke, are the leading cause of global mortality [1]. Myocardial ischemia-reperfusion (MIR) injury plays a critical role in the pathogenesis of ischemic heart disease [2,3]. Myocardial tissue relies on blood perfusion to maintain normal function, metabolism, and morphological structure, and reduced coronary blood supply to the myocardium causes an insufficient supply of oxygen and nutrients, resulting in myocardial ischemia (MI) [3,4]. Although restoration of blood flow to an ischemic heart prevents irreversible myocardial tissue injury, reperfusion alone often causes more tissue damage than that caused by ischemia, and this reperfusion-induced injury is known as MIR injury [5]. MIR injury is characterized by structural, functional, and biochemical changes in the myocardial tissue and results in arrhythmia, infarct size enlargement, and persistent ventricular systolic dysfunction [6]. Many pathophysiological processes, including ion accumulation, mitochondrial membrane damage, the formation of reactive oxygen species, disturbances in nitric oxide metabolism, endothelial dysfunction, platelet aggregation, immune activation, apoptosis, and autophagy, are involved in MIR injury [7]. Importantly, autophagy plays differential roles in MI injury and reperfusion injury, respectively [8]. Activation of autophagy in cardiomyocytes has been shown to protect against ischemic damage by providing energy substrates, removing damaged mitochondria, and reducing oxidative stress [9,10].
Autophagy activation during cardiac ischemia is triggered by activation of the AMPK pathway and inhibition of the Rheb/mTOR pathway, and disruption of these mechanisms impairs autophagy activation and exacerbates myocardial injury [11-14]. Additionally, hypoxia-induced autophagy has been shown to promote cardiomyocyte survival through the PI3K/AKT/mTOR pathway [15]. In contrast, autophagy is massively activated during reperfusion in a Beclin1-dependent but AMPK-independent manner [11], and cardiac damage during ischemia-reperfusion can be protected against by preventing excessive autophagy [16,17]. Heart failure is characterized by ventricular systolic or diastolic dysfunction associated with a high rate of mortality and morbidity [18], and the most common cause of heart failure is MI [19]. Before heart failure, patients usually show a myocardial hypertrophy phenotype, which is a sign of cardiac remodeling. Increased autophagy activation appears in a variety of animal models of heart failure [20]. Atrial natriuretic peptide (ANP) and brain natriuretic peptide (BNP) are secreted by damaged cardiomyocytes and are widely used as important indicators for the clinical diagnosis of heart failure and cardiac dysfunction [21,22]. In terms of the pharmacological therapy of MIR injury, more than 20 drugs have shown some clinical efficacy against MIR injury by targeting one or more pathophysiological processes, including reactive oxygen species production, calcium increase, and inflammation [23]. Myocardial ischemia and heart failure are not entirely distinct conditions; they are closely related, and long-term myocardial ischemia can lead to heart failure. Clinically, ischemic heart disease is one of the main causes of heart failure, and the treatment of myocardial ischemia is closely related to the treatment of heart failure. However, a universally accepted treatment is still lacking, and it is necessary to continue the discovery of new therapeutic agents applied during reperfusion to protect the heart against ischemia-reperfusion damage. Salviae miltiorrhizae liguspyragine hydrochloride and glucose injection (SGI) is an infusion dosage form consisting of Salvia miltiorrhiza extract and ligustrazine, with main active substances represented by tanshinol and ligustrazine hydrochloride [24]. SGI has been shown to have a protective effect in a rat model of acute myocardial infarction [25], and SGI was also reported to significantly improve the neurological deficit score in patients with acute cerebral infarction [26]. However, it is still unknown whether SGI plays therapeutic roles in other cardiovascular diseases like MIR injury and heart failure. In the present study, we demonstrated a role for SGI in the prevention and alleviation of MIR injury and heart failure, and mechanistic experiments showed that SGI may exert these effects by limiting the activation of autophagy-related signaling pathways and the natriuretic peptide pathway.

Determination of the Zebrafish Cardiac Function. To detect the heart function of zebrafish, zebrafish embryos at 4 days post fertilization were placed at room temperature, and the heartbeats over 15 s were counted under a microscope and multiplied by 4 to obtain the beats per minute [27]. For the imaging, we placed the zebrafish embryo in a Petri dish and fixed it with 1% low-melting agar to make it lie on its side. An inverted fluorescence microscope (Leica, Germany) was used to image continuously for 5 seconds, generating 58 frames, and the end-diastolic and end-systolic frames were selected and processed with ImageJ to quantify the length (a) and width (b) of the ventricle. We calculated the zebrafish's stroke volume (SV), ejection fraction (EF), fractional area change (FAC), and cardiac output (CO). See the supplementary materials for the specific calculation formulas.
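Because the exact formulas are deferred to the supplementary materials, the sketch below illustrates one convention commonly used in zebrafish work, modeling the ventricle as a prolate spheroid from the measured length a and width b; this modeling assumption, like the helper implementing the 2^-ΔΔCt normalization used in the RT-PCR analysis of the next subsection, is an illustration rather than the authors' actual calculation.

```python
import math

def ventricle_volume(a: float, b: float) -> float:
    """Ventricle volume from length a and width b, assuming a prolate spheroid."""
    return math.pi / 6.0 * a * b * b

def cardiac_indices(a_dia, b_dia, a_sys, b_sys, heart_rate_bpm):
    """SV, EF, FAC, and CO from end-diastolic and end-systolic measurements."""
    v_d = ventricle_volume(a_dia, b_dia)    # end-diastolic volume
    v_s = ventricle_volume(a_sys, b_sys)    # end-systolic volume
    sv = v_d - v_s                          # stroke volume
    ef = sv / v_d                           # ejection fraction
    area = lambda a, b: math.pi / 4.0 * a * b   # ellipse area of the 2D view
    fac = (area(a_dia, b_dia) - area(a_sys, b_sys)) / area(a_dia, b_dia)
    co = sv * heart_rate_bpm                # cardiac output per minute
    return sv, ef, fac, co

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative gene expression by the 2^-ΔΔCt method (treated vs control)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)
```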
2.4. Quantitative RT-PCR. Zebrafish RNA extraction, reverse transcription, and quantitative amplification were performed following the instructions of kits obtained from Nanjing Vazyme Biotech Co., Ltd. (catalog numbers RC101, R223-01, and Q711-02). The primer sequences are shown in the supplementary material (Table S2). The data were processed using the 2^-ΔΔCt method for the calculation of relative gene expression [28-31].

2.5. Western Blot Analysis. Total proteins were extracted from heart tissues, and protein concentrations were determined by the BCA protein assay. Samples containing 30 μg of protein were separated by 8%, 10%, or 15% sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred to polyvinylidene difluoride membranes. Thereafter, the membranes were blocked in 5% (w/v) milk solution for 2 h and then incubated with primary antibodies overnight at 4°C. See the supplementary materials for other detailed procedures.

2.6. Immunofluorescent Analysis. Immunofluorescence was used to detect Beclin1 and LC-3 in myocardial tissue. The left ventricular tissues were frozen and sectioned, rewarmed at room temperature, air-dried, and fixed with paraformaldehyde for 10 min. After the paraformaldehyde was completely dried, the tissues were washed 3 times with PBS on a decolorizing shaker, 5 min each time. See the supplementary materials for other detailed procedures.

Observation of Myocardial Tissue Structure by Transmission Electron Microscopy. Myocardial tissue samples were collected and fixed with 2.5% glutaraldehyde solution at 4°C overnight and then processed as described in the supplementary material.

Statistical Analysis. The experimental data were processed with SPSS 20.0 software, and one-way ANOVA was used to compare and analyze significant differences between the groups; P < 0.05 indicates a statistically significant difference. The experimental results are expressed as means ± SEMs and were plotted and analyzed using GraphPad Prism 6.

SGI Reduces Myocardial Infarction Damage in a Rat MIR Model. To study the role of SGI treatment in MIR injury, we established a rat MIR injury model, in which the normal and infarct areas were colored dark blue and red, respectively. Similar to the Tan IIA treatment group, which has been widely used clinically for treating coronary heart diseases and was thus selected as the positive control, SGI treatments at multiple dosages all ameliorated ischemia-reperfusion injury to the hearts by lessening the ischemia-reperfusion-induced changes in infarction areas (Figures 1(a) and 1(b)). In addition, we also measured the heart rate and S-T segment among the treatment groups; we observed a decrease in heart rate under ischemic conditions, although there was no significant difference between the SGI groups and the other groups under the normal, ischemic, and reperfusion conditions (Figures 1(c) and S1a).
Compared to the model group, the S-T segment of the SGI groups was significantly lower under both the reperfusion and ischemia conditions (Figures 1(d) and S1b). In addition, an electrocardiogram was monitored to predict the severity of ischemia-induced myocardial damage; the S-T segments of the SGI group and Tan IIA group were elevated during ischemia and decreased during reperfusion (Figure S1c). We further performed transmission electron microscopy analysis of the fine details of the myocardial structure. In the model group, the transmission electron microscopy results showed a disordered arrangement of myocardial fibers, dissolution of myofilaments, swelling of the mitochondria, and the presence of cytoplasmic vesicles that may represent autophagosomes (Figure 1(e)). In the SGI groups, there was a significant improvement in myocardial fiber alignment and mitochondrial shape. Taken together, the above results indicated that SGI protects against MIR injury. As a result of myocardial damage, many proteins and enzymes are released from the heart tissue into the blood, usually including lactate dehydrogenase (LDH), BNP, cardiac troponin I (cTn-I), and the creatine kinase-MB isoenzyme (CK-MB) [32]. The SGI treatments attenuated the upregulation of serum BNP, CK-MB, and cTn-I concentrations and of LDH activity stimulated by ischemia-reperfusion (Figure S2a-d, g). Considering that oxidative stress contributes to the pathogenesis of MIR injury, we measured the changes in oxidative stress-associated marker molecules, including malondialdehyde (MDA) and superoxide dismutase (SOD), upon SGI treatment. The results showed that SGI downregulated the MDA level and upregulated the SOD activity both in the serum and in the myocardial tissue (Figure S2e-f, h-i), suggesting that SGI attenuated the oxidative stress under the MIR condition [33-35].

SGI Improves Cardiac Function in Zebrafish Induced by H/R. A zebrafish larvae hypoxia/reoxygenation (H/R) model has been established to simulate MIR [36]. Here, we utilized this zebrafish H/R model to further study the role of SGI in MIR injury. First, we titrated the SGI concentration in fish by analyzing its effects on the survival, heart rate, and gross morphology of zebrafish embryos. SGI at 0.025 mg/ml showed no effect on those characteristics and was selected for follow-up studies (Figure S3a-c). Under H/R stimulation, the heart rate of zebrafish embryos decreased significantly, as did SV, EF, FAC, and CO; SGI treatment restored these parameters to levels close to those of control fish without H/R stimulation (Figure S4a-e). These results indicated that SGI can improve the cardiac impairment of zebrafish induced by H/R.

The Protective Effect of SGI on H9c2 Cardiomyocytes Induced by H/R. After demonstrating a protective role of SGI in MIR injury with the rat and zebrafish models, we further consolidated those findings by using a cardiomyocyte H/R model. The survival rate of H9c2 cardiomyocytes decreased under the H/R condition (Figure S5b). As shown in Figure S5c, SGI in the range of 0.48-480 μM significantly increased the survival rate of H9c2 cardiomyocytes under the H/R condition. In addition, SGI significantly increased the level of SOD (Figure S6a) and reduced the levels of LDH and MDA (Figure S6b, c) in the H9c2 cardiomyocytes, which is consistent with the above observation that SGI alleviated the oxidative stress in the rat MIR model.

SGI Attenuates the Activation of Autophagy under MIR Condition.
Autophagy is massively activated during cardiac reperfusion, and therapeutic targeting of autophagy has shown beneficial effects under MIR conditions. We therefore investigated whether SGI regulated the autophagy status in our rat MIR model. The results showed that the ATG7, ATG5, Beclin1, and LC-3II/I expression levels in the model group were significantly increased, and SGI alleviated this overactivation of autophagy (Figure 2(a)). The regulatory effects of SGI on autophagy were further studied by detecting the changes in the expression or activation of autophagy regulators, including JNK1, Bcl-2, mTOR, and Akt, upon SGI treatment. We found that SGI treatment downregulated the p-JNK1 level but did not significantly affect the Bcl-2 level compared to the model group (Figure 2(b)). In addition, SGI inhibited p-AKT and upregulated the p-mTOR level (Figure 2(c)). These results suggested that SGI may alleviate MIR injury via downregulating the activated autophagy under MIR conditions. Beclin1 and LC-3 are established markers that indicate the degree of autophagy. Immunofluorescence staining was used to detect the localization of Beclin1 and LC-3 in the rat myocardial tissues, and the results showed that the expression levels of Beclin1 and LC-3 were significantly higher in the model group, while SGI significantly reduced the expression of Beclin1 and LC-3 (Figure S7a-b). This result further confirmed the inhibitory role of SGI in autophagy under MIR conditions. Rapamycin (Rapa), an autophagy inducer, reduced the survival rate of H9c2 cells in a dosage-dependent manner (Figure 2(d)), and SGI significantly improved the survival rate of Rapa-treated H9c2 cells (Figure 2(e)). Consistently, Rapa significantly deformed H9c2 cells, and both 3-MA (an autophagy inhibitor) and SGI significantly alleviated this deformation (Figure S7c).

SGI Improves Cardiac Function in Zebrafish Induced by ISO. ISO treatment in rats and zebrafish represents a well-established heart failure model for investigating novel mechanisms and testing new therapeutic strategies [37,38]. In this study, we investigated whether SGI plays a protective role in ISO-induced heart failure. The ISO-induced myocardial injury resulted in obvious decreases in heart rate, SV, EF, FAC, and CO, accompanied by the formation of pericardial edema (Figures 3(a) and 3(b)). Upon SGI treatment, the zebrafish pericardial edema was obviously relieved, and the heart rate, SV, EF, FAC, and CO were all remarkably improved (Figures 3(c)-3(g)); these results indicated that SGI could improve the cardiac impairment of zebrafish induced by ISO. Interestingly, we observed that ISO induced a shift of the relative spatial position of the atrium and ventricle from a top-bottom to a left-right pattern (Figure S8a), which may be caused by the ISO-induced pericardial edema, and SGI restored the relative position of the ventricle and atrium to normal (Figure S8a). In addition, SGI significantly inhibited the apoptosis of zebrafish cardiomyocytes induced by ISO treatment (Figure S8b).

SGI Alleviates ISO-Induced Myocardial Injury in Zebrafish through Multiple Pathways. ISO has previously been shown to induce cardiac hypertrophy by increasing the expression of SP1, GATA4, ANP, BNP, and Prkg1a.
In this study, we found that SGI significantly reduced the expression of these genes (Figures 4(a)-4(e)), suggesting that SGI can reduce the expression of ANP and BNP by inhibiting the SP1/GATA4 pathway, thereby playing a protective role in the heart. Furthermore, ISO was reported to activate autophagy by inhibiting Akt/mTOR [39], and SGI treatment significantly downregulated the mRNA expression of ATG5, ATG7, LC-3, and Beclin1 (Figures 4(f)-4(i)), suggesting that SGI can play a protective role in the zebrafish heart by inhibiting autophagy. In addition, SGI significantly reduced the expression of β1 adrenergic receptor (β1-AR), tyrosine hydroxylase 1 (Th1), and tyrosine hydroxylase 2 (Th2) mRNA (Figures 4(j)-4(l)), suggesting that SGI can exert an anti-myocardial injury effect by inhibiting the binding of ISO to β1-AR, inhibiting the expression of tyrosine hydroxylase, and reducing the production of catecholamines.

Discussion

SGI is being used clinically for occlusive cerebrovascular disease and other ischemic vascular diseases in China. Studies have shown that SGI has a protective effect on acute myocardial infarction induced by ISO in rats [40]. However, it was still unknown whether SGI plays therapeutic roles in other cardiovascular diseases, including MIR injury and heart failure, and the underlying molecular mechanism of SGI acting on myocardial diseases remained unclear. Here, we demonstrated that SGI protected against MIR injury and heart failure using in vivo rat and zebrafish MIR and heart failure models, and our preliminary mechanistic studies showed that SGI may exert these effects by limiting the activation of autophagy-related signaling pathways and the natriuretic peptide pathway. When the heart is ischemic or hypoxic, aerobic oxidation of sugars is blocked, anaerobic glycolysis is enhanced, and membrane permeability is elevated, leading to the release of various enzymes into the bloodstream and ultimately resulting in elevated levels of CK-MB, LDH, cTn-I, and BNP in serum or myocardial tissue [41]. The present study found that SGI could significantly reduce serum CK-MB, LDH, cTn-I, and BNP levels; the NBT staining results also showed that the area of myocardial infarction in the SGI group was significantly reduced. This indicates that SGI has a protective effect against MIR injury in rats. Autophagy plays an important role in the heart, especially during MIR injury. In the reperfusion phase, autophagy is overactivated in injured cells and can even lead to cell death, thus negatively affecting the heart [42]. mTOR acts as a central regulator of physiology and pathology in the cardiovascular system by integrating intracellular and extracellular signals [43]. Both the mTOR-associated AMPK-mTOR pathway and the PI3K-Akt-mTOR pathway play important roles in MIR injury. During MIR injury, the ATP stored in cardiomyocytes is rapidly depleted, and the increase in the adenosine monophosphate/adenosine triphosphate ratio rapidly activates the energy sensor AMPK. Activated AMPK phosphorylates TSC2 [43], which in turn inhibits mTOR. Beclin1 was the first identified mammalian autophagy protein. After MIR injury, both Beclin1 protein levels and autophagy levels are upregulated [44]. The levels of autophagy and apoptosis caused by MIR were reduced after inhibition of Beclin1 expression by small interfering RNA technology in vitro [45].
Immunofluorescence detection of Beclin1 and LC-3 performed in this study revealed high expression of Beclin1 and LC-3 in the myocardial tissue. Western blot analysis showed that SGI significantly downregulated the expression of ATG7, ATG5, and Beclin1 proteins. Thus, our results indicate that autophagy occurred in rat myocardial tissue after MIR injury; the LC-3II/I ratio was significantly increased, suggesting that autophagy was overactivated. Simultaneously, we also found that SGI significantly upregulated the expression of mTOR and p-mTOR proteins and significantly downregulated the expression of p-JNK1 and p-AKT proteins. These findings suggest that SGI may exert an anti-MIR injury effect in rats by downregulating autophagy. ANP and BNP are closely related to MIR, and ANP was shown to improve cardiac function by promoting the release of cGMP in the isolated hypoxia-reperfusion rat heart [46]. The ANP/PKG signaling pathway can regulate the KATP channel of ventricular cardiomyocytes [47]. ANP-modified adenosine oleate precursor drugs have a better therapeutic effect on acute myocardial ischemia in rats [48]. In this study, SGI significantly inhibited the expression of ANP and BNP mRNA and significantly reduced the expression of the upstream SP1 and GATA4 mRNA and the downstream Prkg1a mRNA, suggesting that SGI can play a protective role in the heart by inhibiting ANP and BNP. Both ISO and catecholamines can increase myocardial contractility, heart rate, and oxygen consumption; however, a long-term increase in myocardial contractility can lead to myocardial injury. Therefore, reducing the production of ISO and catecholamines or inhibiting their binding to their receptors can effectively treat myocardial injury. In this study, SGI significantly reduced the levels of β1-AR and TH mRNA, suggesting that SGI could reduce the binding of ISO to its receptor and reduce the generation of catecholamines by inhibiting TH expression.

Conclusions

SGI has a protective effect on myocardial cell injury induced by H/R and Rapa. In addition, it can improve the cardiac function of zebrafish with H/R- and ISO-induced injury, and the mechanism of action is related to inhibiting the expression of ANP and BNP, alleviating autophagy, reducing the binding between ISO and its receptors, limiting the expression of TH, and reducing the generation of catecholamines. Furthermore, SGI protects cardiomyocytes by reducing oxidative stress and reducing the excessive release of CK and LDH. Meanwhile, the expression of the Beclin1, ATG5, ATG7, and LC-3 autophagy proteins was downregulated through the p-JNK1-Beclin1 and p-AKT-mTOR/p-mTOR-Beclin1 signaling pathways, rescuing cardiomyocytes from injury caused by excessive autophagy and exerting an anti-MIR injury effect.

Data Availability

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Ethical Approval

The experiments were approved by the Laboratory Animal Ethics Committee of Jinan University (No. 201891004) and conducted following the guidelines of the National Institutes of Health.

Conflicts of Interest

The authors declare no competing interests.

Table S1 shows the information on the reagents used in the experiments. Table S2 shows the zebrafish gene primer sequences used for RT-PCR. (7) Western blot was used to detect protein expression levels in rat myocardial tissue; the specific operation steps are described.
(8) The expression of LC-3 and Beclin1 proteins in rat myocardium was detected by immunofluorescence. (9) Transmission electron microscopy (TEM) was used to observe the ultrastructure of rat myocardial tissue. (Supplementary Materials)
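The mRNA fold changes discussed above (e.g., Figure 4) come from quantitative RT-PCR. As a minimal sketch, the standard 2^-ΔΔCt calculation with a reference gene is shown below; the gene names and Ct values are hypothetical and illustrative only, not the study's raw data:

```python
# Relative fold change by the 2^-ddCt method (a minimal sketch; Ct values
# below are hypothetical, with a housekeeping gene such as beta-actin as
# the reference).
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # dCt in treated group
    d_ct_control = ct_target_control - ct_ref_control   # dCt in control group
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Cts for Beclin1 in SGI-treated vs. ISO-only zebrafish:
print(fold_change(26.5, 17.0, 24.8, 17.1))   # ~0.29, i.e., downregulation
```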
2022-01-29T16:02:29.047Z
2022-01-27T00:00:00.000
{ "year": 2022, "sha1": "eff422474e3754d7dfb81048fcc90d04bf8195e1", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/cmmm/2022/7809485.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a1deaf9861c7a05280981d5e7d734805845df137", "s2fieldsofstudy": [ "Biology", "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
125277039
pes2o/s2orc
v3-fos-license
Optimization of Amplitude and Frequency Modulated Magnetic Field Parameters in a Square Mold Wall

The possibility of optimizing the parameters of an amplitude- and frequency-modulated rotating magnetic field (AFM RMF) acting on the copper wall of a mold with a square cross-section, with the aim of increasing the efficiency of the electromagnetic impact on the mold during continuous casting of a steel billet, has been theoretically substantiated.

Introduction

To improve continuous ingot quality, a variety of methods of acting on the liquid metal are used and constantly modernized [1]. In the present study, we analyze the possibility of optimizing the nonstationary electromagnetic forces excited in the wall of the crystallizer (mold) by means of an amplitude-frequency modulated rotating magnetic field (AFM RMF) [2], which ensures a maximal amplitude of the electromagnetic field (EMF) in it. The authors suggest a method of optimizing the nonstationary electromagnetic forces excited in the mold wall using an AFM RMF. The conditions for resonant energy transfer by part of the Fourier-pack harmonics to the mold wall are determined.

Problem Formulation and Solution

The problem of excitation of oscillations in the wall of a square copper mold by an AFM RMF source is examined with the aim of optimizing this influence, i.e., determining the maximal amplitude of the vector potential. Electrodynamic processes in the mold wall arising under the action of a circularly polarized AFM RMF excited by an ideal inductor are described in rectangular coordinates (x, y, z) by a dimensionless equation for the z-component of the vector potential $a_z$ (with $\mathbf{b} = \operatorname{rot}\mathbf{a}$):

$$\frac{1}{\pi^2}\left(\frac{\partial^2 a_z}{\partial x^2} + \frac{\partial^2 a_z}{\partial y^2}\right) - \frac{\bar{\omega}}{\pi}\,\frac{\partial a_z}{\partial t} = U(a_z) = 0, \qquad (1)$$

where $\bar{\omega} = \mu_0 \sigma_m \omega_0 x_0^2/\pi^2$ is the dimensionless frequency of the EMF in a mold wall with specific electrical conductivity $\sigma_m$, $x_0$ is half of the external mold side length, $\mu_0$ is the magnetic permeability of vacuum, $\omega_0$ is the carrier angular frequency of the AFM RMF, and $t$ is dimensionless time. Boundary conditions for Equation (1) are determined in the first approximation by the magnetic induction $b$ of the RMF excited by an ideal inductor in whose circular bore the mold is located (Fig. 1). In rectangular Cartesian coordinates these conditions take the form (2), where $\langle\delta\rangle$ is the averaged mold wall thickness. According to [3], the function of time $\varphi(t)$ can be represented as a superposition of Fourier harmonics (4), where $\chi$ is the depth of amplitude modulation, $\omega_a$ is the dimensionless angular velocity of amplitude modulation, $J_n(\xi)$ is the Bessel function of order $n$, $\Omega_n$ is the dimensionless angular velocity of the $n$-th Fourier harmonic, $\xi = \Delta\omega/\omega_f$ is the frequency modulation index, $\Delta\omega$ is the angular frequency deviation, and $\omega_f$ is the angular frequency of the frequency modulation. To solve problem (1) with boundary conditions (2), we represent the vector potential $a_z$ in the form (5), with $F(x, y)$ a contour integral in the complex variable $x - iy$. Transformation (5) converts the inhomogeneous problem (1) into an inhomogeneous problem posed between $\Gamma_0$ and $\Gamma_1$, the external and internal contours of the mold cross-section (Fig. 1).
We look for the solution of problem (5) by Galerkin's method, in the form of an expansion over sine basis functions $u_k(x)$. Examining the amplitude $A_{Ik}$ in (11) for extrema using the condition $\partial A_{Ik}/\partial \Omega_n = 0$, we obtain

$$\Omega_n^2 + 2P_k\,\frac{P_k\,\mathrm{Int}_4 + \mathrm{Int}_2}{P_k\,\mathrm{Int}_3 + \mathrm{Int}_1}\,\Omega_n - P_k^2 = 0. \qquad (12)$$

Owing to the symmetry of the problem with respect to the variables $x$ and $y$, $\mathrm{Int}_1 = \mathrm{Int}_2$ and $\mathrm{Int}_3 = \mathrm{Int}_4$, so the positive root of Equation (12) is

$$\Omega_{nr} = P_k(\sqrt{2} - 1) = 0.414\,P_k, \qquad (13)$$

from which the resonance frequency $\omega$ of the RMF frequency modulation is obtained. As an example, we present the dependence of the vector potential amplitude $A_{Ik}$ on the harmonic frequency $\Omega_n$ in the copper mold wall with a square cross-section (Fig. 1). Parameters for the calculation: carrier frequency of the current $f_0 = 10$ Hz; billet cross-section 150×150 mm²; $\delta_0 = 12$ mm; $\Omega_0 = 0.015$; dimensionless resonance frequency $\Omega_{nr} = 14.1$. Because the AFM RMF is characterized by a superposition of two Fourier-packs (see Equation (4)), the total amplitude of the resonance harmonics of the vector potential is more than twice as large as when a purely harmonic field is used.

Conclusion

The results of the analysis show that the amplitudes of the resonance values of the vector potential exceed by more than a factor of two the amplitudes of the vector potential corresponding to the harmonic currents used in existing continuous casting machines. Since the electromagnetic body forces are proportional to the square of the vector potential, the force impact grows more than fourfold. This points to the possibility of developing a highly efficient continuous casting technology using amplitude-and-frequency modulation of the rotating magnetic field acting on the copper wall of the mold.
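A quick numerical sanity check of Equations (12)-(13) is straightforward. The sketch below is illustrative only: $P_k$ and the integrals $\mathrm{Int}_{1}$-$\mathrm{Int}_{4}$ are placeholders, since their numerical values are not given in the excerpt, and $x_0$ is taken as half of the 150 mm billet side with the wall thickness neglected:

```python
# Numerical check of Eqs. (12)-(13). With the symmetry Int1 == Int2 and
# Int3 == Int4, the Omega_n coefficient reduces to 2*P_k.
import numpy as np

def resonance_root(P_k, Int1, Int3):
    """Positive root of Omega^2 + 2*P_k*((P_k*Int4 + Int2)/(P_k*Int3 + Int1))*Omega - P_k^2 = 0."""
    coeff = 2.0 * P_k * (P_k * Int3 + Int1) / (P_k * Int3 + Int1)  # == 2*P_k by symmetry
    roots = np.real(np.roots([1.0, coeff, -P_k**2]))
    return roots[roots > 0][0]

P_k = 34.0                                    # illustrative value only
print(resonance_root(P_k, 1.0, 1.0) / P_k)    # -> 0.4142... = sqrt(2) - 1, as in Eq. (13)

# Dimensionless EMF frequency (definition below Eq. (1)) for a copper wall,
# using handbook copper conductivity:
mu0, sigma_m = 4e-7 * np.pi, 5.8e7            # H/m, S/m
f0, x0 = 10.0, 0.075                          # Hz, m
omega_bar = mu0 * sigma_m * (2 * np.pi * f0) * x0**2 / np.pi**2
print(f"omega_bar = {omega_bar:.2f}")         # ~2.6
```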
2019-04-22T13:12:46.493Z
2018-10-13T00:00:00.000
{ "year": 2018, "sha1": "baf3b418d4b2e06e624202126416ba053bcdd00f", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/424/1/012044", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ea2406b73d958a32a4b86df72471034d422ee67c", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
12942360
pes2o/s2orc
v3-fos-license
PAX3 and ETS1 synergistically activate MET expression in melanoma cells

Melanoma is a highly aggressive disease that is difficult to treat due to rapid tumor growth, apoptotic resistance, and high metastatic potential. The MET tyrosine kinase receptor promotes many of these cellular processes, and while MET is often overexpressed in melanoma, the mechanism driving this overexpression is unknown. Since the MET gene is rarely mutated or amplified in melanoma, MET overexpression may be driven by increased activation through promoter elements. In this report, we find that the transcription factors PAX3 and ETS1 directly interact to synergistically activate MET expression. Inhibition of PAX3 and ETS1 expression in melanoma cells leads to a significant reduction of MET receptor levels. The 300 bp 5′ proximal MET promoter contains a PAX3 response element and two ETS1 consensus motifs. While ETS1 can moderately activate both of these sites without cofactors, robust MET promoter activation through the first site is PAX-dependent and requires the presence of PAX3, while the second site is PAX-independent. The induction of MET by ETS1 via this second site is enhanced by HGF-dependent ETS1 activation, whereby MET indirectly promotes its own expression. We further find that expression of a dominant-negative ETS1 reduces the ability of melanoma cells to grow both in culture and in vivo. Thus, we discover a pathway in which ETS1 advances melanoma through the expression of MET via PAX-dependent and PAX-independent mechanisms.

Introduction

ETS1 is a member of the ETS family of transcription factors, defined by the presence of a winged helix-turn-helix DNA binding domain (1). Expression of ETS1 is absent or low in melanocytes of normal skin or within simple nevus structures (lentigo simplex); however, ETS1 is overexpressed in melanoma in situ and in invasive and metastatic primary tissues, as well as in melanoma cell lines (2,3). While the role of ETS1 in melanoma is unclear, its major functionality likely lies in transcriptional regulation. Evidence supports that ETS1 promotes cell survival, tumor progression, and invasion. ETS1 may act as either a pro- or anti-apoptotic factor depending on the cell type. In melanoma, ETS1 plays an anti-apoptotic role, at least partially due to upregulation of MCL1 (4). In terms of tumor invasion and progression, inhibition of ETS1 leads to a decrease in the expression of uPA, MMP1, MMP3, and integrin-β3 (3). In addition, ETS1 directly activates the integrin-αv promoter (5). There are several lines of evidence supporting that ETS1 is upstream of MET, a receptor tyrosine kinase that promotes melanoma cell growth and survival (6)(7)(8). An increase in ETS1 protein levels raises MET levels, while inhibition of ETS1 decreases MET receptor expression (9)(10)(11)(12). In addition, in esophageal cancer, levels of MET and ETS1 protein correlate significantly (13). While in silico studies predict that ETS1 acts directly upstream of the MET promoter (9), this has not been proven definitively through experimentation in any cell type. We previously identified the transcription factor PAX3 as an upstream regulator of MET in melanoma (14). During normal melanocyte development, PAX3 is necessary for the regulation of genes involved in cell-type specification while maintaining an undifferentiated state, proliferation, and migration (reviewed in (15)). These characteristics are mirrored in melanoma, where our group and others find PAX3 expression (16)(17)(18)(19).
Along with MET, PAX3 mediates its cellular effects in melanoma through the regulation of downstream targets such as BRN2 and TBX2 (20,21). However, PAX3 is a weak transcription factor on its own, and it often recruits other factors to synergistically regulate gene expression. Here, we discover a pathway for promoting MET receptor expression by the transcription factors ETS1 and PAX3. We find that both transcription factors directly interact and synergistically drive MET expression by binding to promoter enhancer elements. The MET promoter contains two ETS1 sites, and activation through these two elements is enhanced by different mechanisms that are either PAX3- or HGF-dependent. Our data support a model for an oncogenic pathway in which PAX3 and ETS1 drive MET expression, and this pathway is further driven in a feed-forward manner through the ligand for MET, HGF.

PAX3, ETS1, and MET are expressed in melanoma cell lines and tumors

To determine the presence of PAX3, ETS1, and MET proteins in human melanoma cell lines, a panel of 7 independent lines was analyzed (Figure 1A). All cell lines expressed these three proteins to varying degrees. ETS1 contains a Ras-responsive site at threonine 38 (T38), and phosphorylation of this epitope strongly increases the protein's transcriptional activity (22)(23)(24)(25). The phosphorylation status of T38 in ETS1 was measured in the melanoma cell line panel (Figure 1B). In comparison to CIP controls or samples that were ETS1 negative, phospho-ETS1 (pETS1) levels are considered high for A375, SKMEL5, and SKMEL23 (p<0.0005) and significant for mel537 (p<0.05) (n=3). The pETS1 levels are considered undetectable for mel888 (p=0.051) and SKMEL28 (p=0.234) cells. Expression of PAX3, ETS1, MET, pETS1, and phosphorylated MET (the active form of the MET receptor, pMET) was measured in twenty superficial spreading melanoma primary tissue samples. Representative results for PAX3 and MET, or ETS1 and MET, immunofluorescence and co-expression are shown in Figure 1C-J,M. PAX3 and ETS1 expression is predominantly nuclear, while MET is principally located in the membrane with focal regions of pan-cellular expression. The majority of samples express PAX3 and ETS1. No melanoma samples in the panel were both PAX3 and ETS1 negative. All twenty samples expressed MET receptor. Half of the samples expressed pETS1, while 6/20 samples presented with pMET (Figure 1K-M).

The MET proximal promoter contains potential PAX3 and ETS1 binding sites

The transcriptional start site of the MET gene has been determined, and potential ETS family binding sites have been predicted through in silico analysis (9,26). While ETS family members all bind to a core sequence of GGA(A/T) (27), each factor has longer high-affinity DNA binding sites. High-affinity recognition motifs for ETS1 have been experimentally determined to be C(C/A)GGA(A/T)G(T/C) (28), AC(C/A)GGA(A/T)(G/A) (29), and (G/C)CGGAAGT (30). Combining these studies produces a high-affinity consensus site of C(A/C)GGA(A/T)(G/A), or CMGGAWR. Within a short segment of the MET promoter (297 bp 5′ proximal to the transcriptional start site, and 25 bp of the 5′ UTR), there is one CMGGAWR ETS1 consensus site, located −85 to −79 from the transcriptional start (Figure 2A). While ETS1 binds to CCGGAA with 100-fold higher affinity than to CCGGAG, ETS1 recruitment to CCGGAG is enhanced 1,000-fold when ETS1 is in complex with PAX5 (31,32).
In contrast, converting this CCGGAG ETS/PAX site to an optimum site for independent ETS1 binding (CMGGAWR) reduces the ability of PAX5 to interact with DNA (33,34). A CCGGAG ETS/PAX site is located in the proximal MET promoter at −241 to −236 (reverse strand, Figure 2A). This ETS/PAX site is adjacent to a previously characterized PAX3 site (14,35).

PAX3 and ETS1 activate the proximal MET promoter

PAX3 drives expression of MET in melanoma and in muscle cells (14,35). To test the ability of PAX3 and ETS1 to activate the MET promoter alone or together, a DNA fragment containing 297 base pairs of the MET 5′ proximal promoter and 28 base pairs of 5′ UTR was subcloned into the reporter vector pGL2-Luciferase (Figure 2A, B). Separately, PAX3 or ETS1 drove reporter expression 2.9±0.1-fold and 4.0±0.4-fold, respectively, over levels of reporter alone (Figure 3A, set 1). Together, PAX3 and ETS1 activated the MET promoter synergistically, 17.0±2.8-fold (p<0.0005). To test the importance of the putative PAX and ETS sites, METpm was utilized as a template to create reporter constructs containing single, double, or triple mutations in these sites (Figure 2B). The constructs were transfected into HEK-293T cells in the presence or absence of PAX3 and/or ETS1 expression constructs. Any mutation of the PAX site, alone (Figure 3A, set 2) or in concert with ETS mutations (sets 6, 7, and 8), completely abrogated the ability of PAX3 to drive reporter expression (p<0.005). Mutation of the PAX site did not disrupt the ability of ETS1 to drive reporter expression, but the synergistic activation was abolished. Mutation of either ETS site alone did not block the ability of ETS1 to activate the MET promoter (grey bars: set 3, 3.6±0.8-fold; set 4, 2.5±0.2-fold). Mutation of both sites reduced ETS1 activation significantly (set 5, grey bar, 1.8±0.2-fold, p<0.005). Synergistic activation of the MET promoter by PAX3 and ETS1 was lost when the E1 site was mutated (Figure 3A, sets 3 and 5, black bars, p<0.005) or when the functional PAX site was lost (sets 2, 6, 7, and 8, black bars, p<0.0005). However, if both the PAX and E1 sites were intact (Figure 3A, sets 1 and 4, black bars), PAX3 and ETS1 activated the MET promoter synergistically (p<0.0005). EMSA assays were performed to determine whether these factors bind directly to the MET promoter. With radiolabeled oligonucleotide probes containing the P and E1 sites (Figure 3B), a slower-migrating band was seen in EMSA assays upon the addition of PAX3 protein (Figure 3C, arrows B, C, lane 3) but not with ETS1 alone (lane 5). A slower-migrating band was also seen when ETS1 was added to a probe containing the E2 site (Figure 3B, C, arrows A, C, lane 9). All slower-migrating bands were significantly reduced by the addition of 100-fold excess cold probe (lanes 6, 10). Our findings support that PAX3 and ETS1 bind to and activate the MET promoter through specific enhancer sites.

PAX3 and ETS1 directly interact and promote MET expression in melanoma cells

PAX3 and ETS1 are co-expressed in melanoma and activate the MET promoter more robustly together than either factor alone (Figures 1, 3A). Immunoprecipitation assays were performed to determine whether these proteins interact within melanoma cells. Immunoprecipitation using an anti-ETS1 antibody yielded a band when probed for PAX3 by Western analysis in A375 and mel-624 melanoma cells (Figure 4A, lanes 3, 6). No PAX3 was detected with beads alone or with a non-specific IgG antibody.
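The synergy in Figure 3A can be made concrete with a small calculation. Below is a minimal sketch (not the authors' Prism analysis) of the reporter normalization and a comparison against an additive expectation; the raw luciferase/β-galactosidase readings are hypothetical, chosen only to reproduce the reported means of 2.9-, 4.0-, and 17.0-fold:

```python
# Fold activation over reporter alone, after normalizing luciferase light
# units to the beta-galactosidase internal control (readings hypothetical).
def fold(luc, bgal, luc0, bgal0):
    return (luc / bgal) / (luc0 / bgal0)

luc0, bgal0 = 1000.0, 0.40                    # reporter vector alone
readings = {"PAX3": (2900.0, 0.40), "ETS1": (4100.0, 0.41), "PAX3+ETS1": (17000.0, 0.40)}
folds = {k: fold(l, b, luc0, bgal0) for k, (l, b) in readings.items()}

# Additive expectation for two activators acting on a baseline of 1:
additive = folds["PAX3"] + folds["ETS1"] - 1.0
print(folds, "additive expectation:", round(additive, 1))
# observed ~17-fold >> ~5.9-fold additive => synergistic activation
```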
To determine whether ETS1 and PAX3 directly interact, in vitro immunoprecipitation experiments were performed utilizing recombinant ETS1 and PAX3 proteins. PAX3 and ETS1 directly interacted, with co-precipitation observed only when both proteins were present (Figure 4B, lane 6). We find that PAX3 and ETS1 directly interact in A375 and mel-624 melanoma cells. To determine whether endogenous PAX3 and ETS1 bind to the MET promoter, chromatin immunoprecipitation (ChIP) assays were performed on melanoma cell lysates as well as on cells that do not express MET (3T3 cells). The MET promoter sequence was amplified from DNA precipitated with proteins bound to antibodies specific for PAX3 or ETS1 (Figure 4C, lanes 1, 2, 6, 7) but not with control IgG antibody (lanes 3, 8) in both A375 and mel-624 melanoma cells. These bands were absent when 3T3 cell lysates were utilized. These experiments support that PAX3 and ETS1 are located on the MET promoter in melanoma cells. To determine whether the PAX3 and ETS sites discovered in the MET promoter are active in melanoma cells, the METpm vector or constructs with engineered mutations in the PAX, E1, and E2 sites (Figure 2B) were transfected into A375 and mel-624 melanoma cells (Figure 4D). Mutation of any of the sites resulted in a significant decrease (p<0.005) in reporter activity in both cell lines. Mutation of all three sites resulted in the most dramatic decrease in luciferase activity, to 11.78%±3.98% (A375, p<0.0005) or 19.75%±5.74% (mel-624, p<0.0005) of control. These findings demonstrate that these binding sites within the MET promoter are functional in melanoma cells. Blocking PAX3 and ETS1 expression in melanoma cells leads to a reduction of MET levels. Inhibiting PAX3 alone reduced MET expression in some but not all melanoma cell lines (14). Reducing PAX3 or ETS1 levels individually, by less than half relative to control cells, did not significantly decrease MET levels in mel-624 cells (Figure 4E). Reduction of both PAX3 and ETS1 led to a significant loss (p<0.02) of MET expression, to 30.0%±14.7% of control levels (Figure 4F). These data demonstrate that expression of MET is at least partially dependent on PAX3 and ETS1 in mel-624 cells.

ETS1 synergistically activates the MET promoter with either PAX3 or hepatocyte growth factor (HGF)

ETS1 may be activated by HGF via the RAS signaling pathway (22). This supports a positive feed-forward loop model wherein MET is activated by HGF, which indirectly activates ETS1 and, consequently, MET's own promoter. The ability of HGF to synergistically activate the MET promoter was tested in cells expressing the MET receptor (HEK-293T) and cells lacking this protein (3T3) (Figure 5A). In HEK-293T cells, HGF synergistically activated the MET reporter construct significantly (16.9±3.7-fold, p<0.005), at levels parallel to ETS1/PAX3 activation (Figure 5B). In the MET-deficient 3T3 cells, the ability of ETS1 to activate the MET promoter was similar without (2.2±0.3-fold) or with (2.2±0.7-fold) the addition of HGF (Figure 5C). The addition of HGF did not significantly increase luciferase levels (p=0.48). ETS1 and PAX3 still yielded a significant synergistic activation of the MET promoter in the 3T3 cells (p<0.005). HGF activates ETS1 via a RAS signaling pathway, leading to phosphorylation of threonine 38 (T38) within the ETS1 Pointed domain (Figure 5D) (22)(23)(24)(25). Mutation of T38 blocks the ability of the RAS pathway to phosphorylate ETS1.
HEK-293T cells were transfected with ETS1 or a mutant form of ETS1, ETS1(T38A) (Figure 5D), and these were expressed with or without HGF or PAX3 (Figure 5E). Both versions of ETS1 activated the MET promoter to similar levels (ETS1, 2.8±1.1-fold; ETS1(T38A), 3.1±0.4-fold). Both proteins synergistically activated the MET promoter with PAX3, while only the wild-type protein was able to do so with HGF (all groups p<0.005 except for ETS1(T38A) + HGF, p=0.172). To verify that the ETS1(T38A) protein is not phosphorylated in response to HGF, protein levels of MET, ERK, and ETS1 were measured in transfected HEK-293T cells. While all cell sets responded to HGF through the phosphorylation and activation of MET and ERK, the wild-type ETS1 protein, but not the mutant, was phosphorylated (Figure 5F). Thus, exogenous wild-type ETS1 protein is phosphorylated concurrently with ERK activation upon HGF stimulation and is able to promote MET expression when the MET receptor is present. To determine whether HGF activates endogenous ETS1 protein within melanoma cells, HGF was added to a panel of six melanoma cell lines, and MET and ETS1 protein levels were measured. Levels of phosphorylated ETS1 (pETS1) were significantly higher in 5/6 cell lines after HGF treatment in comparison to untreated cells (Figure 5G, H; p<0.05). SKMEL-28 cells also had an increase in pETS1 levels after HGF treatment, but not to a significant degree (p=0.218). The increase in pETS1 levels corresponded to a parallel rise in pMET levels. To determine whether ETS1 phosphorylation levels are dependent on MEK and ERK activity, HGF-induced melanoma cells were treated with a specific inhibitor of MEK, PD184352. Levels of pETS1 decreased in A375 and mel-624 cells in the presence of PD184352 but not in untreated or mock-treated cells (Figure 5I). These data support a model in which MET receptor activation by HGF activates ETS1 and, consequently, MET's own promoter.

The proximal MET promoter contains two ETS binding sites that function synergistically with PAX3 or through HGF activation

To determine whether the E1 and E2 enhancers were necessary for ETS1-dependent synergistic activation of METpm with PAX3 or HGF, MET promoter reporter and ETS1 expression constructs were transfected into HEK-293T cells in the presence or absence of PAX3 or HGF. Reporter constructs contained intact or mutated E1 and E2 sites (Figures 2 and 6A). ETS1 synergistically activated the METpm vector with the addition of either PAX3 or HGF (Figures 5B and 6A, set 1). A mutation in the E1 site resulted in a loss of synergistic activation with PAX3, but not with HGF treatment (Figure 6A, set 2). Conversely, mutation of the E2 site did not significantly affect ETS1-PAX3 activation of the MET reporter vector, but the robust activation of the reporter after HGF treatment was lost (Figure 6A, set 3). Loss of both the E1 and E2 sites abrogated both the ETS1-PAX3 and the ETS1-HGF activation (Figure 6A, set 4). These data support a model of two distinct ETS binding sites in the MET promoter (Figure 6B). The E1 site, along with an adjacent PAX site, comprises an "ETS-PAX enhancer"; optimum activation of this site requires the presence of both PAX3 and ETS1. The other site, E2, contains the ideal binding sequence for independent ETS1 binding (CCGGAWR). The ability of ETS1 to drive expression from this PAX-independent ETS1 enhancer is heightened by ETS1 phosphorylation as a result of HGF signaling.
Expression of a dominant-negative ETS1 protein inhibits MET induction and melanoma cell growth

To determine whether disruption of normal ETS1 function would affect MET expression and melanoma cell growth, cells were transfected with a dominant-negative ETS1 expression construct (DN-ETS1). A truncated ETS1 protein containing only the C-terminal DNA-binding ETS domain acts as a dominant negative against wild-type activity by competitive DNA binding (Figure 7A) (36)(37)(38). The presence of DN-ETS1 significantly (p<0.005) inhibits activation of MET by PAX3 and ETS1 (Figure 7B). In addition, melanoma cells demonstrated significant inhibition of growth both in cellulo and in vivo (Figure 7C, D, E). A375 and mel-624 melanoma cells transfected with either a control GFP-expressing vector or a dual DN-ETS1/GFP-expressing construct demonstrated significantly (p<0.005) attenuated growth at 48 hours post-transfection when DN-ETS1 was present (Figure 7D). When the transfected A375 cells were transplanted into athymic Nu/Nu mice, all GFP-transfected cells grew tumors by ten days post-transplantation, whereas tumors formed in only 2 of 6 cases when DN-ETS1 was present (Figure 7E). In summary, ETS1 drives expression of MET in melanoma cells, either through a PAX-dependent enhancer or through a feed-forward, PAX-independent mechanism via ETS1 phosphorylation and activation downstream of HGF.

Discussion

While it is evident that MET overexpression drives tumor proliferation, survival, and progression in melanoma, the mechanism for this overexpression has been unknown. Our group and others have previously found PAX3, MITF, and SOX10 upstream of MET (14,39,40). While PAX3 and MITF are able to promote MET expression independently, SOX10 alone was unable to drive expression from the proximal promoter element utilized in this study. However, SOX10 was able to function as a cofactor with either PAX3 or MITF. While PAX3 is able to drive expression of, and bind to, the MET promoter in a number of different melanoma cell lines, inhibition of PAX3 led to a significant reduction of MET in SK-MEL23 and SK-MEL28 but not in A375 cells ((14) and Figure 4E). These earlier studies suggested that PAX3 is a regulator of MET in melanoma cells, but that the mechanism of activation is not the same for all melanoma cells. Here, we identify a pathway wherein the transcription factors ETS1 and PAX3 promote excess MET protein expression. We find that PAX3 and ETS1 are commonly expressed in melanoma (Figure 1), efficiently drive MET promoter expression (Figure 3A), and bind directly to the MET promoter (Figures 3C, 4B). The PAX and ETS enhancer sites are important for promoter expression in melanoma, and inhibition of these factors leads to a reduction of MET levels (Figure 4). In this report, we discover that ETS1 drives MET expression in melanoma and that ETS1 activity is increased multifold with the addition of HGF or PAX3. Our group and others find that PAX3 is expressed in melanoma, where it actively promotes migration and invasiveness (20). These qualities are among the deadliest in melanoma, and they are reacquired during resistance to the small-molecule BRAF inhibitor vemurafenib. Since PAX3 promotes migration and metastasis, it may be an active player in vemurafenib resistance. Indeed, recent reports find that PAX3 is overexpressed in vemurafenib-resistant cell lines, and PAX3 inhibition restores drug sensitivity in these cells (21). One major mechanism for melanoma progression and drug resistance may be PAX3-dependent MET expression.
An added mechanism of PAX3-related resistance may be through its interaction with ETS1, a protein activated through pathways downstream of BRAF. Our findings suggest that ETS1 activates MET both in a PAX3-dependent and in a PAX3-independent manner (Figures 5, 6). This is the first report to find that ETS1 directly interacts with the transcription factor PAX3 and that these factors synergistically activate the expression of MET (Figures 3, 4, 6). ETS1 also binds to the PAX3-related protein PAX5, and the PAX5 epitopes that directly bind to ETS1 are also found in PAX3 (31,34,41). It is likely that ETS1 drives the expression of many genes together with PAX factors. In this report, we find the first example of a PAX3-ETS1-regulated gene in melanoma, the tyrosine kinase receptor MET. We find that ETS1 promotes melanoma progression, and the implementation of a dominant-negative ETS1 (DN-ETS1) protein inhibits MET promoter expression and melanoma cell growth (Figure 7). This DN-ETS1 protein also inhibits tumor growth, invasion, and migration in other tumor models (37,42). DN-ETS1 has also been previously expressed in melanoma cells to study in vivo invasiveness (43). This group found that DN-ETS1 enhances invasion and metastasis in one overexpressing clone (TM4) but not in another (TM5). Our findings, coupled to these prior discoveries, support that ETS1 may have differential roles in melanoma progression in terms of growth and metastasis. This falls in line with recent findings for MITF and BRN2, where both factors promote melanoma progression, but the level of each protein determines whether it drives cellular proliferation or migration (44,45). The precise role of ETS1 in melanoma initiation, establishment, and metastasis still needs to be elucidated. In our report, we discover a pathway by which MET promotes its own expression through HGF-dependent induction of ETS1 (Figures 5, 6). HGF signals through the MET receptor, which in turn activates ETS1 downstream of the MAPK pathway. This has potential long-term clinical implications as well, since HGF has been found to be a major driver of resistance against mutant-BRAF inhibitors in melanoma therapy (46). This resistance may be supported, at least in part, by ETS1 activity. It is not known whether the consequent phosphorylation and activation of ETS1 contributes to the progression of this tumor, or how dependent the cells are on ETS1 activity. Our report further supports that ETS1, along with PAX3, drives the expression of oncogenic genes that promote tumor progression.

Immunofluorescence analysis of PAX3, ETS1, and MET expression

Archival paraffin-embedded primary melanoma specimens were obtained through the University of Chicago Hospital Dermatology tissue bank following approved IRB and Clinical Trials Committee protocols. For antigen retrieval, 5 μm tissue sections were boiled in EDTA buffer (pH 8.5). Samples were blocked in 1% normal goat serum and then incubated with primary antibodies against PAX3 (1:250), ETS1 (1:500), pETS1 (1:500), MET (1:1000), or pMET (1:500). Samples were washed in PBS buffer and incubated with goat anti-rabbit fluorescein or goat anti-mouse DyLight 647 secondary antibody (1:1000; Pierce Biotechnology/Thermo Scientific, Rockford, IL). Staining was scored as "positive" or "negative," with a positive score requiring ≥25% of the tumor tissue to express the antigen.

Plasmid construction

The MET reporter construct (METpm) contains the genomic DNA fragment shown in Figure 2A and was cloned as described (14).
Mutations in putative PAX and ETS sites were created by site-directed mutagenesis. The PAX site was mutated from GTCCCGC to ACTAGTC, the E1 site from CTCCGG to CTCGAG, and the E2 site from GCAGGAAG to GCTAGCAG. The PAX3 expression construct was created as described (47). The pcDNA3-ETS1 expression construct was provided by Eric Svensson (University of Chicago, Chicago, IL). ETS1(T38A) was created by site-directed mutagenesis, changing codon 38 from ACT (threonine) to GCT (alanine). The pGex2T-PAX3 vector was a kind gift from Jonathan Epstein (University of Pennsylvania, Philadelphia, PA). The coding sequence of ETS1 was cloned into pGex2T (GE Healthcare, Uppsala, Sweden) for overexpression in bacteria and purification, using primers AACCCGGGTATGAAGGCGGCCGTCGATCTCAA and TTCCCGGGTCACTCGTCGGCATCTGGCTT. The pcDNA3-DN-ETS1-IRES-GFP construct contains amino acids 306-441 of ETS1, which encode the DNA binding domain only and function as a dominant-negative protein against wild-type ETS1 activity (36)(37)(38). This region was amplified using primers ATAAGCTTATGGACTATGTGCGGGACCGTGCT and ATGGATCCTCACTCGTCGGCATCTGGCT and then cloned into pcDNA3 (Invitrogen). The internal ribosomal entry site (IRES) was amplified from the pIRES-hrGFP-1a vector (Agilent Technologies, Santa Clara, CA) with primers TAGATATCCTTGGGTTACCCCCCTCTCCCT and TACTCGAGGCGGCCGCCATTATCATCGTGTTTTTCA and then cloned into the previous construct. To add the green fluorescent protein (GFP) gene to this construct, primers AAGCGGCCGCAATGGTGAGCAAGGGCGAGGAG and AATCTAGATTACTTGTACAGCTCGTCCAT were used to amplify the GFP gene of the pEGFP-N1 vector (Clontech Laboratories, Inc., Palo Alto, CA). The pcDNA3-GFP vector was constructed by amplifying the EGFP gene and cloning it into pcDNA3.

Luciferase assays

Cells were transfected with MET promoter luciferase reporter constructs, an internal-control beta-galactosidase-expressing construct, pCMV (Clontech), and PAX3 and ETS1 expression constructs. Cells were transfected with Lipofectamine 2000 (Invitrogen) and incubated for 48 hours, and then luciferase and beta-galactosidase levels were measured (Promega Biosciences, Inc., Madison, WI). For the calculation of fold induction, luciferase activity was measured in arbitrary light units, normalized against beta-galactosidase activity, and divided by the measurements obtained for reporter vector alone (or with HGF, if the experimental group also received HGF treatment). All experiments were performed at least in triplicate.

Chromatin immunoprecipitation (ChIP) assays

Cells were fixed in 1% formaldehyde, quenched in 0.125 M glycine, and then lysed and sonicated in SDS lysis buffer (1% SDS, 10 mM EDTA, 50 mM Tris-HCl pH 8.1). Protein-DNA complexes were incubated with antibodies against PAX3 or ETS1. Normal IgG (Sigma-Aldrich) was used as a negative control for non-specific DNA precipitation by an antibody. PCR was performed with primers to the MET enhancer: TCCGCCTCTAACAATGAACTCC (F, human) or TTGTCTGTGACAATGAGCGCC (F, mouse), and AAGGTGAAACTTTCTAGGTGG (R). All ChIP samples were tested for false-positive PCR amplification using beta-tubulin gene sequence primers. The nested primer set for this control was AAAGGCCACTACACAGAGGG (F) and TACCAACTGATGGACGGAGAGG (R, human) or AACCAACTGATGGACAGACAGG (R, mouse).

Co-immunoprecipitation of PAX3 and ETS1 proteins

Proteins collected from melanoma cells were obtained via sonication in 2X GS buffer (40 mM HEPES, 100 mM KCl, 40% glycerol, 2 mM 2-mercaptoethanol) with protease inhibitor cocktail (Sigma-Aldrich) and 1 mM PMSF.
GST, GST-ETS1, and GST-PAX3 were expressed in BL21 bacteria and purified with Glutathione-Sepharose 4B beads according to the manufacturer's protocol (GE Healthcare). Protein was incubated with either PAX3 or ETS1 antibodies for 2 hours at 4°C, and then with Protein A/G agarose beads (EMD Millipore) for an additional 2 hours. The resulting precipitates were washed 3 times in lysis buffer, resolved on 10% SDS-PAGE gels, and evaluated by standard Western blot analysis. Normal mouse IgG antibody was used as a control.

Electrophoretic Mobility Shift Assay

GST, GST-ETS1, or GST-PAX3 proteins were mixed in reaction buffer (10 mM Tris-HCl, pH 8.0, 150 mM KCl, 0.5 mM EDTA, 0.1% Triton X-100, 12.5% glycerol, 0.2 mM DTT, and 100 μg/ml poly(dI-dC)) for 30 minutes at 25°C, followed by the addition of probe for 15 minutes at 25°C. Primers for the probe sequences (Figure 3B) were annealed and labeled by end-filling with DNA polymerase I Large (Klenow) fragment (New England Biolabs) and then purified on Illustra ProbeQuant G-50 Micro columns (GE Healthcare). Electrophoresis was performed on 5% native gels. The gels were dried and then exposed to autoradiography film.

In vivo tumor formation assays

Athymic Nu/Nu mice (6-7-month-old females; Charles River Laboratories, Roanoke, IL) were treated and maintained in accordance with institutional guidelines. A375 cells were transfected with either the pcDNA3-GFP or the pcDNA3-ETS1-DN-IRES-GFP vector with TransIT-2020 Transfection Reagent (Mirus, Madison, WI) according to the manufacturer's instructions. Eighteen hours post-transfection, cells were trypsinized and resuspended at 20×10⁶ cells/ml in DMEM media containing 20% FBS. GFP-positive cells were collected by the University of Chicago Flow Cytometry Core Facility using the FACSAria III (BD, Franklin Lakes, NJ). Cell counts and viability were confirmed using hemacytometer readings and trypan blue exclusion. Sorted cells were washed once in PBS and resuspended at a final concentration of 15×10⁶ cells/ml. Mice received subcutaneous injections of 200 μl (3×10⁶ cells) in each flank (n=6, pcDNA3-GFP; n=6, pcDNA3-ETS1-DN-IRES-GFP). Tumor volume was measured using calipers, with volume calculated as (length × width²) × (π/6).

Statistical Analysis and densitometry

For each experiment, three or more replicates were analyzed. Western band intensities were measured using ImageJ64 (http://rsbweb.nih.gov/ij/). Data presented as percentages or fold changes are normalized to a control group as stated. Error bars represent the standard error of the mean (SEM) for comparisons of samples within one group. One-tailed unpaired Student's t-test analysis determined significance for all graphical data, where all sample groups were compared to a control (Microsoft Excel). For in vivo tumor growth experiments, significance was determined by two-tailed Mann-Whitney U tests. P-values of ≤0.05 were considered significant.

Figure 2. The MET promoter 5′ proximal to the transcriptional start contains putative PAX and ETS sites. (A) Sequence from the MET locus, including 297 bp 5′ upstream of the transcriptional start site (arrow) and 25 bp of 5′ UTR that includes a SMAD SBE site (48), a MITF site (39,40), and a HES/NOTCH site (49). A PAX site is indicated −752 to −746 by a black box. Putative ETS sites are shown −241 to −236 on the reverse strand of the sequence (ETS/P (E1), box) and −85 to −79 (ETS (E2), underline). (B) Nomenclature and schematics of MET reporter constructs containing the promoter sequence shown in (A) driving the reporter gene luciferase.
Constructs contain wild-type PAX (P) and ETS (E1 and E2) sites as shown in (A), or mutated sites shown schematically with an X.

Figure 3. PAX3 and ETS1 activate the proximal MET promoter. (A) MET promoter reporter constructs, shown schematically in Figure 2, were transfected into 293T cells with PAX3 (white bars), ETS1 (grey bars), or both (black bars). Each bar represents n=9, with the standard error of the mean as shown. Differences between the synergistic activation of set 1 and the other sets are significant (p<0.05 for set 4; p<0.005 for the other sets). (B) P/E1 and E2 probe sequences utilized in EMSA analysis. (C) PAX3 and ETS1 bind to elements within the MET promoter. Two EMSA probes were utilized, containing either the P and E1 sites (lanes 1-6) or the E2 site (lanes 7-10). Lanes contain probe alone (lanes 1, 7) or probe with the addition of GST (lanes 2, 8), PAX3 (lanes 3, 4), or ETS1 (lanes 5, 6, 9, 10) proteins. Cold probe was added in 100-fold excess as a specific competitor (lanes 4, 6, 10). Slower-migrating bands are seen with the addition of PAX3 (lane 3, arrows B, C) or ETS1 (lane 9, arrows A, C).

Figure 5. Luciferase assays of cells transfected with METpm (Figure 2B) and ETS1, with the addition of PAX3 or exogenous HGF. Increased levels of luciferase were significant for all samples in comparison to vector alone (293T: p<0.005; 3T3: p<0.05), and between ETS1 + HGF in 293T cells, or ETS1 + PAX3 in both cell lines, in comparison to ETS1 alone (p<0.05). The levels of luciferase did not increase significantly in 3T3 cells with the addition of HGF and ETS1 in comparison to ETS1 alone (p=0.475). (D) Schematic of the HGF/MET/MAPK phosphorylation site at epitope T38 of ETS1. In the ETS1(T38A) mutant protein, mutation of T38 to an alanine abrogates the ability of MAPK to phosphorylate ETS1 (22)(23)(24)(25). (E) The ETS1(T38A) mutant synergistically activates MET expression with PAX3 but not with HGF. Luciferase assays of 293T cells transfected with either ETS1 or ETS1(T38A), with or without the addition of HGF or PAX3. Each bar represents n=9, with the standard error of the mean as shown. The addition of HGF or PAX3 to ETS1-transfected cells, or of PAX3 to ETS1(T38A)-transfected cells, led to a significant fold increase in luciferase versus the ETS1 proteins alone (p<0.005). Conversely, HGF was unable to increase overall luciferase levels in comparison to ETS1(T38A) alone (p=0.172). (F) Mutation of ETS1(T38A) abrogates ETS1 phosphorylation after HGF treatment. 293T cells transfected with empty vector (pcDNA3), wild-type ETS1, or ETS1(T38A) expression constructs were grown in the absence (−) or presence (+) of exogenous HGF. Western blots were probed with antibodies against MET, phosphorylated MET (pMET), ERK, pERK, ETS1, and pETS1. Vinculin antibody was included as a loading control. (G, H) HGF increases the levels of pETS1 in 5/6 melanoma cell lines. Protein lysates from cells without (−) or with (+) HGF treatment were tested for the expression of MET, pMET, ETS1, and pETS1.

Figure 6. The MET promoter contains two ETS-responsive enhancers that work synergistically with either PAX3 or HGF stimulation. (A) ETS1 synergistically activates the MET promoter with PAX3 through enhancer E1, and with HGF via enhancer E2. Luciferase assays of 293T cells transfected with MET promoter constructs with wild-type E1 and E2 enhancer sequences (set 1), E1 mutated (set 2), E2 mutated (set 3), or both E1 and E2 mutated (set 4). The reporter constructs were transfected with ETS1 (white bars), ETS1 and PAX3 (grey bars), or ETS1 with exogenous HGF (black bars).
Fold induction was calculated by measuring luciferase activity in arbitrary light units, normalizing against beta-galactosidase activity, and then dividing by the measurements obtained for reporter vector alone (n=9). (B) Schematic summarizing ETS1 activation of the MET promoter through the E1 and E2 elements. The E1 site is a PAX-dependent ETS1 binding sequence with a PAX site directly proximal; ETS1 synergistically activates the MET promoter with PAX3 through E1. The E2 site contains an ideal ETS1 binding site, and ETS1 synergistically activates the MET promoter through this site with exogenous HGF.

Figure 7. Expression of a dominant-negative ETS1 (DN-ETS1) protein inhibits MET induction and melanoma cell growth. (A) Schematic of the wild-type ETS1 and DN-ETS1 proteins. (B) DN-ETS1 inhibits the activation of the MET promoter by both PAX3 and ETS1. Luciferase assays of 293T cells transfected with METpm in the presence (+) or absence (−) of PAX3, ETS1, and/or DN-ETS1 expression vectors. Fold induction was calculated by measuring luciferase activity in arbitrary light units, normalizing against beta-galactosidase activity, and dividing by the measurements obtained for reporter vector alone (n=9, p<0.005). (C) Western analysis detects DN-ETS1 expression in A375 melanoma cells utilizing an antibody that recognizes the C terminus of ETS1 (ETS1/2 antibody). (D) DN-ETS1 attenuates melanoma cell growth. A375 and mel-624 cells were transfected with either a GFP- or a dual DN-ETS1/GFP-expressing construct, and green cells were counted immediately after transfection and at 48 hours post-transfection. The "% starting cell numbers" were calculated as GFP-expressing cell numbers at 48 hours post-transfection divided by starting cell numbers, multiplied by 100 (n=3, p<0.005). (E) DN-ETS1 attenuates tumor formation in vivo. A375 cells were transfected with either a GFP or a dual DN-ETS1/GFP expression construct and transplanted into the flanks of nu/nu mice. While all A375 cells transfected with GFP formed tumors ten days after transplantation, only 2 of 6 DN-ETS1 transplants formed palpable tumors, a significant inhibition of tumor formation (p<0.05). For each group, n=6.
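As a companion to the tumor formation assay above, the sketch below illustrates the volume formula and the significance test named in the Methods: volume = (length × width²) × (π/6), compared across groups with a two-tailed Mann-Whitney U test. All caliper measurements here are hypothetical:

```python
# Tumor volume from caliper measurements and a two-tailed Mann-Whitney U test
# (measurements below are hypothetical, mimicking GFP vs. DN-ETS1 groups).
import math
from scipy.stats import mannwhitneyu

def tumor_volume(length_mm, width_mm):
    return (length_mm * width_mm**2) * (math.pi / 6.0)

gfp     = [tumor_volume(l, w) for l, w in [(8, 6), (9, 7), (7, 6), (10, 8), (8, 7), (9, 6)]]
dn_ets1 = [tumor_volume(l, w) for l, w in [(0, 0), (0, 0), (3, 2), (0, 0), (4, 3), (0, 0)]]

u, p = mannwhitneyu(gfp, dn_ets1, alternative="two-sided")
print(f"U={u}, two-tailed p={p:.4f}")   # p<0.05 would mirror the reported inhibition
```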
2016-05-12T22:15:10.714Z
2014-10-22T00:00:00.000
{ "year": 2014, "sha1": "ce6c70907f7b6f92916ea4baed51ee5aaae660bf", "oa_license": null, "oa_url": "https://www.nature.com/articles/onc2014420.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "63e79281f40f25e41c0cb50e92fed4b2309dcba9", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
220282241
pes2o/s2orc
v3-fos-license
Purification and Functional Reconstitution of Monomeric μ-Opioid Receptors

Despite extensive characterization of the μ-opioid receptor (MOR), the biochemical properties of the isolated receptor remain unclear. In light of recent reports, we proposed that the monomeric form of MOR can activate G proteins and be subject to allosteric regulation. A μ-opioid receptor fused to yellow fluorescent protein (YMOR) was constructed and expressed in insect cells. YMOR binds ligands with high affinity, displays agonist-stimulated [35S]guanosine 5′-(γ-thio)triphosphate binding to Gαi, and is allosterically regulated by the coupled Gi protein heterotrimer, both in insect cell membranes and as purified protein reconstituted into a phospholipid bilayer in the form of high density lipoprotein particles. Single-particle imaging of fluorescently labeled receptor indicates that the reconstituted YMOR is monomeric. Moreover, single-molecule imaging of a Cy3-labeled agonist, [Lys7, Cys8]dermorphin, illustrates a novel method for studying G protein-coupled receptor-ligand binding and suggests that one molecule of agonist binds per monomeric YMOR. Together these data support the notion that oligomerization of the μ-opioid receptor is not required for agonist and antagonist binding and that the monomeric receptor is the minimal functional unit in regard to G protein activation and strong allosteric regulation of agonist binding by G proteins.

Opioid receptors are members of the G protein-coupled receptor (GPCR) superfamily and are clinical mainstays for inducing analgesia. Three isoforms of opioid receptors, μ, δ, and κ, have been cloned and are known to couple to Gi/o proteins to regulate adenylyl cyclase and K+/Ca2+ ion channels (1)(2)(3). An ever-growing amount of data suggests that many GPCRs oligomerize (4,5), and several studies have suggested that μ-opioid receptors (MORs) and δ-opioid receptors heterodimerize to form unique ligand binding and G protein-activating units (6-10). Although intriguing, these studies utilize cellular overexpression systems where it is difficult to know the exact nature of the protein complexes formed between the receptors. To study the function of isolated GPCRs, our laboratory and others have utilized a novel phospholipid bilayer reconstitution method (11)(12)(13)(14)(15)(16). In this approach, purified GPCRs are reconstituted into the phospholipid bilayer of a high density lipoprotein (HDL) particle. The reconstituted HDL (rHDL) particles are monodispersed, uniform in size, and preferentially incorporate a GPCR monomer (14,15). Previous work in our lab has shown that rhodopsin, a class A GPCR previously proposed to function as a dimer (17)(18)(19), is fully capable of activating its G protein when reconstituted as a monomer in the rHDL lipid bilayer (15). Moreover, we have demonstrated that agonist binding to a monomeric β2-adrenergic receptor, another class A GPCR, can be allosterically regulated by G proteins (14). This led us to determine whether a monomer of MOR, a class A GPCR that endogenously binds peptide ligands, is the minimal functional unit required to activate coupled G proteins. We additionally investigated whether agonist binding to monomeric MOR is allosterically regulated by the inhibitory G protein heterotrimer. To study the function of monomeric MOR, we have purified a modified version of the receptor to near homogeneity. A yellow fluorescent protein was fused to the N terminus of MOR, and this construct (YMOR) was expressed in insect cells for purification.
After reconstitution of purified YMOR into rHDL particles, single-molecule imaging of Cy3-labeled and Cy5-labeled YMOR determined that the rHDL particles contained one receptor. This monomeric YMOR sample binds ligands with affinities nearly equivalent to those observed in plasma membrane preparations. Monomeric YMOR efficiently stimulates GTPγS binding to the Gi2 heterotrimeric G protein. Gi2 allosteric regulation of agonist binding to rHDL·YMOR was also observed. Single-particle imaging of the binding of [Lys7, Cys8]dermorphin-Cy3, a fluorophore-labeled agonist, to rHDL·YMOR supports the notion that the rHDL particles contain a single YMOR. Taken together, these results suggest that a monomeric MOR is the minimal functional unit for ligand binding and G protein activation, and they illustrate a novel method for imaging ligand binding to opioid receptors.

EXPERIMENTAL PROCEDURES

Materials-G protein baculoviruses encoding rat Gαi2, His6-Gβ1, and Gγ2 were provided by Dr. Alfred G. Gilman (University of Texas Southwestern, Dallas, TX). DNA encoding the human μ-opioid receptor (NM_000914.2) was provided by Dr. John R. Traynor (University of Michigan, Ann Arbor, MI). Expired serum was generously provided by Dr. Bert La Du (University of Michigan, Ann Arbor, MI). Spodoptera frugiperda (Sf9) and Trichoplusia ni (HighFive™) cells, pFastBac™ baculovirus expression vectors, and Sf900™ serum-free medium were from Invitrogen. InsectExpress™ medium was purchased from Lonza (Allendale, NJ). N-Dodecyl-β-D-maltoside was from Dojindo (Rockville, MD). All of the lipids were from Avanti Polar Lipids (Alabaster, AL). [3H]diprenorphine (DPN, 54.9 Ci/mmol) and [35S]GTPγS (1,250 Ci/mmol) were obtained from PerkinElmer Life Sciences. EZ-Link™ NHS-Biotin reagent was from Pierce. Ovomucoid trypsin inhibitor was purchased from United States Biological (Swampscott, MA). GF/B and BA85 filters, Cy3 and Cy5 NHS-ester mono-reactive dyes, Cy3-maleimide dye, and Source 15Q and Superdex 200 chromatography resins were from GE Healthcare. Talon™ resin was from Clontech. Bio-Beads™ SM-2 absorbent resin was from Bio-Rad. Chromatography columns were run using a BioLogic Duo-Flow protein purification system from Bio-Rad. Amicon Ultra centrifugation filters were from Millipore (Billerica, MA). Amino acids for ligand synthesis were obtained from Advanced ChemTech (Louisville, KY) or Sigma-Aldrich. All other chemicals and ligands were from either Sigma-Aldrich or Fisher Scientific.

YFP-μ-Opioid Receptor Fusion Protein Expression and Purification-Baculoviruses were created using transfer vectors (pFastBac™) encoding a fusion protein of an N-terminal cleavable hemagglutinin signal sequence (MKTIIALSYIFCLVF), a FLAG epitope (DYKDDDD), a decahistidine tag, the monomeric and enhanced yellow fluorescent protein (Clontech), and the human MOR. High-titer viruses (10⁷-10⁸ plaque-forming units/ml) were used to infect Sf9 or HighFive™ suspension cultures at a multiplicity of infection of 0.25 to 1. FLAG-His10-mEYFP-MOR (YMOR) was expressed for 48-52 h in the presence of 1 μM naltrexone (NTX). The cells were resuspended in Buffer A (50 mM Tris·HCl, pH 8.0, 50 mM NaCl, 100 nM NTX, and protease inhibitors (3.2 μg/ml leupeptin, 3.2 μg/ml ovomucoid trypsin inhibitor, 17.5 μg/ml phenylmethanesulfonyl fluoride, 16 μg/ml tosyl-L-lysine chloromethyl ketone (TLCK), and 16 μg/ml tosyl-L-phenylalanine chloromethyl ketone (TPCK))) and lysed by nitrogen cavitation.
Supernatants from a 500 × g spin were then subjected to a 125,000 × g spin for 35 min, and pellets containing membrane fractions were resuspended in Buffer A and either stored at −80°C for later binding assays or further processed to purify receptor. All of the purification steps were performed sequentially and at 4°C or on ice. Membrane preparations were diluted to 5 mg/ml in Buffer A plus 1% DDM (w/v) and 0.01% cholesteryl hemisuccinate (CHS) (w/v) and gently stirred for 1 h to solubilize YMOR out of the membrane. Detergent-extracted YMOR was enriched via Talon™ metal affinity chromatography resin (Clontech) in Buffer A plus 0.1% DDM and 0.01% CHS and eluted with Buffer A plus 0.1% DDM, 0.01% CHS, and 150 mM imidazole. Fractions containing YMOR (based on Coomassie staining after SDS-PAGE separation) were pooled, diluted 5-fold in Buffer B (20 mM Hepes, pH 8.0, 5 mM MgCl2, 0.1% DDM, 0.01% CHS, 100 nM NTX, and phenylmethanesulfonyl fluoride-TLCK-TPCK protease inhibitors), and eluted from a 1-ml Source 15Q strong anion exchange column with a 50-300 mM NaCl linear gradient. Peak fractions were identified by radioligand binding assays using [3H]DPN (2-4 nM). Peak fractions were then pooled and concentrated on Amicon Ultra centrifugation filters (10-kDa molecular mass cut-off). As a final purification step, this YMOR sample was resolved by size on a Superdex 200 gel filtration column (GE Healthcare) in Buffer C (20 mM Hepes, pH 8.0, 100 mM NaCl, 0.1% DDM, 0.01% CHS, 100 nM NTX, and protease inhibitors). Coomassie staining after SDS-PAGE was used to identify fractions containing YMOR, which were pooled and concentrated to ~1-2 μM. Glycerol was added to a final concentration of 10% (v/v), and the samples were flash-frozen in liquid nitrogen and then stored at −80°C until further use.

In Vitro HDL Reconstitution-HDL particles were reconstituted according to previously reported protocols (14). Briefly, 21 mM sodium cholate, 7 mM lipids (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and 1-palmitoyl-2-oleoyl-sn-glycero-3-[phospho-rac-(1-glycerol)] (POPG) at a molar ratio of 3:2), purified YMOR (DDM-solubilized), and 100 μM purified apoA-1 were solubilized in 20 mM Hepes, pH 8.0, 100 mM NaCl, 1 mM EDTA, 50 mM sodium cholate. The final concentration of YMOR varied from 0.2 to 0.4 μM, but YMOR always comprised 20% of the total reconstitution volume. In some reconstitutions, the lipid component was modified such that porcine polar brain lipid extract (Avanti Polar Lipids) was used in addition to POPC and POPG, for a final concentration of 7 mM lipids at a molar ratio of 1.07:1.5:1 brain lipid:POPC:POPG. Following incubation on ice (1.5-2 h), the samples were added to Bio-Beads™ (Bio-Rad, 0.05 mg/ml of reconstitution volume) to remove detergent and to form HDL particles. Particles containing YMOR were purified via M1 anti-FLAG immunoaffinity chromatography resin (Sigma) and eluted with 1 mM EDTA plus 200 μg/ml FLAG peptide. To assess the efficiency of HDL reconstitution, the total protein concentration of rHDL·YMOR was compared with the concentration of active reconstituted YMOR. FLAG affinity column-purified rHDL·YMOR was resolved from BSA and FLAG peptide on a Superdex 200 gel filtration column, and the peak fraction was analyzed for protein content with Amido Black staining (20), whereas [3H]DPN saturation assays were used to measure active YMOR content. YMOR reconstituted into HDL was stored on ice until further use.
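The rationale for the large apoA-1 excess over receptor (100 μM vs. 0.2-0.4 μM above) can be illustrated with simple Poisson statistics: if receptors partition randomly among particles, almost every occupied particle carries exactly one receptor. A minimal sketch, using the ~250-fold particle excess cited for these reconstitutions:

```python
# Poisson occupancy argument for monomeric incorporation: at ~1 receptor per
# 250 rHDL particles, P(exactly one | occupied) is essentially 1.
from math import exp, factorial

def poisson(k, lam):
    return lam**k * exp(-lam) / factorial(k)

lam = 1.0 / 250.0                                 # receptors per particle
p1 = poisson(1, lam)
p_ge2 = 1.0 - poisson(0, lam) - p1
print(f"P(monomer | occupied) = {p1 / (p1 + p_ge2):.4f}")   # ~0.998
```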
G Protein Addition to rHDL·YMOR Particles-Gi2 heterotrimer (Gαi2-His6-Gβ1-Gγ2) was expressed in Sf9 cells and purified as previously described (21). The concentration of active G protein was determined by filter binding of 20 μM [35S]GTPγS (isotopically diluted) in 30 mM NaHepes, pH 8.0, 100 mM NaCl, 50 mM MgCl2, 1 mM EDTA, 0.05% C12E10, and 1 mM dithiothreitol after 1 h of incubation at room temperature. Purified G protein heterotrimer was added to preformed, FLAG-purified rHDL·YMOR particles at a molar ratio of 1:10 receptor to G protein. Gi2 was purified in 0.7% CHAPS, and the final volume of G protein added to rHDL·YMOR was such that the final CHAPS concentration was well below its critical micelle concentration. Reconstituted receptor and G protein samples were then incubated with Bio-Beads™ for 30-45 min at 4°C to remove residual detergent.

Radioligand Binding Assays-Samples were measured for radioactivity on a liquid scintillation counter, and the data were fit with one-site saturation, one-site competition, or two-site competition binding models using Prism 5.0 (GraphPad, San Diego, CA).

[35S]GTPγS Binding Assay-100-μl reactions were prepared containing 1 μg of total membrane protein from YMOR-expressing HighFive™ cells or ~50-60 fmol of YMOR incorporated into rHDL particles in 30 mM Tris·HCl, pH 7.4, 100 mM NaCl, 5 mM MgCl2, 0.1 mM dithiothreitol, 10 or 1 μM GDP (membranes or HDL particles, respectively), and 0.1% BSA. Membrane assays and rHDL assays were incubated with 10 nM isotopically diluted [35S]GTPγS (12.5 Ci/mmol). YMOR samples were incubated with increasing concentrations of agonists (1 pM to 1 mM) for 1 h at room temperature, then rapidly filtered through GF/B (membrane samples) or BA85 filters (HDL samples) and washed three times with 2 ml of ice-cold 30 mM Tris·HCl, pH 7.4, 100 mM NaCl, 5 mM MgCl2. The samples were measured for radioactivity on a liquid scintillation counter, and the data were fit with a log dose-response model using Prism 5.0.

Single-molecule Imaging of Reconstituted Cy3- and Cy5-YMOR-Purified YMOR (~200 pmol) was incubated with NHS-ester Cy3 or Cy5 mono-reactive dye (GE Healthcare) in 20 mM Hepes, pH 8.0, 100 mM NaCl, 5 mM MgCl2, 6 mM EDTA, 0.1% DDM, 0.01% CHS, 100 nM NTX for 30 min at 25°C and then for 1 h at 4°C. Conjugation reactions were quenched by the addition of Tris·HCl, pH 7.7, buffer (10 mM final). Cy-labeled YMOR was then separated from free dye using a 12-cm Sephadex G-50 column. The final dye-to-protein molar ratio was 3.8:1 for Cy3-YMOR and 3.7:1 for Cy5-YMOR, determined by absorbance at 280 nm for total protein, 550 nm for Cy3 dye, and 650 nm for Cy5 dye, as measured on a NanoDrop ND-1000 spectrophotometer. A separate aliquot of YMOR was co-labeled with both Cy3 and Cy5 for final molar ratios of 2:1 Cy3:YMOR and 1.8:1 Cy5:YMOR. HDL reconstitutions of Cy3-YMOR alone, Cy5-YMOR alone, a mixture of Cy3- and Cy5-YMOR, and Cy3-Cy5-YMOR were then performed. Reconstitutions were performed as described above, using apoA-1 that had been biotinylated at a 3:1 molar ratio using EZ-Link NHS-Biotin according to the manufacturer's protocol (Pierce). Reconstituted samples were diluted 5,000-fold in 25 mM Tris·HCl, pH 7.7, and injected into a microfluidic channel on a quartz slide that had been coated with biotinylated polyethylene glycol and treated with 0.2 mg/ml streptavidin, generating a surface density of ~0.05 molecules/μm².
An oxygen scavenging system of 10 mM Trolox (Sigma-Aldrich), 100 mM protocatechuic acid, and 1 μM protocatechuate-3,4-dioxygenase was included in the sample dilution (22). Following a 10-min incubation to allow binding of the biotin-HDL·Cy-YMOR complex to the streptavidin-coated slide, the channel was washed with ice-cold 25 mM Tris·HCl, pH 7.7, containing the oxygen scavenging system. An Olympus IX71 inverted microscope configured for prism-based total internal reflection fluorescence (TIRF) and coupled to an intensified CCD camera was used to image Cy3 and Cy5 fluorophores (CrystaLaser, 532 nm; Coherent CUBE laser, 638 nm; Chroma band pass filters HQ580/60 and HQ710/130 nm). Fluorophore intensity time traces were collected for 30–100 s at 10 frames/s. Time traces were analyzed for photobleaching with in-house software (MatLab 7.0).

Synthesis of [Lys7, Cys8]Dermorphin and Labeling with Cy3 Dye-[Lys7, Cys8]dermorphin (Tyr-D-Ala-Phe-Gly-Tyr-Pro-Lys-Cys-NH2) was synthesized on Rink resin (solid support) using an Applied Biosystems 431A peptide synthesizer and standard Fmoc (N-(9-fluorenyl)methoxycarbonyl) chemistry. The samples were characterized on a Waters reverse phase HPLC using a Vydac C18 (Protein and Peptide) 10-micron column. The samples were run on a linear gradient of 0–45% acetonitrile in an aqueous phase containing 0.1% trifluoroacetic acid at 35°C and monitored at 254 and 230 nm (supplemental Fig. S3). Peptides were similarly purified on a Waters semi-preparative reverse phase HPLC using a Vydac C18 10-micron column at room temperature. The purified peptide was labeled with Cy3-maleimide (GE Healthcare) according to the manufacturer's instructions using a ratio of 1.5:1 peptide to fluorophore and repurified by HPLC as before. The labeled peptide was further purified via semi-preparative HPLC using a 5-micron Vydac C18 column as described above. The potency and efficacy of [Lys7, Cys8]dermorphin-Cy3 at MOR were confirmed in radiolabeled [3H]DPN binding assays.

RESULTS

Expression of a Functional YMOR-Allosteric regulation of agonist binding to opioid receptors by G proteins has been well established in plasma membrane preparations of brain homogenates and overexpression systems (26-30) and was also observed for YMOR expressed in insect cells. In the absence of Gi2, DAMGO competed [3H]DPN (0.5 nM) binding in a concentration-dependent manner with a Ki of ~580 nM (Fig. 1D). In membranes expressing both YMOR and Gi2, DAMGO exhibited a biphasic mode of inhibition of [3H]DPN binding with a Ki(hi) of ~2.9 nM and a Ki(lo) of ~1.5 μM (fraction of Ki(hi) sites ~0.53). The addition of 10 μM GTPγS to the YMOR + Gi2 membranes eliminated the high affinity DAMGO-binding site (Ki = ~300 nM), illustrating that YMOR is allosterically regulated by G proteins. Therefore, YMOR expressed in HighFive™ cells proved fully functional in regards to G protein coupling.

YMOR Purification-The capacity of a variety of detergents (zwitterionic, polar, and nonionic detergents such as Triton X-100, digitonin, Nonidet P-40, CHAPS, C12E10, and n-octyl-β-glucoside) to solubilize YMOR from insect cell membranes was assessed by anti-FLAG Western blot analysis. Extraction of YMOR with DDM proved to be most efficient (data not shown). The addition of CHS (0.01% w/v) improved the stability of the solubilized receptor as measured by [3H]DPN binding, consistent with the stabilizing effects of cholesterol moieties observed for solubilized β2AR (31) (supplemental Fig. S1).
The expression levels of YMOR and the efficiency of DDM solubilization, assessed by [3H]DPN binding and anti-FLAG Western blots, were enhanced in the presence of naltrexone. Therefore this antagonist was present (100 nM) during all subsequent purification steps. DDM-extracted YMOR was purified through a series of chromatographic steps including metal chelate (Talon™), anion exchange (Source 15Q), and size exclusion (Superdex 200) chromatography columns. Peak fractions from each column were determined by Coomassie staining, anti-FLAG antibody Western blotting, and [3H]DPN binding. Fractions displaying the highest [3H]DPN binding capacity and appropriate molecular weight were pooled and subjected to the next chromatographic step. Superdex 200 peak fractions were pooled, concentrated, flash-frozen in liquid N2, and then stored at −80°C until later use. A representative silver-stained SDS-PAGE separation of each enrichment step is shown in Fig. 2A. Yields of YMOR were typically ~50 μg of >95% pure receptor/liter of insect culture. YMOR, deemed pure based on silver staining, bound [3H]DPN at ~20% of the predicted binding sites calculated from the molecular mass. This loss of activity is likely due to the detrimental freeze/thaw process.

FIGURE 2. Purification of YMOR from HighFive™ insect cells and reconstitution into HDL particles. A, YMOR was extracted from membranes in the presence of 100 nM naltrexone with 1% n-dodecyl-β-D-maltoside and 0.01% cholesteryl hemisuccinate and enriched on a Talon™ metal affinity column. The Talon™ pool containing YMOR (predicted 73-kDa molecular mass) was applied to a Source 15Q anion exchange column followed by a size exclusion gel filtration column (Superdex 200). Samples of the purification steps were resolved by SDS-PAGE and silver-stained. YMOR was enriched to ~95% purity (GF peak). B, purified YMOR was reconstituted into HDL particles (see "Experimental Procedures") and resolved by size exclusion chromatography (Superdex 200). Fractions were analyzed for total protein content (UV absorbance), active YMOR ([3H]DPN binding), and YFP fluorescence. UV absorbance showed a major peak corresponding to rHDL particles (Stokes diameter of ~10.5 nm). The elution volume of active YMOR corresponded with the rising slope of the rHDL peak. YFP fluorescence eluted as two peaks, corresponding to the active YMOR and a larger YMOR aggregate that was not incorporated into rHDL particles. This aggregated YMOR was not active based on [3H]DPN binding. The data were normalized such that the maximal value for each parameter was set to 100%.

Reconstitution of YMOR into High Density Lipoprotein Particles-The reconstitution of YMOR into high density lipoprotein (rHDL) particles was performed with a 500-fold molar excess of apoA-1 to YMOR (250-fold excess of rHDL) to favor reconstitution of a single YMOR molecule/rHDL particle. Resolution of rHDL particles containing YMOR (rHDL·YMOR) with size exclusion chromatography (SEC) suggested an apparent Stokes diameter of 10.5 nm (Fig. 2B). [3H]DPN binding revealed functional YMOR in particles that eluted slightly earlier than the main UV absorbance peak (Stokes diameter of ~10.3 nm), indicating the slightly larger size of rHDL·YMOR compared with rHDL. The YFP fluorescence of the SEC fractions indicated an additional peak that elutes near the Superdex 200 void volume, suggestive of aggregated YMOR. Because these fractions did not bind [3H]DPN, we surmised that this receptor population was inactive.
This aggregated and inactive receptor is consistent with the previous observation that ~80% of the purified YMOR is inactive post freeze/thaw and prior to reconstitution into HDL. In contrast to reconstituted receptor, detergent-solubilized YMOR bound [3H]DPN with markedly reduced affinity (Fig. 2C). This significant disruption of high affinity ligand binding for the soluble receptor is not surprising, because detergents are known to decrease the ligand affinity of opioid receptors (32) and disrupt GPCRs in general (33). Replacement of detergents with phospholipids reverses the deleterious effects of the detergent and restores YMOR conformation to one that binds [3H]DPN with native, membrane-bound affinity.

YMOR Is Monomeric When Incorporated into rHDL-To determine the number of YMOR molecules present in each rHDL particle, we assessed the degree of receptor co-localization between Cy3- and Cy5-labeled YMOR using single-particle imaging. Purified YMOR was labeled with Cy3- or Cy5-reactive fluorescent dyes and reconstituted into HDL using biotinylated Δ(1-43)-His6-apoA-1. This biotin-rHDL·Cy-YMOR complex was then incubated on a streptavidin-coated microfluidic slide and excited with 532- and 638-nm lasers. Fluorescence emission was detected using prism-based SM-TIRF. Reconstituted Cy3-YMOR and Cy5-YMOR were visualized as mono-disperse fluorescent foci (Fig. 3, A and B). When Cy3-YMOR and Cy5-YMOR were mixed prior to reconstitution (Cy3-YMOR + Cy5-YMOR), a low level of co-localization of the two fluorophores (3.4%) was observed (Fig. 3, C and E). A false-positive co-localization signal of 2.8 and 2.2% was observed for the Cy3-YMOR and Cy5-YMOR samples, respectively (Fig. 3E). As a positive control, co-localization for YMOR labeled with both Cy3 and Cy5 was also measured (Fig. 3D). Considering that our method does not allow for the differentiation between YMOR labeled with multiple Cy probes of the same fluorophore versus co-localization of multiple YMORs labeled with the same Cy probe, we may underestimate co-localization by as much as a factor of 2. Thus we estimate an upper limit for co-localization, i.e. incorporation of two or more YMORs into a single rHDL particle, of ~1.8% ((3.4% × 2) − (2.8% + 2.2%)). These data suggest that HDL reconstitution resulted in a sample containing mostly monomeric YMOR. Therefore the ligand binding and G protein coupling characteristics of the HDL-reconstituted YMOR presented here are indicative of the properties of a monomeric receptor.

To assess the amount of active versus inactive YMOR incorporated into rHDL, we compared the total protein concentration of purified rHDL·YMOR particles to the maximal concentration of YMOR based on [3H]DPN saturation binding assays. Reconstituted YMOR was purified using FLAG affinity resin and resolved by SEC into a sample containing only apoA-1 and YMOR (based on SDS-PAGE and Coomassie staining). Amido Black staining of these peak fractions indicated a protein concentration of ~5.2 μg/ml. Considering that the protein sample consists of two apoA-1 molecules (molecular mass, ~25,500 Da) and one YMOR (molecular mass, ~73,500 Da), the rHDL·YMOR sample has a total molecular mass of 124,500 Da and therefore a molar concentration of ~42 nM. Saturation binding assays on the rHDL·YMOR SEC peak indicated a molar concentration of ~31 nM for YMOR which bound ligand. Given the inherent limitations of protein detection assays at the low concentration of our samples, these data suggest that at least 75% of the YMOR in rHDL is active based on [3H]DPN binding.
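Both estimates above reduce to simple arithmetic on the quoted numbers; restated as a short script (values taken from the text, variable names are ours):

```python
# Upper bound on rHDL particles containing two receptors
mixed = 3.4                 # % co-localization in the Cy3-YMOR + Cy5-YMOR mixture
false_pos = 2.8 + 2.2       # % false-positive signal of the single-dye samples
print(f"upper bound: {mixed * 2 - false_pos:.1f} %")   # ~1.8 %

# Molar concentration of rHDL-YMOR from total protein content
mass_conc_g_per_l = 5.2e-3  # 5.2 ug/ml from Amido Black staining
molar_mass = 2 * 25_500 + 73_500                       # Da: 2x apoA-1 + 1x YMOR
total_nM = mass_conc_g_per_l / molar_mass * 1e9        # ~42 nM
active_nM = 31.0            # from [3H]DPN saturation binding
print(f"total ~{total_nM:.0f} nM, active fraction ~{active_nM / total_nM:.0%}")
```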
Monomeric YMOR Binds Antagonists with High Affinity-Reconstituted and monomeric YMOR binds antagonists with affinities similar to those observed in the plasma membrane. The antagonists naloxone and naltrexone and the μ-specific peptide CTAP (D-Phe-Cys-Tyr-D-Trp-Arg-Thr-Pen-Thr-NH2) compete [3H]DPN binding in similar fashion when in HighFive™ cell membrane preparations or in reconstituted HDL particles (Fig. 4A). Therefore reconstitution into HDL allows YMOR to adopt a conformation that appears identical to receptor in plasma membranes in terms of high affinity antagonist binding.

FIGURE 3. Purified YMOR was labeled with Cy3 or Cy5 fluorescent dyes and reconstituted separately or together into biotin-labeled rHDL. The particles were imaged using single-molecule total internal reflection fluorescence microscopy on quartz slides coated with streptavidin. Representative overlay images of reconstituted rHDL·Cy3-YMOR (A), rHDL·Cy5-YMOR (B), rHDL·(Cy3-YMOR + Cy5-YMOR) (C), and rHDL·Cy3-Cy5-YMOR (D) are shown. Quantification of Cy3 and Cy5 co-localization (E) showed that when both Cy3-YMOR and Cy5-YMOR were mixed together prior to reconstitution (mixture, C), only ~3.4% of rHDL particles contained two labeled receptors, compared with a false-positive co-localization signal of 2.8 and 2.2% observed for the rHDL·Cy3-YMOR and rHDL·Cy5-YMOR samples. YMOR co-labeled with both Cy3 and Cy5 was also imaged as a positive control for co-localization. Approximately 24% of rHDL·Cy3-Cy5-YMOR particles (co-labeled, D) exhibited co-localization, indicating that not all YMOR received a secondary fluorescent dye under the labeling conditions.

Monomeric YMOR Functionally Couples Inhibitory G Proteins-We next analyzed the capacity of monomeric YMOR to functionally couple to heterotrimeric G protein. Functional measurements included allosteric regulation of agonist binding (Fig. 4B) and agonist-mediated stimulation of [35S]GTPγS binding (Fig. 4C). Anti-FLAG affinity column purification of rHDL·YMOR was performed to remove "empty" rHDL particles, and purified Gi2 heterotrimer was added at a 10:1 Gi2:rHDL·YMOR molar ratio. The concentration of purified G protein was determined by [35S]GTPγS binding assays, and receptor concentration was measured with [3H]DPN saturation assays of rHDL·YMOR samples. Gi2 addition results in high affinity agonist binding (Fig. 4B). Morphine competed [3H]DPN binding with a Ki(hi) of ~1.7 nM and a Ki(lo) of ~320 nM, whereas DAMGO inhibited with a Ki(hi) of ~7.6 nM and a Ki(lo) of ~1.8 μM. As in membranes, the addition of 10 μM GTPγS to the monomeric rHDL·YMOR + Gi2 uncouples the G protein and results in a single-site, low affinity agonist competition curve. Similarly, when rHDL·YMOR was not coupled to Gi2, DAMGO competed [3H]DPN binding with micromolar affinity (supplemental Fig. S2). Although Gi2 was added to rHDL·YMOR at a 10:1 molar ratio in these reconstitutions, the observed high and low agonist affinities illustrate that not all YMOR was coupled to Gi2. In fact, the agonist competition assays indicate that ~54% of the receptor was coupled to Gi2. These data are consistent with previous studies on the β2AR where the addition of purified G protein heterotrimer to rHDL particles in the absence of detergents resulted in 90–95% G protein loss, largely because of aggregation (14).
Indeed, SEC analysis of rHDL·YMOR before and after Gi2 addition confirmed that a large amount of the heterotrimer aggregates during its addition (data not shown). As such, the two populations of ~54% coupled and ~46% uncoupled YMOR correspond well with the expected final G protein to YMOR ratio of 0.5–1:1.

DISCUSSION

Intense research over the past decade has focused on the existence and functional consequences of GPCR oligomerization. Recently, our laboratory and others have taken advantage of a unique phospholipid bilayer platform in the form of rHDL particles (13-16). Using this system we previously demonstrated the functional reconstitution of two prototypical GPCRs, rhodopsin and β2AR (14, 15). Having illustrated that these receptors are monomeric following reconstitution, we observed complete functional coupling to their respective G proteins. For the β2AR, agonist binding to monomeric receptor is allosterically regulated by the stimulatory heterotrimeric G protein Gαsβγ. Therefore GPCR dimerization is not required for functional G protein coupling in the prototypical class A GPCRs rhodopsin and β2AR.

In contrast, the accumulation of considerable biochemical and biophysical evidence suggests that the μ-, δ-, and κ-opioid receptors may function in a fashion that is dependent on their oligomerization. Analysis of receptor overexpression in co-transfected cells suggested the existence of δ-δ homodimers (34) and demonstrated unique pharmacology resulting from μ-δ (6, 7, 10) and δ-κ heterodimers (35). Bioluminescence resonance energy transfer studies have shown that homo- and heterodimers can be formed by all three opioid receptor isoforms (8). These findings have promoted the notion that opioid receptor oligomerization is important for function (36, 37). However, with co-transfection systems the final profiles of μ and δ monomers, homodimers, and heterodimers are unknown. In light of these reports, we rationalized that it was vital to determine whether the monomeric form of the MOR behaves in a pharmacologically similar manner as the native, membrane-bound form.

The HDL reconstitution system was used to investigate the function of monomeric MOR. Reconstitution of MOR required the purification of active receptor in large enough quantities for subsequent biochemical manipulation and analysis. Previous reports of MOR purification from endogenous or recombinant sources have yielded either low quantities (38-42) or poor agonist binding affinities (43-45). Several aspects were key to the expression and purification of YFP-MOR (YMOR): a cleavable hemagglutinin signal sequence at the N terminus of the receptor (46), the presence of naltrexone during expression, and the inclusion of cholesteryl hemisuccinate and naltrexone during the entire purification process. These modifications contributed to the stabilization of YMOR, leading to an increase in yields during the chromatography process as well as increasing the specific activity of the detergent-solubilized receptor.

FIGURE 5. Single-molecule imaging of Cy3-labeled agonist binding to rHDL·YMOR + Gi2 confirms that the majority of YMOR is monomeric when reconstituted into HDL. Binding of [Lys7, Cys8]dermorphin-Cy3, a fluorescently labeled MOR-specific agonist, to rHDL·YMOR + Gi2 was observed with prism-based total internal reflection fluorescence microscopy. YMOR was reconstituted into HDL particles using biotinylated apoA-1, followed by Gi2 heterotrimer addition at a 30:1 G protein to receptor molar ratio. Reconstituted HDL (A) or rHDL·YMOR + Gi2 (B) were then incubated with a saturating concentration (500 nM) of [Lys7, Cys8]dermorphin-Cy3, adhered to a streptavidin-coated quartz slide, and washed with ice-cold 25 mM Tris·HCl, pH 7.7 buffer. Bound [Lys7, Cys8]dermorphin-Cy3 was continuously excited at 532 nm to observe photobleaching of the fluorophore. C, representative fluorescence intensity traces for a one- and two-step photobleach event are shown. The arrows indicate photobleach events. D and E, quantification of photobleaching showed that ~95% of the bound [Lys7, Cys8]dermorphin-Cy3 bleached in a single step, suggesting that 95% of rHDL particles contained monomeric YMOR. [Lys7, Cys8]dermorphin-Cy3 binding was reversible, as shown by the addition of 5 μM NTX. A minimum of four slide regions and 200 fluorescent spots were counted for each sample.

Reconstitution into HDL facilitated the isolation of a monomeric form of YMOR that couples to and activates heterotrimeric G proteins. Furthermore, we illustrate that the inhibitory G protein Gαiβγ can allosterically regulate agonist binding to monomeric MOR. These results suggest that G protein regulation of monomeric receptors is likely a phenomenon common to all G proteins and class A GPCRs. The capacity of MOR to couple to various isoforms of Gi/o heterotrimers in a differential manner is currently under investigation in our laboratory.

Taken together, our results demonstrate the functionality of monomeric MOR, illustrating that the opioid GPCR does not require dimerization to bind ligands and signal to G proteins. However, these data do not refute the existence of opioid receptor oligomerization in cells. It remains possible that homo- and heterodimerization create unique ligand binding entities that may be subject to differential regulation, desensitization, and internalization. One of our future goals is to isolate various oligomeric states of opioid receptors and other GPCRs in rHDL particles and compare their activities toward ligands and signaling partners directly. A major technical obstacle is the formation of "anti-parallel" receptor dimers, where the N termini of the reconstituted GPCRs in the oligomer are on opposite sides of the phospholipid bilayer. Indeed, reconstitution of two GPCRs in a single rHDL particle has been previously demonstrated for rhodopsin (13, 16), but a detailed analysis by nanogold labeling and single-particle electron microscopy of the reconstituted "dimers" reveals that a significant fraction were anti-parallel (16). It is plausible, and even likely, that an anti-parallel GPCR dimer will result in suboptimal or disrupted G protein coupling. The development of approaches to ensure parallel GPCR dimer incorporation into rHDL particles, to confidently study the functional relevance of opioid receptor dimerization, is currently a major priority in the laboratory.

HDL reconstitution of YMOR also provided a platform for analysis of ligand binding using single-molecule microscopy. In this study we examined the reversible binding of a μ-opioid receptor-specific agonist, [Lys7, Cys8]dermorphin (47), with SM-TIRF. To the best of our knowledge these data represent the first reported observation of a peptide agonist binding to an isolated GPCR in a lipid bilayer using single-particle imaging.
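The step counting behind Fig. 5, D and E was done with in-house MatLab software; a minimal Python sketch of the same idea is below. The detection threshold, window size, and the synthetic trace are our own illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def count_bleach_steps(trace, drop=0.4, window=5):
    """Count discrete photobleaching steps in a fluorescence intensity trace."""
    baseline = np.median(trace[:window])
    steps, i = 0, window
    while i < len(trace) - window:
        before = np.median(trace[i - window:i])
        after = np.median(trace[i:i + window])
        if before - after > drop * baseline:   # abrupt intensity loss
            steps += 1
            i += window                        # skip past this step edge
        else:
            i += 1
    return steps

# Synthetic 30-s trace at 10 frames/s: one fluorophore bleaching at frame 120
rng = np.random.default_rng(0)
trace = np.r_[np.full(120, 1.0), np.full(180, 0.05)] + rng.normal(0, 0.03, 300)
print(count_bleach_steps(trace))  # 1 step, consistent with a monomer
```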
We are currently refining our methods to resolve the kinetics of ligand binding to YMOR, as well as visualizing ligand binding to opioid receptors in intact cells. Peptide receptors such as the opioid family are particularly amenable to fluorescence spectroscopy because the relatively large ligand size can tolerate the incorporation of fluorophores without drastically impairing binding affinities. This single-molecule visual approach to studying ligand binding to opioid receptors, utilizing labeled agonists and antagonists with unique binding affinities toward receptor homo- and heterodimers, may potentially address opioid receptor oligomerization in physiologically relevant tissue preparations.
2019-08-18T17:50:27.303Z
2009-06-19T00:00:00.000
{ "year": 2009, "sha1": "8a9b096bb9181b64e6ce03fdf1eeaddd033874a7", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/284/39/26732.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "bdb1de18acec3ea01ea7da152e5460023aef714b", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [] }
104213698
pes2o/s2orc
v3-fos-license
Influence of water intercalation and hydration on chemical decomposition and ion transport in methylammonium lead halide perovskites

The use of methylammonium (MA) lead halide perovskites \ce{CH3NH3PbX3} (X=I, Br, Cl) in perovskite solar cells (PSCs) has made great progress in performance efficiency during recent years. However, the rapid decomposition of \ce{MAPbI3} in humid environments hinders outdoor application of PSCs, and thus a comprehensive understanding of the degradation mechanism is required. To this end, we investigate the effect of water intercalation and hydration on the decomposition and ion migration of \ce{CH3NH3PbX3} using first-principles calculations. We find that water interacts with \ce{PbX6} and MA through hydrogen bonding; the former interaction strengthens gradually, while the latter hardly changes, when going from X=I to Br and to Cl. Thermodynamic calculations indicate that water exothermically intercalates into the perovskite, while the water intercalated and monohydrated compounds are stable with respect to decomposition. More importantly, the water intercalation greatly reduces the activation energies for vacancy-mediated ion migration, which become higher going from X=I to Br and to Cl. Our work indicates that hydration of halide perovskites must be avoided to prevent the degradation of PSCs upon moisture exposure.

Introduction

Perovskite solar cells (PSCs) using methylammonium lead halide perovskites MAPbX3 (MA = methylammonium cation CH3NH3+; X = halogen anion I−, Br− or Cl−) as light harvesters have opened up a new stage in the field of photovoltaics. The rapid rise of the power conversion efficiency of PSCs, from the initial 3.81% reported by Kojima et al. in 2009 [1] to over 20% [2,3] within a few years, is quite remarkable compared with other types of solar cells [4,5,6]. Moreover, the much lower cost of cell fabrication by chemical methods and the abundance of raw materials have attracted great attention, inspiring hope that solar power will become competitive with fossil fuels in the electricity market in the near future. In spite of great advances in the research and development of PSCs during the past years, some obstacles to the commercialization of PSCs still remain, such as the toxicity of lead [7] and the mostly poor material stability of these halide perovskites [8,9,10,11,12]. First-principles calculations revealed that MAPbI3, the most widely used hybrid perovskite halide in PSCs, may decompose exothermically [13], and that such intrinsic instability can be cured by mixing halogen ions [14,15]. On the other hand, moisture has been reported to play a critical role as an extrinsic factor in the degradation of PSC performance [16]. Although a controlled humidity condition or a small amount of H2O has positive effects on PSC performance, such as enhancement of the reconstruction in perovskite film formation and triggering of nucleation and crystallization of the perovskite phase [17,18,19], the H2O molecule is known to damage the integrity of perovskite films. Some researchers asserted that MAPbI3 may readily hydrolyze in the presence of water due to deprotonation of CH3NH3+ by H2O, resulting in degradation products such as CH3NH2, HI and PbI2 [8,20,21]. On the contrary, different degradation reactions in perovskite films under humid conditions have been reported. Huang et al.
[22] focused on the interaction between H2O and MAPbI3 and suggested that degradation of MAPbI3 under ambient conditions may produce PbCO3 and α-PbO rather than PbI2. Some reports found that the formation of a monohydrated CH3NH3PbI3·H2O or dihydrated (CH3NH3)4PbI6·2H2O intermediate phase is the initial step in the perovskite decomposition process under 80% humidity exposure [23,24,25,26]. It was found subsequently that these intermediate phases are unstable in ambient conditions, so that further decomposition into the final products can occur [27,28,29,30,31,32]. As a consequence, the decomposition of MAPbI3 in humid air is a rather complicated process, and the reaction pathways and mechanisms are under active debate. To unveil the mechanism of MAPbI3 decomposition upon humidity exposure, theoretical simulations based on density functional theory (DFT) have been performed, focused on H2O adsorption on MAPbI3 surfaces [33,34,35,36]. However, a comprehensive understanding of the water-assisted decomposition of MAPbI3 is not yet fully established, and is urgently needed to facilitate materials engineering for enhanced material stability.

In this work, we investigate the influence of water intercalation and hydration on the decomposition of MAPbX3 (X = I, Br, Cl) by performing first-principles calculations. The crystalline and atomistic structures of the water intercalated phases, which we denote MAPbX3H2O, and of the monohydrated phases MAPbX3·H2O are explored carefully. These phases can be regarded as intermediates in the process of water-assisted decomposition of MAPbX3. The intercalation energies of a water molecule into the pseudo-cubic phases and the decomposition energies of the water intercalated and monohydrated phases into PbX2, CH3NH3X and H2O are calculated to draw a meaningful conclusion about the stability. Finally, we consider vacancy-mediated diffusion of the X− anion, the MA+ cation, and the H2O molecule in the pristine, water intercalated and monohydrated phases, since such migrations have important implications for material stability [37,38,39,40,41]. Pb2+ migration is excluded due to the high formation energy of the Pb vacancy, V_Pb.

Computational Methods

All of the primary DFT calculations were carried out using the pseudopotential plane wave method as implemented in the Quantum ESPRESSO package [42]. We used the ultrasoft pseudopotentials provided in the package, where the valence electronic configurations of the atoms are H 1s1, C 2s2 2p2, N 2s2 2p3, Cl 3s2 3p5, Br 4s2 4p5, I 5s2 5p5, and Pb 5d10 6s2 6p2. The exchange-correlation interaction between the valence electrons was treated using the Perdew-Burke-Ernzerhof (PBE) form [43] of the generalized gradient approximation, augmented by the dispersive van der Waals interaction (vdW-DF-OB86), which has been shown to be important for calculations of perovskite halides [44,45]. The structures of the MAPbX3 and water intercalated phases were assumed to be pseudo-cubic. Unit cells containing one formula unit (f.u.) for these phases and two formula units for the monohydrated phase were used for structural optimization and decomposition energetics. For the diffusion processes, (2×2×2) supercells for the pseudo-cubic phases and (2×2×1) supercells for the monohydrated phases, containing 120 and 96 atoms, respectively, were used. The cutoff energy for the plane-wave basis set was 40 Ry and (2×2×2) Monkhorst-Pack k-point meshes were used, which guarantee a total energy accuracy of 5 meV per unit cell.
Atomic positions were fully relaxed until the forces converged to 5×10^-5 Ry/Bohr. The activation energies for migrations were calculated using the climbing-image nudged elastic band (NEB) method [46]. The structural optimizations of the pseudo-cubic MAPbX3 phases produced lattice constants of 6.330, 5.949, and 5.682 Å for X=I, Br, and Cl, which are in good agreement with the experimental values [47] within 1% relative error. A water molecule was placed into the interstitial space of these optimized cubic phases, which were then re-optimized. The supercell shape becomes triclinic following optimization. We have also performed optimization of the monohydrated phases MAPbX3·H2O with the monoclinic crystalline lattice and experimentally identified atomic positions [24,26,48,49]. For the case of MAPbI3·H2O, the determined lattice constants a = 10.460 Å, b = 4.630 Å, c = 11.100 Å, and β = 101.50° agree well with the experimental results [48]. It is worth noting that the MAPbI3 crystal has a 3-dimensional structure with corner-sharing PbI6 octahedra, while the monohydrated phase MAPbI3·H2O is characterized by a 1-dimensional edge-sharing PbI6 structure. The optimized atomistic structures of the water intercalated phase MAPbI3H2O and the monohydrated phase MAPbI3·H2O are shown in Figure 1. It should be emphasized that the total energies per formula unit of MAPbX3H2O are typically 0.3 eV higher than those of MAPbX3·H2O, indicating that the water intercalated phase would be an intermediate phase in the transformation to the monohydrated phase.

In order to compare the free energies of formation of the pristine and hydrated phases, additional calculations of the harmonic phonon density of states were performed using the codes Phonopy [50] and VASP (PBEsol functional) [51]. The Gibbs free energy of formation (G_solid) is calculated from the DFT internal energy (U), the vibrational free energy (F_vib), and the configurational entropy (S_conf), i.e., G_solid(T) = U + F_vib(T) − T·S_conf. S_conf takes account of the configurational entropy (rotational activity) of the MA+ ion from statistical mechanics, which cannot be extracted directly from static DFT calculations [52]. To complement the solid-state calculations, the free energy of water vapour was estimated from a standard ideal gas expression.

Water-perovskite interaction

For the water intercalated phase MAPbI3H2O, the intercalated water molecule resides, together with the organic MA+ ion, in the large interstitial space formed by the framework of Pb and I atoms of the inorganic PbI6 octahedra (for X=Br and Cl, similar structures are observed). It is bonded to H atoms of the NH3 moiety of MA and to I atoms of PbI6; the bond length between the O atom of water and the H atom of NH3 is ~1.63 Å, and those between the H atoms of water and the I atoms of PbI6 are 2.43 and 2.54 Å. In Figure 1(b), for the monohydrated phase MAPbI3·H2O, the corresponding bond lengths are slightly longer, i.e., 1.78, 2.83, and 2.92 Å. These values indicate that the interactions of the water molecule with the inorganic PbI6 framework as well as with the organic MA cation are through hydrogen bonding, as already pointed out in previous works [35,53,54]. When going from I to Br and to Cl, the (H2O)O−H(MA) bond length increases a little for the water intercalated phases, or hardly changes for the monohydrated phases, whereas the (H2O)H−X(PbX6) bond lengths decrease distinctly (see Table 1).
Considering that water intercalation causes a volume expansion, we estimate a relative volume expansion rate r_vol = (V − V0)/V0 × 100%, where V is the volume of the water intercalated or monohydrated phase and V0 the volume of the pseudo-cubic MAPbX3 phase per formula unit. This rate gradually increases going from X=I to Br and to Cl. Consequently, it can be deduced that decreasing the atomic number of the halogen component enhances the interaction between water and the inorganic PbX6 matrix through hydrogen bonding, while the interaction between water and the MA ion is maintained, resulting in a contraction of the interstitial space and the volume, which might hinder water intercalation and ion migration.

To estimate the ease of water intercalation, we calculated the intercalation energy as

E_int = E_MAPbX3·H2O − E_MAPbX3 − E_H2O,

where E_MAPbX3·H2O, E_MAPbX3, and E_H2O are the total energies of the water intercalated or monohydrated bulk, the pseudo-cubic bulk unit cell per formula unit, and an isolated water molecule, respectively. The intercalation energies in the water intercalated phases were calculated to be −0.53, −0.43, and −0.36 eV for X=I, Br, and Cl, which are smaller in magnitude than those in the monohydrated phases of −0.87, −0.78, and −0.69 eV, as presented in Table 2. Therefore, it can be said that formation of the monohydrated phases is easier than formation of the water intercalated phases. When going from X=I to Br and to Cl, the magnitude of the water intercalation energy decreases, indicating that water intercalation becomes more difficult as the atomic number of the halogen component decreases. This can be interpreted in terms of the water-perovskite interactions and the structural trends analyzed above. It should be noted that, although the negative intercalation energies imply that water intercalation is exothermic, a certain amount of energy (a kinetic barrier) could be required for a water molecule to intercalate into the perovskite halides, as for the penetration of water through the perovskite iodide surface [33,34,35,36].

We further considered decomposition of the water intercalated and monohydrated phases by calculating the decomposition energy

E_dec = E_PbX2 + E_MAX + E_H2O − E_MAPbX3·H2O,

where E_PbX2 and E_MAX are the total energies of crystalline PbX2 (space group: P3m1) and MAX (space group: Fm3m), respectively. As shown in Table 2, the decomposition energies were calculated to be positive, indicating that the decomposition is an endothermic process. It is worth comparing these with the pristine perovskite halides without water. The decomposition MAPbI3 → MAI + PbI2 is exothermic, with a negative decomposition energy of −0.06 eV, whereas those of MAPbBr3 and MAPbCl3 are endothermic, with positive decomposition energies of 0.10 and 0.23 eV [14,15]. Interestingly, the magnitude of the decomposition energy is in the reverse order to that of the intercalation energy going from X=I to Br and to Cl. This indicates that water intercalates more readily into the halide perovskite, but the resulting water-included compounds are more resistant to decomposition into their components, as the atomic number of the halogen component decreases.
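To make the bookkeeping of the two energy definitions explicit, a small sketch with placeholder total energies follows; these are not the paper's DFT values, and only the formulas mirror the text:

```python
# Placeholder total energies in eV per formula unit (hypothetical values)
E = {
    "MAPbI3":     -1234.567,
    "MAPbI3_H2O": -1249.097,   # water intercalated (or monohydrated) phase
    "H2O":          -14.000,   # isolated water molecule
    "PbI2":        -700.000,
    "MAI":         -534.500,
}

E_int = E["MAPbI3_H2O"] - E["MAPbI3"] - E["H2O"]
E_dec = E["PbI2"] + E["MAI"] + E["H2O"] - E["MAPbI3_H2O"]
print(f"E_int = {E_int:+.2f} eV (negative: exothermic intercalation)")
print(f"E_dec = {E_dec:+.2f} eV (positive: endothermic decomposition)")
```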
To consider finite-temperature effects, additional calculations were performed to compute the Gibbs free energy of each phase (see Figure 2). Formation of the hydrated compound is found to be favourable at lower temperatures due to the large energy gain from creation of the hydrate versus a state containing MAPbI3 and water vapour. The calculations confirm the results stated earlier, but also predict that at higher temperatures the pristine MAPbI3 phase becomes stabilized by its vibrational entropy. The inclusion of configurational entropy associated with the rotational freedom of the MA+ ion acts to further stabilize the material against hydration.

Figure 2: Calculated Gibbs free energies as a function of temperature for the monohydrate at fixed pressure p0 = 1 bar. Also shown are the reference free energies of MAPbI3 assuming different values for the (statistical-mechanical) configurational entropy due to the motion of the MA+ cation, S_conf. Additional entropy shifts the transition temperature for hydrate formation to lower values (from T = 445 K to T = 345 K).

Dynamics of water incorporation and charged defects

We turn our attention to how the ions and the water molecule diffuse inside the water intercalated or monohydrated perovskites, trying to find out their role in material instability. It is well known that diffusion of ions in a crystalline solid is associated with point defects such as site vacancies and/or interstitials. While such migrations of ions or defects can provide explicit explanations for aspects of PSC device performance such as ionic conduction, hysteresis, and the field-switchable photovoltaic effect [37,38,39,40,41,55,56,57], they might also have important implications for material stability. It was established that, although several types of point defects can be formed in the hybrid perovskite halides, as in other inorganic perovskite oxides, including vacancies (V_MA, V_Pb, V_X), interstitials (MA_i, Pb_i, X_i), cation substitutions (MA_Pb, Pb_MA) and antisite substitutions (MA_X, Pb_X, X_MA, X_Pb), the vacancies except V_Pb have the lowest formation energies, while the others are unstable both energetically and kinetically [41,57]. In this work we thus considered only the vacancies V_MA and V_X in the pristine, water-intercalated, and monohydrated phases, which can support vacancy-mediated ionic diffusion. For migration of the water molecule inside the water intercalated or monohydrated phases, a water vacancy V_H2O is formed and allowed to migrate.

For vacancy-mediated ion migration in the pristine and water intercalated MAPbX3 phases, we follow the three vacancy transport mechanisms established in previous works [37,38,40], where vacancies hop between neighbouring equivalent sites. According to these mechanisms, an X− anion at a corner site of a PbX6 octahedron migrates along the octahedron edge towards a vacancy at another corner site, and an MA+ cation hops into a neighbouring vacant cage formed by the inorganic scaffold. The water molecule migrates along a path similar to the MA+ case. For the monohydrated phases, we devised plausible paths for each of the three defects and picked out the one with the lowest activation energy, as discussed below in detail. Figure 3 shows a schematic view of the vacancy-mediated ion and molecule migration paths. Special attention was paid to obtaining well-converged structures of the start- and end-point configurations, with structural relaxations to atomic forces below 0.01 eV/Å. The activation energies for these vacancy-mediated migrations are summarised in Table 3. To see whether our computational models and parameters give reasonable results for the ionic migrations, the pseudo-cubic MAPbI3 was first tested.
As listed in Table 3, the activation energies for I− and MA+ migration were calculated to be 0.55 and 1.18 eV, respectively, which are comparable with the 0.58 and 0.84 eV reported in ref. [37] and the 0.32–0.45 and 0.55–0.89 eV in ref. [38], but higher than the 0.16 and 0.46 eV in ref. [40]. With respect to the crystalline lattice, we used the pseudo-cubic lattice as in ref. [37], while Haruyama et al. [38] and Azpiroz et al. [40] used the tetragonal lattice. An exchange-correlation (XC) functional including dispersion (vdW) interactions was used in our work and in ref. [38], whereas PBEsol and PBE without vdW correction were used in ref. [37] and ref. [40], respectively. Therefore, the slight discrepancies might be associated with the different crystalline lattices and with XC functionals lacking the vdW correction, rather than with the supercell size. Most importantly, the activation energy for I− migration is lower than that for MA+ migration in all the above-mentioned works, giving confidence that the results obtained in this work can be used to assess the influence of water on ion diffusion.

For I− migration in the monohydrated phase MAPbI3·H2O, which is structurally characterized by 1-dimensional edge-sharing PbI6 octahedra connected in the [010] direction, we devised four migration pathways: along the three octahedron edges in different directions, and across the space between separated octahedra. The lowest activation energy was found for migration along the edge in the [010] direction, while the highest value, over 2 eV, was found for the path across the space, implying that this pathway is unlikely. Figure 4 shows the I− migration pathways along the octahedron edge in the three kinds of phases and the corresponding activation energy profiles. We find that, when water intercalates into the perovskite, the activation energy decreases, indicating more facile diffusion of the I− ion upon water intercalation. Meanwhile, the activation energy in the monohydrated phase is higher than in the water intercalated phase but still lower than in the pristine phase. This demonstrates that the intercalated water molecule enhances diffusion of ions in the hybrid perovskite halides, facilitating the formation of hydrated phases. However, once the hydrated phase is formed, diffusion of ions becomes a little harder. Similar arguments hold for MAPbBr3 and MAPbCl3.

An MA+ ion in the monohydrated phase was enforced to migrate along the almost straight pathway in the [010] direction, since there is no channel in the [100] direction, due to the wall formed by the PbI6 octahedra, and the channel in the [001] direction involves a much longer distance. It should be noted that the MA+ cation in the water intercalated phase can diffuse equally in the [100] and [010] directions but not in the [001] direction, due to the presence of the water molecule on that path. Figure 5 shows the intermediate states during MA+ migration in the pristine, water-intercalated, and monohydrated MAPbI3 phases and the corresponding energy profiles. We can see distortions of the PbI6 octahedra during migration, most clearly in the water intercalated phase due to the rather strong interaction between PbI6 and methylammonium, which may hinder MA+ ion migration. Relatively high activation energies for MA+ migration were found for the pristine (1.18 eV) and monohydrated (1.14 eV) phases, while a low activation energy of 0.38 eV was found for the water intercalated phase.
This indicates that the inclusion of water in the perovskite halides reduces the activation energy for MA+ ion migration, as in the case of halogen ion migration. As discussed above, the volume expansion rate of the water intercalated phase is larger than that of the monohydrated phase; water intercalation thus induces more space expansion, resulting in enhanced MA+ ion diffusion. Compared to I− migration, MA+ migration is more difficult, with its higher activation energy, in agreement with the previous works [37,38,40], in which the bottleneck comprising four I− ions and the high degree of orientational motion of the MA+ ion were pointed out as the reasons for such hindered migration.

Finally, an H2O molecule was allowed to migrate in the same direction as the MA+ ion in the water intercalated and monohydrated phases, as represented in Figure 6. The activation energy in the water intercalated phase was calculated to be 0.28 eV, which is low enough for diffusion inside the bulk crystal and formation of the hydrated phase. As in the cases of the ion migrations, this is lower than in the monohydrated phase. At this stage, it is worth comparing with the initial process of water penetration into the MAPbI3 surface. From first-principles calculations [36,58,33], it was found that, when water molecules are brought into contact with the MAPbI3 surface, a water molecule inside the surface region is 0.2-0.3 eV more stable than one outside, and thus water molecules are strongly driven to diffuse into the interior of MAPbI3. The activation barrier for this process was calculated to be 0.31 eV [33] or 0.27 eV [36] at low water coverage and 0.82 eV [33] at high coverage. These values are comparable to the barrier for water diffusion within the bulk crystal found in this work, which can be regarded as a continuation of the water penetration, indicating easy formation of the water intercalated and, further, the hydrated phase. As can be seen in Table 3, the water molecule can migrate more easily than the MA+ ion (which might be due to the larger molecular size of MA and its stronger interaction with the surrounding components) but less easily than the X− ion in both phases. On the other hand, the activation barrier for water migration in the monohydrated phase is higher than in the water intercalated phase, as for the X− and MA+ ion migrations.

When the atomic number of the halogen component decreases from X=I to Br and to Cl, the activation barriers for ion and water migration increase monotonically, as shown in Table 3. In fact, when changing from I, with the larger ionic radius of 2.2 Å, to Br, with a smaller ionic radius of 1.96 Å, and to Cl, with a still smaller ionic radius of 1.81 Å, the lattice spacing and the PbX6−MA bonding shrink while the intramolecular MA spacing is maintained [14,15]. This enhances the Pb−X interaction and makes the passage of ions and water molecules more difficult. Similar arguments hold for the water intercalated and monohydrated phases. The increase of the activation barriers for ion migration going from X=I to Cl reflects the enhancement of material stability when I is mixed with Br or Cl, and is consistent with the aforementioned decomposition energetics.
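For readers who want to reproduce the climbing-image NEB workflow used for the barriers above, a minimal self-contained sketch with ASE is shown below. To keep it runnable it uses a toy EMT-described Au adatom hop on Al(100); the perovskite calculations themselves used Quantum ESPRESSO, so a DFT calculator and the vacancy geometries described in the text would replace the toy pieces:

```python
from ase.build import fcc100, add_adsorbate
from ase.calculators.emt import EMT
from ase.constraints import FixAtoms
from ase.neb import NEB
from ase.optimize import BFGS

# Initial state: Au adatom in a hollow site of an Al(100) slab
initial = fcc100('Al', size=(2, 2, 3))
add_adsorbate(initial, 'Au', 1.7, 'hollow')
initial.center(axis=2, vacuum=4.0)
initial.set_constraint(FixAtoms(mask=[atom.tag > 1 for atom in initial]))
initial.calc = EMT()
BFGS(initial, logfile=None).run(fmax=0.02)

# Final state: adatom moved to the neighbouring hollow site
final = initial.copy()
final[-1].x += final.get_cell()[0, 0] / 2
final.calc = EMT()
BFGS(final, logfile=None).run(fmax=0.02)

# Climbing-image NEB with three intermediate images
images = [initial] + [initial.copy() for _ in range(3)] + [final]
for image in images[1:4]:
    image.calc = EMT()
neb = NEB(images, climb=True)
neb.interpolate()
BFGS(neb, logfile=None).run(fmax=0.05)

barrier = max(i.get_potential_energy() for i in images) - initial.get_potential_energy()
print(f"activation energy ~ {barrier:.2f} eV")
```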
Conclusion

To understand the degradation mechanism of PSCs upon exposure of MAPbI3 to moisture, we have investigated the influence of water intercalation and hydration on the decomposition and ion migration of MAPbX3 (X=I, Br, Cl) by first-principles calculations. The crystalline lattices and atomistic structures of the water intercalated MAPbX3 phases were proposed and optimized, together with those of the monohydrated phases MAPbX3·H2O. It was found that water interacts with PbX6 and MA in these compounds through hydrogen bonding, and that the former interaction strengthens gradually while the latter hardly changes when going from X=I to Br and to Cl. The calculated water intercalation energies and decomposition energies indicate that water can exothermically intercalate into the hybrid perovskite halides, while the water intercalated and monohydrated compounds are stable with respect to decomposition. More importantly, the hydrogen-bond interaction induced by water greatly affects the vacancy-mediated ion migrations, whose activation barriers decrease upon water inclusion inside the perovskite halides. The activation energies for ion and water molecule migration become higher going from X=I to Br and to Cl. These results clarify that degradation of PSCs upon moisture exposure originates from the formation of water intercalated and further hydrated compounds and the subsequent decomposition of these compounds, providing ideas for preventing this degradation at the atomic level.

Acknowledgments

This work was supported partially by the State Committee of Science and Technology, Democratic People's Republic of Korea, under the state project "Design of Innovative Functional Materials for Energy and Environmental Application" (No. 2016-20). The work in the UK was supported by the Royal Society and the Leverhulme Trust, and the Imperial College High Performance Computing Service. A.P.M. was supported by a studentship from the Centre for Doctoral Training in Theory and Simulation of Materials at Imperial College London, funded by the EPSRC under grant EP/G036888. The calculations were carried out on the HP Blade System C7000 (HP BL460c) that is owned and managed by the Faculty of Materials Science, Kim Il Sung University.
2017-10-14T23:45:34.000Z
2017-08-25T00:00:00.000
{ "year": 2018, "sha1": "2e31760cedc1ecacbbfff99910141bff680bfcae", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2018/ta/c7ta09112e", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "2e31760cedc1ecacbbfff99910141bff680bfcae", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
85098621
pes2o/s2orc
v3-fos-license
Demographic population structure and fungal associations of plants colonizing High Arctic glacier forelands, Petuniabukta, Svalbard

The development of vegetation in Arctic glacier forelands has been described as unidirectional, non-replacement succession characterized by the gradual establishment of species typical for mature tundra with no species turnover. Our study focused on two early colonizers of High Arctic glacier forelands: Saxifraga oppositifolia (Saxifragaceae) and Braya purpurascens (Brassicaceae). While the first species is a common generalist also found in mature old growth tundra communities, the second specializes on disturbed substrate. The demographic population structures of the two study species were investigated along four glacier forelands in Petuniabukta, north Billefjorden, in central Spitsbergen, Svalbard. Young plants of both species occurred exclusively on young substrate, implying that soil conditions are favourable for establishment only before soil crusts develop. We show that while S. oppositifolia persists from pioneer successional stages and is characterized by increased size and flowering, B. purpurascens specializes on disturbed young substrate and does not follow the typical unidirectional, non-replacement succession pattern. Plants at two of the forelands were examined for the presence of root-associated fungi. The fungal genus Olpidium (Fungi incertae sedis) was found along a whole successional gradient in one of the forelands.

Keywords: Colonizer; deglaciation; endophyte; High Arctic; Olpidium; succession.

(Published: 30 April 2014)

Citation: Polar Research 2014, 33, 20797, http://dx.doi.org/10.3402/polar.v33.20797

Large areas of Arctic and alpine regions have been deglaciated, and have thus become open to plant colonization, since the end of the Little Ice Age (LIA) about 120 years ago. The boundaries of the last maximum development of glaciers are usually easily distinguished in the field (Rachlewicz et al. 2007), making glacier forelands an outdoor laboratory for the study of plant strategies in cold regions. Colonization of glacier forelands is hindered by a cold and short growing season, small and variable seed production, rough substratum, scarcity of nutrients and organic matter, and disturbance by runoff, solifluction, cryoturbation and permafrost (Svoboda & Henry 1987; Chambers 1995; Hodkinson et al. 2003; Moreau et al. 2005; Moreau et al. 2008; Yoshitake et al. 2010). Plants that are especially successful in primary succession of deglaciated areas are recruited from a pool of stress-tolerant species of cold regions (Caccianiga et al. 2006). These colonizers are dispersed to the deglaciated areas, where seeds germinate at low temperatures and become established (Marcante et al. 2009; Schweinbacher et al. 2012). In contrast to succession on new substrates in temperate regions (e.g., Řehounková & Prach 2010), the early stages of succession in deglaciated areas of the Arctic lack ruderal species characterized by a short lifespan and large investment into fecundity (Moreau et al. 2008; Moreau et al. 2009; Prach & Rachlewicz 2012). That is, in temperate alpine regions, pioneer plants rapidly colonize recently deglaciated areas within a few years and are replaced by late-successional plant assemblages in the following few decades of succession (Whittaker 1993; Raffl et al. 2006). In the Arctic, in contrast, pioneer plants take decades to colonize and the colonizers often persist in old tundra communities (Svoboda & Henry 1987; Hodkinson et al.
2003; Moreau et al. 2008; Mori et al. 2008; Prach & Rachlewicz 2012). Owing to a reduced species pool and to environmental conditions, early successional specialist species are sparse in the High Arctic (Svoboda & Henry 1987) and/or they persist in old growth tundra on disturbed microsites, so their successional status is blurred. Positive biotic interactions, for example mycorrhizal symbioses (Nakatsubo et al. 2010; Fujiyoshi et al. 2011), amelioration of the substrate by cyanobacterial crusts (Yoshitake et al. 2010) and accumulated organic matter (Hodkinson et al. 2003), play an important role in the early stages of succession in this low-energy environment. It has been suggested that facilitation and competition are rare due to low plant cover (Nakatsubo et al. 2010). During succession in Arctic deglaciated areas, substrates change over time. Initially they can be relatively nutrient rich, then reach a quasi-steady state of development during the fourth or fifth decade due to low precipitation, which limits mineral weathering (Kabala & Zapart 2012). Only after 60 years do soils develop (Hodkinson et al. 2003). In conjunction with this substrate succession, several studies have reported changing diversity and frequency of root-associated fungi (e.g., Jumpponen 2003; Cazarés et al. 2005; Fujiyoshi et al. 2011). While arbuscular mycorrhiza is generally considered rare at higher latitudes (Newsham et al. 2009), being only rarely reported from Svalbard (Väre et al. 1992; Öpik et al. 2013), it has been suggested that diverse functional groups of fungi (e.g., dark septate endophytes) could influence nutrient uptake and growth in Arctic plants (Newsham et al. 2009). Ectomycorrhiza and associations with dark septate endophytes have repeatedly been observed in Arctic regions (Kohn & Stasovski 1990; Väre et al. 1992; Fujimura & Egger 2012).

Most pioneer species in the High Arctic are regarded as generalists capable of colonizing young substrate, in addition to other habitats, resulting in the unidirectional, non-replacement succession pattern (Svoboda & Henry 1987; Prach & Rachlewicz 2012). Here, we argue that in addition to these generalists, some pioneer species are specialists restricted to young substrate. The generalist species can be expected to display an increase of size and of the proportion of flowering individuals (i.e., an ageing pattern) along the successional gradient of time since deglaciation. By contrast, an early-succession specialist should depend more on local conditions, and its population structure should be more or less independent of the time elapsed since deglaciation. To identify these patterns, we investigated the demographic population structures of two pioneer species, the widely spread generalist Saxifraga oppositifolia and the assumed specialist Braya purpurascens, along the successional gradients of four glacier forelands in the Petuniabukta area, northern Billefjorden, Spitsbergen (Fig. 1). Further, to contribute to our understanding of the role of fungal symbioses in succession in the High Arctic, we examined roots of the two species for the presence of root-associated fungi at two of the forelands.

Study species

Saxifraga oppositifolia (Saxifragaceae) and Braya purpurascens (Brassicaceae) have been reported as pioneer species in glacier foreland succession in Svalbard (Hodkinson et al. 2003; Moreau et al. 2008; Nakatsubo et al. 2010; Prach & Rachlewicz 2012). Saxifraga oppositifolia holds its position up to late stages of succession and is common in old growth tundra.
In contrast, B. purpurascens is frequent in initial stages of succession, but its abundance decreases in later stages, becoming limited to disturbed places in old growth tundra (Moreau et al. 2008). Saxifraga oppositifolia is a long-lived perennial clonal species, which regenerates both from seeds and from vegetative diaspores. In the study area, young plants are prostrate, non-clonal and have a main root. Older plants produce adventitious roots on branches and start to disintegrate from the old parts. Branches can be deposited by runoff water and re-root (Hagen 2002). Saxifraga oppositifolia produces a large number of tiny, well-dispersed seeds forming a long-term persistent seed bank (Cooper et al. 2004; Schweinbacher et al. 2010; Müller et al. 2012). The species is facultatively mycorrhizal, forming arbuscular mycorrhiza or remaining non-mycorrhizal (Harley & Harley 1987). Saxifraga oppositifolia displays great morphological, genetic and ploidy variability throughout its circumpolar Arctic and alpine distribution, including Svalbard (Müller et al. 2012). There are two main morphological forms: prostrate and cushion (Kume et al. 1999; Kume 2003). Most plants in this study were growing on young substrate (less than 15-20 years) and were too young and small (Fig. 2), often just a single unbranched shoot, to be classified into a growth form. Nevertheless, the vast majority of older plants (ca. 95%) clearly belonged to the prostrate form, with only five clear cushion-type plants found at sites 2-5 of the Sven foreland.

Braya purpurascens (syn. Braya glabella subsp. purpurascens) is restricted to the Arctic. It is a non-clonal plant with a perennial main root and one to several leaf rosettes and flowering stems (Fig. 3). Bledsoe et al. (1990) found no fungal colonization on the roots of this species.

Data collection

Data on both study species were collected on the forelands of four glaciers (Ferdinand, Hørbye, Ragnar and Sven) in Petuniabukta, northern Billefjorden, central Spitsbergen, Svalbard, in late July through early August 2011. A transect was established on each glacier foreland, beginning at the occurrence of either of the studied species closest to the glacier front. The transects were ca. 1500 m long in the cases of the Ferdinand, Hørbye and Ragnar forelands, while the transect on the Sven foreland ended on the oldest LIA moraine. Using a global positioning system device, series of plots were placed along each time-since-deglaciation gradient at distances of 100, 300, 700 and 1500 m from the starting point (Fig. 1b). Plants were sampled from the central parts of those plots until about 20 plants were sampled. Therefore, the sampled area was affected by the number of plant individuals at a site, and plot size varied from about 100 m² to several thousand m². For each foreland, the age since deglaciation was derived from the dates of glacier retreat presented by Rachlewicz et al. (2007; Fig. 2). Up to 20 plants of each species were sampled at each site to determine the demographic structure of the populations. Saxifraga oppositifolia was the most frequent species, and 20 individuals were sampled at most sites. In contrast, Braya purpurascens was in low abundance and rather scattered at most sites. Therefore, the B. purpurascens data set does not cover the whole glacier foreland in most cases, and fewer than 20 individuals were sampled at many sites.
oppositifolia individuals were measured: size of plant (product of the length of the longest axis of a tuft and that orthogonal to it) and reproductive status of the plant (flowering/fruiting vs. sterile). For B. purpurascens, the reproductive status of the plant (flowering/fruiting vs. sterile) and the numbers of sterile and fertile rosettes per individual were counted. The tuft size, number of rosettes or proportion of flowering individuals represent the demography of the population, integrating the age of plant individuals and variation in growth rates at different sites (Gatsuk et al. 1980). Roots of five randomly chosen individuals of both species were collected at the Ragnar and Hørbye forelands at the youngest inhabited site and at the 300 m and 1500 m distant sampling sites. While S. oppositifolia individuals occurred at all the sampling sites, B. purpurascens was missing from the initial point and the 300 m distant site at Ragnar. Roots were washed, silica gel-dried for transport, and subsequently re-hydrated and stained by Chlorazol Black E following the procedure in Šmilauerová et al. (2012). Ten randomly chosen roots per species and sampling site were inspected by light microscopy. Statistical analyses Patterns of demographic population structure (i.e., size and reproductive status of plant individuals) of both study species along the gradients of the age since deglaciation were assessed by a series of generalized linear and non-linear models. Due to different shapes of the gradients on the four forelands, each foreland was analysed separately, though the data and graphical representations of the models were subsequently embedded in single figures. Saxifraga oppositifolia size was analysed by non-linear hyperbolic models with the following equation: size = c - a / (age + b), where a, b and c are parameters defining the shape of the relationship and age is the site age since deglaciation as derived from Rachlewicz et al. (2007). The size was log10-transformed prior to the analysis. This model assumes a greater size difference between plots in the young part of the age gradient than in the old part, and an asymptote to which size will converge with age. The asymptote is represented by the parameter c of the model. In addition, the projected intercept of the hyperbola with the x-axis (i.e., the estimated age of the colonization start; back-transformed size 10^0 = 1 mm²) is fitted by the model and corresponds to age_0 = a/c - b. Reproductive status of both S. oppositifolia and B. purpurascens was analysed by binomial generalized linear models. The demographic population structure of B. purpurascens, based on numbers of plants differing in reproductive status (flowering vs. sterile) and number of rosettes (single vs. multiple), was visualized by bar-plots and analysed by goodness-of-fit tests as contingency tables where possible. All statistical analyses were conducted in R version 2.13.2 (R Development Core Team 2011).
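To make the model-fitting step concrete, the following is a minimal R sketch of how such an analysis could be run; it is not the authors' original script, and the data frame name, column names and nls() starting values are illustrative assumptions.

# Assumed data frame 'dat' for one foreland, with columns:
#   size      - plant size in mm^2 (longest tuft axis x orthogonal axis)
#   age       - site age since deglaciation (years)
#   flowering - 1 if flowering/fruiting, 0 if sterile
dat$logsize <- log10(dat$size)

# Hyperbolic model: logsize = c - a/(age + b). Parameter c is the asymptotic
# (log10) size; the x-intercept a/c - b estimates the start of colonization.
# Note that 'c' is used as a parameter symbol, as in the paper, not as the
# base function c(); nls() resolves it from the 'start' list.
fit <- nls(logsize ~ c - a / (age + b), data = dat,
           start = list(a = 50, b = 5, c = 4))
cf   <- coef(fit)
age0 <- cf["a"] / cf["c"] - cf["b"]   # projected colonization start (years)

# Proportion of flowering individuals along the age gradient: binomial GLM
glm_fit <- glm(flowering ~ age, data = dat, family = binomial)
summary(glm_fit)

Analysing each foreland separately, as described above, amounts to running this fit on the corresponding subset of the data.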
Results While both species occurred on the glacier forelands, there was variability in their abundance, shoot architecture and size (Figs. 2, 3). Observed at all 20 sites, Saxifraga oppositifolia was confirmed as an omnipresent pioneer on the glacier forelands in Petuniabukta. We detected a clear relationship between the average size of S. oppositifolia plants and the time since site deglaciation (Figs. 4, 5, Table 1). Characterized by an initial steep increase in size but reaching a plateau at sites ca. 20 years older than the youngest occurrence (Fig. 5), this pattern was similar among the forelands, but there were notable differences in the projected intercepts of the model hyperbolas with the age axis, that is, the estimated start of colonization. The estimated start of colonization varied among the forelands (Table 1). At young sites (9-26 years since deglaciation), S. oppositifolia plants were just simple shoots a few centimetres long. These small seedlings (Fig. 2a; size less than two on the log scale, corresponding to a cover of ca. 1 cm²) were frequent at most sites younger than 20 years (Fig. 4). Only rarely did these plants also occur at older sites (Ragnar foreland: sites 2 and 3 with two and one individuals, respectively; Sven foreland: site 4 with one individual). Small sterile plants (Fig. 2b; size between two and three on the log scale, corresponding to a cover of ca. 1-10 cm²) were common. They were most frequent at young sites and were absent at the oldest sites of the Ferdinand and Hørbye forelands. Plants of intermediate size (Fig. 2c; size between three and four on the log scale, corresponding to a cover of ca. 10-100 cm²) were common along the whole gradients, with the exception of the youngest sites of Hørbye, Ragnar and Sven. Large plants (Fig. 2d, e; size larger than four on the log scale, corresponding to a cover exceeding ca. 100 cm²) were common or prevailing at older sites but were still present in low numbers at most of the youngest sites. The proportion of flowering individuals in the populations increased with age since deglaciation (Table 2). Plants growing on the youngest sites were always sterile, with sparse flowering recorded at slightly older sites (ca. 20 years since deglaciation). Approximately 50% of the plants were flowering at sites older than 50 years. Braya purpurascens occurred in scattered populations: it was missing from six of the 20 sampling sites, and only one individual was found at three sites. As a result, the B. purpurascens data set is less comprehensive than that for S. oppositifolia. Nonetheless, some inferences can be made. Both species were able to colonize young sites. Braya purpurascens did not show clear patterns of demographic structure across the foreland gradients (Fig. 6). The youngest cohort of sterile, single-rosette plants (Fig. 3a) and the most advanced cohort of flowering, multiple-rosette plants (Fig. 3c) co-occurred at most sites (although the flowering multi-rosette plants were missing from the youngest sites of Ferdinand and Hørbye; Fig. 6). There was a positive relationship between the occurrence of flowering and the age since deglaciation on the Ferdinand and Hørbye gradients; see Fig. 6 for the increasing proportion of the flowering multi-rosette plants (see also Table 2). No arbuscular mycorrhizal structures were detected in the roots of either species. Both were free from infection at the Hørbye foreland. However, roots of both species contained resting and ripe sporangia typical of Olpidium (Fig. 7) on the Ragnar foreland. These structures were found at all three sampling sites (including the youngest succession stage) in S. oppositifolia roots and also in B. purpurascens roots at the oldest site (B. purpurascens was absent from the other two sampling sites). Discussion The detected patterns in the demography of the widely distributed generalist Saxifraga oppositifolia support the expected pattern of increasing size and proportion of flowering individuals along the successional gradients.
By contrast, no such trends were found in Braya purpurascens. Large flowering individuals of this species could be found at old and young sites (except for the youngest sites). Its scattered distribution on the forelands suggests this short-lived species is influenced by other environmental factors besides age since deglaciation. Together with its absence from mature tundra (our field observations and Moreau et al. 2009), these findings suggest that B. purpurascens is a distinct specialized pioneer that deviates from the unidirectional, non-replacement succession pattern in the Arctic (Svoboda & Henry 1987; Prach & Rachlewicz 2012). To the best of our knowledge, there is only one study evaluating the size structure of a plant population along a successional gradient in an Arctic glacier foreland. Mori et al. (2008) analysed the demographic population structure of Salix arctica on a successional gradient spanning 35 000 years on Ellesmere Island, in the Canadian Arctic Archipelago. Despite the long time span of succession, the vegetation was very sparse, and small individuals of Salix arctica prevailed along the whole gradient, although the density and mean size of individuals increased. Another study with which we can compare our results comes from an alpine area in southern Norway (Whittaker 1993). The author analysed the size structure of six pioneer species and four late-successional shrubs. All but one showed ageing populations, in which size correlated with age since deglaciation. The smallest plants prevailed in young successional stages, and their abundance decreased, or the plants were completely absent, with increasing competition in later successional stages. The only exception was Oxyria digyna, a snow-bed specialist showing a variable size structure due to its special habitat demands (Whittaker 1993). Our sites in Svalbard have higher precipitation than Ellesmere Island, but are drier and have shorter growing seasons than southern Norway (Serreze & Barry 2005). Therefore, successional processes in Svalbard would be anticipated to be shorter than on Ellesmere Island and longer than in southern Norway. Indeed, succession to a closed tundra community is estimated to take about 2000 years in Svalbard (Hodkinson et al. 2003). Although the glacier forelands in our study were characterized by sparse vegetation, the species studied were establishing successfully on new substrate, whereas on older substrate we did not find new seedlings (S. oppositifolia) or they were restricted to disturbed sites (B. purpurascens). Hence, the new substrate seems more beneficial for the establishment of these species than the old substrate, although the seed bank and seed rain increase with age (Cooper et al. 2004). Young substrate in Arctic glacier forelands is characterized by high pH and lack of soil crust (Anderson et al. 2000); the studied species are therefore either supported in establishment by the basic soil reaction or hindered by soil crust. Cyanobacterial crust is formed within 16 years of succession, when small individual S. oppositifolia and [...] (Cooper 2011).
Fig. 5 Patterns of Saxifraga oppositifolia size on the gradients of the age since deglaciation. Filled points represent mean values within study sites; error bars represent one standard error. See Table 1 for summaries of non-linear models describing the relationship between S. oppositifolia size and the years elapsed since deglaciation.
Table 1 Summary of non-linear hyperbolic models describing the patterns in size of Saxifraga oppositifolia individuals on the gradients of the age since deglaciation on the individual glacier forelands. The hyperbola parameter c corresponds to the asymptote to which size converges with age; age_0 (= a/c - b) is the projected intercept between the hyperbolas and the x-axis, which corresponds to an estimate of the start of the colonization.
Soil crust changes environmental conditions by increasing carbon and nitrogen concentrations, moisture and temperature, decreasing pH, and preventing soil churning in comparison with crust-free soil (Gold 1998; Hodkinson et al. 2003). Other studies, however, report cooling of the soil surface due to the soil crust and no effect of increased nutrient availability on vascular plants (Gold et al. 2001). It is apparent that further research on soil crust ecology and vegetation development, along the lines of Yoshitake et al. (2010) and Moreau et al. (2005), in the pioneer stages of succession in Arctic glacier forelands is required. Symbiotic associations with mycorrhizal fungi and dark septate endophytes are important for plants growing in nutrient-poor ecosystems (e.g., Newsham et al. 2009). However, they are perhaps not essential, given the absence of these fungal mutualists in the roots of these pioneer species growing on recently deglaciated soils. Instead, structures of the zoosporic fungal genus Olpidium, which comprises biotrophic pathogens, were found. The genus Olpidium, formerly placed in Chytridiomycota, now has an uncertain phylogenetic position and is likely related to the former Zygomycota (Sekimoto et al. 2011). Our findings add to the evidence of the frequent occurrence of zoosporic fungi (including the former Chytridiomycota) in high-elevation or recently deglaciated Arctic soils (e.g., Jumpponen 2003; Schmidt et al. 2012). Our study shows that B. purpurascens specializes on new or disturbed substrate lacking soil crust. While S. oppositifolia also establishes itself on new substrate, because of its longevity it persists in old growth tundra. These results imply that different plant strategies in primary succession can be recognized in the High Arctic, although they are probably represented by only a few species.
2019-03-22T16:08:59.873Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "e24c39b1025d6afafd298748a084812e69e815dd", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3402/polar.v33.20797", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "a8a7d5b51a9fbb6a3248617c219e397010b39885", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
248027078
pes2o/s2orc
v3-fos-license
Integrating Bulk Transcriptome and Single-Cell RNA Sequencing Data Reveals the Landscape of the Immune Microenvironment in Thoracic Aortic Aneurysms Thoracic aortic aneurysm (TAA) is a life-threatening cardiovascular disease whose formation is reported to be associated with massive vascular inflammatory responses. To elucidate the roles of immune cell infiltration in the pathogenesis underlying TAA, we utilized multiple TAA datasets (microarray data and scRNA-seq data) and various immune-related algorithms (ssGSEA, CIBERSORT, and Seurat) to reveal the landscapes of the immune microenvironment in TAA. The results exhibited a significant increase in the infiltration of macrophages and T cells, which were mainly responsible for TAA formation among the immune cells. To further reveal the roles of immunocytes in TAA, we inferred the intercellular communications among the identified cells of aortic tissues. Notably, we found that in both normal aortic tissue and TAA tissue, the cells that interact most frequently are macrophages, endothelial cells (ECs), fibroblasts, and vascular smooth muscle cells (VSMCs). Among the cells, macrophages were the most prominent signal senders and receivers in TAA and normal aortic tissue. These findings suggest that macrophages play an important role in both the physiological and pathological conditions of the aorta. The present study provides a comprehensive evaluation of the immune cell composition and reveals the intercellular communication among aortic cells in human TAA tissues. These findings improve our understanding of TAA formation and progression and facilitate the development of effective medications to treat these conditions. INTRODUCTION Thoracic aortic aneurysm (TAA) is a life-threatening condition characterized by the dilation of the thoracic artery to a diameter 1.5 times greater than normal size.
It is regarded as a silent killer because there are no symptoms unless the individual is suffering from acute aortic events, such as dissection and/or rupture (1,2). Once a TAA ruptures, the first manifestation in up to 95% of patients is death accompanied by severe pain (3). However, at present, no specific drug therapies have been clinically shown to effectively prevent the progression of TAA other than moderate-risk prophylactic surgery (mortality 1-5%), due to the incomplete understanding of the pathogenesis (4,5). According to the presence or absence of a syndromic disorder, such as Marfan syndrome, Loeys-Dietz syndrome, Ehlers-Danlos syndrome, or Meester-Loeys syndrome, TAA is broadly categorized as syndromic TAA and non-syndromic TAA, with a patient ratio of approximately 2:8 (6,7). It is well known that syndromic TAA is mainly a consequence of genetic mutations (for example, Marfan syndrome is due to FBN1 mutations, Loeys-Dietz syndrome is due to TGFBR1/2 mutations, and Ehlers-Danlos syndrome is due to COL3A1 mutations) (6). However, non-syndromic TAA results from a variety of etiologies, including hereditary factors, degenerative processes, hypertension, cigarette smoking, and atherosclerotic disease (8). Decades of research on the pathogenesis of TAA have shown that it represents a spectrum of disease pathologies that are the result of complex changes in the cellular and extracellular environment and not a simple degenerative process (8). In particular, immunity and inflammation play important roles in the formation and progression of non-syndromic TAA (9). Therefore, in the present study we focused on the landscape of the immune microenvironment in TAAs (referring to non-syndromic TAAs) without congenital diseases. Currently, the known molecular mechanisms of TAA caused by inflammation mainly involve three aspects. First, inflammatory responses are involved in endothelial dysfunction. For instance, multiple inflammatory molecules, such as TGFβ, MCP-1, Mst1, and NF-κB, induce endothelial cell (EC) dysfunction through proinflammatory effects and further promote aortic aneurysm formation and development (4,10,11). Second, inflammation has been confirmed to induce vascular smooth muscle cell (VSMC) stress, contractile dysfunction, and VSMC phenotypic switching. For example, abnormal activation of the NLR family pyrin domain containing 3 (NLRP3) inflammasome in VSMCs activates caspase-1, which directly cleaves and degrades contractile proteins. This leads to contractile dysfunction, biomechanical failure, and TAA formation (12). In addition, VSMCs lacking SMAD3 switched from a contractile phenotype to a synthetic phenotype (13). Third, inflammation participates in the activation of matrix metalloproteinases (MMPs), leading to alterations in extracellular matrix homeostasis. Activation of STING, a stimulator of interferon genes, induces MMP-9 production and ECM degradation. It also promotes VSMC apoptosis and necroptosis, resulting in aortic aneurysm/dissection development (14). After decades of accumulated research, we have gained a certain understanding of the roles of inflammation in TAA pathogenesis. However, the failure to translate basic research results into clinical application implies that the molecular mechanisms of inflammation in TAA remain elusive. Thus, a comprehensive understanding of the landscape of the immune microenvironment in TAA is needed to provide novel insight into the roles of immunity and inflammation.
Hence, in the present study, we downloaded multiple types of datasets, including bulk transcriptomic data and single-cell RNA sequencing (scRNA-seq) data, from the GEO database and adopted various immune-related algorithms (ssGSEA, CIBERSORT, and Seurat) to explore the landscape of immune cell infiltration in TAA at the cell level and in biological processes. Furthermore, intercellular communications among the aortic cells were inferred by the CellChat package. In particular, macrophages were regarded as the most prominent signal senders and receivers in TAA patients and controls, suggesting that macrophages play critical roles in both the physiological and pathological conditions of the aorta. This is consistent with the current state of knowledge. Previous studies only characterized the alterations of immune cell composition. However, we further explored the interactions between immune-immune cells or immune-non-immune cells and clarified the important roles of immune cells, especially macrophages, in TAA pathology. These findings could be helpful in directing future studies aiming to diagnose, treat and prevent TAA-associated complications. Transcriptional Data Acquisition After retrieving the human TAA datasets in the GEO database, we ultimately selected the GSE26155 (microarray) and GSE155468 (scRNA-seq) datasets (15,16). In addition, the normal thoracic aorta tissues obtained from donors (two from recipients) undergoing heart/lung transplantation served as the control group (15,16). Detailed information on the TAA samples used in the present study is shown in Supplementary Table 1. Data Preprocessing of the Microarray Dataset The raw data of GSE26155 (TAA microarray dataset) were downloaded from the GEO database. Background correction and normalization of the dataset were performed by using the oligo package (version 1.56.0) (17). Principal component analysis (PCA) was performed after data normalization to ensure that the preprocessing analysis was successful. In addition, hierarchical cluster analysis was applied to identify outlier samples by the hclust function, as used in weighted gene co-expression network analysis (WGCNA) (18). Based on the results, two TAA samples (GSM641986 and GSM642067) in GSE26155 were removed from subsequent analyses. Identification of Differentially Expressed Genes and Functional Enrichment Analyses The differentially expressed genes (DEGs) between the control and TAA groups were identified by the limma package (version 3.48.3) (19). Fold change (FC) > 1.5 (|log2FC| > 0.585) and adjusted P-value < 0.05 were set as the thresholds. The clusterProfiler package (version 4.0.5) (20) was used to perform the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis for the identified DEGs, and an adjusted P-value < 0.05 (Benjamini-Hochberg correction) was set as the threshold for the enriched significant pathways. Estimation of Immune Cell Infiltration Single-sample gene set enrichment analysis (ssGSEA) can calculate the rank value of each gene according to the expression profile and apply the characteristics of immune cell population expression to individual samples (21,22). To estimate the population-specific immune infiltration, we performed ssGSEA to derive the enrichment score of immune cells and immune functions for each sample by using the GSVA package (version 1.40.1) (23). A total of 29 immune gene sets from Bindea et al. (24) were obtained; these sets included different immune cells and immune-related pathways and functions.
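To illustrate how the bulk-transcriptome steps above fit together, here is a minimal R sketch (not the authors' script); the objects expr (a normalized log2 expression matrix, genes x samples), group (a factor with levels control/TAA) and immune_sets (the 29 Bindea et al. gene sets as a named list) are assumed inputs, and only the stated thresholds are taken from the text.

library(limma)
library(GSVA)

# Differential expression, TAA vs. control
design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)
fit  <- lmFit(expr, design)
fit2 <- eBayes(contrasts.fit(fit, makeContrasts(TAA - control, levels = design)))
tab  <- topTable(fit2, number = Inf, adjust.method = "BH")

# DEGs at the stated thresholds: FC > 1.5 (|log2FC| > 0.585), adjusted P < 0.05
degs <- subset(tab, abs(logFC) > 0.585 & adj.P.Val < 0.05)

# ssGSEA enrichment scores per sample for the 29 immune signatures
es <- gsva(expr, immune_sets, method = "ssgsea")

# Per-signature group comparison by Wilcoxon rank-sum test, BH-corrected
pvals <- apply(es, 1, function(x)
  wilcox.test(x[group == "TAA"], x[group == "control"])$p.value)
padj <- p.adjust(pvals, method = "BH")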
In addition, to validate the effectiveness of ssGSEA, the CIBERSORT algorithm (25) was applied to calculate the proportions of 22 immune-infiltrating cell types for each sample based on the LM22 signature with 100 permutations. The differences in immune cell subtypes between the control and TAA groups were thereby validated again. Single-Cell RNA Sequencing Data Analysis To further validate and explore the landscape of immunocyte infiltration in TAA, GSE155468 (scRNA-seq dataset) was obtained from the GEO database for analysis. The scRNA-seq data were transformed into Seurat objects using the CreateSeuratObject function in the Seurat package (version 4.0.2) (26). Then, the data were filtered with the following criteria: genes detected in >3 cells, cells with >200 distinct genes, and percentage of mitochondrial genes <10%. Next, data normalization and scaling were performed. The data were then subjected to dimension reduction in three stages: selection of variable genes, principal component analysis (PCA), and uniform manifold approximation and projection (UMAP). Cell clustering was then assessed across a range of predetermined resolution scales to ensure separation of known major aortic cell types without excessive subclustering (resolution = 0.3). Cell types were identified by their expression levels of cell-specific markers according to the CellMarker database (27) (Supplementary Figure 3). Differences in cell proportions between the control and TAA groups were evaluated using the χ² test. DEGs between the control and TAA groups for cell populations of interest were identified by the FindMarkers function in Seurat (26) with log2FC > 0.25 and adjusted P-value < 0.05 for KEGG pathway analysis by the clusterProfiler package (20). Furthermore, to discover the cell-state transitions from controls to TAA, pseudotime trajectory analysis was performed by the Monocle package (version 2.20.0) (28), which sorted individual cells by DEGs (TAA vs. controls) and constructed a tree-like structure of the entire lineage differentiation trajectory. The profile of DEGs between TAA and controls was calculated by the "differentialGeneTest" function in Monocle. Analysis of Cell-Cell Communication Based on scRNA-seq data, Jin et al. established a signaling molecule interaction database (CellChatDB) and developed the R package CellChat (29). This package is a tool used to infer, analyze and visualize intercellular communication networks (29). The TAA scRNA-seq data were then used to explore intercellular communications by applying the CellChat package (version 1.1.3). It can quantitatively characterize and compare intercellular communications through the inferred signaling inputs and outputs of cell populations, based on the known structural composition of ligand-receptor interactions. More details about the framework and algorithms of CellChat were described in a previous study (29). Statistical Analysis All statistical analyses were conducted in R software (version 4.1.1). The ssGSEA and CIBERSORT algorithms were used to characterize the immune-infiltrating cells in TAA. The immune infiltration levels and functions between different groups were compared by the Wilcoxon rank-sum test (control vs. TAA) (31). The P-value was corrected by the Benjamini-Hochberg method, and P < 0.05 was considered statistically significant. Box plots, volcano plots, pie diagrams, and heatmap plots were generated by packages (ggplot2 and pheatmap) provided by the Bioconductor and CRAN projects.
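As a minimal sketch of the single-cell workflow described above (illustrative, not the authors' script): counts is an assumed gene x cell count matrix, the condition and cell_type metadata columns and dims = 1:20 are illustrative assumptions, and only the stated QC cutoffs and clustering resolution come from the text.

library(Seurat)
library(CellChat)

# QC as stated: genes in >3 cells, >200 genes per cell, <10% mitochondrial reads
seu <- CreateSeuratObject(counts, min.cells = 3, min.features = 200)
seu[["percent.mt"]] <- PercentageFeatureSet(seu, pattern = "^MT-")
seu <- subset(seu, subset = nFeature_RNA > 200 & percent.mt < 10)

# Normalization, variable genes, scaling, PCA, clustering (resolution = 0.3), UMAP
seu <- NormalizeData(seu)
seu <- FindVariableFeatures(seu)
seu <- ScaleData(seu)
seu <- RunPCA(seu)
seu <- FindNeighbors(seu, dims = 1:20)
seu <- FindClusters(seu, resolution = 0.3)
seu <- RunUMAP(seu, dims = 1:20)

# DEGs for a population of interest (log2FC > 0.25; adjusted P < 0.05 applied downstream)
markers <- FindMarkers(seu, ident.1 = "TAA", ident.2 = "control",
                       group.by = "condition", logfc.threshold = 0.25)

# Cell-cell communication inference with CellChat on annotated identities
cc <- createCellChat(object = seu, group.by = "cell_type")
cc@DB <- CellChatDB.human
cc <- subsetData(cc)
cc <- identifyOverExpressedGenes(cc)
cc <- identifyOverExpressedInteractions(cc)
cc <- computeCommunProb(cc)
cc <- computeCommunProbPathway(cc)
cc <- aggregateNet(cc)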
Multiple Immune-Related Pathways Involved in Thoracic Aortic Aneurysm As previous studies have reported, immune and inflammatory responses play a critical role in TAA. To further investigate the underlying roles of immunity and inflammation in TAA pathogenesis, we first screened the DEGs between TAA patients and control individuals based on TAA microarray data (GSE26155). After data preprocessing, all samples showed a reliable distribution of intensity values and two distinct group-bias clusters between TAA and controls (Figures 1A,B). In addition, two samples (GSM641986 and GSM642067) were excluded from subsequent analyses by hierarchical cluster analysis (Figure 1C). A total of 1525 DEGs in TAA were identified with the thresholds of FC > 1.5 and adjusted P-value < 0.05, including 769 upregulated DEGs and 756 downregulated DEGs (Figure 1D). The KEGG analysis revealed that the DEGs were enriched in multiple immune-related pathways, such as the TGF-β signaling pathway, hematopoietic cell lineage, leukocyte transendothelial migration, complement and coagulation cascades, Fc gamma R-mediated phagocytosis, intestinal immune network for IgA production, and platelet activation (Figure 1E). The above results confirm the correlation between immune responses and TAA and suggest that a comprehensive understanding of the immune microenvironment will provide more insights into the formation and progression of TAA. The Evaluation of Immune Cell Composition in Thoracic Aortic Aneurysm To reveal the landscape of immunocyte infiltration in TAA at the cellular and functional levels, 16 immune cell types and 13 immune functions were incorporated to estimate the immune capacity of TAA tissues by ssGSEA. The heatmap shows the richness levels of the 29 immune cells and immune functions in the TAA samples (Figure 2A). Based on the 29 immune cells and immune functions of each sample, PCA identified two distinct group-bias clusters for the TAA and control samples (Figure 2B). Compared with the control group, ten immune functions (for example, inflammation-promoting, type I IFN response, T cell costimulation, checkpoint, and cytolytic activity) showed significant alterations in TAA samples (Figure 2C). Ten types of immune cells displayed different infiltration levels, including macrophages, neutrophils, Th1 cells, mast cells, and DCs (aDCs, pDCs) (Figure 2D). To validate the reliability of the above results, we also applied the CIBERSORT algorithm to assess the fractions of twenty-two types of immunocytes based on the characteristics of immune-related gene expression in bulk transcriptome profiles of the TAA samples, which exhibited a significant increase in the infiltration of M2 macrophages and CD8+ T cells and a decrease in the infiltration of M0 macrophages and activated NK cells (Figures 2E,F). The above results demonstrate that macrophages and T cells play a central role in TAA pathogenesis, which is consistent with previous studies (15,16). To further validate and clarify the underlying immunobiological characteristics of TAA, we examined the scRNA-seq data of TAA in the subsequent analysis. Single-Cell RNA Sequencing Data Preprocessing and Selection In the present study, we reanalyzed scRNA-seq data (GSE155468) to assess the cellular composition of the thoracic aortic wall and reveal how the landscape of immune cells and their gene expression profiles are altered.
First, we performed individual data quality control and cell cluster identification by using the Seurat package to obtain a preliminary estimate of the cell composition of each sample (Supplementary Figures 1, 2). After characterizing each cell cluster by cell type-specific markers (Supplementary Figure 3), we examined the percentages of each cell population, which indicated significant alterations in cell composition between the two groups. However, we also noticed that the composition ratios of the same cell types differed considerably among the control (or TAA) samples. The heterogeneity of the different samples and the smaller sample size of the control group may limit the statistical power of this comparison. In addition, two control samples (GSM4704931 and GSM4704932) came from heart transplant recipients without aortic aneurysms, who may have exhibited molecular or cellular changes in the ascending aorta related to their cardiac disease (16). Taking the above limitations into consideration, we decided to select two representative samples to perform the analysis and display the results. Based on the clustering analysis of cellular composition from the 11 samples, we chose GSM4704938 and GSM4704933 to represent the TAA and control groups, respectively, for further analysis (Figure 3A). Validation of Immune Cell Infiltration in Thoracic Aortic Aneurysm by Single-Cell RNA Sequencing Data After integrating the TAA and control data by the CCA algorithm of Seurat, unbiased cell clustering analysis was performed (Figure 3B). Similar to the separate clustering data, 13 clusters representing eight cell lineages were identified, encompassing ECs, VSMCs, fibroblasts, macrophages, mast cells, NK cells, T cells, and B cells (Figures 3C,D). Compared with the control group, obvious changes in cell composition were observed. Such changes included a decrease in the VSMC population and expansion of immune cell populations, especially macrophages (control vs. TAA: 21.14 vs. 57.42%) and T cells (control vs. TAA: 2.33 vs. 27.82%) (Figures 3E,F), consistent with previous studies (16). In addition, we quantified the expression of CD3 (a T cell biomarker) and ITGAM (a macrophage biomarker) (27) to assess T cell and macrophage infiltration in TAA by IHC staining, which confirmed the results of the scRNA-seq data (Figures 3G,H). Notably, we also verified the decrease in the VSMC population and the increase in B cells by IHC staining in Supplementary Figure 4. To gain more insights into the macrophages and T cells in TAA, we compared the gene expression profiles of the macrophage/T cell populations between control and TAA tissues. There were 39 DEGs identified in the macrophage population with an adjusted P-value < 0.05 and log2FC > 0.25 (Figure 4A). The KEGG analysis revealed that the 39 DEGs were mainly enriched in Antigen processing and presentation, Th-cell differentiation (Th17, Th1, and Th2), Phagosome, and the IL-17 signaling pathway (Figure 4B). In the T cell population, a total of 30 DEGs were screened with an adjusted P-value < 0.05 and log2FC > 0.25; these were involved in Ribosome, Antigen processing and presentation, Lipid and atherosclerosis, Protein processing in endoplasmic reticulum, and the Estrogen signaling pathway (Figures 4C,D). Based on the above results, we found that some pathways enriched by the DEGs of macrophages were the same as the immune-related pathways identified by the bulk transcriptome data (GSE26155).
These pathways included Complement and coagulation cascades, Hematopoietic cell lineage, Rheumatoid arthritis, and Phagosome (Figures 1E, 4B), which suggests that the inflammatory responses facilitating the initiation and progression of TAA were mainly induced by macrophages. In addition, we performed pseudotime analyses for macrophages and T cells to reveal the cell transitions during the biological process from normal aorta to TAA (Supplementary Figure 5). Our results showed different subtypes (states) of macrophages and T cells between TAA and control samples. Along the trajectory progression, T cells passed through seven states from normal aorta to TAA, while macrophages passed through more states, which further supports that the recruitment and differentiation of macrophages and T cells contribute to the formation of TAA. Therefore, the next questions that we should answer are: which cells recruit macrophages to the aortic wall, and how do the alterations of macrophages lead to the loss of VSMCs; that is, what are the intercellular communications? Inference of Intercellular Communications in Thoracic Aortic Aneurysm To further explore the interactions among the identified cells, the selected scRNA-seq data (GSM4704938 and GSM4704933) were used to infer intercellular communications by applying the CellChat package (29). It is well known that signaling crosstalk among different cells is essential for informing diverse cellular decisions (32). The joint analysis with CellChat provides an opportunity to discover major signaling changes that might drive the pathogenesis of TAA. Compared with the controls, the inferred interactions between cells in the TAA samples increased in both number and strength, especially the cell-cell communications of macrophages, fibroblasts, ECs and VSMCs (Figures 5A,B). Based on the incoming/outgoing interaction strength, the cells were jointly mapped onto a shared two-dimensional manifold, which showed that macrophages were the most prominent signal senders and receivers in TAA and controls, suggesting that macrophages play an important role in both the physiological and pathological conditions of the aorta (Figure 5C). Furthermore, CellChat detected 45 signaling pathways among the eight cell populations, including multiple inflammatory signals, such as the MIF, CXCL, GALECTIN, CCL, ANNEXIN, TNF, IFN-II, TGFb, IL-16, LIGHT, and COMPLEMENT pathways (Figure 5D). Therefore, we focused on the intercellular communications between macrophages and other cells and the changes in signaling pathways between TAA and control samples (Figures 5E,F). The predicted results showed that, as signaling sources, a large portion of the outgoing macrophage signaling was received by ECs, fibroblasts and VSMCs (Figure 5E). By comparing the overall communication probability from macrophages to ECs between the two groups, multiple pathways were found to be highly active in TAA. For example, network centrality analysis of the CXCL signaling pathway revealed that macrophages are the main sources of the network (Figures 6A,B). The ligands CXCL2/3/8 and their receptor ACKR1 were inferred to be the most significant ligand-receptor pairs contributing to the communication from macrophages to ECs (Figures 5E, 6). In particular, the expression levels of CXCL2, CXCL3, and CXCL8, which were mainly expressed in macrophages, were significantly increased in TAA (Figures 6C-E), while ACKR1 was upregulated in ECs (Figure 6F).
As the signaling targets, most of the incoming macrophage signaling came from fibroblasts and ECs (Figure 5F). Specific to MIF signaling, the MIF ligand and its multisubunit receptors CD74/CD44 and CD74/CXCR4 were found to mediate major signaling from fibroblasts and T cells to macrophages in TAA compared to control samples (Figure 5F and Supplementary Figure 6). The results showed that CD74, the main receptor of MIF, was significantly upregulated in TAA and expressed almost exclusively in macrophages, while the expression of CD44 and CXCR4 was also increased in the macrophages of TAA (Supplementary Figures 6D-F). Thus, in the present study, we not only provide a comprehensive evaluation of immune cell composition in human TAA tissues but also explore the interactions between immune cells (especially macrophages) and non-immune cells. These findings improve our understanding of TAA formation and progression. DISCUSSION The progression from normal aorta to aortic dilatation to final aortic aneurysm is a multifactorial process that has only been partially revealed. The main pathological change of TAA is medial degeneration consisting of VSMC depletion, elastic fiber fragmentation, and collagen degradation, which was originally described by Erdheim as a non-inflammatory lesion (4,33). However, with the deepening of our understanding of TAA pathogenesis, increasing evidence shows that immune and inflammatory responses are involved in the medial degeneration that is related to dilated aortas (9,16,34). To explore the underlying mechanisms of immunity and inflammation in TAA, we first performed KEGG pathway analysis for the identified DEGs of the GSE26155 dataset. Unsurprisingly, multiple immune-related pathways were enriched, such as the TGF-β signaling pathway, Hematopoietic cell lineage, Leukocyte transendothelial migration, Complement and coagulation cascades, Fc gamma R-mediated phagocytosis, and Platelet activation. To further depict the immune landscape of TAA and reveal how the immunological pathways are altered, we compared the composition of immune cells between TAA and control samples by ssGSEA. The results showed that multiple types of immune cells and immune functions displayed different infiltration levels and significant alterations in TAA. These altered immune cell types included macrophages, T helper cells, neutrophils, DCs (aDCs and pDCs), and mast cells. As a well-known infiltrating immunocyte, macrophages have been confirmed to be associated with the formation of TAA. Previous reports have suggested that macrophages could contribute to TAA initiation and progression by secreting MMPs and proinflammatory factors to facilitate ECM destruction and VSMC apoptosis (35). In addition, Boytard et al. (36) observed the distribution of macrophage subtypes over the course of abdominal aortic aneurysm (AAA) progression. At the early stage of aortic aneurysm, M1 macrophages dominate at the site of injured aortic tissue and function in inflammatory factor expression, proteolysis, and phagocytosis. At the late stage, M2 macrophages accumulate preferentially and facilitate reparative processes, such as ECM deposition and angiogenesis (35,36). In the present study, we found that M2 macrophages rather than M1 macrophages increased significantly in TAA (Figure 2F), which may be because the samples in GSE26155 were largely from elderly patients with degenerative TAA at an advanced stage.
Notably, whether macrophage subsets undergo a similar evolution in TAA as in AAA requires further experimental verification in the future. Recently, an intriguing type of VSMC with a macrophage-like phenotype was identified: macrophage-like VSMCs, which acquired characteristics reminiscent of immune cells but continued to express markers of the original VSMCs (37). However, it may be more appropriate to identify them as macrophages rather than VSMCs. Macrophage-like VSMCs suppress the expression of classic VSMC markers (such as SM22a, ACTA2, and MYH11) and turn on the expression of multiple macrophage markers, including CD68, CD11b, and LGALS3 (37-39). It has been reported that macrophage-like VSMCs are involved in chronic inflammatory processes by producing different cytokines (such as IL1b, IL8, IL6, and CCL2) and various adhesion molecules (37). In turn, these cytokines further recruit immune cells and promote various biological processes. Chen et al. (38) concentrated on VSMC reprogramming in aortic aneurysm and identified a modulated VSMC with a macrophage-like phenotype by using cell fate tracing with the CreERT2-loxP system and scRNA-seq analysis. Furthermore, they found that the key driver of this phenotype switch appeared to be a large increase in Klf4 expression, and VSMC-specific Klf4 knockout largely prevented aneurysm development in this model. In addition, an increasing body of VSMC lineage tracing studies has revealed that VSMCs undergo remarkable phenotypic changes during the development of atherosclerosis (39-42). By utilizing VSMC-specific lineage tracing mice with or without simultaneous VSMC-specific conditional knockout of Klf4, it was demonstrated that VSMCs in atherosclerosis expressed increased Klf4 and that the transition to a macrophage-like state was Klf4-dependent (41). However, no study has used cell lineage tracking to investigate the process of phenotypic switching or cellular dedifferentiation in TAA. Whether macrophage-like VSMCs also exist in TAA, which regulator is responsible for the transition, and what roles these special VSMCs play in the initiation and progression of TAA all require further research. In addition to macrophages, T lymphocytes are frequently observed in TAA samples by IHC staining and flow cytometry (9). A number of previous studies have reported that T cells are significantly increased in samples from patients with TAA compared with control aortas (9,34), which is consistent with our results. To assess the infiltration of T cells in aortic walls, Wang et al. (43) detected CD3-positive cells in TAA samples and found accumulating CD3-positive cells predominantly in the adventitia and media of TAA, which implies that T cells might migrate from the adventitia into the media of the aortas. We demonstrated that the aortic walls of TAA patients display significantly more T cells than normal aortas. However, it should be noted that different T cell subsets have complex effects during the inflammatory response and might even contradict each other (44). For example, Th1 and Th2 cells (subsets of naïve CD4-positive T cells) are characterized by the production of IFN-γ and IL-4, respectively (45). Several studies reported that Th1 cells, as the predominant type of CD4+ T cells in TAA, positively correlated with aortic expansion in aneurysm patients (46-48). Upregulation of IFN-γ produced by Th1 cells significantly correlated with both the outward vascular remodeling and the intimal expansion of TAA (46).
After examining the immunocytes of TAA tissues, Tanimura et al. reported that stimulation of the Th1/IFN-γ system could lead to aortic aneurysm formation by inhibiting Treg cell proliferation (48). In contrast, other groups have found that CD4+ T cells in aortic aneurysms are predominantly IL-4-producing Th2 cells. Ntika et al. (49) reported that no change was observed in Th1 cytokines in the TAA group; however, cytokines associated with Th2 cells, such as IL-4, IL-5, and IL-10, were significantly altered in TAA patients compared to control individuals. Currently, in addition to algorithms for assessing the infiltration of immune cells in tissues based on bulk transcriptome profiles, scRNA-seq provides an opportunity to reveal complex immune cell populations (50,51). To further validate the above-described findings based on the ssGSEA algorithm, we not only analyzed scRNA-seq data but also detected the infiltration of macrophages and T cells in human TAA samples by IHC staining. These results also revealed the expansion of macrophages and T cells. As the most suitable technology for revealing the cellular composition of tissues and gene expression profiles at the single-cell level, the scRNA-seq technique has been used in various studies to delineate the cellular heterogeneity of cardiovascular diseases. Zhao et al. (52) and Yang et al. (53) performed scRNA-seq on mouse AAA models to reveal the cellular heterogeneity of AAA. After integrative analysis of scRNA-seq data from ascending aortic aneurysm tissue in Marfan syndrome mice and humans, Pedroza et al. characterized the disease-specific signature of modulated VSMCs, a distinct cluster of cell populations, which might be driven by TGF-β signaling and Klf4 overexpression (54). However, at present, there is just one study that has used scRNA-seq analysis to uncover the cellular and molecular landscape of human TAA tissues without heritable diseases (16). In general, that study identified 10 major cell types in human ascending aortic tissue, most of which were the same as in our results. Compared with the control tissues, TAA tissues had fewer non-immune cells (such as VSMCs) and more immune cells, especially T lymphocytes. Their differential gene expression data suggested the presence of extensive mitochondrial dysfunction in TAA tissues. Notably, scRNA-seq data inherently contain transcriptomic profile information that can be used to infer intercellular communications. The CellChat package was developed to predict and visualize cell-cell communications from scRNA-seq data based on the known structural composition of ligand-receptor interactions, as well as stimulatory and inhibitory membrane-bound coreceptors (29). By applying CellChat, macrophages were determined to be the most prominent signal senders and receivers in the TAA and control samples. These findings suggest that macrophages play critical roles in both the physiological and pathological conditions of the aorta, which is consistent with the current state of knowledge (35,55). Whether in normal aorta or TAA tissues, the cells that interact most frequently are macrophages, ECs, fibroblasts, and VSMCs, which implies that cross-talk between these cells is essential for maintaining physiological homeostasis in the aorta, and that dysfunction of the intercellular interactions can lead to aortic disease. Unsurprisingly, the interaction between macrophages and ECs (or fibroblasts) is significantly enhanced, while the interaction with VSMCs is reduced in TAA.
Notably, transwell migration assays of VSMCs and macrophages showed that the rs12455792 variant of the SMAD4 gene in VSMCs significantly increased macrophage recruitment via activation of the TGF-β signaling pathway (56). In our study, we observed the activation of TGF-β signaling from NK cells and T cells to macrophages (Figure 5F). Saito et al. (57) demonstrated that the endothelium plays important roles in triggering macrophage infiltration and inflammation in the aorta, resulting in vascular remodeling and aneurysm formation through intracellular NF-κB signaling. In addition, monocyte recruitment to the aorta requires the upregulation of chemoattractant or activating cytokines and adhesion molecules, such as Cyr61, ICAM-1/2, VCAM-1, and MCP-1, that are produced and secreted by ECs, fibroblasts, and other cell types (58). Zhou et al. (59) found that myelin debris significantly increased endothelial secretion of MCP-1 and other proinflammatory mediators (for example, IL-4 and IL-6), which may contribute to macrophage infiltration in microvessels. In contrast, Gitzin et al. (60) demonstrated that, following carotid injury, patrolling monocytes were recruited to the endothelium at wound sites to promote EC proliferation and tissue repair. Combining the above results with the anatomy of the aorta, we hypothesize that ECs and fibroblasts recruit macrophages through enhanced intercellular communication via CXCL and MIF signals. The recruitment of macrophages to the aortic wall is the primary culprit of VSMC loss, which ultimately results in aortic dilatation and rupture. Our study has a limitation. Based on the detailed information of the TAA samples, we noticed that the mean age of the TAA patients in the GSE26155 dataset was 61.5 years, and the mean age of the TAA patients in the GSE155468 dataset was approximately 70 years. In addition, these TAA samples were homogeneous, and none had hereditary diseases, such as Marfan syndrome, Loeys-Dietz syndrome or bicuspid aortic valve malformation. Taking these factors into consideration, we assumed that the predominant form of TAA in this cohort was the degenerative form of non-syndromic TAA. Therefore, the conclusions of our study might be limited to degenerative non-syndromic TAA. A further study containing different TAA forms is needed to reveal a representative landscape of the immune microenvironment in the future. CONCLUSION In conclusion, after characterizing the landscape of the immune microenvironment in TAA by various immune-related algorithms, we found that the infiltration of macrophages and T cells was mainly responsible for TAA formation among the immune cells. In particular, as macrophages are the most prominent signal senders and receivers in aortic tissue, they play an important role in both the physiological and pathological conditions of the aorta. These immune cells might provide novel insights into candidate targets for the prevention and immunotherapy of TAA. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee of Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS XW and D-SJ designed and directed the entire study and critically reviewed the manuscript. QW and XG performed analyses of sequencing data. BH and XF performed IHC staining experiments and statistical analyses. XF and Z-MF provided clinical samples and data. QW wrote the manuscript. All authors contributed to the article and approved the submitted version.
2022-04-09T13:13:44.870Z
2022-04-07T00:00:00.000
{ "year": 2022, "sha1": "5bc747ef8de8c0e14c2bf6a446f5488982d8599b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "5bc747ef8de8c0e14c2bf6a446f5488982d8599b", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
254295728
pes2o/s2orc
v3-fos-license
Case Report: Peripheral blood T cells and inflammatory molecules in lung cancer patients with immune checkpoint inhibitor-induced thyroid dysfunction: Case studies and literature review Immunotherapy has changed the paradigm of cancer treatment, yet immune checkpoint inhibitors (ICIs) such as PD-1/PD-L1 monoclonal antibodies may cause immune-related adverse events (irAEs) in some patients. In this report, two non-small cell lung cancer (NSCLC) patients treated with nivolumab presented with checkpoint inhibitor-induced thyroid dysfunction (CITD), followed by a second irAE of pneumonitis and intestinal perforation, respectively. Increases in peripheral CD8+ T cells correlated with the onset of CITD in the patients. Intriguingly, common inflammatory biomarkers, including C-reactive protein (CRP) and the neutrophil/lymphocyte ratio (NLR), were not consistently increased during the onset of CITD but were substantially increased during the onset of the pneumonitis and intestinal perforation irAEs. These observations suggest that, unlike in other irAEs such as pneumonitis, CRP levels and NLR were non-contributory in diagnosing CITD, whereas T cell expansion may be associated with immunotherapy-induced thyroiditis. Introduction Immune checkpoint inhibitors (ICIs) are currently the first-line treatment for multiple late-stage malignancies including non-small cell lung cancer (NSCLC). However, systemic immunostimulation by ICIs may lead to immune-related adverse events (irAEs) in some patients (1-4). Patients receiving ICIs are closely monitored for clinical symptoms and signs of irAEs as well as with laboratory tests, including C-reactive protein (CRP) levels, to detect occult, subclinical irAEs (1,5). Checkpoint inhibitor-induced thyroid dysfunction (CITD) is one of the most common irAEs, along with rash, pruritus, and diarrhea (1-3). While CITD is often detected by regular measurements of thyroid-stimulating hormone (TSH), it is unclear whether other common biomarkers also have utility in understanding the molecular mechanism and prognosis of CITD (6,7). Here we present two cases of patients with NSCLC treated with ICIs who developed CITD, followed by a second irAE. We describe the levels of peripheral blood biomarkers over the respective disease courses, including TSH, CRP, immune cell subsets, and cytokine levels. In the two patients with CITD, peripheral blood CD8+ T cells increased in number at or preceding the time of CITD, yet the patients did not consistently demonstrate the acute changes in systemic inflammatory biomarkers that were observed with the second irAEs (pneumonitis and intestinal perforation, respectively) that they developed later. This study provides data that may enhance the management of CITD and prompt further investigations of the mechanisms of CITD. Case presentation Case #1 The first patient was a 55-year-old male with a five-year history of inoperable stage IIIA (T1aN2M0) right upper lobe (RUL) lung adenocarcinoma treated with chemoradiation. He remained asymptomatic for three years, at which time surveillance chest CT showed a new left upper lobe (LUL) lesion, diagnosed as stage I metachronous LUL lung adenocarcinoma and treated with CyberKnife Radiosurgery (CKRS). After two years, the patient reported symptoms of fatigue, progressive weight loss, and generalized lymph node swelling. Chest CT showed a new right lower lobe (RLL) lesion, with PET scan showing increased metabolic activity in draining lymph nodes and the liver.
Image-guided core biopsy of the right supraclavicular lymph node was consistent with lung adenocarcinoma. PD-L1 expression was 0% by immunohistochemical (IHC) staining. He was enrolled in the trial described herein, and blood samples were procured at baseline and after each cycle of the immunotherapy. The patient was started on two-week cycles of 240 mg nivolumab, with 8 planned cycles. After the second dose of nivolumab, TSH values showed thyrotoxicosis (TSH = 0.09 and 0.04 mIU/L at cycles 2 and 3 of nivolumab treatment). Subclinical hypothyroidism (TSH = 5.1 mIU/L) at the 4th cycle, followed by symptomatic hypothyroidism (TSH = 58.37, 89.96, and 96.16 mIU/L) at the 5th, 6th, and 7th treatment cycles, respectively, was observed (Figure 1A and Supplementary Figure 1A). The patient was diagnosed with CITD, and daily levothyroxine was started. The patient's symptoms were fatigue and constipation. The dose of levothyroxine was subsequently escalated over the course of 6 months to 150 mcg daily for symptom control and thyroid function optimization. Anti-TPO antibodies were not measured. The patient's cancer progressed after 11 months, when PET scan showed further metastasis within the liver. The patient was started on 2 months of immune-modulating gemcitabine, cyclophosphamide and bevacizumab. Monthly nivolumab 480 mg cycles were then restarted, but unfortunately the patient was not enrolled in the trial during this period, and blood samples were not collected for T cell and cytokine analyses. The patient demonstrated an excellent response after 6 cycles of nivolumab, yet developed checkpoint inhibition-induced pneumonitis (CIP). After his hospitalization for CIP, the patient began to have recurrent diarrhea, and extensive workup interestingly revealed elevated levels of chromogranin and serotonin. Biopsy of the liver showed metastatic adenocarcinoma with neuroendocrine differentiation with small cell morphology, raising concern for transformation of his primary NSCLC to SCLC. Unfortunately, a corresponding brain MRI demonstrated extensive central nervous system (CNS) disease progression, and the patient expired shortly thereafter (Figure 1A). Case #2 The second patient was a 61-year-old male with a history of smoking, who initially presented with progressive neck swelling and was diagnosed with superior vena cava syndrome after chest CT showed a right paratracheal mass with extension into the anterior mediastinum and SVC compression. He subsequently underwent bronchoscopy with biopsy, which revealed squamous cell carcinoma; the tumor cells were strongly positive for p63 and negative for TTF-1. Molecular analysis of the tumor was not performed. The patient received chemoradiation, and a subsequent PET scan demonstrated a complete metabolic response (CMR). Unfortunately, 7 months later the patient was admitted for post-obstructive pneumonia and was found to have recurrence of the RUL mass. Bronchoscopy with biopsy showed recurrent squamous cell carcinoma, with 50% expression of PD-L1 by IHC staining. He was enrolled in the trial described herein, and blood samples were procured at baseline and after each cycle of the immunotherapy.
FIGURE 1 Diagnosis, treatment regimens, immune-related adverse events, and biomarker levels in the lung cancer patients. Neutrophil to Lymphocyte Ratio (NLR), C-Reactive Protein (CRP), and Thyroid Stimulating Hormone (TSH) levels are shown along with the treatment and immune-related adverse events (irAEs). (A) Patient #1. The patient had a 5-year history (-1801 days) of inoperable stage IIIA (T1aN2M0) right upper lobe (RUL) lung adenocarcinoma.
He was initially treated with chemoradiation, followed by disease progression. He then received 8 doses of bi-weekly 240 mg nivolumab (days 0 to 106) and was enrolled in the biomarker study. Checkpoint inhibitor-induced thyroid dysfunction (CITD, thyroiditis) was observed as an immune-related adverse event (irAE) after the 2nd dose of nivolumab (C2). Later, the patient received another round of nivolumab treatment (days 439 to 621, not enrolled in our biomarker trial), during which pneumonitis occurred as another irAE (day 621). (B) Patient #2. The patient was diagnosed with RUL squamous cell carcinoma (-429 days) and received chemoradiation. A subsequent PET scan demonstrated a complete metabolic response (CMR). Unfortunately, 7 months later he had recurrence of the RUL mass. The patient received 7 cycles of bi-weekly 240 mg nivolumab treatment (days 0 to 84) while enrolled in our biomarker study. CITD (thyroiditis) was detected after the 4th dose of nivolumab (C4). Later, the patient received another round of nivolumab treatment (days 140 to 259, not enrolled in our biomarker trial), in which another irAE, intestinal perforation, occurred (day 267).

The patient was started on two-week cycles of 240 mg nivolumab with 8 planned cycles. After 4 cycles of nivolumab, routine TSH measurements revealed an increase above the upper limit of normal, reaching 28.82 mIU/L at cycle 6, with a slight prior decrease in TSH at cycles 2 and 3 (Figure 1B and Supplementary Figure 1B). Anti-TPO antibodies were negative. The patient initially reported fatigue and was started on levothyroxine 75 mcg daily. After 7 cycles of nivolumab, repeat PET scan showed progression of his disease and new liver and brain metastases. The patient was switched to gemcitabine and Cytoxan but later had hemoptysis, and nivolumab was restarted. He received an additional 9 cycles of nivolumab, during which he was not enrolled in the trial and no blood samples were collected for our cytokine and T cell study. While on the second round of nivolumab, the patient's dose of levothyroxine was increased to 100 mcg daily, yet TSH remained intermittently elevated, ranging from 5.31 to 12.58 mIU/L while on treatment. During this time, the patient also reported constipation. Upon completing the 9th cycle of nivolumab, the patient developed an intestinal perforation of unclear etiology and succumbed to the complications of his disease (Figure 1B and Supplementary Figure 2).

Discussion

Thyroid dysfunction (TD) is a common irAE that occurs in cancer patients treated with immune checkpoint inhibitors (1,2). Subtypes and clinical manifestations of CITD are discussed elsewhere (8,9). Multiple studies have noted that patients on ICI therapy often have a thyrotoxicosis phase prior to hypothyroidism (6,10). Yet patients can often be asymptomatic, with one study reporting that 67% of patients on ICI therapy were asymptomatic during the thyrotoxicosis phase (10). The manifestations of CITD are usually not severe, typically grade 1 to 3 by CTCAE v5 (11). The most common clinical symptoms include fatigue, constipation, cold intolerance, swelling, and weight gain (12). Notably, both patients reported fatigue and constipation after CITD, and patient #2 had associated intestinal perforation (Figure 1B and Supplementary Figure 2). In patients with existing thyroid disease, especially hyperthyroidism, the risk of worsening TD increases after ICI initiation (13,14). Notably, Kim et al.
showed a statistically significant increase in overall survival and progression-free survival among patients with TD when compared with euthyroid patients (15). Overall, CITD is quite common and has prognostic implications for patients' disease. Although mild symptoms are common, severe TD is associated with significant morbidity; therefore, an increased focus on the mechanisms of CITD may improve the diagnosis, management, and care of patients.

The pathogenesis of CITD is unclear, yet the pathologic mechanisms are likely distinct from autoimmune thyroid disease, e.g., Hashimoto, subacute, and silent thyroiditis (16). While thyroid peroxidase (TPO) antibodies (Abs) are often positive in autoimmune thyroid disease, several reports have shown that TPO Abs are not necessarily associated with immunotherapy-induced thyroiditis (14,17). By contrast, Osorio et al. showed that, among NSCLC patients receiving pembrolizumab, 80% of those who developed CITD had TPO Abs present at diagnosis, compared with 8% of those who did not develop CITD (6). Humoral immunity may have a distinct role in CITD, yet emerging data strongly implicate T cells as a primary pathologic actor (16,18). Other mechanistic indicators of CITD include a higher incidence with PD-1 inhibitors compared with CTLA-4 or PD-L1 inhibitors (19). Possible explanations include polymorphisms in the CTLA-4, PD-1, or PD-L1 molecules or genetic variations that predispose patients to autoimmune thyroid disease (20-22). Recent studies suggested the difference may stem from widespread T cell activation by anti-CTLA-4, whereas PD-1 or PD-L1 blockade polarizes the activation of pre-existing CD8+ T cells that have reactivity to thyroid antigens (16,23,24). The importance of T cells rather than B cells in CITD is reinforced by a reported comparison of T cells from thyroid fine-needle aspiration with blood analysis, which showed that immunotherapy-induced thyroiditis is a T lymphocyte-mediated process with intrathyroidal predominance of CD8+ and CD4-CD8- T lymphocytes (18). Furthermore, Delivanis et al. suggested that ICI-related thyroid destruction may be independent of thyroid autoantibodies but involve T cell, NK cell, and monocyte-mediated pathways (14).

Interestingly, for the patients described here, we observed an increase in peripheral T cells associated with the onset of CITD. For patient #1, both CD4+ and CD8+ T cells more than doubled between cycles 2 and 3 of treatment (Figure 2A), corresponding with a decrease in TSH to 0.09 mIU/L at cycle 2 and 0.04 mIU/L at cycle 3 (Figure 1A and Supplementary Figure 1A). For patient #2, CD8+ T cells more than doubled between his 1st and 2nd cycles of treatment (Figure 2B), and subsequently his TSH increased from 1.88 to 6.11 mIU/L (Figure 1B and Supplementary Figure 1B). While this observation may be in part secondary to a treatment effect, as we and others have previously described, it is also consistent with previous reports of T cell expansion associated with immunotherapy-induced thyroiditis (25-28). Next, we examined potential inflammatory mediators that may be associated with CITD. We were interested in measuring levels of IL-6, which we and others have found to be elevated during pneumonitis and neutropenia irAEs (25,29). Interestingly, patient #1 had a decrease in IL-6 levels during his episode of CITD (Figure 2C).
Patient #2 had a mildly elevated IL-6 level of 9.97 pg/mL after the first ICI dose (Figure 2D), consistent with previous reports that hypothyroidism (HT) patients had elevated serum IL-6 levels (30). It is also noteworthy that patient #2 had a transient increase in IL-17 to 4.7 pg/mL (Figure 2D). Increased levels of IL-17 have been observed in patients with HT compared with healthy individuals (31,32). However, patient #2's IL-6 and IL-17 levels were very low during his episode of CITD (Figure 2D). Furthermore, the levels of the inflammatory molecules TNF-α, IFN-γ, and IL-1β were very low at the onset of CITD in patients #1 and #2 (Figures 2C, D).

Additional proinflammatory biomarkers associated with irAEs are CRP and the neutrophil to lymphocyte ratio (NLR) (33-35). We recently reported a case of checkpoint inhibitor pneumonitis with increases in CRP and NLR associated with CIP onset and resolution (25). Furthermore, in patients with autoimmune hypothyroidism, a correlation was found between high-sensitivity CRP levels and thyroglobulin autoantibody and TSH levels (36). We suspected that a similar pattern might hold in CITD, yet neither patient had dynamic increases from baseline in CRP or NLR at CITD onset (Figure 1 and Supplementary Figure 1). Although both patients had elevated baseline CRP levels, 22.0 and 11.4 mg/L, respectively, CRP levels decreased after ICI treatment. However, patient #1 had a transient elevation of CRP (42.4 mg/L) after the first dose of ICI, whereas the CRP level decreased below baseline after the second and third doses of ICI (Figure 1A and Supplementary Figure 1C). Interestingly, this patient later developed pneumonitis as an irAE while receiving nivolumab in a second eight-cycle treatment phase (not included in our biomarker trial). During this second irAE, CRP levels spiked to 132.9 mg/L, suggesting that CRP is a more valuable biomarker for pneumonitis than for CITD (Figure 1A). Moreover, for patient #2, CRP levels continued to decrease from baseline after ICI initiation (Figure 1B and Supplementary Figure 1D). Similar to patient #1, this patient also received a second phase of nivolumab (not included in our trial) and also developed a second irAE, intestinal perforation. On the day the patient presented with intestinal perforation, a spike in CRP to 323.7 mg/L was observed, in contrast to the decreased CRP levels previously observed with CITD (Figure 1B and Supplementary Figure 1D). Likewise, NLR decreased during the CITD episodes after ICI treatment for both patients; however, during both of their second reported irAEs, NLR levels rose (Figures 1A, B). CRP and NLR are often appreciated as sensitive biomarkers for irAEs, which is consistent with our results for the second irAEs in both patients (pneumonitis and intestinal perforation, respectively); yet these cases suggest that further investigation is needed to define the utility of CRP and NLR in diagnosing and understanding CITD. It is still not fully understood why CRP and NLR levels in these two patients did not rise significantly at the onset of CITD but spiked during the events of pneumonitis and intestinal perforation. Further studies are required to better understand the commonalities and differences among various irAEs.

CITD is a common irAE; although cases are usually mild and can be treated with thyroid hormone supplementation, the diagnosis remains clinically meaningful and, if left unmanaged, can result in significant morbidity.
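Because NLR is derived directly from routine complete blood counts, the biomarker trends discussed above are straightforward to compute. The following is a minimal illustrative sketch; the cell counts and the 2-fold flagging threshold are hypothetical assumptions, not values or criteria from this report.

```python
# Minimal sketch: compute NLR per treatment cycle and flag a dynamic
# rise over baseline. Counts and the 2x threshold are hypothetical.

def nlr(neutrophils: float, lymphocytes: float) -> float:
    """NLR = absolute neutrophil count / absolute lymphocyte count."""
    return neutrophils / lymphocytes

def flag_rise(values: list[float], baseline: float, fold: float = 2.0) -> list[bool]:
    """Mark time points where a value exceeds `fold` times baseline."""
    return [v >= fold * baseline for v in values]

# Illustrative absolute counts (10^3 cells/uL) at baseline and cycles 1-3
neutrophils = [6.2, 5.1, 4.8, 9.8]
lymphocytes = [1.4, 1.5, 1.6, 0.9]
ratios = [nlr(n, l) for n, l in zip(neutrophils, lymphocytes)]
print([round(r, 2) for r in ratios], flag_rise(ratios, baseline=ratios[0]))
```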
Mechanistically, CITD is not fully understood, yet the histological predominance of T cells and the lack of consistently elevated TPO antibodies suggest the overall importance of a T cell-mediated process (5,7,16,18). Consistent with this, our observations indicate an increase in peripheral CD8+ T cells during or preceding the onset of CITD in both patients (Figures 2A, B). Monitoring peripheral CD8+ T cells, in conjunction with TSH measurement, may be useful for the research and diagnosis of CITD. Recent studies demonstrate that a dynamic increase in peripheral CD8+ T cells is a valuable indicator of immune responses to ICIs in cancer patients (26,27,37). Further research is warranted to investigate the temporal and subtype changes of peripheral CD8+ T cells in order to distinguish immunotherapy-specific responses from autoimmune responses in CITD.

Conclusion

Our case studies demonstrate that increases in peripheral CD8+ T cells may correlate with the onset of CITD. Unlike in previous reports on other irAEs such as pneumonitis and neutropenia from our group and others (25,38,39), CRP levels and NLR were non-contributory in the diagnosis of CITD, although larger cohorts are needed to confirm this observation. Despite the limitations of two case reports, these observations may aid in a more robust understanding of the mechanisms of CITD.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.

Ethics statement

The studies involving human participants were reviewed and approved by the East Carolina University Institutional Review Board (IRB). The patients/participants provided their written informed consent to participate in this study.

Author contributions

MAM: designed study, conducted, analyzed, and interpreted experiments, and wrote manuscript. JM: designed study, conducted, analyzed, and interpreted experiments, and wrote manuscript. ZH: gathered clinical data and assisted in assembling manuscript. AN: gathered clinical data, critical edits and revisions to manuscript. AH: assisted in assembling manuscript. DA: processed patient blood samples and performed experiments. SA: gathered clinical data, critical edits and revisions to manuscript. MM: enrolled patients, critical edits and revisions to manuscript. PW: supervised, enrolled patients, involved in the clinical care of patients, critical edits and revisions to manuscript. LY: supervised, designed study, analyzed and interpreted data, and wrote manuscript. All authors contributed to the article and approved the submitted version.

Funding

The study was supported in part by grants from the Lung Cancer Initiative of North Carolina and the Coach Rock Foundation.

Conflict of interest

Author PW was employed by the company Circulogene. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
SUPPLEMENTARY FIGURE 1 Levels of thyroid-stimulating hormone (TSH), neutrophil to lymphocyte ratio (NLR), and C-reactive protein (CRP) in the lung cancer patients treated with nivolumab. Blood samples were collected at baseline and after each cycle of nivolumab treatment during the biomarker trial. (A, B) TSH levels (mIU/L); (C, D) CRP levels (mg/L); and (E, F) NLR, in patients #1 and #2, respectively. Onset of checkpoint inhibitor-induced thyroid dysfunction (CITD) is indicated by arrows on the graphs.

SUPPLEMENTARY FIGURE 2 Radiographic imaging of patient #2 during the event of intestinal perforation. (A) The anteroposterior chest X-ray shows free air under the diaphragm and elevation of the hemidiaphragm on the right side. (B) The lateral chest X-ray shows free air below the diaphragm (red arrowhead).
Sulfation pathways from red to green

Sulfur is present in the amino acids cysteine and methionine and in a large range of essential coenzymes and cofactors and is therefore essential for all organisms. It is also a constituent of sulfate esters in proteins, carbohydrates, and numerous cellular metabolites. The sulfation and desulfation reactions modifying a variety of different substrates are commonly known as sulfation pathways. Although relatively little is known about the function of most sulfated metabolites, the synthesis of the activated sulfate used in sulfation pathways is essential in both the animal and plant kingdoms. In humans, mutations in the genes encoding the sulfation pathway enzymes underlie a number of developmental aberrations, and in flies and worms, their loss-of-function is fatal. In plants, a lower capacity for synthesizing activated sulfate for sulfation reactions results in dwarfism, and a complete loss of activated sulfate synthesis is also lethal. Here, we review the similarities and differences in sulfation pathways and associated processes in animals and plants, and we point out how they diverge from bacteria and yeast. We highlight the open questions concerning localization, regulation, and importance of sulfation pathways in both kingdoms and the ways in which findings from these "red" and "green" experimental systems may help reciprocally address questions specific to each of the systems.

The activated sulfate for the sulfation pathways, 3′-phosphoadenosine 5′-phosphosulfate (PAPS), is formed from sulfate by two ATP-dependent steps: adenylation, i.e. the transfer of the AMP moiety of ATP to sulfate to form adenosine 5′-phosphosulfate (APS) by ATP sulfurylase (ATPS), and the phosphorylation of APS at its 3′-OH group by APS kinase. The two enzymes are either fused into a single enzyme, PAPS synthase (PAPSS), in the animal kingdom or occur as independent proteins in the green lineage (7). The by-product of PAPS-dependent sulfation reactions, 3′-phosphoadenosine 5′-phosphate (PAP), is finally dephosphorylated to AMP by 3′-nucleotidases. This reaction to remove PAP is important beyond the sulfation pathways, as PAP accumulation has many additional physiological effects (8,9).

Sulfate activation to APS or PAPS is a prerequisite not only for sulfation pathways but also for primary sulfate assimilation in plants, algae, bacteria, and fungi (2). Fungi in particular, as well as some bacteria, require PAPS for sulfate reduction and synthesis of cysteine. In these organisms, the activated sulfate in PAPS is reduced to sulfite by PAPS reductase and, after further reduction to sulfide, is incorporated into cysteine (10). The green lineage as well as a large number of bacterial taxa, however, use APS for sulfate reduction by APS reductase, whereas Metazoa do not possess the ability to reduce sulfate and are dependent on sulfur-containing amino acids in their diet (7). In sulfate-reducing organisms, sulfation pathways compete with primary sulfate reduction for activated sulfate, and the two branches of sulfur metabolism must be well-coordinated (11). The ability to reduce sulfate is thus the major difference in sulfur metabolism between animals and plants and impacts other metabolic branches, including sulfation pathways. In plants, studies of sulfur metabolism have traditionally concentrated on reductive, primary sulfur metabolism.
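The steps just described can be condensed into a compact reaction scheme. This is simply a restatement of the text (R-OH denotes any hydroxyl-bearing acceptor):

```latex
% Sulfate activation and PAP disposal, restating the steps in the text
\begin{align*}
\text{ATP sulfurylase:}   \quad & \mathrm{SO_4^{2-} + ATP \rightleftharpoons APS + PP_i} \\
\text{APS kinase:}        \quad & \mathrm{APS + ATP \longrightarrow PAPS + ADP} \\
\text{sulfotransferases:} \quad & \mathrm{PAPS + R\!-\!OH \longrightarrow R\!-\!O\!-\!SO_3^{-} + PAP} \\
\text{3'-nucleotidases:}  \quad & \mathrm{PAP \longrightarrow AMP + P_i}
\end{align*}
```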
For the red kingdom, in his scholarly "Tribute to Sulfur," Helmut Beinert (1) wrote that sulfate "is of limited use to higher organisms except for sulfation and detoxification reactions," without any further discussion of the topic. Since then, things have dramatically changed with growing evidence of the importance of sulfation pathways in both kingdoms. In addition, convergent findings in the green (plant) and red (animal) biochemistry of sulfur, e.g. recognition of hydrogen sulfide as a gaseous signal (12,13), revealed the value of comparative analysis of the same pathways in very different models. Here, we compare the mechanisms the two lineages, red and green, evolved to perform and control sulfation pathways. Given their importance for the metabolism of specific compounds and for general sulfur metabolism, we extend the scope of our comparison to the enzymes providing the active sulfate and removing the by-product PAP. We aim to identify open questions common to both humans and plants as well as questions where knowledge from one lineage might be useful to inform research in the other.

Activation of sulfate to PAPS

The organification or activation of sulfate to PAPS by ATPS and APS kinase initiates sulfation pathways (14). The catalytic and substrate-binding sites of the ATP sulfurylases from plants and animals are highly conserved (15); however, the subsequent reactions and the enzymatic blueprints vary greatly between different lineages (7). Also, the localization and regulation of ATP sulfurylase and APS kinase show lineage-specific differences.

Plant ATP sulfurylase and APS kinase

In plants, ATPS occurs as a homodimer consisting of two 48-kDa monomers (16). Plants and algae have multiple ATPS isoforms localized in chloroplast and cytosol; the model plant Arabidopsis thaliana possesses four (17). In some plants, such as potato, distinct cytosolic and plastidic isoforms can be identified (18), whereas all four Arabidopsis isoforms possess N-terminal chloroplast-targeting peptides. Cytosolic activity is caused by alternative translation of the ATPS2 transcript, producing two different proteins: one with the target peptide, transported into the chloroplast, and one without the peptide, located in the cytosol (19).

FIGURE 1 (legend; opening truncated): [...] (1). Sulfate activation occurs via animal bifunctional PAPS synthases (2) that shuttle between cytoplasm and nucleus or plant ATP sulfurylase (3) and APS kinase (4) isoforms that are localized in the cytoplasm and the chloroplast. PAPS serves as a substrate for cytoplasmic sulfation pathways (5), where PAP is produced. Sulfated compounds can then be de-sulfated by sulfatases (6), enzymes that are absent in plants, or they are secreted via OATPs (7). Two animal PAPS transporters (8) channel PAPS into the Golgi apparatus, where many carbohydrate and protein sulfotransferases modify macromolecules for secretion. Although plant protein sulfotransferases are known that reside in the Golgi, an analogous transporter (8) has not yet been identified. Human PAP phosphatases (9) are in the Golgi and the cytoplasm; plant PAP phosphatases are, however, localized in the mitochondrion and the chloroplast. Dedicated PAP(S) transporters in the chloroplast (10) and the mitochondrion (11) deliver PAPS to the cytoplasm and play an important role in the degradation of PAP. In plants, APS represents a branching point where reductive biosynthetic pathways diverge (12). B, examples of structures of sulfated metabolites.
The reason for the dual localization of the ATPS seems to be the need for both APS for sulfate reduction in plastids and PAPS for sulfotransferases in the cytosol (20). However, given the major role of plastids for synthesis of PAPS and the presence of PAPS transporters in plastid envelopes, the role of cytosolic ATPS is not obvious. Interestingly, both human PAPS synthases are also regulated on the level of cellular localization of the enzyme between the nucleus and cytosol, even though a function of PAPS in the nucleus is completely unknown (21). Because of its position at the beginning of the pathway, ATPS is a good candidate for controlling sulfate assimilation. Indeed, early findings in Brassica napus showed that ATPS activity and transcript levels were down-regulated by downstream products of sulfate assimilation, cysteine and GSH, and were up-regulated by sulfate starvation (22). These findings played a key role in formulating the concept of demand-driven regulation of sulfur metabolism in plants (23,24). However, the subsequent enzyme in the primary sulfate assimilation pathway in plants, the APS reductase, is regulated more strongly and was shown by metabolic flux control analysis to be the major control point of the pathway (25-27). Indeed, a recent modeling approach showed that the pattern of flux control is dynamic and not static (28); changes occur with differing environmental conditions, and control resides mostly at APS reductase or downstream sulfite reductase and not at ATPS (28). ATPS, however, still contributes to the control of sulfate accumulation in Arabidopsis (29) and to the response to sulfate starvation as a target of microRNA miR395 (30). Arabidopsis ATPS1 and ATPS3 isoforms are part of the regulatory network of glucosinolate synthesis, and atps1 mutants show a lower concentration of these sulfated secondary compounds (29,31). Glucosinolates are part of the plant immune response to pathogens and herbivores, as well as plant natural products responsible for the smell, taste, and health effects of cruciferous vegetables, but they are also anti-nutrients in animal feed (32). Although essential and sufficient for sulfate reduction, ATPS has to be coupled with APS kinase for sulfation pathways. This enzyme, ubiquitous in nature and highly conserved in structure and sequence, shows the same localization in plants as ATPS. Arabidopsis possesses four APS kinase genes, which encode three plastidic and one (APK3) cytosolic isoform (33). APS kinase phosphorylates the APS produced by ATPS and thus competes with APS reductase for this substrate. The two enzymes represent entries into the two branches of sulfate assimilation: a primary reductive assimilation pathway and a secondary oxidized sulfur metabolism involving sulfation pathways (34). The secondary pathway has rarely been investigated, because PAPS production is not necessary for the primary sulfate reduction and synthesis of cysteine and GSH (34). However, even though APS kinase is part of the secondary sulfate assimilation pathway, it is vital for plant survival (33,35). Interestingly, it is the loss of the two plastidic APS kinase isoforms APK1 and APK2 that results in strongly reduced accumulation of sulfated metabolites, such as glucosinolates, and not the disruption of the cytosolic enzyme APK3 (33). This, on the one hand, again challenges the significance of cytosolic APS and PAPS synthesis; on the other hand, it shows the necessity of intracellular PAPS transport.
Indeed, a PAPS transporter has been identified in chloroplast envelope membranes, part of the glucosinolate co-expression network, whose mutation shows a phenotype similar to apk1 apk2 mutants (see below and Ref. 36). The apk1 apk2 double knockout turned out to be an excellent tool to dissect the importance of secondary sulfate assimilation (11,33). The reduced synthesis of PAPS in apk1 apk2 results in a shift of sulfur flux from the secondary to the primary sulfur assimilation pathway, increased accumulation of reduced sulfur compounds, and strongly reduced glucosinolate levels (11,33). Furthermore, all components of the glucosinolate synthesis pathway were coordinately up-regulated, leading to substantial accumulation of the desulfo-precursors of glucosinolates (33). Although glucosinolates and other sulfated secondary metabolites seem not to be essential for Arabidopsis growth, the apk1 apk2 mutants are significantly smaller than the WT plants (33). When additional APS kinase genes, APK3 or APK4, are mutated, the semi-dwarf phenotype is even stronger (35). The generation of multiple mutations in APS kinase genes revealed that the enzyme is essential for Arabidopsis growth (35). Which acceptors of PAPS are essential remains to be determined, as neither the glucosinolates nor the sulfated peptide hormones discovered so far (such as the phytosulfokines (37), root growth factors (38), or Casparian strip integrity factors (39)) seem to be crucial. APS kinase is regulated on both transcriptional and posttranscriptional levels. The genes are part of the glucosinolate transcriptional network, under the control of a family of six MYB transcription factors in Arabidopsis, and are thus co-expressed with genes providing the main substrate for PAPS (31). In addition, in line with the demand-driven concept, sulfate starvation represses APS kinase to channel the scarce sulfur to primary sulfate assimilation. Excitingly, redox regulation of APS kinase enzyme activity, through dimerization of the protein and formation of disulfide bridges, has been revealed in a structural analysis (40). Reducing conditions leading to monomerization of the protein increase the catalytic efficiency, including alleviation of enzyme inhibition by its substrate APS (40). This is particularly interesting as it complements the redox regulation important for control of the reductive branch of sulfate assimilation (2). APS reductase is activated by oxidation, e.g. during abiotic stress, which leads to higher activity and synthesis of cysteine and GSH (41). Accordingly, recombinant APS reductase is inactivated by incubation with reductants (27). APS reductase and APS kinase occupy the opposite branches of sulfate assimilation from APS (11). Considering that APS reductase is activated by oxidation (41), the reciprocal activation of APS kinase by reduction indicates that this redox mechanism may control the distribution of sulfur fluxes between primary and secondary sulfur metabolism (15).

Bifunctional PAPS synthases in animals

In contrast to plants, with separate proteins possessing ATPS and APS kinase activities, vertebrate and invertebrate genomes feature these activities in single polypeptides with a C-terminal ATPS domain and an N-terminal APS kinase domain (7,42). These so-called PAPS synthases are strictly conserved within animal genomes, with a single gene in invertebrates and a PAPSS1 and PAPSS2 gene pair in vertebrates (42).
An additional PAPSS2 gene copy in teleost fish and mammalian-specific splice forms of PAPSS2 are minor extensions to this rule (42). It is interesting to ask why this second sulfate-activating complex has evolved and has been strictly maintained in animals. Possibly, this has been a requirement for the expansion of sulfation pathways in animals. Two PAPS synthase genes would allow different sulfation pathways to be supported selectively, either via transcriptional co-regulation (43), transient protein interaction (44), or yet-to-be-described regulatory mechanisms. Such a subfunctionalization is also indicated by the fact that PAPS synthases 1 and 2 cannot complement each other. Genetic defects in PAPSS1 have so far never been reported. However, mutations in the gene coding for PAPSS2 are associated with bone and cartilage malformations as well as a steroid sulfation defect (45); this is discussed in detail below. The mechanistic question is what makes the two PAPS synthase enzymes so different. Certainly, the two genes are differentially regulated to a certain extent, and transcriptional co-regulation with certain sulfotransferases has been reported (46,47). However, correlations of transcript levels between PAPS synthases and sulfotransferases are only weak (Fig. 2), and both PAPS synthases are expressed at the same time in certain tissues (48). Different subcellular localization was believed to be of functional significance (49), but conserved nuclear localization and export signals were identified in both PAPSS1 and PAPSS2 (21). Diverse catalytic activities were purported to explain the observed functional difference, based on only a 5-fold difference in kcat/Km values when treating bifunctional PAPS synthases as pseudo one-step Michaelis-Menten enzymes (50). This characteristic could not be reproduced when assaying APS kinase only, which catalyzes the rate-limiting step of overall PAPS biosynthesis (44,51). A difference in protein stability of the two recombinant human PAPS synthases has also been described (42). PAPSS2 was less stable than PAPSS1 toward chemical or thermal unfolding (42). ATP sulfurylase and APS kinase activity assays, run after incubation at an elevated temperature, indicated that the sulfurylase domain is less stable than the APS kinase domain (42). This is probably due to ligand binding, as the APS kinase may carry an ADP molecule from the bacterial host through several purification steps (6,52). It will be interesting to see how these results translate into protein stability within the living cell (53). In light of these findings, the PAPSS gene fusion could also be thought of as a solubility anchor of a more stable domain for another, less stable domain, among other factors.

FIGURE 2 (legend; opening truncated): [...] (48). PAPSS1 and PAPSS2 expression profiles were compared with each other and against different sulfotransferases. Top panel: PAPSS1 and PAPSS2 expression seem to be weakly anti-correlated. Comparing PAPSS1 or PAPSS2 with SULT1A1 shows a weak positive correlation between PAPSS2 and SULT1A1, but a negative correlation for PAPSS1 with SULT1A1. All units in these panels are RPKM (reads per kb of transcript per million mapped reads). Bottom panel: to illustrate the positive or negative correlation of the tissue-specific expression, the correlation coefficient R is plotted for all 52 sulfotransferases versus PAPSS1 (black) or PAPSS2 (red). There is a tendency for PAPSS2 to be co-expressed with cytosolic sulfotransferases.
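The co-expression comparison summarized in the Figure 2 legend amounts to computing, for each sulfotransferase, a Pearson correlation coefficient with PAPSS1 or PAPSS2 across tissues. A minimal sketch of that calculation follows; the expression arrays are randomly generated placeholders standing in for RPKM values, not data from the study cited as (48).

```python
# Sketch of the per-sulfotransferase correlation analysis (Fig. 2, bottom
# panel). Expression values here are random placeholders, not real RPKM data.
import numpy as np

rng = np.random.default_rng(0)
n_tissues = 30
papss1 = rng.gamma(2.0, 5.0, n_tissues)  # stand-in RPKM profile across tissues
papss2 = rng.gamma(2.0, 5.0, n_tissues)
sults = {f"SULT_{i:02d}": rng.gamma(2.0, 5.0, n_tissues) for i in range(52)}

def pearson_r(x, y):
    """Pearson correlation coefficient R between two expression profiles."""
    return float(np.corrcoef(x, y)[0, 1])

# R of each sulfotransferase versus PAPSS1 and PAPSS2, as plotted in Fig. 2
r_values = {name: (pearson_r(e, papss1), pearson_r(e, papss2))
            for name, e in sults.items()}
top = sorted(r_values.items(), key=lambda kv: kv[1][1], reverse=True)[:3]
print(top)  # sulfotransferases most positively correlated with PAPSS2
```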
Benefit of being fused together

For bifunctional PAPS synthases, answering questions on whether and how the individual domains functionally interact with each other continues to drive our understanding in the field. Channeling of the APS intermediate between the two domains of human PAPS synthases was initially hypothesized but was subsequently ruled out based on kinetic (54,55) and structural data (52). The crystal structure of full-length human PAPSS1 shows dimers of APS kinase and ATP sulfurylase, each with large interacting surfaces, but only a weak interaction between those sulfurylase and kinase domain dimers (52). The large dimer interface is conserved between PAPS synthase isoforms; hence, they can form high-affinity homo- and heterodimers (51). APS channeling was excluded as there was no channel visible in the structure (52); APS produced by the sulfurylase domain exchanges freely with bulk APS (54), and APS kinase and reverse sulfurylase assays can be run without problems starting from APS (44). Because no metabolic need for APS is known in animals, the PAPS synthases may represent a way to ensure stoichiometric gene expression and protein localization of the two enzymatic components.

The formation of a bifunctional enzyme, even without the added effect of substrate channeling, can be an answer to the low catalytic efficiency of the forward reaction of ATPS, which has a Keq of ~10⁻⁸ (56). The equilibrium can therefore be shifted by removal of the products, e.g. by linking with inorganic pyrophosphatase to remove pyrophosphate or with enzymes utilizing APS (see the worked example below). The animal PAPS synthase is clearly a mechanism for the latter, but it is not the only one in nature (7). Another mechanism to increase the efficiency is employed by most bacteria, which possess a variant of ATPS that couples APS production with GTP hydrolysis (57), and the need for increased catalytic efficiency may have led to other gene fusions. In filamentous fungi, ATPS is also fused with APS kinase; however, the kinase domain of the fusion protein is at the C-terminal end and functions only as an activation domain that modulates the activity of ATPS without having kinase activity (58,59). A number of gene fusions have been found in the eukaryotic microalgae, which, as secondary or tertiary endosymbionts, were likely more prone to genome rearrangements (7). ATPS in the diatom Thalassiosira pseudonana and other protozoans is fused to both APS kinase and inorganic pyrophosphatase, whereas in the dinoflagellate Heterocapsa triquetra ATPS is fused with the other APS-utilizing enzyme, APS reductase (7). In addition to this diversity, at least three different ATP sulfurylase enzymes have evolved independently: the plant and animal enzyme, the bacterial GTPase-linked enzyme, and one found mainly in cyanobacteria and green algae (7,60). The ATPS domains from the fusion proteins are phylogenetically related to the ATPS from plants/animals or green algae (7). In plants, the need for APS for primary assimilation is greater than that for the sulfation pathways, and therefore a mechanism shifting the equilibrium from APS to PAPS is not advantageous. Another interesting evolutionary aspect of plant ATPS is that, although it is largely a plastidic enzyme, it is in no way related to the ATPS of cyanobacteria, the precursors of plant plastids (7). Because chlorophyte ATPS is of cyanobacterial origin, it seems that the common precursor must have contained both forms, plastidic and eukaryotic, and the sister lineages, plants and green algae, each retained a different one.
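To spell out the coupling argument made above: for sequential reactions, the overall equilibrium constant is the product of the step constants, so hydrolysis of pyrophosphate and consumption of APS pull the thermodynamically unfavorable sulfurylase step forward. Only the ~10⁻⁸ value is taken from the text; the rest is standard equilibrium thermodynamics:

```latex
% Coupling shifts the overall equilibrium of sulfate activation
\begin{align*}
K_{eq}^{\mathrm{ATPS}} &= \frac{[\mathrm{APS}][\mathrm{PP_i}]}{[\mathrm{SO_4^{2-}}][\mathrm{ATP}]} \approx 10^{-8} \\
K_{eq}^{\mathrm{overall}} &= K_{eq}^{\mathrm{ATPS}} \cdot K_{eq}^{\mathrm{PPase}} \cdot K_{eq}^{\mathrm{APK}},
\qquad \Delta G^{\circ}_{\mathrm{overall}} = -RT \ln K_{eq}^{\mathrm{overall}}
\end{align*}
```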
Core sulfation pathways

Sulfotransferases are the first enzymes of the core sulfation pathways. They transfer sulfate from PAPS to the hydroxyl or amino group of a wide variety of acceptors: carbohydrates, lipids, peptides, hormone precursors, xenobiotics, and other molecules (3,61). There are also PAPS-independent aryl sulfotransferases in some bacteria, which use phenolic sulfates as donors (62). These aryl sulfotransferases display a different fold but retain a similar spatial arrangement of the active-site residues, indicative of convergent evolution. They are covered in more detail elsewhere (62). The other components, sulfatases, are hydrolytic enzymes, part of the alkaline phosphatase superfamily, that cleave biological sulfate esters (63). A posttranslational modification dramatically enhances the catalytic activity of sulfatase enzymes: a cysteine or serine residue within the catalytic center is converted to formylglycine (64). Lipmann (14) referred to sulfotransferases as sulfokinases, and a common evolutionary origin of sulfotransferases and kinases has been purported, with subsequent phylogenetic divergence of enzyme activity (65). There are four conserved regions of sulfotransferases used for the characterization of the enzymes (66), including a P-loop for catalysis (67,68). This protein structure is highly conserved, except for the plant tyrosylprotein sulfotransferase (69). Based on the conserved regions, sulfotransferases are found in all kingdoms of life (68). Different research communities abbreviate mammalian cytosolic sulfotransferases as SULT, plant enzymes as SOT (or SULT), and Golgi enzymes according to their main activity/substrate (e.g. HS6ST for heparan sulfate 6-O-sulfotransferase, and TPST for tyrosylprotein sulfotransferase). For consistency, we will keep the different abbreviations. The main differences between animal and plant sulfation pathways lie in gene number, with more than 50 genes for human SULTs versus 21 SOT genes in Arabidopsis, and in the fact that the 17 human sulfatases do not have counterparts in plant genomes.

Arabidopsis and the human sulfotransferase repertoire

Sulfotransferases are grouped into categories, such as soluble or membrane-bound, cytosolic or Golgi-located, and preference for low-molecular-weight substrates or for the larger carbohydrates, proteins, or proteoglycans (3). Langford et al. (70) listed 13 cytoplasmic and 37 Golgi-residing SULTs in the human genome. Ensembl lists several additional entries for the human genome (71), but most of them seem to be pseudogenes (5). The only proteins annotated as having sulfotransferase activity at Ensembl that should be appended to Langford's list are DSEL, a dermatan sulfate epimerase, and the WSCD1 protein (71). Hirschmann et al. (4) list 22 genes for sulfotransferases in Arabidopsis (but one of them is annotated as a pseudogene). Phylogenetic analysis of the protein sequences from the resulting 21 and 52 genes, representing the Arabidopsis and human sulfotransferase repertoires, respectively, reveals that Arabidopsis SOTs 1-18 share high sequence similarity with human cytoplasmic SULTs. In fact, these two groups share a higher degree of similarity with each other than human Golgi and cytoplasmic SULTs do (Fig. 3). This is illustrated by a structural overlay of AtSOT18 and human SULT1A1 (Fig. 4). Hence, any new insight on one class of these sulfotransferases may have direct applicability to the other (43).
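A comparison like the one underlying Fig. 3 rests on a standard alignment-plus-tree workflow (the actual tools used are named in the Figure 3 legend below). As an illustrative sketch only, the following Biopython snippet builds a neighbor-joining tree from an existing alignment; the file name is a hypothetical placeholder, and the alignment step itself (Clustal Omega/MAFFT in the legend) is assumed to have been run externally.

```python
# Illustrative neighbor-joining workflow with Biopython. The input file
# "sot_sult_aln.fasta" is a hypothetical pre-aligned FASTA of the 21
# Arabidopsis SOT and 52 human sulfotransferase sequences.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("sot_sult_aln.fasta", "fasta")

# 'identity' distances, i.e. no distance correction, matching the
# "neighbor-joining tree without distance corrections" of the legend
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)  # neighbor-joining tree
Phylo.draw_ascii(tree)                  # quick text rendering for inspection
```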
These cross-kingdom insights mainly include discoveries of novel mechanisms of regulation (43). For example, the flexibility of the main substrate-binding loops, elucidated in part with new analytical and computational tools, is the molecular basis for the broad specificity of the sulfotransferase SULT1A1 (72), with possible implications also for AtSOT12, described below. This flexibility makes it difficult to search for pharmacologically useful and isoform-specific inhibitors of sulfotransferases using computational docking (73), unless the flexibility is built into the structural models (74). Such inhibitors are useful because human sulfotransferases metabolize many drugs and may thus interfere with various pharmacological interventions. Another regulatory mechanism, allosteric regulation, has only recently been described in sulfation pathways (43). Recently, an allosteric site in human SULT1A3 was discovered that may be targeted for isoform-specific SULT1A3 allosteric inhibitors (75). Dimers of human cytoplasmic SULTs are believed to form via an unusually small dimer interface containing the amino acids KTVE (76), which are conserved in cytoplasmic SULTs (77).

FIGURE 3 (legend): Arabidopsis sulfotransferases were derived from RefSeq entries from the nucleotide database at www.ncbi.nlm.nih.gov. From multiple splice forms, the one assigned as the major isoform was selected. These sequences were subjected to multiple sequence alignments using Clustal Omega and MAFFT (206). From the MAFFT tree (a neighbor-joining tree without distance corrections), a collapsed tree was manually curated. A striking finding was that AtTPST (RefSeq NP_563804) was not grouped with the human TPSTs but with heparan-6-O-sulfotransferases.

Biochemical data and molecular dynamics simulation suggest that dimer formation is of functional significance, as it modulates the flexibility of the catalytic loop 3 (78). Recently, a crystal structure of the AtSOT18-sinigrin-PAP complex elucidated the functional domain and residues of the substrate-binding site of the enzyme (68). The structure demonstrated evolutionary conservation of the sulfotransferases between humans and plants (Fig. 4) and suggested a loop-gating mechanism as responsible for substrate specificity of the sulfotransferase in plants (68). Of note, AtSOT18 and the other AtSOTs do not contain the KTVE motif, and dimer formation has not been reported for plant sulfotransferases.

Plant SOTs are still not completely characterized, even in the model plant Arabidopsis (4). Some Arabidopsis SOTs have a broad substrate specificity, and some are very specific, such as AtSOT15, which catalyzes only the sulfation of 11- and 12-hydroxyjasmonate (79); however, the substrates for almost half of all SOTs are unknown (4). Most attention has been paid to three Arabidopsis SOTs, AtSOT16, AtSOT17, and AtSOT18, which transfer sulfate to desulfo-glucosinolate precursors as the final step in glucosinolate synthesis (80). This is due to the important role of glucosinolates in defense against biotic and abiotic stress and their anti-carcinogenic and neuroprotective properties in the human diet (32,81). The three desulfo-glucosinolate SOTs have different affinities for the various classes of precursors, i.e. aliphatic and indolic, at least in vitro (82). Interestingly, it seems that SOTs from different natural Arabidopsis accessions possess different specificities for these precursors (83); however, whether these in vitro data are relevant in vivo needs to be confirmed.
Similar to the desulfo-glucosinolate SOTs, only in vitro substrate specificities are known for other SOTs. While AtSOT5, AtSOT8, and AtSOT13 transfer sulfate to flavonoids, AtSOT10 modifies the plant hormones brassinosteroids (84,85). In contrast to these SOTs with relatively narrow specificity, AtSOT12 accepts a variety of substrates for the synthesis of sulfated flavonoids, salicylic acid, and brassinosteroids (86). The activity with salicylic acid, a phytohormone involved in defense against pathogens, seems to be responsible for the increased pathogen susceptibility of sot12 mutants (86). The functions of the other SOTs remain to be elucidated, particularly given the large variety of so far unknown sulfur-containing metabolites in Arabidopsis (87).

FIGURE 4 (legend; opening truncated): [...] shown are human sulfotransferase SULT1A1 bound to PAP and 3-cyano-7-hydroxycoumarin (Protein Data Bank code 3U3M), A. thaliana SOT18 complexed with PAP and sinigrin (Protein Data Bank code 5MEX), Danio rerio heparan sulfate 6-O-sulfotransferase HS6ST3 with PAP and part of its heptasaccharide displayed (Protein Data Bank code 5T0A), as well as human TPST2 with bound PAP and C4 peptide (Protein Data Bank code 3AP1). Structural visualizations were done using YASARA (207). B, these complexes were structurally aligned using MUSTANG (208). Root mean square deviation (RMSD) values for the structural alignment, the number of aligned residues, and the percentage of amino acid identity are listed.

Protein sulfation by TPSTs

Tyrosine sulfation is a major post-translational regulation of secreted proteins and peptides in both animals and plants. However, this modification seems to be confined to multicellular eukaryotes, as TPSTs have not been found in either bacteria or yeast (88). TPST catalyzes the transfer of sulfate from PAPS to the phenolic group of the amino acid tyrosine in the Golgi (69,89,90). It is estimated that one-third of all secreted human proteins are tyrosine-sulfated (91). Human TPST1 and TPST2 are type II transmembrane proteins with a C-terminal globular domain within the Golgi lumen (90). They share 67% amino acid identity with each other. As Caenorhabditis elegans and Drosophila contain only one TPST gene, a gene duplication may have occurred at the invertebrate-vertebrate transition. Elucidating the biological roles of the individual TPST isoforms through biochemical and structural studies of recombinant TPST proteins was a challenge for a long time. In 2013, Teramoto et al. (92) reported the crystal structure of human TPST2, followed by the structure of human TPST1 in 2017 (93). These structures are remarkable for two reasons. First, the core protein fold and the 5′-phosphosulfate-binding (5′-PSB) and 3′-phosphate-binding (3′-PB) motifs involved in PAPS co-factor binding are structurally conserved even in these sulfotransferases, which are only distantly related to their cytoplasmic counterparts (92,93). Second, they explain the mechanism of substrate recognition. Protein substrates need to locally unfold to bind to TPSTs in a deep active-site cleft, a process similar to the one known for tyrosine kinases (92). Both human TPST enzymes target peptidic motifs with negatively charged residues around the acceptor tyrosine (89), with very similar or identical recognition mechanisms (93). This leaves open how the observed functional differences of the TPST isoforms (see below and Ref. 94) arise on the protein level.
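As an aside, the acidic-context preference just described suggests a naive way to shortlist candidate sulfoacceptor tyrosines in a sequence. The sketch below is a toy heuristic for illustration only, not a validated predictor; the window size, threshold, and example sequence are arbitrary assumptions.

```python
# Toy heuristic (not a validated predictor): score each tyrosine by the
# number of acidic residues (D/E) in a +/-5 residue window, reflecting the
# preference of TPSTs for acceptor tyrosines in negatively charged contexts.
def candidate_sulfation_sites(seq: str, window: int = 5, min_acidic: int = 3):
    hits = []
    for i, residue in enumerate(seq):
        if residue != "Y":
            continue
        context = seq[max(0, i - window): i + window + 1]
        acidic = context.count("D") + context.count("E")
        if acidic >= min_acidic:
            hits.append((i + 1, acidic, context))  # 1-based position
    return hits

# Arbitrary example sequence with two acidic-context tyrosines
print(candidate_sulfation_sites("MDEDYEEVVEDAEYDDSA"))
```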
Indeed, despite their structural and mechanistic similarities, the two TPST isoforms display notable functional differences. TPST1-knockout mice show reduced body weight and fewer litters due to increased fetal death in utero, whereas male fertility is not affected (95). This suggests that TPST1 reaction products, which would not be sulfated in the TPST1 knockout, have a role in females during the development of embryos. TPST2-knockout mice, however, primarily show male infertility (96), due to compromised egg-sperm interaction (97). Taken together with the sulfated proteins from the blood-clotting cascade and the sulfated co-receptors on host cells for HIV infection (89), the picture emerges that tyrosine sulfation acts as a macromolecular glue to strengthen interactions of proteins with other proteins or other biopolymers. A recent biophysical study clearly illustrates this point. In the complex of an N-terminally sulfated part of the chemokine receptor CCR5 and its ligand CCL5, NMR revealed that high-affinity binding is attributed to sulfate-mediated twisting of the two N termini (98). Identification of more and more sulfated proteins is expected in the near future due to advances in MS that allow better recovery of sulfated peptides and unambiguous distinction from their isobaric phosphorylated counterparts (99).

The importance of tyrosine sulfation in plants has been known for a long time because of the number of sulfated growth-regulating peptides (37,100). Despite the importance of tyrosine sulfation, however, the corresponding sulfotransferase remained elusive in plants, as no proteins homologous to the animal enzyme could be found. AtTPST was identified in Arabidopsis after isolation of the enzyme from the microsomal fraction and proteomics analyses (69). AtTPST is a 62-kDa transmembrane protein located in the Golgi that lacks the characteristic cytosolic sulfotransferase domain (69). The importance of plant tyrosine sulfation is confirmed by the semi-dwarf phenotype of the Arabidopsis tpst1 mutant, with early senescence, light green leaves, and diminutive roots (69).

Human sulfatase enzymes

The human genome contains 17 genes for sulfatases (70,71), which hydrolyze a range of biological sulfate esters. They are all grouped into the alkaline phosphatase superfamily (63). The activity of human (and bacterial or fungal) sulfatases depends on enzymatic oxidation of a cysteine to formylglycine (5). The corresponding enzyme is encoded by the sulfatase-modifying factor 1 (SUMF1) gene. This formylglycine is further hydrated to form the active site of the sulfatase (101). The resulting geminal diol can be interpreted as activated water for the hydrolysis reaction (101). A recent crystal structure of the formylglycine-generating enzyme clearly shows its catalytic copper co-factor coordinated by two active-site cysteine residues, explaining its mechanism (102). As this oxidase creates an aldehyde, it has emerged as an enabling biotechnology tool for bioconjugation reactions (103). There is also a SUMF2 gene in the human genome that encodes a protein devoid of the oxidase activity but that interacts with and modulates the function of SUMF1 by forming inhibitory heterodimers (104). Sulfatase genes encode proteins with a broad range of substrate specificities. The extracellular endoglucosamine 6-sulfatases, SULF-1 and SULF-2, target highly sulfated extracellular heparan sulfate domains, which are involved in growth factor signaling, tumor progression, and protein aggregation diseases (105).
Some sulfatases (such as arylsulfatase A and B) are cytoplasmic, whereas others are membrane-bound (5). Noteworthy is the steroid sulfatase (STS), with a very rare and unusual membrane topology; its membrane stem is made up of two α-helices (106). STS has been described as an enzyme in the lumen of the endoplasmic reticulum (101), but it was also detected in the Golgi apparatus (107). From an endocrine point of view, STS is highly interesting as it renders steroid sulfation a reversible and dynamic process (5). STS is highly expressed in the placenta and forms, together with the fetal adrenal and fetal liver, the fetoplacental unit (108). This unit produces the placental estrogens, estradiol and estriol, from fetal adrenal androgens via fetal adrenal sulfation, fetal hepatic hydroxylation, placental desulfation, further downstream conversion, and release into the maternal circulation (109). In adults, STS is also expressed in many other tissues, allowing for the uptake of sulfated steroid precursors and their desulfation.

Plants do not seem to possess sulfatase activity. This poses an obvious question for the catabolism of sulfated secondary metabolites. Glucosinolates are an important pool of sulfur, which can be recycled during sulfur starvation. Glucosinolate degradation is part of their anti-herbivore activity, which is initiated by tissue damage, bringing the glucosinolates into contact with thioglucosidases (myrosinases). Removal of the sugar moiety leads to chemical rearrangement of the aglycones to form volatile isothiocyanates or nitriles and release of sulfate (110). Glucosinolates can, however, also be degraded without tissue damage, by the atypical myrosinase PEN2 as a part of innate immunity (111). Nothing is known about the catabolism of other sulfated compounds in plants. Plants can also profit from microbial sulfatase activity in the rhizosphere. Soil contains a large proportion of organic sulfur, up to 90%, which is not available to plants (112). Soil bacteria, however, can metabolize these compounds by sulfatases, releasing the sulfate and thus improving plant sulfur nutrition. Hence, releasing sulfate via sulfatase activity is the mechanism of some plant growth-promoting bacteria (112). Attempts to engineer intracellular or excreted sulfatases in plants, to make the organic sulfate available to plants, have so far failed, most likely because of the need for post-translational activation by production of formylglycine.

PAP metabolism

The nucleotide PAP is produced during PAPS-dependent sulfation pathways. It is also formed during CoA-dependent fatty acid synthetase activation (113), although how or whether these two pathways interconnect is currently unclear. As a reaction product, PAP strongly inhibits sulfotransferase activity (114). With its two phosphate moieties, PAP may be regarded as the shortest possible RNA strand, and consequently, PAP interferes with RNA metabolism, inhibiting the XRN RNA-degrading exoribonucleases (115). To prevent the toxic effects of PAP, dedicated PAP phosphatases are found in all kingdoms of life. Most of the enzymes from higher eukaryotes show multiple specificity toward PAP or PAPS, and they also impact inositol signaling by removing phosphate from inositol bis- and trisphosphates (116,117), all representing small and negatively charged substrates. Lithium is known to influence many different proteins, and PAP phosphatases are among the most sensitive targets for lithium inhibition.
Mechanistically, lithium inhibition is well-understood for the PAP phosphatase CysQ from Mycobacterium tuberculosis. Lithium replaces one of a cluster of magnesium ions bound in the active site of the enzyme (118), owing to the diagonal relationship between magnesium and lithium; these elements, diagonally adjacent in the 2nd and 3rd periods of the periodic table, display a number of similar properties. As the negative amino acids in the catalytic center are highly conserved, it is highly likely that the same mechanism is in place in other PAP phosphatases. In many microorganisms, PAP phosphatases are strongly associated with sulfate assimilation, because accumulation of PAP also inhibits PAPS reductase, an essential enzyme in sulfate reduction. In yeast, loss of the PAP phosphatase Met-22 leads to methionine auxotrophy (119). Defects in PAP catabolism result in severe growth inhibition; in animals they are mainly due to inhibition of sulfation-dependent processes (120,121), and in plants the defects are much more complex, because of the involvement of PAP in additional signaling pathways (122,123).

Plant PAP phosphatases and PAP-dependent stress signaling

The plant PAP phosphatase SAL1 is among the most pleiotropic plant genes. It was first identified in rice as a protein complementing an inability to grow on sulfate in cysQ mutants of E. coli and met22 yeast mutants (124). It was subsequently shown to catalyze conversion of PAPS to APS and PAP to AMP, and this function was speculated to regulate sulfur fluxes (125). A homologue from Arabidopsis was identified in a screen for genes improving salt sensitivity and was named SAL1 (116). Since then, SAL1 has been found in numerous genetic screens for a number of unrelated phenotypes and is therefore described under many different names. A common denomination, FIERY1 or FRY1, comes from a screen for mutants in abscisic acid and stress signaling, where its loss-of-function resulted in hyperinduction of a luciferase reporter gene driven by a stress-responsive promoter (126). The phenotypes observed in the various alleles of sal1 mutants include cold and drought tolerance and signaling (122,127), leaf shape and venation pattern (123), RNA silencing (115), increased jasmonate levels (128), glucosinolate and sulfur accumulation (9), lateral root formation (129), increased circadian period (130), and many others. Initially it was believed that these phenotypes were caused by defects in inositol phosphate signaling (126), but current evidence points to PAP being the main factor (8,9,131,132), thus linking sulfation pathways with a number of cellular processes. In contrast to the animal PAP phosphatases in the Golgi and the cytoplasm, the plant SAL1 enzyme in chloroplasts and mitochondria has a different localization than the sulfotransferases forming PAP (122). Many of the phenotypes described in sal1 mutants resemble those of loss-of-function mutants in XRN exoribonucleases (133) and can be complemented by expression of SAL1 in the nucleus, implying that one mode of action of PAP is inhibition of XRNs (122). A model in which PAP acts as a retrograde signal from chloroplast to nucleus during abiotic stress has been proposed (122) and corroborated by recent findings of redox regulation of SAL1 (134). Thus, oxidative stress leads to oxidation of a redox cysteine pair in SAL1 and strong inactivation of the enzyme. This in turn results in accumulation of PAP, its transport to the nucleus, and induction of expression of stress-response genes (134).
Accordingly, PAP accumulation due to loss of function of SAL1 leads to stress tolerance, such as drought tolerance (8). In addition, the SAL1-PAP regulatory module has an intermediary role connecting hormonal signaling pathways in processes such as germination and stomatal closure (135). It has to be noted that in Arabidopsis SAL1 is a member of a small gene family with seven members. SAL1 is, however, the only gene that has been found in the numerous genetic screens and that, when disrupted, causes the various phenotypes. Two additional isoforms, AHL and SAL2, were confirmed to function as PAP phosphatases (136), but only AHL is expressed at levels comparable with SAL1 (131). In contrast to SAL1, AHL does not seem to use inositol 1,4-bisphosphate as a substrate (136), and its overexpression complements the loss of SAL1 for at least some phenotypes (131). Although this is clear evidence for PAP being the causal metabolite for many phenotypes, the reason why in WT Arabidopsis AHL does not suffice to metabolize PAP remains to be elucidated. Another unsolved question is the physiological relevance of PAPS dephosphorylation. The alteration in glucosinolate synthesis is the first direct metabolic link of SAL1 with sulfation pathways (9). In the fou8 allele of sal1 mutants, glucosinolate levels were lower than in WT Col-0 (9). This was caused by a reduction in sulfation rate, as the mutants also accumulated the desulfo-glucosinolate precursors (9). The phenotype thus strongly resembled that of apk1 apk2 mutants with low provision of PAPS (33). Interestingly, combining the fou8 mutant with apk1 apk2 resulted in alleviation of many of the phenotypic alterations connected with loss of SAL1 function, strongly suggesting that PAP was the responsible metabolite (9). This observation forms a second direct link of SAL1 and sulfation pathways: the SAL1-PAP signaling depends on synthesis of PAPS and sulfation reactions, i.e., secondary sulfur metabolism (132). This is particularly important for plants that do not synthesize glucosinolates or other major classes of sulfated secondary metabolites but still possess functional PAP signaling (137). Which sulfotransferase isoforms provide the majority of PAP for the stress signaling is, however, still unknown.
Human BPNT1 and Golgi PAP phosphatases
In humans, PAP is degraded at the sites of its production by a cytoplasmic and a Golgi PAP phosphatase. The human PAP phosphatase bisphosphate nucleotidase 1 (BPNT1) is a cytoplasmic enzyme (138), whereas the "Golgi PAP phosphatase" (gPAPP), as its name implies, is located in the Golgi apparatus. Because of its side activity toward inositols, gPAPP is also known as inositol monophosphatase domain containing 1 (IMPAD1). The catalytic domain of this type II transmembrane protein is in the lumen of the Golgi (138). Its main substrate is PAP from Golgi-residing sulfotransferases (121). Mice with an inactivated gPAPP/IMPAD1 gene show neonatal lethality, abnormalities in the lung, and bone and cartilage malformation (121). This may be due to under-sulfated chondroitin and perturbed formation of heparan sulfate, caused by the inhibition of the corresponding sulfotransferases by the accumulated PAP (121). The human gPAPP/IMPAD1 gene lies in the genomic region 8p11-p12 that is frequently amplified in breast cancer (139); however, a functional role in tumorigenesis remains to be established.
Patients with truncation mutations in gPAPP/IMPAD1 are characterized by short stature, joint dislocations, brachydactyly, and cleft palate (120, 140). Some patients also had a homozygous missense mutation, D77N (140). This mutation was recently introduced into mice, and the phenotype of homozygous gPAPP/Impad1 knockin animals overlaps with the lethal phenotype described previously in Impad1 knockout mice (141). The gPAPP phosphatase is only found in animals; hence, it may have co-evolved with the many Golgi sulfotransferases as a critical modulator of glycosaminoglycan and proteoglycan sulfation. With an inhibition constant Ki of 157 μM, the only other human PAP phosphatase, BPNT1, is an exceptionally lithium-sensitive enzyme (142). Hence, there has been speculation whether BPNT1 is the actual target for lithium as a treatment for bipolar disorder. At least in C. elegans, lithium causes BPNT1-mediated selective toxicity to specific neurons and leads to behavior changes (143). A study in rats, however, questioned the role of PAP phosphatases for the therapeutic effect of lithium, as there was no PAP accumulation detected in the brain after prolonged lithium exposure (144). Additionally, the knockout of Bpnt1 in mice leads to an early aging phenotype (145). Bpnt1−/− mice do not show a skeletal phenotype, but develop liver pathologies, hypoalbuminemia, hepatocellular damage, and deadly whole-body edema by just 7 weeks of age (145). PAP accumulation is thought to interfere with RNA processing, leading to defective ribosomes (145). A recent re-evaluation of the same mouse model linked the toxic accumulation of a BPNT1 substrate, such as PAP, directly or indirectly to changes in HIF-2α levels and iron homeostasis (146). Looking at the tissue distribution of BPNT1 and enriched levels of PAP, Hudson and York (147) reported a mismatch between the broad expression of BPNT1 and measurable PAP accumulation only in liver, duodenum, and kidneys. Interesting questions about how BPNT1 is regulated in a tissue-specific manner, whether redox regulation plays a role, and what involvement this PAP phosphatase has in further regulatory pathways remain to be answered. In worms at least, a genetic interaction of BPNT1 and the exoribonuclease XRN2 in polycistronic gene regulation has recently been reported (148).
Subcellular localization and transporters in sulfation pathways
The products of sulfation pathways comprise intracellular metabolites as well as extracellular proteins/peptides and carbohydrates. Therefore, the sulfotransferase enzymes have to be located in at least two compartments, the cytosol and the Golgi apparatus. Animal sulfate activation, however, occurs in the cytoplasm and the nucleus, as PAPS synthases shuttle between these compartments (21). Conserved nuclear localization and export signals govern this subcellular distribution, including a nuclear localization signal at the very N terminus of the APK domain as well as an atypical nuclear export signal at the APK dimer interface (21). Human soluble SULT enzymes are mainly cytoplasmic (3); however, they are sometimes also present in the nucleus (149). They receive the necessary PAPS co-factor from PAPS synthases via diffusion through the bulk medium; however, transient protein interactions might facilitate this process (44). Because of their very high expression in liver and some other tissues, cytoplasmic sulfotransferases may outnumber PAPS synthases and the co-factor itself (44, 150).
Therefore, interactions between sulfotransferases and PAPS synthases may be a mechanism to overcome the substrate limitation and add an additional level of control (44). The many Golgi sulfotransferases rely on the import of PAPS by the human PAPS transporters PAPST1 and PAPST2, also referred to as SLC35B2 and SLC35B3, respectively (151, 152). Both transporters belong to the group of nucleotide-sugar transporters but are specific for PAPS and share only 24% amino acid identity with each other (152). The PAPST1 homologue from Drosophila is essential for viability of the flies (152). In humans, the two transporters are expressed in different tissues and may impact different subsets of sulfation pathways (153). A complementary mechanism to having enzymes in multiple compartments is that the substrates and/or sulfated products themselves can traffic around the cell. Many low-molecular-weight compounds such as steroids are believed to be membrane-permeable. Notably, however, a recent study challenges the dogma of freely membrane-permeable steroids (and maybe also other smaller compounds): Okamoto et al. (154) have reported that in Drosophila, steroid hormones require a protein transporter to pass through cellular membranes. Moreover, once steroids become sulfated, they are trapped within the cell (5). Release into the circulation and uptake into other cells depend on organic anion transporters, the OATPs (155). These individual transporters thus represent an additional layer of regulation (156). Plants also require transporters for the function of sulfation pathways. Although all the PAPS-dependent sulfotransferases are located outside of plastids, the majority of the APS kinase activity is located within the chloroplast (33). Hence, plant cytosolic sulfation pathways are dependent on export of PAPS from plastids, and those in the Golgi additionally on import of cytosolic PAPS. The first plant PAPS transporter was identified through co-expression with genes for glucosinolate synthesis and transport assays in liposomes (36) and belongs to the ADP/ATP carriers of the mitochondrial carrier family. AtPAPST1 also transports PAP, which has to be imported to plastids for degradation by SAL1; the transporter therefore most probably serves as a PAP/PAPS antiporter (36). The loss-of-function mutant papst1 accumulates desulfo-glucosinolate precursors and shows decreased glucosinolate levels, similar to apk1 apk2 but to a lesser extent, suggesting the existence of a second plastidic PAPS transporter. Indeed, AtPAPST2 was recently identified in Arabidopsis as a transporter located dually in membranes of chloroplasts and mitochondria (157). The AtPAPST2 gene is not co-expressed with glucosinolate genes, and its loss had only a minor effect on glucosinolate accumulation (157). Localization links AtPAPST2 to SAL1, which is also present in plastids and mitochondria. Thus, it seems that AtPAPST1 has a major role in exporting PAPS from the chloroplast to the cytosol for sulfation reactions, and AtPAPST2 in importing PAP into the organelles for degradation by SAL1 (157). It also seems that the two transporters, AtPAPST1 and AtPAPST2, are not sufficient to explain all phenotypes connected to movement of PAPS and PAP between the cytosol and the organelles, particularly the accumulation of glucosinolates and their desulfo-precursors. This metabolic phenotype can be expected to be found in mutants of the additional transporter gene(s) and should enable their identification.
Natural genetic variation
The enzymes connected to sulfation pathways show a large variation between the different lineages and taxa. Many of them are found in several isoforms, further expanding their variation. However, the individual gene/enzyme isoforms also show variability within a single species, in different accessions and populations, or even in individuals. Rare genetic mutations have been extremely informative in the study of many components of the sulfation pathways (16, 45, 158). Because of vastly increased sequencing capacities, such genetic variation is now studied on a population scale, both in plants and in humans.
Human genetic variation and clinical outcomes
Genetic defects in the gene for human PAPSS1 have not been reported so far. Gene defects in human PAPSS2, however, have been known to cause various forms of bone and cartilage malformation, due to an under-sulfation of the extracellular matrix (45, 158). A steroid sulfation defect was reported for the first time in a girl with the compound-heterozygous mutations T48R/R329* in PAPSS2 (159). A subsequent study of two brothers carrying the compound-heterozygous mutations G270D and the frameshift mutation W462Cfs*3 in PAPSS2, the latter resulting in an early termination codon, confirmed disrupted sulfation of DHEA, the most abundant steroid in the human circulation (5), and increased androgen activation (45). These studies of individual patients (45, 159) established PAPSS2 as the main sulfate-activating complex supporting abundant DHEA sulfation by the sulfotransferase SULT2A1. Why the other PAPS synthase, PAPSS1, is not able to compensate for PAPSS2 loss was a longstanding question in the field. A recently reported specific protein interaction between PAPSS2 and SULT2A1 may be an explanation for this directionality in steroid sulfation pathways (44). Oostdijk et al. (45) list 43 individuals with different kinds of PAPSS2 mutations. Importantly, a clinical phenotype of the heterozygous carriers of major loss-of-function PAPSS2 alleles is known. The two mothers with WT/R329* and WT/W462Cfs*3 reported clinical features consistent with a phenotype of polycystic ovary syndrome, specifically chronic anovulation requiring ovulation induction (159). These alleles are far more common in the general population than the compound-heterozygous cases and may contribute significantly to the patient cohort with reduced sulfation capacity and associated health risks (160). Mutations in sulfotransferase genes additionally underlie a number of clinical conditions. Discussing genetic findings for all 52 human sulfotransferase genes is out of the scope of this review, but a few examples illustrate our understanding thus far. In one case, mutations in the Golgi-localized carbohydrate sulfotransferase 11 (CHST11) were shown to result in a reduction in chondroitin sulfation and thus in defects in cartilage formation and limb malformations (161). Mutations in the X-localized gene for the steroid sulfatase STS cause a skin condition known as X-linked ichthyosis (5), due to a build-up of cholesterol sulfate in the stratum corneum of the skin and visible scaling. Androgen metabolism and steroid secretion were studied recently in a cohort of male X-linked ichthyosis patients before and after puberty (162). Circulating DHEAS was increased in these patients, whereas serum DHEA and testosterone were decreased.
Interestingly, a prepubertal surge in the serum DHEA to DHEAS ratio could be seen in healthy controls that was absent in patients with X-linked ichthyosis, indicative of physiologically up-regulated STS activity before puberty (162). Inactivating mutations in SUMF1 cause multiple sulfatase deficiency (MSD), a rare and fatal autosomal recessive disorder with a rather complex phenotype, characterized by absent activity of all sulfatase enzymes (5). To date, there are about 30 mutations of the SUMF1 gene reported in patients with MSD. Clear genotype-phenotype correlations have been observed, linked to the residual activity of SUMF1 (163) and protein stability (164), leading to manifestations with severe neonatal, late infantile, or rarer mild juvenile forms of MSD (165, 166).
Variation in sulfation pathways in plants
In plants, the origin and evolution of sulfation pathways are intriguing questions. Zhao et al. (167) compared the evolution of genes connected to retrograde signaling in the green lineage, including sulfation pathways because of PAP. Interestingly, although TPST genes are present in all basal plants and in green and streptophyte algae, SOTs are present only in gymnosperms and angiosperms, with the exception of some but not all chlorophytes (167). Therefore, maybe the hunt for the essential sulfated compound should concentrate on the moss Physcomitrella patens or the liverwort Marchantia polymorpha, which do not possess SOTs but only five or two TPST-like genes, respectively, and four or two APS kinases (167). Most of our knowledge on plant sulfation pathways is derived from Arabidopsis, not only because it is a model species but also because, as a member of the Brassicaceae, it produces large quantities of glucosinolates. The following question thus arises: what is the importance of sulfation pathways in plants that do not produce glucosinolates and do not have a large flux through the secondary assimilation pathway? However, some of these species have SOT families with more isoforms than Arabidopsis; for example, cotton and eucalyptus possess 45 SOTs (167), although not a single sulfated compound is known in these species. Comparison of the regulation of sulfation pathways and their coordination with primary sulfur metabolism between Arabidopsis and species without many sulfated compounds would thus be very informative. But similarly interesting are comparisons with species that produce other sulfated metabolites. For example, the species of the genus Flaveria, which are a model for the evolution of C4 photosynthesis (168), produce a variety of sulfated flavonoids (169). The first plant SOTs were characterized from Flaveria and were shown to be specific not only for a metabolite but also for the position of the hydroxyl to be sulfated (66). Sulfated flavonoids represent a large pool of sulfur in Flaveria. Indeed, in Flaveria pringlei 4.6% of sulfur can be found in quercetin 3-sulfate, 4-fold more than in GSH (170). The regulation of synthesis of sulfated flavonoids is completely unknown. In particular, comparison of the regulation by sulfur starvation between sulfated flavonoids and the glucosinolates would provide important insights into general and specific regulatory processes in the plant sulfated metabolome.
Arabidopsis ecotypes
Natural variation in plants has been extensively used to understand the genetic control of various traits (171).
What began with populations of two-parent recombinant lines genotyped at hundreds of DNA markers has progressed to extensive genome sequencing and high-throughput marker data for Arabidopsis ecotypes. Such data allow for the harnessing of variation within hundreds of genotypes with a depth of 100,000 markers. Some of these approaches have also been used to improve our knowledge of sulfation pathways (172, 173). At the forefront of these efforts was the dissection of variation in glucosinolates. Crop varieties and Arabidopsis ecotypes show an enormous variation in qualitative and quantitative glucosinolate composition. Many biosynthetic genes were initially mapped as quantitative trait loci (172). The variation in glucosinolates was shown through genome-wide association mapping to be linked with variation in susceptibility to herbivores (173). In this respect, the variation in substrate specificity of Arabidopsis desulfo-glucosinolate SOTs (83) is particularly interesting, because it may contribute to the variation in glucosinolates but may also be a consequence of the different glucosinolate profiles. The power of the natural variation approach was demonstrated by two studies showing the importance of the APR2 isoform of APS reductase for the control of sulfate and sulfur accumulation (174, 175). APR2 is responsible for ~75% of the total APS reductase activity in Arabidopsis. Several independent rare SNPs have been found that inactivate the enzyme and consequently reduce the flux through primary sulfate assimilation and lead to accumulation of the initial metabolite, sulfate (174, 175). One of the populations studied in these reports, the lines derived from a cross between the Arabidopsis accessions Bay-0 and Shahdara, helped to find a similar link between the ATPS1 isoform of ATP sulfurylase and sulfate accumulation (29). Although the coding sequences of ATPS1 from the two accessions are identical, the genes differ by an insertion/deletion in an intron that is associated with significant enhancement of transcript levels. A similar deletion is found in a range of accessions, where it is also linked with lower ATPS1 transcript levels and higher sulfate content (29). Also for ATPS1, an accession was found in which a nonsynonymous SNP led to inactivation of the enzyme (16). The availability of genome sequences of more than a thousand Arabidopsis accessions makes further studies of structure-function relationships possible (176).
Open questions
Sulfation pathway research at the systems level is, in essence, learning by analogy. The studies to understand sulfation pathways in plants and animals have come a long way in recent years, and repeatedly the findings in one model system have fostered studies and new findings in the other (177, 178). There are, however, still many questions waiting to be answered, and some of these are prompted by comparative pathway analysis.
Complete inventory of components
Despite the full genomic sequences of many plants and vertebrates, including humans, we cannot be sure that all components of sulfation pathways have been discovered. This is particularly true for plants, where clearly at least one gap exists; we still do not know how PAPS reaches the TPST located in the Golgi. In addition, PAP seems not to be degraded in the Golgi, as no SAL1 or gPAPP homologue was detected in this organelle. So how does PAP return to its signaling role after PAPS is consumed in the Golgi?
Also, the glucosinolate data from studies of Arabidopsis papst1 and papst2 mutants (36, 159) indicate that there might be another PAPS transporter in plant chloroplasts. Another question is peptide/protein sulfation. The number of plant sulfated peptides discovered is growing, as is the knowledge of their importance in cellular signaling (104, 179). As the Arabidopsis tpst knockout results only in a mild phenotype, are there other isoforms of this protein, similar to the situation in humans? If so, where would they be localized? Finding new sulfotransferases is entirely possible for both plants and humans. AtTPST was discovered relatively recently, as it did not show high sequence similarity with other plant SOTs or with the TPSTs from humans (69). Furthermore, gene numbers of vertebrate sulfotransferases vary considerably (5). Although the SULT2A1 gene has undergone a dramatic expansion in rodents, with eight genes in the mouse (71), it is the SULT1A gene that has undergone an expansion in primates, with four genes in humans (70) and with a gene-copy polymorphism between individuals (180). Moreover, the number of known Arabidopsis SOTs increased from 18 to 21 between 2004 and 2014 (4, 181). In addition, no sulfatase gene has yet been identified in plants, and the catabolism of sulfated metabolites other than glucosinolates is not known. Therefore, there may be unknown enzymes that recycle the sulfate from such molecules by hydrolysis or another mechanism, or plant SOTs may catalyze the reverse reaction. Generating a complete inventory of components of sulfation pathways will be as challenging and interesting as defining a minimal functioning sulfation pathway.
Transient protein interactions and functional gene fusions
Human sulfotransferase SULT2A1 interacts with PAPSS2, most likely to boost the sulfation of the androgen precursor DHEA (44). The actual function of this complex of a PAPS-producing and a PAPS-utilizing enzyme remains to be determined at the molecular level. With novel methodology for capturing weak and transient protein interactions (such as proximity ligation assays for in situ cross-linking (182)), it is possible that more such protein interactions within sulfation pathways will be discovered. Likely candidates are enzymes that are fused into a single polypeptide in some species but occur as separate proteins in others, such as plant APS kinase and ATPS (7). In primary sulfate assimilation, protein-protein interactions were indeed described for onion ATPS and APS reductase (183). Moreover, the functionality of known, but little understood, fusion proteins is of interest. If these fusions lead to improved catalysis, it would be rewarding to study the ATPS-APS kinase gene fusion with pyrophosphatase found in the microalgae (7), as this might increase the rate of PAPS synthesis and thereby boost biological sulfation pathways. Of particular interest is the ATPS-APS reductase fusion from the dinoflagellate H. triquetra (7). First, the protein is the only ATPS fusion with the cyanobacterial/chlorophyte form of ATPS; second, it may have a great influence on primary sulfate assimilation and the synthesis of cysteine, methionine, and other important compounds; and third, it may better outcompete APS kinase and so reduce the provision of activated sulfate for sulfation pathways.
Redox regulation of sulfation pathways
Regulation of sulfation pathways by protein redox mechanisms is intriguing, particularly in the context of whole-sulfur metabolism (34).
The reciprocal redox regulation of APS reductase and APS kinase in Arabidopsis (described above and in Refs. 40 and 41) might serve as a mechanism to control the partitioning of sulfur into plant primary and secondary metabolism (11). Combined with the redox regulation of SAL1 (134) and of γ-glutamylcysteine synthetase, the key enzyme of GSH synthesis in primary assimilation (184), there is mounting evidence suggesting important redox regulatory control of sulfur fluxes in plants. However, the importance of the redox regulation of APS kinase remains to be demonstrated in vivo, as does an exact quantification of the effects of cellular redox potential on the sulfur fluxes. The redox regulation of APS kinase and γ-glutamylcysteine synthetase is confined to plants and does not occur in related enzymes from cyanobacteria or proteobacteria (40, 185). Plant APS reductase and the related bacterial APS and PAPS reductases use the same mechanism with an active cysteine residue (10) and can therefore be expected to be affected by redox changes. The redox regulation of SAL1 is conserved between plants and yeast, despite the yeast enzyme not possessing the same redox-active Cys pair as the plant enzyme (134). In contrast, ATPS from the microalgae was shown to be redox-regulated (186), whereas no indication of such control has been described for plants. The questions of when the redox regulation of sulfur metabolism evolved, and whether this regulation is also conserved in Metazoa, remain open. Since Lansdon et al. (54) showed that the recombinant PAPSS1 protein is activated by incubation in DTT, it seems indeed that redox regulation may contribute to the control of the function of human PAPS synthases.
Function of promiscuous proteins versus function of sulfated metabolites
PAPS biosynthesis is essential for plants and animals, but why? The search for the specific essential metabolite(s) in plants or animals has not been successful so far. One reason for this is that there is only a weak correlation between the function of a promiscuous sulfotransferase and any one of its sulfated metabolites. The same holds true for PAP phosphatases. A phenotype obtained from a single-gene knockout of a SULT is not direct evidence for the biological function of one of its substrates. For example, the human estrogen sulfotransferase SULT1E1 is as efficient in DHEA sulfation as SULT2A1 (187). Two independent sulfation processes regulate the development of a predatory nematode (188), but no sulfated metabolite is yet known. Bruce et al. (189) reported an intracellular PAPS-dependent step during HIV infection. The same group later reported an involvement of the cytoplasmic sulfotransferase SULT1A1 (190), but again the actual sulfated metabolite that interacted with HIV infection is not yet known. In plants, the number of phenotypes caused by loss of SAL1 has been ascribed first to defects in inositol signaling and then to PAP (8, 132); however, a thorough analysis by complementing the mutants with enzymes of single activity has not been described. Furthermore, dissection of the amino acid residues important for reactions with the different substrates (inositol polyphosphates, PAP, and PAPS) has not been performed. Indeed, the importance of dephosphorylation of PAPS to APS has not been investigated. Another intriguing question concerning plant SAL1 is its mitochondrial localization (122), as mitochondria do not play a known role in sulfation pathways.
The function and substrate specificity of a number of plant SOTs are unknown. This corresponds to the discovery of a large number of unknown sulfur-containing metabolites in Arabidopsis (87). Identification of new sulfated metabolites and the mixing and matching with the SOT specificities will bring more clarity to the importance of plant sulfation pathways and allow the search for the essential sulfated metabolites. Thought-provokingly, what if in fact the sulfotransferase protein itself, or a by-product of sulfation pathways, is the actual essential agent? Human SULT4A1 is among the most conserved sulfotransferases within vertebrates (3). However, this protein lacks essential catalytic residues. Mouse studies suggest that SULT4A1 is not an actual sulfotransferase but a neuronal protein required for normal brain function (191). Similarly, at least some parts of sulfation pathways may just run to generate sufficient quantities of PAP for the associated signaling. Finally, activated sulfate in the form of PAPS could also be referred to as "sulfo-ATP." Analogous to sulfo-ATP, one may ask what ATP is good for. Therefore, perhaps the question of which single sulfated metabolite is essential is not relevant.
Potential applications
Studying sulfation pathways in humans has the potential to reveal novel biomarkers and mechanisms of human disease. The field has the promise to advance by deep-phenotyping bodily samples from patients with rare genetic mutations in sulfation pathway genes. However, even more progress can be expected from exploring the disease associations of individuals showing hypo-sulfation, such as the heterozygous PAPSS2 +/− carriers described above. A second field of possible application is the development of pharmacological interventions, including specific inhibitors, modulators, and stabilizers for sulfation pathway proteins. A stand-out sulfated metabolite is the sulfated carbohydrate heparin (192), which is known for its huge commercial market and the consistent attempts to make new and better heparins by understanding the control of its biosynthesis and sulfation (193). In addition, desulfation has a major impact on steroid-dependent cancer types (5). Hence, STS represents a central target for anti-cancer drug development. The STS inhibitor irosustat promisingly passed phase II clinical trials (5), and further drug development efforts are ongoing (194, 195). Another compound with commercial potential is APS, an intermediate in sulfation pathways. APS is used in pyrosequencing, where pyrophosphate, generated by incorporation of a dNTP into a DNA strand, is joined by ATP sulfurylase with APS to form ATP, which is then detected by luciferase (196, 197). Pyrosequencing is limited by the costs of reagents, including APS. APS synthesis may thus be optimized by the ATPS fusion proteins, either directly or through PAPS and nuclease-driven dephosphorylation (198), in turn reducing costs.
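The pyrosequencing chemistry just described can be summarized as a short reaction cascade (standard textbook reactions, written out here for orientation rather than taken from the cited papers):

\begin{align*}
(\text{DNA})_n + \text{dNTP} &\xrightarrow{\text{polymerase}} (\text{DNA})_{n+1} + \text{PP}_i\\
\text{APS} + \text{PP}_i &\xrightarrow{\text{ATP sulfurylase}} \text{ATP} + \text{SO}_4^{2-}\\
\text{ATP} + \text{luciferin} + \text{O}_2 &\xrightarrow{\text{luciferase}} \text{AMP} + \text{PP}_i + \text{CO}_2 + \text{oxyluciferin} + h\nu
\end{align*}

The light emitted in the final step reports each nucleotide incorporation, which is why APS consumption makes it a recurring reagent cost.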
Glucosinolates, the best known sulfated metabolites from plants, are important for plant immunity (32) and have well-characterized positive health properties for human consumption (81). Increasing plant glucosinolate content is a promising strategy both for improving crop resistance to pathogens and for improving nutritional quality. The latter has been demonstrated by increasing the synthesis of glucoraphanin in broccoli and improving its anti-carcinogenic property (199). Recent progress in synthetic biology has engineered glucosinolate synthesis into non-Brassicaceae plants normally unable to produce such metabolites (200), potentially increasing the range of crops amenable to enhancement of nutritional quality. Interestingly, the production of glucosinolates was initially limited by the rate of PAPS synthesis (201), underlining the importance of basic research in sulfation pathways.
How to answer all these questions
In addition to investigations at the systems level, newly developed technologies will certainly provide the cutting edge in sulfation pathway research. On the one hand, these are of an analytical nature: advanced methods to detect, characterize, and quantitate various sulfo-conjugates such as peptides (99), steroids (202), or plant secondary metabolites (170). On the other hand, they also represent novel chemo-synthetic approaches to obtain good-purity sulfated nucleotides (203), steroid reference material (204), or sugars (205). Comparative approaches between kingdoms at the genomics or synthetic biology levels will also bring new insights. The so-far missing approach, which has great potential to uncover new and unexpected regulatory mechanisms, is modeling, aimed at structures and metabolite docking as well as flux control analysis. The latter will have to consider the sulfation pathways within a full metabolic reconstruction, as limitation of flux analysis to sulfur fluxes does not provide unique solutions (28). Such studies will be highly informative for understanding the links between sulfation pathways and the regulation of metabolism of the carbon acceptors.
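As a hedged sketch of the kind of flux modeling proposed here — a toy network with hypothetical reactions and arbitrary capacity bounds, not a curated reconstruction — a minimal flux-balance calculation could look like this:

import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix over two internal metabolites (APS, PAPS)
# for four hypothetical reactions (illustrative assumptions only):
#   v1: sulfate activation        -> APS
#   v2: APS reductase             APS -> (primary assimilation)
#   v3: APS kinase                APS -> PAPS
#   v4: sulfotransferase          PAPS -> (sulfated product + PAP)
S = np.array([
    [1.0, -1.0, -1.0,  0.0],   # APS mass balance
    [0.0,  0.0,  1.0, -1.0],   # PAPS mass balance
])

# Maximize flux through sulfation (v4) at steady state S @ v = 0;
# linprog minimizes, so use -v4 as the objective.
c = np.array([0.0, 0.0, 0.0, -1.0])
bounds = [(0.0, 10.0)] * 4          # arbitrary capacity bounds
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # one optimal flux vector; such solutions are generally non-unique

The final comment mirrors the point made above: restricting the analysis to sulfur fluxes alone leaves the optimum underdetermined, which is why embedding in a full reconstruction is needed.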
2019-07-05T13:15:01.918Z
2019-07-02T00:00:00.000
{ "year": 2019, "sha1": "1bde7d24981d50da47bcb6040028c4919b2ef559", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/294/33/12293.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "0c852217f3abf85156af0d61c502a20b88e862b8", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
246430518
pes2o/s2orc
v3-fos-license
On the analytic structure of the $H^\infty$ maximal ideal space
We characterize the algebra $H^\infty \circ L_{m}$, where $m$ is a point of the maximal ideal space of $H^\infty$ with nontrivial Gleason part $P(m)$ and $L_{m} : \mathbb{D}\to P(m)$ is the coordinate Hoffman map. In particular, it is shown that for any continuous function $f: P(m) \to \mathbb{C}$ with $f\circ L_{m} \in H^\infty$ there exists $F\in H^\infty$ such that $F|_{P(m)} = f$.
Introduction - Preliminaries
The Gelfand transform represents H^∞ as an algebra of continuous functions on its maximal ideal space M (provided with the weak star topology) via the formula f̂(ϕ) def= ϕ(f), where f ∈ H^∞ and ϕ ∈ M. We will not write the hat of f unless the contrary is stated. Beginning with a seminal paper of Hoffman [6], many papers have studied the analytic behavior of H^∞ on parts of M other than the disk (see [1], [3], [4], [7] and [8]). While the whole picture seems to be unreachable, the present paper intends to throw some light onto this never-ending program. A more precise statement of our result (Thm. 2.2 and Coro. 2.3) will require developing some notation and machinery. The pseudohyperbolic metric for x, y ∈ M is defined by
ρ(x, y) = sup{ |f(y)| : f ∈ H^∞, ∥f∥ = 1 and f(x) = 0 },
which for z, ω ∈ D reduces to ρ(z, ω) = |z − ω| / |1 − ω̄z|. The Gleason part of m ∈ M is P(m) def= {x ∈ M : ρ(m, x) < 1}. Clearly D is a Gleason part. If z₀ ∈ D, we can think of the analytic function
L_{z₀}(z) = (z + z₀) / (1 + z̄₀ z),  z ∈ D,
as mapping D into M. In [6] Hoffman proved that if m ∈ M and (z_α) is a net in D converging to m, then the net L_{z_α} tends in the space M^D (i.e., pointwise) to some analytic map L_m from D onto P(m) such that L_m(0) = m. Here 'analytic' means that f ∘ L_m ∈ H^∞ for every f ∈ H^∞. The map L_m does not depend on the particular choice of the net (z_α) that tends to m. A Blaschke product b with zero sequence {z_n} satisfying
inf_n ∏_{k≠n} ρ(z_k, z_n) > 0
is called an interpolating Blaschke product, and {z_n} is called an interpolating sequence. Let G denote the set of points in M that lie in the closure of some interpolating sequence. If m ∈ M \ G then P(m) = {m}, and hence L_m is a constant map. If m ∈ G then L_m is one-to-one, meaning that P(m) is an analytic disk in M. Hoffman also realized that even when P(m) is a disk, there are cases in which L_m is a homeomorphism and cases in which it is not. By an abstract version of Schwarz's lemma [9, p. 162], any connected portion of M provided with a nontrivial analytic structure must be contained in some P(m) with m ∈ G. In order to understand the analytic structure of M it is then fundamental to study the Hoffman algebras H^∞ ∘ L_m, where m ∈ G. In [8] it is proved that H^∞ ∘ L_m is a closed subalgebra of H^∞ and that they coincide when P(m) is a homeomorphic disk (i.e., L_m is a homeomorphism). Particular versions of the last result were obtained in [6, pp. 106-107] and [4, Coro. 3.3]. On the other hand, when P(m) is not a homeomorphic disk it is well known that the identity function is not in H^∞ ∘ L_m, meaning that this algebra is properly contained in H^∞. But what is it? We provide an answer to this question by giving several characterizations of H^∞ ∘ L_m, the most natural being
H^∞ ∘ L_m = { f ∘ L_m : f ∈ C(P(m), C) } ∩ H^∞,
where C(P(m), C) is the algebra of continuous maps from P(m) into C. The inclusion ⊆ is trivial, but the proof of the other inclusion turned out to be very difficult. We can look at the equality as an extension result; it says that for every continuous function f on P(m) such that f ∘ L_m ∈ H^∞ there is an extension F ∈ H^∞ of f (i.e., F|_{P(m)} = f). By the above comments, the result is new only for non-homeomorphic disks, but the argument here works in general. However, the technical complications introduced by considering non-homeomorphic disks make the proof much more difficult and longer than in [8].
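A routine computation, recorded here because the identity $L_{z_0}^{-1} = L_{-z_0}$ is used repeatedly later (this is a standard Möbius-map fact, not an additional hypothesis):
$$L_{-z_0}\big(L_{z_0}(z)\big) = \frac{\dfrac{z+z_0}{1+\overline{z_0}z} - z_0}{\,1 - \overline{z_0}\,\dfrac{z+z_0}{1+\overline{z_0}z}\,} = \frac{z + z_0 - z_0 - |z_0|^2 z}{1 + \overline{z_0}z - \overline{z_0}z - |z_0|^2} = \frac{(1-|z_0|^2)\,z}{1-|z_0|^2} = z,$$
so each $L_{z_0}$ is an automorphism of $\mathbb{D}$ with inverse $L_{-z_0}$.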
Algebraic properties of Hoffman maps
Let τ : D → M be an analytic function. As before, this means that f ∘ τ ∈ H^∞ for all f ∈ H^∞. We can extend τ to a continuous map τ* : M → M by the formula τ*(ϕ)(f) def= ϕ(f ∘ τ), where ϕ ∈ M. Two particular cases will be of interest here. If τ is an analytic self-map of D then we can think of τ as mapping D into M and consider its extension τ*. We can do this with any automorphism of D, which therefore induces a homeomorphism from M onto M. In particular, if λ ∈ ∂D, the rotation z ↦ λz (z ∈ D) extends to M in this way. From now on, for ϕ ∈ M we simply write λϕ for this 'λ-rotation' in M. We point out that even when each such rotation is a homeomorphism, the action of the group ∂D on M is not continuous [5, pp. 164-165]. The other relevant case for the paper is L_m (for m ∈ G). The extension L*_m maps M onto P(m), and P(m) is a homeomorphic disk if and only if L*_m is one-to-one [8, Sect. 3]. We will also denote this extension by L_m, where the meaning will be clear from the context. The inclusion of the disk algebra in H^∞ induces a natural projection π : M → D̄. The fiber of a point ω ∈ ∂D is π^{-1}(ω) ⊂ M. Let x, y ∈ M and let (z_α) be a net in D so that y = lim z_α. We claim that the limit of (1 + π(x)̄ z_α)/(1 + π(x) z̄_α) always exists (in ∂D) and is independent of the net (z_α). A rigorous statement would say that the above limit exists when π(z_α) is in place of z_α; but since π identifies D with π(D), no harm is done with the appropriate mind adjustment.
Definition. Let λ : M × M → ∂D be the function
λ(x, y) = lim_α (1 + π(x)̄ z_α)/(1 + π(x) z̄_α),
where (z_α) is any net in D that tends to y. Observe that if x, y ∈ M do not satisfy the extreme conditions |π(x)| = 1 and π(x) = −π(y), then
λ(x, y) = (1 + π(x)̄ π(y))/(1 + π(x) π(y)̄),
and this expression reduces to π(x)̄ π(y) when |π(x)| = 1 = |π(y)|. We will use indistinctly the notations λ(x, y) or λ_{x,y} to denote this function. A word of warning: the function λ was introduced by Budde in [1] for the purpose of proving the same result given in Proposition 1.1 below. However, the value of λ(x, y) stated in [1] when |π(x)| = |π(y)| = 1 is π(x)̄ π(y), therefore overlooking the pathological behavior of λ when π(x) = −π(y). Fortunately, all the proofs and results in [1] remain valid by only adjusting λ to its right value.
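Two elementary sanity checks on λ are worth recording here. First, each term of the defining quotient is unimodular, because its numerator is the conjugate of its denominator, so the limit lies in ∂D. Second, on the boundary the quotient reduces as stated: writing a = π(x) and b = π(y) with |a| = |b| = 1,
$$a\bar b\,(1+\bar a b) = a\bar b + |a|^2|b|^2 = 1 + a\bar b \;\Longrightarrow\; \frac{1+\bar a b}{1+a\bar b} = \frac{1}{a\bar b} = \bar a\,b = \overline{\pi(x)}\,\pi(y).$$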
In [4, Lemma 1.8] Gorkin, Lingenberg and Mortini proved that if m ∈ G and b is an interpolating Blaschke product then b ∘ L_m = Bf, where B is an interpolating Blaschke product and f ∈ (H^∞)^{-1}. This fact will be used frequently along the paper. We will also need a result of Budde [1] stating that if ϕ ∈ M has trivial Gleason part then L_m(ϕ) also has trivial Gleason part. In symbols, L_m^{-1}(G) ⊂ G.
Proposition 1.1 Let m, y ∈ M. Then
L_m(L_y(z)) = L_{L_m(y)}(λ(m, y) z)  for every z ∈ D.   (1.1)
Proof. Suppose first that m or y (or both) is not in G. Then one of the maps L_m or L_y is constant and the left member of (1.1) is the constant map L_m(y). If y ∉ G then Budde's result asserts that L_m(y) ∉ G, which is trivially the case if m ∉ G, too. Therefore also the right member of (1.1) is the constant map L_m(y). Suppose now that m, y ∈ G and let ω, ξ and z be in D. An elementary calculation shows that
L_ω(L_ξ(z)) = L_{L_ω(ξ)}(λ(ω, ξ) z).   (1.2)
Replace ω in (1.2) by a net (ω_α) in D tending to m. Then the first member of (1.2) tends to L_m(L_ξ(z)) and L_{ω_α}(ξ) → L_m(ξ). It is clear that the constants λ_α = λ(ω_α, ξ) tend to λ(m, ξ). Using that for x ∈ G the map L_x is an isometry on D with respect to ρ [6, p. 105], together with the lower semicontinuity of ρ, we get ρ(L_m(L_ξ(z)), L_{L_m(ξ)}(λ(m, ξ) z)) = 0. That is, L_m(L_ξ(z)) = L_{L_m(ξ)}(λ(m, ξ)z) for every ξ, z ∈ D. Now replace ξ by a net (ξ_α) in D tending to y. By the continuity of L_m on M, L_m(L_{ξ_α}(z)) → L_m(L_y(z)) and L_m(ξ_α) → L_m(y). Since the map x ↦ L_x is continuous from M into M^D, we also have L_{L_m(ξ_α)} → L_{L_m(y)} pointwise on D. The proposition follows replacing z by λ(m, y)z.
Corollary 1.2 (Budde) Let m ∈ G and ξ ∈ M be such that L_m(ξ) ∈ P(m). Then L_m maps P(ξ) onto P(m) in a one-to-one fashion.
Lemma 1.3 Let y ∈ M and γ ∈ ∂D. Then L_{γy}(z) = γ L_y(γ̄ z) for every z ∈ D; in particular, L_{γy}(0) = γ L_y(0).
Proof. Let f ∈ H^∞ and {z_α} be a net in D tending to y. Thus γz_α → γy, and
f ∘ L_{γy}(z) = lim_α f(L_{γ z_α}(z)) = lim_α f(γ L_{z_α}(γ̄ z)) = f(γ L_y(γ̄ z))
for every z ∈ D, as desired.
A characterization of Hoffman algebras
Definition. Let m ∈ G. The m-saturation of a set E ⊂ M is defined as L_m^{-1}(L_m(E)), and E will be called m-saturated if it coincides with its m-saturation. We also write L_m(y) for the m-saturation L_m^{-1}(L_m(y)) of a single point y when no confusion can arise.
It is well known that if f is an interpolating Blaschke product then Z(f) is the closure of Z_D(f). This immediately implies that if m ∈ G and y ∈ M are different points then there is an interpolating Blaschke product f such that f(m) = 0 ≠ f(y). As a consequence we obtain that if m ∈ G then L_m takes the same value on the points L_x(λ_{m,x}ω) and ξ, which belong to P(ξ); the 'one-to-one' part of Corollary 1.2 follows. For m ∈ G the theorem and Lemma 2.1 provide a description of the algebra H^∞ ∘ L_m. Only the sufficiency needs to be proved. We devote the next two sections to prove Theorem 2.2. For the sake of clarity it is convenient to rescue the hat for the Gelfand transform in the next corollary (Corollary 2.3). Then the following conditions are equivalent: (a) h is continuous on P(m) with the topology induced by M,
Proof. We assume first that (a) holds. If (b) fails then there are x ∈ L_m(0) and points that contradict (a). Condition (b) is just a rephrasing of (c). Now suppose that (d) holds. Since F ∈ H^∞, F̂ is continuous on M, and consequently F̂|_{P(m)} = h is continuous on P(m).
Technical lemmas
The hyperbolic metric for z, ω ∈ D is
h(z, ω) = log( (1 + ρ(z, ω)) / (1 − ρ(z, ω)) ).
So, h and ρ are increasing functions of each other, and h(z, ω) tends to infinity if and only if ρ(z, ω) tends to 1. We will use alternatively one metric or the other according to convenience. The hyperbolic ball of center z ∈ D and radius r > 0 will be denoted by ∆(z, r).
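A one-line check of the relation between the two metrics, assuming the standard identification written above:
$$\frac{dh}{d\rho} = \frac{d}{d\rho}\log\frac{1+\rho}{1-\rho} = \frac{2}{1-\rho^2} > 0, \qquad \rho = \tanh\frac{h}{2}, \qquad h \to \infty \iff \rho \to 1^- .$$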
Our next lemma is a trivial consequence of Lemma 3.2; we state it for convenience.
Proof. Since z_α → m in M, L_{z_α} → L_m in M^D, and by the continuity of F on M, F(L_{z_α}(z)) → F(L_m(z)) = f(z) for every z ∈ D. We will see that F(L_{z_α}(z)) → f(z) uniformly on |z| ≤ r for any 0 < r < 1. In fact, otherwise there are ε > 0, a subnet (z_β) of (z_α) and points ω_β with
|ω_β| ≤ r and |F(L_{z_β}(ω_β)) − f(ω_β)| ≥ ε.   (3.1)
Taking a subnet of (z_β) if necessary, we can also assume that ω_β → ω, with |ω| ≤ r. Since f is continuous at ω and F(L_{z_β}(ω)) → f(ω), there is β₀ such that for every β ≥ β₀, |f(ω_β) − f(ω)| < ε/3 and |F(L_{z_β}(ω)) − f(ω)| < ε/3. These inequalities together with (3.1) give |F(L_{z_β}(ω_β)) − F(L_{z_β}(ω))| > ε/3 for every β ≥ β₀. This will contradict the continuity of F if we prove that L_{z_β}(ω_β) tends to L_m(ω). Let (L_{z_γ}(ω_γ)) be an arbitrary convergent subnet of (L_{z_β}(ω_β)), say to y ∈ M. Then by the lower semicontinuity of ρ,
ρ(y, L_m(ω)) ≤ lim ρ(L_{z_γ}(ω_γ), L_{z_γ}(ω)) = lim ρ(ω_γ, ω) = 0,
meaning that y = L_m(ω). So, every convergent subnet of (L_{z_β}(ω_β)) tends to L_m(ω), and consequently the whole net tends to L_m(ω).
Since L_ω : D → D is an onto isometry with respect to ρ, there exists v ∈ D such that ξ = L_ω(v) and |v| = ρ(ξ, ω). The elementary formula (for π(m) = 1) with λ = λ(m, ω) gives an estimate whose last inequality holds because |v| < 1/2. Thus, (3.2) and the isometric property of L_ξ give a corresponding bound. Put z′ = λ_{m,ω} z; then (1.1) yields a further inequality which, added to (3.3), gives the required estimate, as promised.
Lemma 3.8 Let m ∈ G and f ∈ H^∞ be such that f ∘ L_x(λ_{m,x} z) = f(z) for all x ∈ L_m(0) and z ∈ D. For ε > 0 and 0 < r < 1 consider the set U associated with these parameters. Then U is a neighborhood of L_m(0).
Proof. If the lemma fails then there is x ∈ L_m(0) in the closure of V = M \ U. Since V is open and D is dense in M, a simple topological argument shows that the closure of V coincides with the closure of V ∩ D. Therefore x lies in the closure of V ∩ D. Let (ω_α) be a net in V ∩ D that tends to x, and write z_α def= z_{ω_α}. By taking a suitable subnet we can also assume that z_α → z₀, where |z₀| ≤ r. We can assume ∥f∥_∞ ≤ 1. By the Schwarz-Pick inequality [2, p. 2], the relevant difference is controlled by a quantity which tends to zero. Since L_{ω_α} → L_x, the last inequality gives a contradiction.
Proof of Theorem 2.2
Given m ∈ G \ D and f ∈ H^∞ that satisfy the hypotheses of the theorem, we are going to construct a function F ∈ H^∞ such that F ∘ L_m = f. We can assume without loss of generality that π(m) = 1 and ∥f∥ = 1. Let {σ_k} ⊂ (0, 1) be a sequence satisfying ∏_{k≥1} σ_k > 0 and let s(σ_k) be the associated parameters given by Lemma 3.3. Take s_k > s(σ_k) tending increasingly to ∞, and put
r_k def= 4(2^k s₁ + 2^{k−1} s₂ + · · · + 2 s_k).
Given an arbitrary interpolating sequence S with m ∈ S̄, and {ε_k} ⊂ (0, 1) a decreasing sequence that tends to 0, we will construct a decreasing chain of subsequences S_k = {z_{k,n} : n ≥ 1}, S ⊃ S₁ ⊃ S₂ ⊃ · · ·, satisfying for every k ≥ 1 conditions (1)-(7); among these,
(4) h(z_{k,n₁}, z_{k,n₂}) > r_k for n₁ ≠ n₂,
and (5) involves the interpolating Blaschke product b_k with zero sequence S_k.
The first construction.
The argument will be inductive. By Lemmas 3.8 and 3.6, for every k ≥ 1 there is an open m-saturated neighborhood W_k ⊂ M of L_m(0) such that W_{k+1} ⊂ W_k and such that (4.1) holds for all ω ∈ W_k.
Step 1. By Lemma 3.7 there is S′₁ ⊂ S such that (1) holds and S̄′₁ ∩ P(m) ⊂ L_m(W₁). Since W₁ is m-saturated, T′₁ def= L_m^{-1}(S̄′₁ ∩ P(m)) ⊂ W₁, and (4.1) tells us that S′₁ satisfies (7); then so does any subsequence of S′₁ that contains m in its closure. By Lemmas 3.1 and 3.3 we can assume that δ(S′₁) is so close to 1 that (4) and (5) hold. Furthermore, since π(m) = 1 we can easily achieve conditions (2) and (3) by taking as S₁ the subsequence of S′₁ whose elements are contained in a sufficiently small Euclidean ball centered at 1. Condition (6) only makes sense for k = l = 1. If z_{1,n}, z_{1,p} ∈ S₁ are such that h(z_{1,n}, z_{1,p}) < r₁ then (4) implies that z_{1,n} = z_{1,p}. Therefore L_{−z_{1,p}}(z_{1,n}) = 0 ∈ T₁ ∩ D, because L_m(0) = m ∈ S̄₁ ∩ P(m).
Step l. Let l ≥ 2 and suppose that we already have S ⊃ S₁ ⊃ · · · ⊃ S_{l−1} satisfying (1)-(7). By Lemma 3.7 there exists S′_l ⊂ S_{l−1} such that (1) holds and S̄′_l ∩ P(m) ⊂ L_m(W_l). Since W_l is m-saturated, T′_l def= L_m^{-1}(S̄′_l ∩ P(m)) ⊂ L_m^{-1}(L_m(W_l)) = W_l. By (4.1) the sequence S′_l satisfies (7), and the same holds for any subsequence of S′_l having m in its closure. As in the case l = 1, by Lemmas 3.1 and 3.3 we can assume that S′_l satisfies (4) and (5), and by taking the points of S′_l that are close enough to 1 we can also assume that S′_l satisfies (2) and (3). Clearly, any subsequence S_l of S′_l such that m ∈ S̄_l will satisfy all the above properties.
Therefore we will be done if we can pick the sequence S_l so that it also satisfies (6). Let k ≤ l − 1 and let η_k > 0 be chosen later. By [4, Lemma 1.8] there are an interpolating Blaschke product B_k and g_k ∈ (H^∞)^{-1} associated with b_k (cf. (4.2)). Since m ∈ S̄′_l, by the remark following Lemma 3.4 there is a subsequence Λ_k ⊂ S′_l such that m ∈ Λ̄_k. If ν ∈ Λ_k and z_{k,n} ∈ S_k satisfy h(z_{k,n}, ν) < r_l, then h(L_{−ν}(z_{k,n}), 0) < r_l. Applying (4.2) to z = L_{−ν}(z_{k,n}) — using here that L_{−ν} = L_ν^{-1} and b_k(z_{k,n}) = 0 — we get |B_k(L_{−ν}(z_{k,n}))| < η_k ∥g_k^{-1}∥_∞. Since B_k is interpolating, Lemma 3.2 implies that for small values of η_k the point L_{−ν}(z_{k,n}) must be close to the zero sequence of B_k in the ρ-metric. That is, choosing η_k small enough we obtain (4.3) for every ν ∈ Λ_k such that h(ν, z_{k,n}) < r_l for some z_{k,n}. Doing this process for k = 1, . . . , l − 1 we obtain the respective subsequences Λ_k ⊂ S′_l satisfying (4.3) and such that m ∈ Λ̄_k for k = 1, . . . , l − 1. Since disjoint subsequences of an interpolating sequence have disjoint closures, m is in the closure of the resulting subsequence S_l, and by (4.3) S_l satisfies (6) for k = 1, . . . , l − 1. Finally, the same argument used in Step 1 shows that S_l satisfies (6) also for k = l.
The second construction.
Observe that condition (4) implies that, for a fixed value of k, ∆(z_{k,n₁}, s_k) ∩ ∆(z_{k,n₂}, s_k) = ∅ if n₁ ≠ n₂. Now we define recursively some sets made of unions of the balls ∆(z_{k,n}, s_k), which we call 'swarms'. For n ≥ 1 the swarm of height 1 and center z_{1,n} is defined as E_{1,n} = ∆(z_{1,n}, s₁). Once we have the swarms of height j = 1, . . . , k − 1, we define the swarm of height k and center z_{k,n} (for n ≥ 1) as
E_{k,n} = ∆(z_{k,n}, s_k) ∪ ⋃ {E_{j,p} : j ≤ k − 1, p ≥ 1 and E_{j,p} ∩ ∆(z_{k,n}, s_k) ≠ ∅}.
We write diam_h E = sup{h(x, y) : x, y ∈ E} for the hyperbolic diameter of a set E ⊂ D. The next three properties will follow by induction:
(I) diam_h E_{k,n} ≤ 2^k s₁ + 2^{k−1} s₂ + · · · + 2 s_k;
(II) E_{k,n₁} ∩ E_{k,n₂} = ∅ if n₁ ≠ n₂; and
(III) each swarm of height j ≤ k − 1 meets (and then it is contained in) at most one swarm of height k.
Proof of (I). This is trivial for k = 1. For k ≥ 2 the bound follows by the definition of swarms and the inductive hypothesis; the estimate is sketched below, after the remarks following the proof of (II) and (III).
Proof of (II) and (III). Suppose that E_{j,p} (with j ≤ k) meets E_{k,n₁} and E_{k,n₂}, where n₁ ≠ n₂. Then by (I)
h(z_{k,n₁}, z_{k,n₂}) ≤ diam_h E_{k,n₁} + diam_h E_{j,p} + diam_h E_{k,n₂} ≤ 3(2^k s₁ + 2^{k−1} s₂ + · · · + 2 s_k) < r_k,
which contradicts condition (4). When j = k this proves (II), and for j < k this proves (III), except for the statement between brackets. So, suppose that E_{j,p} meets E_{k,n}, where j ≤ k − 1. If E_{j,p} meets ∆(z_{k,n}, s_k) then E_{j,p} ⊂ E_{k,n} by definition. Otherwise there is some swarm E of height at most k − 1 such that E ⊂ E_{k,n} and E meets E_{j,p}. If height E ≥ j = height E_{j,p}, then by inductive hypothesis ((II) for the equality and (III) for the strict inequality) we have E_{j,p} ⊂ E. Hence, E_{j,p} is contained in E_{k,n}. Similarly, if height E < j then inductive hypothesis (III) implies that E ⊂ E_{j,p}. Therefore E_{j,p} meets ∆(z_{k,n}, s_k), and then E_{j,p} ⊂ E_{k,n} by definition.
Some remarks are in order. Condition (II) says that two swarms of the same height are either the same (with the same n) or they are disjoint, and condition (III) says that if two different swarms have non-void intersection, then the one of smaller height is contained in the other. Also, observe that by (II), E_{k,n} ∩ S_k = {z_{k,n}}.
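For the record, the estimate behind the proof of (I) can be written out as follows: any two points of E_{k,n} can be joined through ∆(z_{k,n}, s_k), passing through at most one attached swarm of height at most k − 1 on each side, so
$$\operatorname{diam}_h E_{k,n} \le 2\max_{j\le k-1,\;p}\operatorname{diam}_h E_{j,p} + \operatorname{diam}_h \Delta(z_{k,n}, s_k) \le 2\,(2^{k-1}s_1+\cdots+2s_{k-1}) + 2s_k = 2^k s_1 + 2^{k-1}s_2 + \cdots + 2 s_k .$$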
We will see that if l ≥ k then
E_{l,p} ⊂ {ω ∈ D : |ω − 1| ≤ ε_k} for every p ≥ 1.   (4.4)
In fact, suppose that for some l ≥ k and p ≥ 1 there is ω ∈ E_{l,p} with |ω − 1| > ε_k. Then, since {ε_j} is a decreasing sequence, (2) and (I) yield |ω − 1| ≤ ε_k, which is not possible. By (4.4) every strictly increasing chain of swarms E^{(1)} ⊂ E^{(2)} ⊂ . . . is finite, because if ω ∈ E^{(1)} then there is k such that |ω − 1| > ε_k (since ε_j tends to 0), and therefore E^{(1)} cannot lie in any swarm of height ≥ k. Roughly speaking, we could say that there is no swarm of infinite height. Consequently, every swarm is contained in a unique maximal swarm, and we denote by Ω the union of all the maximal swarms.
Choosing ε_k.
Define a function g ∈ H^∞(Ω) by g(ω) = f ∘ L^{-1}_{z_{l,p}}(ω) for ω ∈ E_{l,p}, with E_{l,p} a maximal swarm. The only requirements that we have imposed so far on the sequence {ε_k} are that it is contained in (0, 1) and decreases to zero. We claim that there is a choice of the sequence {ε_k} so that
lim_k g(L_{z_{k,n}}(z)) = f(z),   (4.5)
where the limit is uniform in n and on compact subsets of D. Fix 0 < r < 1 and let z ∈ D with |z| ≤ r < 1. Since lim_k s_k = ∞, for k big enough we have
|z| ≤ r < (e^{s_k} − 1)/(e^{s_k} + 1).   (4.6)
This means that z ∈ ∆(0, s_k). The point z_{k,n} is in some maximal swarm E_{l,p} with l ≥ k. Hence by (4.6), L_{z_{k,n}}(z) ∈ ∆(z_{k,n}, s_k) ⊂ E_{l,p} ⊂ Ω. Since g is defined on Ω, g(L_{z_{k,n}}(z)) makes sense, and
g(L_{z_{k,n}}(z)) = f ∘ L_{L_{−z_{l,p}}(z_{k,n})}(λ(−z_{l,p}, z_{k,n}) z),   (4.7)
where the last equality comes from the identity L^{-1}_{z_{l,p}} = L_{−z_{l,p}} and (1.1). A simple calculation shows that λ(−z_{l,p}, z_{k,n}) = λ(z_{k,n}, L_{−z_{l,p}}(z_{k,n})). So, if ξ def= L_{−z_{l,p}}(z_{k,n}) we can rewrite (4.7) as
g(L_{z_{k,n}}(z)) = f ∘ L_ξ(λ_{z_{k,n},ξ} z).   (4.8)
Since z_{k,n}, z_{l,p} ∈ E_{l,p}, by (I) h(z_{k,n}, z_{l,p}) ≤ diam_h E_{l,p} < r_l. Thus (6) implies that there is ω ∈ T_k such that ρ(ξ, ω) < ε_k. Since h(z, 0) ≤ s_k (by (4.6)) and ∥f∥_∞ = 1, successive applications of (4.8), (7) and the Schwarz-Pick inequality yield an estimate in terms of ρ(L_ξ(λ_{z_{k,n},ξ} z), L_ω(λ_{m,ω} z)), where
ρ(L_ξ(λ_{z_{k,n},ξ} z), L_ω(λ_{m,ω} z)) ≤ ρ(L_ξ(λ_{z_{k,n},ξ} z), L_ξ(λ_{m,ξ} z)) + ρ(L_ξ(λ_{m,ξ} z), L_ω(λ_{m,ω} z)) = ϱ₁ + ϱ₂.
Using the isometric property of L_ξ, a straightforward calculation bounds ϱ₁ and ϱ₂.
The construction of F.
We recall that b_k is a Blaschke product with zero sequence S_k. Since S is an interpolating sequence, a = inf{|b_k(z)| : z ∈ S \ S_k} > 0, and since m ∈ S̄_k, the set {x ∈ M : |b_k(x)| < a/2} is an open neighborhood of m. So, if (z_α) is a net in S converging to m then there is α(k) such that the tail (z_α)_{α≥α(k)} is completely contained in {z ∈ D : |b_k(z)| < a/2} ∩ S = S_k. Therefore (4.5) implies that lim_α g(L_{z_α}(z)) = f(z). That is, V = {z ∈ D : |b(z)| < β} ⊂ Ω. In addition, since each b_k vanishes on m (because m ∈ S̄_k for every k ≥ 1), b vanishes on m with infinite multiplicity. So, b ≡ 0 on P(m). The proof above shows that if y ∈ ⋂_{k≥1} S̄_k is any point and F ∈ H^∞ is the function constructed in the last step, then F ∘ L_y(z) = f(z). This does not mean that L_y(0) = L_m(0), because the chain of interpolating sequences constructed depends on the function f.
Examples
A point m ∈ M \ D is called oricycular if it is in the closure of a region limited by two circles in D that are tangent to ∂D at the same point. Every oricycular point is in G, and it is in the closure of some circle tangent to ∂D (see [6, pp. 107-108]). We are going to search for the possible fibers that meet L_m(0) when m is a nontangential or an oricycular point, and we shall determine λ(m, x) for all possible x ∈ L_m(0). Every point in M \ D has the form γm, where π(m) = 1 and γ ∈ C has modulus 1. Since λ(m, x) = λ(γm, γx) for every x ∈ M, and by Lemma 1.3 L_{γm}(0) = γL_m(0), there is no loss of generality in considering π(m) = 1.
Let b be an interpolating Blaschke product with zero sequence {z_k} such that b(m) = 0. If the point ω ∈ D is a zero of b ∘ L_m, then there is a subsequence {z_{k_j}} of {z_k} such that b ∘ L_{z_{k_j}}(ω) → 0. If m is a nontangential point we can assume that there is some fixed −π/2 < θ < π/2 such that {z_n} lies in the straight segment S = {z ∈ D : 1 − z = re^{iθ}, r > 0}. A straightforward calculation shows that the closure of S in C meets ∂D when r = 0 and r = 2 cos θ. Therefore S = (1 − 2 cos θ e^{iθ}, 1) and z_n = 1 − r_n e^{iθ}, with 0 < r_n < 2 cos θ. We can also assume that z_n → 1. The conformal map L_{−z_n} sends S into a circular segment C_n ∩ D, where C_n is the circle that passes through the points L_{−z_n}(z_n) = 0, L_{−z_n}(1) = e^{i2θ} and
L_{−z_n}(1 − 2 cos θ e^{iθ}) = (r_n − 2 cos θ) e^{iθ} / ( 2 cos θ e^{iθ} + r_n (e^{−iθ} − 2 cos θ) ).   (5.1)
We are including here the extreme case when C_n is a straight line (i.e., θ = 0). Since r_n → 0, taking limits in (5.1) we see that the limit curve of the C_n is the circular segment C ∩ D, where C is the circle that passes through 0, e^{i2θ} and −1. Therefore the zero sequence of b ∘ L_m lies in C ∩ D, and since b ∘ L_m vanishes on L_m(0), only the fibers of −1 and e^{i2θ} can contain points of L_m(0). Clearly, if x ∈ L_m(0) is in the fiber of e^{i2θ} then λ(m, x) = e^{i2θ}. Suppose that x ∈ L_m(0) is in the fiber of −1. Since the straight segment −1 + Re^{−iθ}, with 0 < R < 2 cos θ, is tangent to C at the point −1, x is a nontangential point lying in the closure of this segment. So λ(m, x) = e^{−i2θ} by Section 1. If m is an oricycular point (with π(m) = 1), a similar but easier analysis shows that L_m(0) lies in the closure of C ∩ D, where C is the circle tangent to ∂D that passes through 0 and −1. Therefore L_m(0) can only meet the fiber of −1, and indeed it does unless P(m) is a homeomorphic disk. Since C is tangent to ∂D, every x ∈ L_m(0) with π(x) = −1 is a tangential point, which by Section 1 yields λ(m, x) = −1.
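For completeness, the limit of (5.1) as r_n → 0 is a direct computation, consistent with the claim that the limit circle passes through −1:
$$\lim_{r_n\to 0}\frac{(r_n-2\cos\theta)\,e^{i\theta}}{2\cos\theta\,e^{i\theta}+r_n(e^{-i\theta}-2\cos\theta)} = \frac{-2\cos\theta\,e^{i\theta}}{2\cos\theta\,e^{i\theta}} = -1 .$$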
2022-02-01T04:47:47.839Z
2022-01-30T00:00:00.000
{ "year": 2022, "sha1": "483df99e4ad4ef50312ef6d7f39f9753c1c0a6f0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "483df99e4ad4ef50312ef6d7f39f9753c1c0a6f0", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
230540846
pes2o/s2orc
v3-fos-license
Virtual Reality Simulation for Pediatric Airway Intubation Readiness Education
Background
The objective of this study was to provide education to inexperienced trainees regarding preparation for airway intubation using a virtual reality (VR) tutorial, and to compare their performance with that of experienced trainees without VR training. We hypothesized that after the VR tutorial, junior fellows and residents would have recall of the proper steps comparable to that of experienced trainees.
Methods
This project was initiated in the pediatric intensive care unit from July 1, 2019, to July 30, 2019. Volunteer residents and pediatric critical care medicine fellows participated. The VR group completed a 19-minute immersive tutorial and then demonstrated the learned skills on a traditional manikin. Non-VR group fellows listed the steps to prepare for airway intubation from memory, with scoring on a 24-point timed checklist.
Results
Seventeen subjects participated; two residents were excluded. The VR group had seven trainees (47%) and scored similarly to the other group based on checklist items (50.5% vs 50.8%, P=1).
Conclusion
VR technologies can be used for education in preparation for pediatric airway intubation. There was no difference in performance accuracy between the two groups. Larger studies are essential to study the benefits of VR in the preparation for and performance of airway intubation.
Introduction
Mastery of pediatric airway management is an important skill for trainees from various specialties. Preparation for pediatric intubation involves familiarity with and education on the nuances of the pediatric airway, including anatomy and proper patient positioning; correct equipment choices, use, and set-up; and knowledge of various scenarios that may affect the choice of sedative, analgesic, and neuromuscular blockade. Knowledge of these steps, with repetitive training and meticulous pre-procedure preparation, can help mitigate difficulties encountered in the process of airway intubation and prevent adverse outcomes. Traditional medical education has a didactic approach, which is often difficult to implement when learning procedural skills and has been shown to be ineffective for adult learners in medicine. In 2002, a working group of experts concluded that simulation-based education is an optimal modality for learning [1]. Simulation lends itself to a see, do, and teach approach that allows for a combination of learning styles, leading to better retention. Learners are able to see an instructor demonstrate the procedure and then experience the act using their tactile and visual senses. To cement their learning, they can then teach that procedural act to another. A small number of studies have demonstrated that simulation-based medical education with deliberate practice can be superior to traditional clinical education [2]. However, traditional manikin-based simulation can lack realism, requires extensive set-up and dedicated staff, and is limited in the number of learners who can access the simulation center at one time [3]. Computer-based technologies have evolved at a rapid pace, with virtual reality (VR) used in various aspects of medical education, including surgical training, decision-making skills in trauma, and cardiopulmonary resuscitation training [4-6]. VR provides a realistic, immersive, on-demand experience. It can be customized to each user and includes wearing a headset for immersive interaction with a software environment.
In addition, recent technological developments in video conferencing allow multiple users, even those without their own VR headsets, to experience simulation remotely while still being able to practice team training and closed-loop communication from anywhere. VR has been used in training for performing orotracheal and fiber-optic intubation [7][8][9]. What is not well understood is whether a single instance of VR training can rapidly advance the knowledge of a novice to a level similar to that of more experienced physicians when assessing the steps needed for intubation preparation.

In this study, we incorporated VR technology for the education of novice pediatric residents and critical care medicine fellows on the sequential steps required to prepare for a pediatric airway intubation. We hypothesized that inexperienced pediatric trainees, after undergoing a brief immersive VR tutorial, would have performance comparable to experienced senior fellows and emergency medicine residents when asked to demonstrate the learned skills on a traditional manikin. This article was previously presented as a meeting abstract at the 2020 SCCM Critical Care Congress, Orlando, FL, on February 16, 2020.

Design, setting, and participants
This was a prospective randomized comparison study conducted from July 1 to 31, 2019, in the pediatric intensive care unit (PICU) of a quaternary children's hospital with Accreditation Council for Graduate Medical Education (ACGME)-accredited pediatric residency and subspecialty (fellowship) training programs. Pediatric residents (postgraduate year [PGY] 2), visiting emergency medicine (PGY 2) residents completing a clinical rotation in the PICU during the study period, and pediatric critical care fellows (PGYs 4, 5, 6) participated in this project. The VR technology used included an Oculus Rift S headset with hand controls (Facebook LLC, San Jose, CA) (Figure 1) and an Alienware gaming laptop (Dell Technologies, Round Rock, TX). The VR simulation software tutorial for pediatric airway intubation was developed using the Acadicus simulation platform (Arch Virtual, Madison, WI).

Study groups
Two study groups were randomized (n=17) (Figure 2). Pediatric residents and first-year fellows (n=7) were included in the VR group. Upper-year fellows (PGYs 5 and 6) and emergency medicine residents (n=8) were included in the non-VR group. Two pediatric residents in the non-VR group were excluded from the study analysis given their inexperience with airway management; they were unavailable the day the VR training was offered but still wished to participate in the study scoring. The VR group completed a 19-minute immersive tutorial that outlined the steps involved in preparation for pediatric airway intubation. An interactive avatar in the VR tutorial (Figure 3) discussed the steps in detail in a room created to replicate the PICU's patient room. In addition to watching the tutorial in the VR room from multiple angles, as they could move around freely in the simulated environment, participants were also able to visualize, pick up, and utilize equipment in the VR environment, such as squeezing the bag and mask to deliver breaths, placement of a nasopharyngeal airway, laryngoscopy, and endotracheal tube insertion (Figure 4). After completing their time in the virtual environment, the VR group was asked to demonstrate the learned steps on a traditional manikin and verbally announce each step as they performed them.
The non-VR group listed the steps in the airway preparation process from memory without additionally being asked to demonstrate the steps on the manikin. Both groups were scored on a 24-point timed checklist (Table 1) by the same observer, a critical care attending physician. The checklist was prepared based on the most common equipment and preparation required pre-intubation by the study authors, who have all completed pediatric critical care fellowship training. (Table 1 lists the 24 checklist items, beginning with bag mask sizes 1 and 2.)

Statistical analysis
Statistical analysis was performed using SPSS Statistics software (IBM Corp., Armonk, NY), and the Mann-Whitney U test was used for data analysis.

Results
The VR group included seven participants (47%). They scored similarly to the non-VR group on the checklist items after completion of the tutorial (50.5% vs 50.8%, P=1). Some of the steps missed most frequently by the VR group included failing to request an end-tidal carbon dioxide detector (42%), choosing a variety of nasopharyngeal airways (57%), failure to request a nasogastric tube (100%), failure to request set-up of repetitive cyclic blood pressure measurements during subsequent intubation (42%), and failure to request nursing confirmation of functioning peripheral intravenous access (57%). The median time to complete the set-up steps was higher in the VR group (6 vs 3.5 minutes, P=0.005). When allowing for requests for advanced airway equipment and medications not covered in the VR tutorial, such as a fiberoptic scope or laryngeal mask airway, the non-VR group scored higher (73.6% vs 55.3%, P=0.0009).
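As a point of reference, an equivalent nonparametric comparison can be run outside SPSS. The sketch below is illustrative only: the per-participant checklist scores are hypothetical placeholders, not the study's raw data, and SciPy's mannwhitneyu stands in for the SPSS procedure.

  from scipy.stats import mannwhitneyu

  # Hypothetical checklist scores out of 24 items; placeholders only,
  # not the study's raw data, which were analyzed in SPSS.
  vr_scores = [12, 13, 11, 12, 14, 12, 11]          # VR group, n = 7
  non_vr_scores = [12, 13, 12, 11, 13, 12, 12, 13]  # non-VR group, n = 8

  # Two-sided Mann-Whitney U test, the test named in the analysis above.
  u_stat, p_value = mannwhitneyu(vr_scores, non_vr_scores, alternative="two-sided")
  print(f"U = {u_stat}, p = {p_value:.3f}")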
Discussion
In this study, an immersive, VR tutorial-based learning module taught residents and first-year pediatric critical care fellows the steps involved in the preparation for airway intubation. While this study was not focused on the performance of the tracheal intubation itself, preparedness, by having all the necessary equipment available, can help with the success of the procedure. The act of setting up is an important skill for trainees who perform this procedure in the intensive care unit and emergency department. The number of tracheal intubation attempts is associated with adverse events [10]. Adequate education on training and preparation may help mitigate adverse events like oxygen desaturation, hypotension, and cardiorespiratory arrest. This fellowship training program provides training via lecture-based methods and practice via manikin-based methods to residents and fellows. VR learners showed no statistical difference in accuracy when compared to experienced fellows after participation in the immersive tutorial. This suggests VR technology may be utilized for this purpose.

The ability to recreate this training in a VR format offers several advantages over traditional boot camp style teaching or pre-recorded watchable video tutorials. VR training gives learners access to a three-dimensional, immersive setting that contains the equipment and replicates the environment they are being asked to work in without having to reserve space in the simulation lab or intensive care unit for practice sessions. It allows an instructor to pre-record a demonstration in which the learner can not only watch the steps but play an active role in the tutorial as well. The learner can use equipment and repeat the steps of the recorded instructor, who is free to do other tasks. Learners can also complete their tutorial at their own pace and in their own time and repeat the exercise as many times as they choose until reaching a level of comfort with the skills.

The time to execute the set-up steps in the physical simulation lab by the VR group (after their VR tutorial was complete) was compared to the verbal report of the steps by the advanced fellows. Although this comparison was statistically significant, several considerations probably invalidate its use in this study. First, the VR group not only called out their steps but also went on to demonstrate them on the manikin in the simulation lab, whereas the advanced fellows simply recited the steps without subsequent demonstration. In addition, as previously discussed, the VR tutorial was focused on gathering and testing the equipment that could be needed for intubation, and not the procedure itself. One could argue that gathering the equipment quickly in an emergent clinical scenario is important and that this time difference could be clinically relevant. However, in reality, the intensive care unit is a team environment with support staff available to assist in equipment procurement at most times. The key to this VR training exercise was to familiarize the learner with the sequential steps in equipment set-up.

When allowing the advanced fellows to "freelance" during their intubation set-up exercise, they requested equipment that was not on the checklist. Although their score was significantly higher than that of the VR participants, the tutorial was intended to cover only the basic tenets of airway preparation; the additional requests likely come from the fellows' collective experience with challenging clinical scenarios over their first two years of fellowship. A second, advanced airway preparation tutorial that creates assets not yet present in the VR environment is a consideration.

Over the past five years, VR technologies have been rapidly advancing. Improvements in graphics, headset resolution, and the ability of hardware to refresh images quickly, leading to a seamless appearance of the environment, have moved the experience of VR from an awkward, cartoonish endeavor to one that has become difficult to separate from reality. Its extension into medical simulation has been led by companies interested in training surgeons, in particular, to practice operations in a way that mimics reality but puts no patient at harm. The success of surgical VR has paved the way for the proliferation of immersive medical simulations that feature rare diseases like high-altitude cerebral edema (SimX, Mountain View, CA), common diseases such as sepsis in an oncology patient (Oxford Medical Simulation, Boston, MA), and interactive tutorials with customizable environments and equipment (Arch Virtual, Madison, WI). A significant reduction in cost and space requirements and enhanced realism when compared with traditional simulation centers have made these systems increasingly popular. Their rising popularity and expansion have been seen over the past several years at the International Meeting on Simulation in Healthcare (IMSH), with more companies and new features being showcased each year. Zackoff et al. demonstrated in their study that a VR curriculum focused on pediatric respiratory failure can help trainees recognize and interpret changes in clinical status compared with traditional high-fidelity, manikin-based simulation, based on self-assessment of competence [11]. There are several advantages of VR application in medical education.
It is portable, simple to set up, and requires very little space. An area as small as 12 square feet can support a simulation when using a VR headset. This allows for financial flexibility that makes simulation more accessible to health systems that do not have dedicated laboratories. It also offers options to systems whose simulation lab space and time are overbooked or expensive to maintain. The learner is also able to navigate the scenario without the support of additional staff, thus limiting additional restrictions in time and scheduling. The scenarios are repeatable and allow learners to practice skills until comfortable, with improvement in performance occurring at times that are convenient for the learner [12]. Multiple users can also interact within the same platform from great distances while still fostering team dynamics. Avatar-mediated learning has been shown to improve communication skills and to be effective in pediatrics [13,14]. The immersive nature of this technology can make the user experience more satisfactory than traditional learning. A survey of medical students showed that the process of using VR was enjoyable and improved confidence in the training received [15].

Recent improvements in video conferencing also allow for an increase in the number of users who can benefit from a VR session even when they do not own a VR headset and computer. With a single headset and laptop, a VR scenario can be streamed live to multiple participants using teleconferencing platforms such as Zoom (Zoom Video Communications, Inc., San Jose, CA), Webex (Cisco Webex, Milpitas, CA), or Microsoft Teams (Microsoft Corp., Redmond, WA). Remote participants can see and hear the action in the scenario and direct the in-scenario avatar remotely. They can collaborate and practice team mechanics and closed-loop communication, and debrief from multiple points on the globe simultaneously. The recent closure of most simulation centers because of the global coronavirus disease 2019 (COVID-19) pandemic makes VR an attractive option for conducting team training in medical simulation. Given the global physician shortage in times of pandemic, VR use is helping train medical students and out-of-practice physicians to gain skills and competence [16]. The validity of its use in this situation is yet to be examined.

The cost of simulation-based medical education is underreported in the literature [17]. One study reported an estimated $100,000 to set up a basic simulation lab, while an advanced center can have costs in the millions, with annual maintenance costing at least $15,000 [18,19]. However, the practical aspects of acquiring and maintaining high-fidelity manikins can be overcome using VR. Technology-based simulation costs include a headset, a laptop computer, and a software subscription, which are comparatively less expensive.

This technology comes with disadvantages as well. Many aspects of clinical medicine have not been replicated and, at this time, require in-person, bedside teaching. The ability to experience tactile perception, such as passing a guidewire through a blood vessel or displacing the epiglottis during airway intubation, can be difficult to mimic and has yet to be simulated using haptic feedback. In addition, the development of haptic features is still in its infancy and, thus, comes with a significant financial burden at this time.
However, there have been recent randomized controlled trials comparing VR with and without haptic technology in surgical procedures, and these trials have shown promise. VR simulation with haptic feedback can enhance the user experience with force-feedback mechanisms and improve realism in skills like cutting, suturing, and dissection [20]. While this software tutorial did not include haptic technology, the addition of this feature may be useful when physical distancing is encouraged. Familiarity and comfort with newer technology is another potential disadvantage. Finally, there may be an indirect increase in the cognitive load in VR simulation that may negatively affect learning and must be studied further [21].

Limitations
This study has several limitations. The sample size of participants is small. This is the experience of a single academic center where simulation-based education uses manikins and where participants had no previous experience with VR. The non-VR participants were tested on accuracy by the recall method and compared to the VR group, who were asked to recall and demonstrate on a manikin. The tutorial also did not include additional airway adjuncts that were mentioned by the experienced fellows, which will need to be added in future iterations of the software.

Conclusions
In this study, there were no differences in the accuracy of performance while learning the steps to prepare for airway intubation between inexperienced learners who used VR technology and experienced senior fellows. VR-based remote tutorials may offer advantages over traditional simulation training and may continue to be necessary due to physical distancing requirements during the current and future global pandemics. A larger, randomized, multicenter study of VR training is required to show consistent benefit of VR simulation as a teaching tool in medical education.

Additional Information
Disclosures
Human subjects: Consent was obtained from all participants in this study. N/A issued approval N/A. This article is a quality improvement education project, and IRB approval was not obtained. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: Nicholas Slamon declare(s) personal fees from Arch Virtual. Medical consultant to Arch Virtual and subsidiary Acadicus. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
An Efficient Text Summarizer using Lexical Chains

We present a system which uses lexical chains as an intermediate representation for automatic text summarization. This system builds on previous research by implementing a lexical chain extraction algorithm in linear time. The system is reasonably domain independent and takes as input any text or HTML document. The system outputs a short summary based on the most salient concepts from the original document. The length of the extracted summary can be either controlled automatically, or manually based on length or percentage of compression. While still under development, the system provides useful summaries which compare well in information content to human generated summaries. Additionally, the system provides a robust test bed for future summary generation research.

Introduction
Automatic text summarization has long been viewed as a two-step process. First, an intermediate representation of the summary must be created. Second, a natural language representation of the summary must be generated using the intermediate representation (Sparck Jones, 1993). Much of the early research in automatic text summarization has involved generation of the intermediate representation. The natural language generation problem has only recently received substantial attention in the context of summarization.

Motivation
In order to consider methods for generating natural text summaries from large documents, several issues must be examined in detail. First, the quality of the intermediate representation for use in generation must be analyzed. Second, a detailed examination of the processes which link the intermediate representation to a potential final summary must be undertaken. The system presented here provides a useful first step towards these ends. By developing a robust and efficient tool to generate these intermediate representations, we can both evaluate the representation and consider the difficult problem of generating natural language texts from the representation.

Background Research
Much research has been conducted in the area of automatic text summarization. Specifically, research using lexical chains and related techniques has received much attention. Early methods using word frequency counts did not consider the relations between similar words. Finding the aboutness of a document requires finding these relations. How these relations occur within a document is referred to as cohesion (Halliday and Hasan, 1976). First introduced by Morris and Hirst (1991), lexical chains represent lexical cohesion among related terms within a corpus. These relations can be recognized by identifying arbitrary-size sets of words which are semantically related (i.e., have a sense flow). These lexical chains provide an interesting method for summarization because their recognition is easy within the source text and vast knowledge sources are not required in order to compute them.

Later work using lexical chains was conducted by Hirst and St-Onge (1997), who used lexical chains to correct malapropisms. They used WordNet, a lexical database which contains some semantic information (http://www.cs.princeton.edu/wn). Also using WordNet in their implementation, Barzilay and Elhadad (1997) dealt with some of the limitations in Hirst and St-Onge's algorithm by examining every possible lexical chain which could be computed, not just those possible at a given point in the text.
That is to say, while Hirst and St-Onge would compute the chain in which a word should be placed when the word was first encountered, Barzilay and Elhadad computed every possible chain a word could become a member of when the word was encountered, and later determined the best interpretation (Barzilay and Elhadad, 1997). We use their results as a basis for the utility of the methodology. The most substantial difference is that Barzilay and Elhadad create all possible chains explicitly and then choose the best possible chain, whereas we compute them implicitly.

A Linear Time Algorithm
As mentioned above, WordNet is a lexical database that contains substantial semantic information. In order to facilitate efficient access, the WordNet noun database was re-indexed by line number as opposed to file position, and the file was saved in a binary indexed format. The database access tools were then rewritten to take advantage of this new structure. The result of this work is that accesses to the WordNet noun database can be accomplished an order of magnitude faster than with the original implementation. No additional changes to the WordNet databases were made. The re-indexing also provided a zero-based continuous numbering scheme that is important to our linear time algorithm. This importance will be noted below.

Our Algorithm
Our basic lexical chain algorithm is described briefly in Figure 1:

  Step 1: For each word instance that is a noun
              For every sense of that word
                  Compute all scored "meta-chains"
  Step 2: For each word instance
              Figure out which "meta-chain" it contributes most to
              Keep the word instance in that chain and remove it from all
                  other chains, updating the scores of each "meta-chain"

The algorithm takes a part-of-speech tagged corpus and extracts the nouns. Using WordNet to collect sense information for each of these noun instances, the algorithm then computes scored "meta-chains" based on the collected information. A "meta-chain" is a representation of every possible lexical chain that can be computed starting with a word of a given sense. These meta-chains are scored in the following manner. As each word instance is added, its contribution, which is dependent on the scoring metrics used, is added to the "meta-chain" score. The contribution is then stored within the word instance, along with the type of relation. Currently, segmentation is accomplished prior to using our algorithm by executing Hearst's text tiler (Hearst, 1994). The sentence numbers of each segment boundary are stored for use by our algorithm. These sentence numbers are used in conjunction with relation type as keys into a table of potential scores. Table 1 denotes sample metrics tuned to simulate the system devised by Barzilay and Elhadad (1997). (From Table 1: a hyponym relation contributes 1, 1, 0, 0 and a sibling relation 1, 0, 0, 0 across decreasing textual proximity.)

At this point, the collection of "meta-chains" contains all possible interpretations of the source document. The problem is that in our final representation, each word instance can exist in only one chain. To figure out which chain is the correct one, each word is examined, using the score contribution stored in Step 1, to determine which chain the given word instance contributes to most. By deleting the word instance from all the other chains, a representation where each word instance exists in precisely one chain remains. Consequently, the sum of the scores of all the chains is maximal. This method is analogous to finding a maximal spanning tree in a graph of noun senses. These noun senses are all of the senses of each noun instance in the document.
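To make the two passes concrete, the following is a minimal Python sketch of the meta-chain computation. It is illustrative only: the helper names (senses_of, related_chains, distance_category) and the flat score table are assumptions standing in for the system's WordNet lookups, segmentation information, and Table 1 metrics.

  from collections import defaultdict

  # Assumed score table (cf. Table 1): relation type -> contribution for
  # (intra-paragraph, intra-segment, adjacent-segment, other) relations.
  SCORES = {
      "synonym": (1.0, 1.0, 1.0, 1.0),
      "hyponym": (1.0, 1.0, 0.0, 0.0),
      "sibling": (1.0, 0.0, 0.0, 0.0),
  }

  def extract_chains(nouns, senses_of, related_chains, distance_category):
      """Two-pass meta-chain extraction; each helper argument stands in
      for a WordNet or segmentation lookup in the real system."""
      chain_score = defaultdict(float)  # meta-chain id -> running score
      contribs = defaultdict(list)      # noun instance -> [(chain, score)]

      # Step 1: add every sense of every noun instance to every meta-chain
      # it can join, recording the instance's score contribution.
      for noun in nouns:
          for sense in senses_of(noun):
              for chain, relation in related_chains(sense):
                  s = SCORES[relation][distance_category(noun, chain)]
                  chain_score[chain] += s
                  contribs[noun].append((chain, s))

      # Step 2: keep each noun instance only in the chain it contributes
      # to most (the paper breaks ties toward the lower WordNet sense
      # number), subtracting its contribution from every other chain.
      for noun, entries in contribs.items():
          best_chain, _ = max(entries, key=lambda e: e[1])
          for chain, s in entries:
              if chain != best_chain:
                  chain_score[chain] -= s
      return chain_score

Because both passes do a bounded amount of work per noun instance, the sketch mirrors the O(n) behavior analyzed below.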
From this representation, the highest scored chains correspond to the important concepts in the original document. These important concepts can be used to generate a summary from the source text. Barzilay and Elhadad use the notion of strong chains (i.e., chains whose scores are in excess of two standard deviations above the mean of all scores) to determine which chains to include in a summary. Our system can use this method, as well as several other methods including percentage compression and number of sentences. For a more detailed description of our algorithm, please consult our previous work (Silber and McCoy, 2000).

Runtime Analysis
In this analysis, we will not consider the computational complexity of part-of-speech tagging, as that is not the focus of this research. Also, because the size and structure of WordNet do not change from execution to execution of the algorithm, we shall take these aspects of WordNet to be constant. We will examine each phase of our algorithm to show that the extraction of these lexical chains can indeed be done in linear time. For this analysis, we define constants from WordNet 1.6 as denoted in Table 2.

Extracting information from WordNet entails looking up each noun and extracting all synset, hyponym/hypernym, and sibling information. The runtime of these lookups over the entire document is:

  n * (log(C4) + C1 * C2 + C1 * C5)

When building the graph of all possible chains, we simply insert the word into all chains where a relation exists, which is clearly bounded by a constant (C6). The only consideration is the computation of the chain score. Since we store paragraph numbers represented within the chain as well as segment boundaries, we can quickly determine whether the relations are intra-paragraph, intra-segment, or adjacent segment. We then look up the appropriate score contribution from the table of metrics. Therefore, computing the score contribution of a given word is constant. The runtime of building the graph of all possible chains is:

  n * C6

Finding the best chain is equally efficient. For each word, each chain to which it belongs is examined. Then, the word is marked as deleted from all but the single chain whose score the word contributes to most. In the case of a tie, the lower sense number from WordNet is used, since this denotes a more general concept. The runtime for this step is again n times a constant (the maximum number of chains to which a word instance can belong). While the constants are quite large, the algorithm is clearly O(n) in the number of nouns in the original document.

At first glance, the constants involved seem prohibitively large. Upon further analysis, however, we see that most synsets have very few parent-child relations. Thus the worst-case values may not reflect the actual performance of our application. In addition, the synsets with many parent-child relations tend to represent extremely general concepts. These synsets will most likely not appear very often as a direct synset for words appearing in a document.

2.5 User Interface
Our system currently can be used as a command line utility. The arguments allow the user to specify scoring metrics, summary length, and whether or not to search for collocations. Additionally, a web CGI interface has been added as a front end which allows a user to specify not just text documents, but HTML documents as well, and summarize them from the Internet. Finally, our system has been attached to a search engine. The search engine uses data from existing search engines on the Internet to download and summarize each page from the results. These summaries are then compiled and returned to the user on a single page. The final result is that a search results page is returned with automatically generated summaries.
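The strong-chain criterion described above (scores more than two standard deviations above the mean of all chain scores) reduces, in the same assumed Python setting as the earlier sketch, to a few lines:

  from statistics import mean, stdev

  def strong_chains(chain_score):
      """Return chains whose scores exceed the mean of all chain scores by
      more than two standard deviations, Barzilay and Elhadad's notion of
      'strong' chains; assumes at least two chains were extracted."""
      scores = list(chain_score.values())
      threshold = mean(scores) + 2 * stdev(scores)
      return {chain: s for chain, s in chain_score.items() if s > threshold}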
Comparison with Previous Work
As mentioned above, this research is based on the work of Barzilay and Elhadad (1997) on lexical chains. Several differences exist between our method and theirs. First and foremost, the linear run-time of our algorithm allows documents to be summarized much faster. Our algorithm can summarize a 40,000-word document in eleven seconds on a Sun SPARC Ultra10 Creator. By comparison, our first version of the algorithm, which computed lexical chains by building every possible interpretation like Barzilay and Elhadad, took six minutes to extract chains from 5,000-word documents.

The linear nature of our algorithm also has several other advantages. Since our algorithm is also linear in space requirements, we can consider all possible chains. Barzilay and Elhadad had to prune interpretations (and thus chains) which did not seem promising. Our algorithm does not require pruning of chains. Our algorithm also allows us to analyze the importance of segmentation. Barzilay and Elhadad used segmentation to reduce the complexity of the problem of extracting chains. They basically built chains within a segment and combined these chains later when chains across segment boundaries shared a word in the same sense in common. While we include segmentation information in our algorithm, it is merely because it might prove useful in disambiguating chains. The fact that we can use it or not allows our algorithm to test the importance of segmentation to proper word sense disambiguation. It is important to note that on short documents, like those analyzed by Barzilay and Elhadad, segmentation appears to have little effect. There is some linguistic justification for this fact. Segmentation is generally computed using word frequencies, and our lexical chains algorithm generally captures the same type of information. On longer documents, our research has shown segmentation to have a much greater effect.

Current Research and Future Directions
Some issues which are not currently addressed by this research are proper name disambiguation and anaphora resolution. Further, while we attempt to locate two-word collocations using WordNet, a more robust collocation extraction technique is warranted. One of the goals of this research is to eventually create a system which generates natural language summaries. Currently, the system uses sentence selection as its method of generation. It is our contention that no matter how good an algorithm for extracting sentences may be, it cannot possibly create quality summaries. It seems obvious that sentence selection will not create fluent, coherent text. Further, our research shows that completeness is a problem. Because information extraction is only at the sentence boundary, information which may be very important may be left out if a highly compressed summary is required. Our current research is examining methods of using all of the important sentences determined by our lexical chains algorithm as a basis for a generation system. Our intent is to use the lexical chains algorithm to determine what to summarize, and then a more classical generation system to present the information as coherent text. The goal is to combine and condense all significant information pertaining to a given concept which can then be used in generation.
4 Conclusions
We have described a domain independent summarization engine which allows for efficient summarization of large documents. The algorithm described is clearly O(n) in the number of nouns in the original document. In their research, Barzilay and Elhadad showed that lexical chains could be an effective tool for automatic text summarization (Barzilay and Elhadad, 1997). By developing a linear time algorithm to compute these chains, we have produced a front end to a summarization system which can be implemented efficiently. An operational sample of this demo is available on the web at http://www.eecis.udel.edu/-silber/research.htm. While usable currently, the system provides a platform for generation research on automatic text summarization by providing an intermediate representation which has been shown to capture important concepts from the source text (Barzilay and Elhadad, 1997). The algorithm's speed and effectiveness allows research into summarization of larger documents. Moreover, its domain independence allows for research into the inherent differences between domains.
Post-Resettlement Food Insecurity: Afghan Refugees and Challenges of the New Environment

Background: Lack of access to adequate, safe, and nutritious food is a major concern for the Afghan population due to ongoing war and humanitarian crises. Recently resettled Afghan refugees in the US continue to face challenges securing adequate, nutritious food resources in new environments. This study examined Afghan refugees' food access and insecurity in the San Joaquin Valley, California. Methods: Semi-structured, in-depth interviews were conducted to collect the perspectives and experiences of key informants and newly arrived Afghan refugees. Results: This study highlights environmental and structural factors (availability and accessibility of grocery stores; availability of religiously appropriate items in the stores; the public benefits received by a family; and public transportation) and individual factors (religious and cultural practices; financial and language barriers) as major determinants of post-resettlement food insecurity. Conclusion: Increasing the accessibility and affordability of culturally and religiously appropriate food items within the US food system, enhancing the collaboration of community volunteers and resettlement organizations in the direct assistance of new families, and providing continuous access to public benefits are possible steps to mitigate the risk of food insecurity among Afghan refugees. This study suggests a continuous examination of the degree of food insecurity in this population and its attendant health impacts.

Introduction
As the humanitarian crisis increased in Afghanistan, the United States evacuated more than 74,000 Afghans and facilitated their resettlement in different locations across the states (fiscal year 2021) [1]. To support the acceptance of refugees, on 8 September 2022, the US President announced a cap of 125,000 new refugees for the 2023 fiscal year [2]. Refugees generally leave their countries due to insecure environments [3] and encounter multiple challenges before and during the migration process and after resettlement that can negatively impact their health outcomes [4,5]. Among those challenges, food insecurity has been a major ongoing threat to refugees' health [6,7]. Food insecurity is defined as "a household-level economic and social condition of limited or uncertain access to adequate food for an active, healthy life that may or may not lead to periodic reductions in food" [8] and is associated with poor health outcomes such as anemia, malnutrition (including obesity), diabetes, and hypertension [9]. According to the United Nations High Commissioner for Refugees (UNHCR)'s most recent report, around 82 percent of internally displaced people (IDPs) and 67 percent of refugees and asylum seekers originate from countries that experienced severe food crises [10]. The prevalence of food insecurity has been exacerbated during the COVID-19 pandemic for the general population and for those resettled in the US [11,12]. Although studies have identified poverty as a determinant of food insecurity for the general US population [13], refugees living in the US face added challenges [14]. Lack of familiarity with the retail food environment and its products, as well as language barriers, were reported to be linked with food access difficulties and the risk of further food insecurity among refugees [14][15][16].
Difficulty accessing needed ingredients, lack of interpersonal relationships in new neighborhoods, unique cooking customs, and acculturation to the host countries' eating patterns may create changes in families' dietary patterns [17]. These changes in dietary patterns post-resettlement have been found to significantly contribute to refugees' health deterioration [17]. Few studies have focused on the interaction between environmental factors and individual factors that shape refugees' food choices, nutritional status, and risk of food insecurity [15,16].

Food insecurity is a global concern for Afghan refugees; as the humanitarian crisis in Afghanistan intensifies, millions of Afghans experience severe food shortages [18]. Hunger statistics (the percentage of the population whose food intake is insufficient to continuously meet dietary energy requirements) from Afghanistan show an increase from 23% in 2017 to 29.8% in 2020 [19]. A recent report from Afghanistan indicated that in 2021, nearly 19 million Afghans would experience acute food insecurity [20]. The challenges of food insecurity are exacerbated upon arrival in countries like the United States. The majority of the Afghan population (99.7%, according to existing sources) is reported to be followers of Islam [21], which could have an impact on most aspects of their lives, including their food choices. For many Afghan refugees, as for other Muslims, food is preferentially restricted to halal foods: those that adhere to Islamic law. Halal meat requires a specific method of slaughter [22]. Halal also indicates that the product is free of any pork fragments, parts of dead animals, blood, or intoxicants such as alcohol [22]. The global halal food market demand experienced a rapid increase of about 8.8% between 2017 and 2021 [23]. In the US, the number of places that offer halal meat items increased from 200 in 1998 to more than 7500 in 2016, which is still considered a small number compared to the whole US food market [24]. Based on the most recent Google search, there is only one halal butcher store and four halal markets in the study area [25].

Despite the growing body of research on the impact of the physical environment (neighborhood geographic, economic, and social characteristics) on food insecurity in different communities in the US [26][27][28], little research has examined the impact of neighborhoods and food access on the food insecurity of Afghan refugees in the United States. Little is known about how Afghan refugees adapt to their new food environment, address their religion-specific food needs, and how that could exacerbate the Afghan family's risk of post-resettlement food insecurity. This study employed a qualitative approach to explore Afghan refugees' encounters, experiences, and challenges in the new food environment to better understand the determinants and root causes of food insecurity in the newly arrived families resettled in California's San Joaquin Valley, where, per the most recent report (2022), 77% of the food-insecure population fell below the federal nutrition program income threshold and the overall rate of food insecurity was 12.1% [29].

Methods
This study analyzed existing data collected in 2018-2019 as part of a study aimed at exploring the perspectives and experiences of recently arrived Afghan refugees regarding food access and food environment navigation in the San Joaquin Valley, California.
Data was collected through in-depth, semi-structured interviews with Afghan refugee families and key informants in three major cities in the area. We used purposeful sampling to recruit key informants from service providers working for organizations (governmental or nongovernmental) directly serving recently arrived Afghan refugees, religious and community leaders, and community volunteers who directly support newly arrived refugees. Refugee participants were identified and contacted through snowball sampling. Key informants who participated in the study introduced the study to the Afghan families with whom they were in contact, and those families who agreed to participate in the study were asked to introduce the study to other families. After initial recruiting, participants received the informed consent document via email in advance. The informed consent was also read to the participants by the interviewer at the beginning of each interview. Oral consent was obtained and audio-recorded at the onset of each interview. None of the study participants disagreed with the informed consent, and no participant disagreed with being audio-recorded. If the interview was conducted at a family's house, the wife and husband participated jointly.

All the interviews were conducted by the first author, who is proficient in Farsi and English and trained in conducting qualitative interviews. Key informants' interviews were conducted in person and in English. Refugee interviews were conducted in either Farsi or English, depending on the participants' choice. All the refugee participants in this study were comfortable understanding the interviewer's Farsi dialect, and there were no Pashto-speaking participants in this study. A preliminary semi-structured interview guide was developed based on the literature reviews and the research team's experience working with refugees in different areas. The interview guide for key informants included questions about their role in refugee resettlement; their perspectives on refugees' food choices, shopping processes, financial resources for grocery shopping, the adequacy of available food items, and the availability of local food services; and other barriers related to food access and possible solutions. The interview guide developed for Afghan refugee families included questions on individuals' or families' access to needed grocery items after arrival, experience with food shopping in the United States, navigation of the physical environment, barriers to obtaining needed items, food preparation, and eating habits after resettlement. At the end of the interviews, the interviewer (the first author) summarized the participants' responses and allowed them to change their responses or provide clarification. Each interview lasted approximately one hour. After the study, all participants received an e-gift card as a token of appreciation for their time and support of the study.

All interviews were audio-recorded and transcribed verbatim. English interviews were transcribed by GMR Transcription [30], a professional transcription service founded in 2004 that guarantees 99% accuracy. Farsi interviews were transcribed by the first author, a native Farsi speaker. The University of California, Merced Institutional Review Board approved the study. Data analysis and interpretation included concomitant consultation with the available literature, field notes, and analytical memos. We utilized a thematic analysis approach [31].
After each interview was completed, we conducted line-by-line coding using ATLAS.ti (Windows version 19.0.8.0) to support the coding process [32]. Interviews with key informants and refugees were coded with the same coding system. Emerging codes were used to create prompts for necessary changes in interview guides. Interviews were stopped when there were no emerging codes in the two final interviews. The next step in data analysis was to develop descriptive themes around major concepts discussed from key informants' perspectives and refugee families' experiences with food access and its determinants in the post-resettlement environment for Afghan refugees in San Joaquin Valley, California.

Results
Twenty-four interviews were conducted overall. We completed twelve interviews with key informants and twelve interviews with Afghan families resettled in the area for less than two years. No detailed demographic information was collected from the participants. Based on the recruiting charts, refugee interviews included one woman and three men, and in the rest, both wives and husbands participated jointly. All participants were 18-45 years old and had dependents in their household (varying in the number of children and, in some cases, including grandparents). For the key informant interviews, we had five women and seven men. Three interviewees were religious or community leaders; three were community volunteers; and the rest were staff of different governmental and non-governmental organizations serving refugees in the study area. The major themes emerging from our study participants' perspectives and experiences on major determinants of post-resettlement food insecurity in Afghan refugee communities were lack of familiarity with the new environment (navigation of grocery stores), religious and cultural practices (excessive barriers and restrictions), transportation and housing (access limitation), cost of grocery items (lack of financial literacy), and limitations of food banks and food pantries (lack of healthy options).

Lack of Familiarity with the New Environment: Navigation in Grocery Stores
Accessing and navigating grocery stores for newly arrived refugees is challenging and fear-inducing. When a family resettles in a new place, it takes time to become familiar with the physical environment, including the streets around their home, grocery stores nearby, and ways to access neighborhood resources. A refugee male participant mentioned, "When we came here, I did not know anything. Where to go? Where to shop? What are the foods? We started going out with my kids and finding places. Sometimes we got lost". Another male participant added: "It would take about an hour to find and reach a store and shop for something, and then an hour again to come back. It was about two hours, and when we came back, our wives were so worried. Our wives were very afraid because we did not know anything. We did not know where to go. Thus, not knowing anything made everyone very afraid every time we were out shopping". Some families received support from family and friends who had resettled in the area earlier or from community volunteers. As one of the refugee participants reported, "When we arrived first, the Afghan people who came before us helped us find the stores with Afghan food". Community members usually help newly arrived families find stores in the area and access necessary items.
A female key informant, a community volunteer, told us, "They navigate the stores through networking; when they come first, the religious center is their GPS, I would say". Another female key informant, staff of one of the local resettlement centers, also noted the value of this community support: "Nowadays, the community takes good care of newly arrived families. There are members of the Afghan community who will typically come alongside them and show them the ropes, show them the places to buy the best things, where to buy the least expensive things, and how to have access to them". Families without this community support had to explore the area on their own to find and access the needed stores. One of the refugee women mentioned: "No one showed me the neighborhood stores. I was going out on my own and walking around and finding stores. Slowly, I learned about the stores and what they had". A female community volunteer also emphasized the hardship refugees face when navigating on their own and noted: "Without guidance, finding the food and the place of Afghan food they need is very difficult. Doing it alone is difficult. They should have a guide. Someone should guide them so they can shop".

In addition to the challenges of unfamiliar neighborhoods and geographic locations, a lack of familiarity with grocery stores' organization and payment systems was also reported as intimidating and creating barriers to accessing required food items for recently arrived refugees. As one refugee told us, "The problem was we did not know where to find what; we had to go to a store multiple times to figure out where to find what. After we went sometimes, we learned about those stores and knew where the items were located". Another refugee man compared his experience from back home with the new environment: "From the place, we came to these stores; these systems were not there. We were very nervous. How should we pay? How does the system work, and what should we buy? After a month, it became easier. The first month was very hard when we went to those stores and saw their systems". Some resettlement agencies offered a short training for families after their arrival, which included a tour of the local neighborhood, assistance with public transportation navigation, and details about nearby grocery stores. However, not all agencies offer such tours, and this tour is available to a limited number of families.

Along with difficulty navigating the stores, newly arrived families reported difficulties identifying available pre-packaged items, and most needed help deciding what to shop for or what they were looking for. One refugee family told us: "The first week was hard; it was hard because we did not know the food." One of the female staff members of the resettlement center also noted: "I think in the beginning, it is overwhelming because it is a big, crazy grocery store. I think they just kind of shop around." The training provided by resettlement agencies, while useful, usually does not include any introduction to the items available in the grocery store, and a lack of familiarity with US products occasionally leads to buying the wrong products. Another refugee family disclosed, "There were times that we bought an ingredient and brought it home to cook with, and in-home, we realized it was not what we wanted, or the one we bought, we cannot use it". Navigating an unfamiliar environment could also be more challenging for families who experience language barriers.
Some of the key informants in this study mentioned that most recently resettled Afghan men are well-educated and have English language skills. However, single mothers and elderly individuals struggle to communicate in English and have more difficulty shopping for food items in US grocery stores. As one of the community volunteers told us, "For some families, when they first arrived, we almost had to use the picture-shopping list because there was a translation difficulty and they did not know the items. We cannot communicate; we just use pictures, or sometimes we just bring grocery items with us and ask them to circle items they are looking for so we can locate them at the stores".

Religious and Cultural Practices: Excessive Barriers
As most Afghan families practice Islam, immediately after arrival, they look for appropriate religious items, such as halal meat. The number of markets that offer halal items in this area is limited, and they often only offer limited items that are mainly frozen. One community volunteer observed: "Accessing halal meat is difficult, and there is only one halal butcher in the area. The butcher store is usually far from where most families resettled, and families usually need rides or travel long distances via public transportation to shop for fresh meat". One of our refugee participants also shared: "For meat, we had to take one bus to the central terminal and the second bus to another city". Along with the hardship of accessing halal items, the price of halal meat is much higher than other meat products available in general grocery stores. A resettlement agency staff member reported: "The meat is hard to come by here because most of them are still trying to observe halal meat, which is unavailable in our communities". A community volunteer (female) also noted, "The price is different. For meat, it is up to $2 extra. The chicken here in stores is not halal, and it is 1.18-1.19 (dollars) per pound, but the halal chicken is about 2.50-2.60 per pound". Due to the higher prices, families who only consume halal meat (as opposed to those who consume non-halal US products) may either eat less meat during the month or spend a higher proportion of their budget to purchase needed protein items. A religious leader (male) described the effects of these different prices: "You know that the regular processed chicken in the markets is much cheaper than the halal chicken. If families are limited on cash, if they are going to buy, for example, two pounds of halal chicken, they can get it for the same price, about four or five pounds, from outside. Especially looking at the family size. I would say the average size of a family is five to six individuals. Thus, imagine if they are going to spend this money only on buying meat; where is the money that is going to be spent on other groceries as well?" The higher meat price can cause families to run out of food assistance benefits more quickly compared to other families. A refugee participant told us, "Sometimes, the kids are craving meat, so we have to pay more and buy halal meat, and again, we are running out of food stamps". Another emerging concern in this study was unfamiliarity with the ingredients and with pre-packaged items available in US stores and fear of whether those items are religiously or culturally appropriate, which may limit the family's choice and willingness to try those products.
A female community volunteer participant remarked, "Getting safe food items is not something that is easy, especially canned food. I got some canned garbanzo beans and red beans for refugee families. They do not know what is added to them, so I read the label to them". Families often prefer to shop at ethnic stores that have a familiar environment and limited items at higher prices. The ethnic stores carry religious or culturally preferred items that families may easily recognize and are familiar with the packaging and taste of. Even though the items have higher prices than similar items in US grocery stores, families prefer to shop there because the items are identical to the brands and packaging they used previously. The same volunteer also disclosed, "They see what they would find in Afghanistan for produce and stuff, and they just buy that". One of the religious leaders (a male participant) also added: "We have an Indian market; it is very small, but they go there very often and buy rice or oil they are familiar with. Thus, they spend lots of money in one minimart, which is very expensive, but they recognize the packaging more there, and they recognize the products more, so they are comfortable there." We also realized that service providers and community volunteers might also have difficulty navigating halal items and be confused by refugee families' different practices. Some key informants in our study mentioned that refugee families used halal to refer to all culturally preferred items, which makes it confusing for providers who support refugee families. A community volunteer admitted, "I am not even familiar with all halal products. Some families even look for halal flour, halal oil, and obviously halal meat".

Transportation and Housing: Access Limitations
In addition to navigating grocery stores in the new environment, newly arrived families must navigate public transportation and learn how to best access the stores based on their housing locations. As a result, neighborhoods' structural resources could highly shape families' access and food choices. A male community volunteer disclosed, "Some families live far away, and transportation, of course, is a major problem for them to come back and forth to the grocery stores". Most recently arrived refugee families have not had access to personal transportation for several months and would highly rely on public transportation or shop at local stores within walking distance of their homes. Local stores tend to have fewer items, which are higher priced and lower quality, than larger grocery stores. A female community volunteer reported, "The part of town that the majority of families are living in does not have grocery stores within walking distance. They would need to walk almost two miles to get to a regular grocery store instead of a local market like a gas station so that they could buy something. There are some neighborhood markets, but they are very expensive". Recent arrivals also must rely on volunteers or other community members for transportation to the grocery stores and ethnic markets, which could limit the shopping locations and timing. One resettlement agency staff member cited: "A family who does not have a car after a year is still dependent on the volunteers and their help for rides to grocery stores. A family that purchases a car is more independent, and they go more to grocery stores on their own, so in one year, it depends if they can get personal transportation".
Using public transportation can be confusing and time-consuming and varies based on the city of resettlement; however, after some time, most families learn how to use public transportation and rely on it heavily, even though commuting to the store this way takes a long time. This also raises the issue of food safety and the question of which items can be transported safely during these long trips. One of the religious leaders explained: "Once they know where to go, most families use buses. Resettlement agencies give the families vouchers to use the buses. Once they know the bus route, they can go on their own. Usually, the trip that would take about 30 min takes them two hours to reach because of the different bus stations where they need to stop and all that". As it is difficult to schedule a ride to the stores, families may prefer to shop in bulk to avoid running out quickly. However, shopping in bulk can mean spending a significant portion of the food assistance benefits on nonperishable items, possibly exhausting the food assistance earlier in the month. Running out of food assistance can leave families unable to access fresh ingredients, including fresh produce, eggs, milk, vegetables, and fruits, toward the end of the month or for a long period of time. One female community volunteer observed: "At the beginning of the month, they will choose to buy 150 pounds of flour and five packs of oil, and then they run out of money every month".
Food Banks and Food Pantries: Limited Utilization and Options
Some key informants in this study observed that, unlike most low-income families in the US, refugee families may not be willing to use food bank items in areas where food banks and pantries are available. Most food pantry items are not familiar to families or may not be culturally appropriate for ethnic or religious minorities. One female community volunteer cited, "There are different food ministries and food banks that help. The families typically do not use them a lot just because it is not the food that they would typically eat. At some point, if there are things that might be helpful to them, then they will take them, but for the most part, it is not the same. It is not what they would normally eat. For example, I guess the best way to break it down would be for canned goods and different things like that. They typically will not eat the same chilis that a food bank would provide, but if there are ever times when there's fresh produce available, which we have seen often, like fruits and vegetables, spinach, tomatoes, and different things like that, from some of the local markets that might have a surplus and would donate some, then you see very active participation in receiving those items". While other programs such as the Salvation Army and Food for Hunger may be available in the area, refugee families often feel uncomfortable participating in them, especially because they require standing in line with their families; thus, like food banks, these programs are underutilized. As one of the community volunteers explained: "There are some meal programs that offer free meals, but for these families to take their children to a location to get the free meals, they would, I think, rather just eat bread or whatever they can make on their own. They do not want to go and stand in line to get free meals with their kids. They do not want to do that".
The key informant participants were also concerned about the possible impact on the families' eating habits of a lack of access to fresh items later in the month and of the abundance of food of low nutritional quality (particularly sugary beverages and candy) in US grocery stores, which can lead to adverse health outcomes. One of the community leaders noted: "Families all eat the same thing; if they are younger or older, they all eat one food. The little kids eat the food of the elders; they should have something specific for them, but it is a bit expensive, and they may not be able to afford it". Some study participants observed weight changes in family members within a few months of their arrival. Families with children start to adapt their diet to the choices of the US food environment as children are introduced to US food items at their schools and begin to use prepared and pre-packaged meals, junk food, and fast food regularly. One refugee woman told us, "Our kids, because they go to school here, eat lunch and breakfast there, so they are used to the food here and are familiar with some of the foods and want them from us". A community volunteer also cited: "We see that people are gaining weight because they are eating a lot more American food over time. I'd say after a year, some of them are eating at fast food restaurants".
Price of Food Items: Lack of Financial Literacy
As for other low-income families, financial hardship can be a major determinant of food choices among refugees resettled in the San Joaquin Valley, California. Key informants in this study mentioned that economic assistance through the Supplemental Nutrition Assistance Program (SNAP, formerly known as food stamps) is the primary (and sometimes only) source of funds for food for refugee families. Our participants believed that understanding how SNAP benefits work and how to budget for a month can be challenging for new families. As a resettlement agency staff member noted: "That is a main source of their food. I mean, that is support; that is a food stamp. Sometimes, that is enough. Sometimes, that is not enough. However, they should learn and practice management. Their food stamps have to get them to the end of the month. Thus, that is the only source they are using for the food". One of the community leader participants pointed out: "There is no other source. They are already short on other sources. Hence, they cannot get money from other places for food; they only have food stamps. If they run out, there is nothing; they can only reduce their food. That is the only way they can manage". Newly arrived refugees may need support and instruction in budgeting practices to make their benefits last a month. Some of the key informants emphasized that newcomers need more support and training on financial literacy, food budgeting, and spending food stamps. One community volunteer cited: "Maybe never before in their lives have they gotten everything in one pot at the beginning of the month. I do not understand why budgeting does not seem to be a part of their lives. I would rather see them educated on how to spend than spend all their money at the beginning of the month and not have money at the end". In conversations with a religious leader participant, he shared: "I have felt like they need some budgeting help. Maybe it is because food has been insecure for them in the past. Maybe never before in their lives have they gotten everything in one pot at the beginning of the month".
Discussion
Food insecurity, or the lack of access to enough healthy and necessary nutrient items [8], is a major threat to the health of low-income communities. Existing studies have reported multi-dimensional barriers that impact food access and food security among migrants and indicated that food insecurity in these communities results from elements such as individual characteristics, economic resources, and structural constraints of the physical environment [15,33]. This study examined the factors impacting food access for Afghan refugees resettled in the San Joaquin Valley, California. Our findings indicated that the food access and choices of recently resettled refugees in this area are affected by a complex set of factors, including lack of familiarity with the US food environment, transportation and language barriers, and poor budgeting abilities, which supports the existing literature [15,34,35]. We highlighted the impact of being a religious minority (Muslim) and the lack of availability, or higher price, of halal items in the post-resettlement environment. Another concern related to halal items was uncertainty about the definition of halal and its application among families and service providers. Our discussions with the study participants revealed that identifying halal items is challenging for most Afghan refugees, especially with prepackaged items. As a result, Afghan refugees are not utilizing most items offered by food pantries. This hesitation is not a matter of preference but of religious obligation, which may explain why, unlike for other low-income families in the US, shopping for less expensive equivalent items available in US grocery stores or using food pantries may not be a desirable solution for Afghan refugees. This study also found that, for refugees, food stamps are the only source of payment for food items, and that newly arrived families lack the budgeting ability to make their food stamps last the whole month, as has been reported in other low-income communities [35,36]. To address these concerns, we encourage refugee-serving organizations and policymakers to invest more in understanding the refugee communities' food access barriers and needs and to increase collaboration among all service providers serving newly arrived Afghan refugees. Strong collaboration among NGOs, religious and community center leaders, and community volunteers will create cross-agency support to design and expand relevant and effective interventions to address new arrivals' food access challenges. Ultimately, all resettlement organizations should expand their cultural orientation training to include a tour of the new neighborhood's food markets. Community volunteers can also provide organized help with the transportation and shopping needs of the new families. Religious centers can provide educational guidelines on halal items available in the local area for new families and for the staff of service-providing organizations, including food banks.
Limitations
This study focused on the experiences of refugees and key informants who closely work with refugees from a specific ethnic/linguistic background. The refugees who participated in this study had all recently arrived and were Dari speakers. Refugees who are mainly Pashto-speaking may face additional barriers that we did not have a chance to examine. Furthermore, refugee families' food access challenges and risk of food insecurity could vary with the length of their stay in the US.
Another limitation to consider is that the resettlement area of this study is geographically distinct, with the native community being predominantly Latino, which could result in different resources being available compared to other resettlement locations. We should also consider that Afghan refugees mostly come from areas of severe hardship, such as refugee camps, conflicts, and war zones. Past experiences of striving for food may lead families to adjust to what is currently available, to underreport difficulties or a lack of access, and to be more likely to express appreciation and satisfaction with whatever resources they have. There is a need for future public health research to explore the food status and risk of food insecurity among refugees with different ethnic backgrounds and among those who have been living in the US for an extended period, have established employment, and may no longer be eligible for federal support and food assistance. The issue of changing diets among school-aged children and the risk of further adverse health outcomes, including obesity, should also be examined in more detail in future research, along with consideration of the lack of halal items in school lunches. Finally, the interviews for this study were conducted before the pandemic. More studies are needed to investigate the pandemic's impact on Afghan refugees' food access challenges.
Conclusions
The determinants of food access for recently resettled refugees in the San Joaquin Valley, California, are complex and involve multi-dimensional individual, environmental, economic, societal, and institutional factors. While the financial situation is the major critical factor in food planning and dietary decisions for native US low-income families, for recently arrived Afghan refugees a combination of religious practice and unfamiliarity with the environment (physical, financial, and food systems) can compound the impact of financial difficulties. The gap between existing resources and families' needs could be closed with collective effort, including increasing social support, improving the US food system, making more ethnically appropriate food items available in US grocery stores, and directly engaging communities and resettlement agencies in supporting newcomers as they navigate the post-resettlement environment. Although this paper focused on understanding the experiences of a small community of refugees resettled in the United States, most of the challenges discussed should be considered when serving other Muslim refugees across the US. With the increasing number of refugees coming from conflict areas, including Syria, East Asia, and North Africa, there is a need for a continuous data collection effort on food access, the risk and degree of food insecurity, and its adverse health effects among different refugee communities in the US.
Informed Consent Statement: Oral informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Data are not publicly available, in order to protect participant privacy and address ethical concerns.
Acknowledgments: The research team appreciates the support of the Blum Center of the University of California Merced for funding this research and thanks all of the study participants for sharing their time, perspectives, and experiences with the research team.
Conflicts of Interest: The authors declare no conflict of interest.
The funders had no role in the study's design, in the collection, analysis, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.
2023-05-20T15:06:34.917Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "7595a6f0bca793de3c57172b0ecdf9d34a29583a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijerph20105846", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "03e310630895be7984c4646e44598419e7a6c950", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
16255059
pes2o/s2orc
v3-fos-license
Assessment of mangroves from Goa, west coast India using DNA barcode
Mangroves are salt-tolerant forest ecosystems of tropical and subtropical intertidal regions. They are among the most productive, diverse, and biologically important ecosystems, and they are increasingly threatened. Identification of mangrove species is of critical importance in conserving and utilizing biodiversity, but it is often hindered by a lack of taxonomic expertise. In recent years, DNA barcoding using the plastid markers rbcL and matK has been suggested as an effective method to complement traditional taxonomic expertise for rapid species identification and biodiversity inventories. In the present study, we assessed the 14 available mangrove species of Goa, west coast India, based on the core DNA barcode markers rbcL and matK. PCR amplification success rate, intra- and inter-specific genetic distance variation, and the correct identification percentage were taken into account to assess the candidate barcode regions. PCR and sequencing success rates were high for the rbcL (97.7 %) and matK (95.5 %) regions. The two candidate chloroplast barcoding regions (rbcL, matK) yielded barcode gaps. Our results clearly demonstrated that the matK locus gave the highest correct identification rate (72.09 %) based on the TaxonDNA Best Match criterion. The concatenated rbcL + matK loci were able to adequately discriminate all mangrove genera, and species to some extent, except those in Rhizophora, Sonneratia and Avicennia. Our study provides a first assessment of species resolution among these mangroves using plastid genes, with a few exceptions. Our future work will focus on the evaluation of other barcode markers to achieve complete resolution of mangrove species and identification of putative hybrids.
Electronic supplementary material The online version of this article (doi:10.1186/s40064-016-3191-4) contains supplementary material, which is available to authorized users.
Background
Mangroves are a unique ecosystem that exists along sheltered intertidal coastlines, at the margin between land and sea in tropical and subtropical areas. This ecosystem comprises productive wetlands whose flora and fauna are adapted to the local environment, with its fluctuating water levels, salinity, and anoxic conditions (Tomlinson 1986; Hutchings and Saenger 1987). They are among the most productive and biologically important ecosystems of the world, providing goods and services to human society in coastal and marine systems (FAO 2007). They have unique features such as aerial breathing roots, extensive supporting roots, buttresses, salt-excreting leaves and viviparous propagules (Duke 1992; Shi et al. 2006). The term 'mangrove' refers either to the individual plants, to the intertidal ecosystem, or to both, as 'mangrove plants' and 'mangrove ecosystem' (MacNae 1968). In this context, we use the term mangrove to mean mangrove plants. Anthropogenic activity and climate are responsible for the destruction of coastal mangrove vegetation. Globally, 11 of the 70 mangrove species were listed as threatened by the International Union for Conservation of Nature (IUCN) (Polidoro et al. 2010). Reports on mangrove species diversity and distribution document 34 major and 20 minor mangrove species belonging to 20 genera and 11 families across the world (Tomlinson 1986). Ricklefs and Latham (1993) reported the existence of 19 genera with 54 mangrove species, including a few hybrids.
According to the World Atlas of Mangroves database, 73 mangrove species, along with a few recognized hybrids, are distributed across 123 countries, with a territorial coverage of 150,000 km2 globally (Spalding et al. 2010). Indian mangrove vegetation is the fourth largest in the world, distributed along the coastline; it occupies 8 % of the total world mangrove cover, over an area of 6749 km2 (Naskar and Mandal 1999). The mangrove habitats in India are situated in three zones: the east coast (4700 km2), the west coast (850 km2) and the Andaman & Nicobar Islands (1190 km2). The east coast zone ranges from the Sundarban forest of West Bengal to the Cauvery estuary of Tamil Nadu and comprises 70 % of the mangroves (Untawale and Jagtap 1992; Jagtap et al. 1993; Sanyal et al. 1998). The west coast region stretches from the Bhavnagar estuary of Gujarat to the Cochin estuary of Kerala and constitutes 15 % of the mangroves (Mandal and Naskar 2008). The mangrove flora of India comprises about 60 species belonging to 41 genera and 29 families (Untawale 1985). Along the west coast of India, 34 species of mangroves belonging to 25 genera and 21 families have been reported. About 11, 20, 14 and 10 species of mangroves have been reported along the coasts of Gujarat, Maharashtra, Goa and Karnataka, respectively, in western India. Goa state is located on the western coast of India, and mangrove vegetation in Goa occupies 500 ha (Government of India, 1997). The Cumbarjua canal (15 km) links the two river channels of the Mandovi and Zuari, forming an estuarine complex that supports a substantial extent of mangroves. D'Souza and Rodrigues (2013) reported the presence of 17 mangrove species in Goa, comprising 14 true and 3 associated mangrove species.
DNA barcoding is an effective tool currently in use that enables rapid and accurate identification of plants (Li et al. 2015). The Consortium for the Barcode of Life (CBOL) recommended rbcL + matK as the core barcode. These core barcodes are often further combined with the psbA-trnH intergenic non-coding spacer region, which improves the discriminatory power of the core barcode. The non-coding intergenic region psbA-trnH exhibits high rates of insertion/deletion and sequence divergence (Kress and Erickson 2007). These features make trnH-psbA a highly suitable candidate plant barcode for species resolution. Later, the nuclear ribosomal internal transcribed spacer (ITS) region was considered as a supplementary barcode, and the China Plant Barcode of Life group claimed that the ITS region had higher discriminatory power than the plastid core barcodes (CBOL Plant Working Group 2009; Hollingsworth et al. 2011; China Plant BOL Group 2011). Hollingsworth et al. (2011) observed that the ITS region has some limitations that prevent it from being a core barcode, such as incomplete concerted evolution, fungal contamination, and difficulties of amplification and sequencing. The plastid gene for the large subunit of ribulose-bisphosphate carboxylase (rbcL) is 1350 bp in length and a common choice for DNA barcoding (Chase 1993). The maturase gene matK is about 1500 bp long and is located within the trnK gene encoding tRNA-Lys (UUU). The substitution rate of the matK gene is the highest among the plastid genes (Hilu et al. 2003). The plastid gene matK can discriminate more than 90 % of species in the Orchidaceae but less than 49 % in the nutmeg family (Kress and Erickson 2007; Newmaster et al. 2008). In another case, identification of 92 species from 32 genera using the matK barcode achieved a success rate of 56 %.
However, a recent study of the flora of Canada revealed 93 % success in species identification with rbcL and matK, while the addition of the trnH-psbA intergenic spacer increased discrimination to 95 % (Burgess et al. 2011). Gonzalez et al. (2009) reported that species discrimination was lower (<50 %) for the rbcL + matK combination in a study of tropical tree species in French Guiana. Lower discrimination was also reported in closely related and complex taxa of Lysimachia, Ficus, Holcoglossum and Curcuma using rbcL and matK (Xiang et al. 2011; Zhang et al. 2012; Li et al. 2012; Chen et al. 2015). The lowest discriminatory power was observed in closely related groups of Lysimachia with rbcL (26.5-38.1 %), followed by matK (55.9-60.8 %); the combination of the core barcodes (rbcL + matK) had a discrimination of 47.1-60.8 % (Zhang et al. 2012). Delineating mangrove species from putative hybrids using morphological characters is always questionable. Putative hybrids have been reported within the major genera Rhizophora, Sonneratia and Lumnitzera, and recently in Bruguiera (Tomlinson 1986; Duke and Ge 2011). In the present study, we assessed mangrove species using the plastid coding loci rbcL and matK. The mangroves of Goa are rich in diversity, accounting for 14 species belonging to four orders and five families. This is our first step towards DNA barcoding of mangroves based on plastid genes. Our study may be helpful for identification as well as for developing strategies for mangrove conservation.
Sample collection
In the present study, leaf samples of 14 mangrove species were collected from Goa, located on the west coast of India at latitude 15.5256°N and longitude 73.8753°E. Mangrove species identification was performed based on morphological characteristics using comparative guides to the Asian mangroves and the mangroves of Goa (Yong and Sheue 2014; Dhargalkar et al. 2014; Setyawan et al. 2014). Herbarium specimens were deposited at the Botanical Survey of India, Western Regional Centre, Pune, India. The morphology-based identification keys used to authenticate the taxon identities of the 14 mangrove species from Goa are listed in the supplementary information (Additional file 1: Table S1). The well-identified voucher specimens, along with their taxonomic information and collection details, are listed (Table 1), with their photographs in the supplementary information (Additional file 1: Fig. S1). The sequences obtained using the barcode markers rbcL and matK were submitted to the NCBI GenBank (accession numbers indicated in Table 1) and are publicly accessible through the dataset of the project DNA Barcoding of Indian Mangroves (project code: IMDB) in the Barcode of Life Data Systems (BOLD) (doi:10.5883/DS-IMDBNG) (Ratnasingham and Hebert 2007).
DNA extraction
The high content of mucilage, latex, phenolics, secondary metabolites and polysaccharides in these plants makes protein and nucleic acid isolation from mangroves difficult. The cetyl-trimethyl ammonium bromide (CTAB) protocol for DNA extraction from mangroves (Parani et al. 1997a) was modified. Leaf tissue was pulverized in liquid nitrogen, and the pulverized leaf sample (0.2 g) was mixed with CTAB buffer (20 mM EDTA; 1.4 M NaCl; 2 % PVP-30; 1 % β-mercaptoethanol; 10 % SDS and 10 mg/ml proteinase K). The suspension was incubated at 60 °C for 60 min with gentle mixing and centrifuged at 14,000 rpm for 10 min at room temperature with an equal volume of chloroform:isoamyl alcohol (24:1).
The aqueous phase was transferred to a new tube, and DNA was precipitated with 0.6 volume of cold isopropanol (−20 °C) and chilled 7.5 M ammonium acetate, followed by storage at −20 °C for 1 h. The precipitated DNA was centrifuged at 14,000 rpm for 10 min at 4 °C, followed by washing with 70 % ethanol. DNA was finally dissolved in TE buffer (10 mM Tris-HCl, 1 mM Na2EDTA, pH 8.0), and its quantity and quality were confirmed by agarose gel electrophoresis and NanoDrop (Thermo Scientific, USA).
PCR and sequencing
Amplification of the plastid genes (rbcL and matK) was carried out in a 50-μl reaction mixture containing 10-20 ng of template DNA, 200 μM of dNTPs, 0.1 μM of each primer and 1 unit of Taq DNA polymerase (Thermo Scientific, USA). The reaction mixture was amplified in a Bio-Rad (T100 model) thermal cycler with the following temperature profiles: for rbcL, 94 °C for 4 min; 35 cycles of 94 °C for 30 s, 55 °C for 30 s, 72 °C for 1 min; and a final extension at 72 °C for 10 min; for matK, 94 °C for 1 min; 35-37 cycles of 94 °C for 30 s, 50 °C for 40 s, 72 °C for 40 s; and a final extension at 72 °C for 5 min. The amplified products were separated by agarose gel (1.2 %) electrophoresis and stained with ethidium bromide (Sambrook et al. 1989). Universal primers for rbcL (rbcLa_F and rbcLa_R) and matK (matK_390f and matK_1326r) were used for amplification (Kress and Erickson 2007; Vinitha et al. 2014; Chen et al. 2015). To amplify the R. apiculata matK locus, we designed the matK_RA reverse primer as follows: 5′-AAAGTTCGTTTGTGCCAATGA-3′. PCR products were purified according to the manufacturer's instructions (Chromous Biotech), and sequencing reactions were carried out using the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems) and analyzed on an ABI 3500xL Genetic Analyzer (Applied Biosystems).
Data analysis
Sequence alignment and assembly were carried out in CodonCode Aligner v.3.0.1 (CodonCode Corporation) and MEGA 6 (Tamura et al. 2013). NCBI BLAST was performed to confirm the identity of specimens (Altschul et al. 1990). All known mangrove sequences were compared with our sequenced samples using the 'BLASTn' tool against the NCBI database, and the highest-scoring hit for each query was taken as the mangrove identification. Intraspecific, interspecific and barcode gap analyses were performed at the Barcode of Life Data Systems web portal. Further, rbcL and matK sequences were concatenated using DnaSP v5.10 and analyzed in MEGA 6 for resolution inference (Rozas, 2009). The effectiveness of the analysed barcodes rbcL, matK and rbcL + matK was evaluated using TaxonDNA v1.6.2 / Species Identifier 1.8 (Meier et al. 2006) and BLASTClust (http://toolkit.tuebingen.mpg.de/blastclust). Neighbor-joining (NJ) trees were constructed using MEGA 6.0 and the K2P genetic distance model, and node support was assessed based on 1000 bootstrap replicates. Species with multiple individuals forming a monophyletic clade in the phylogenetic trees with a bootstrap value above 60 % were considered successfully identified.
DNA barcode and sequence analysis
Mangroves belonging to 14 species, 9 genera and 5 families were collected. We acquired high-quality DNA barcodes for 45 specimens belonging to 14 species, which were sequenced for rbcL and matK (Table 1). The specimens were verified from the sequence data by performing NCBI BLAST. This served as a preliminary verification for all mangroves at the species level, but a downside in our case is the limited reference data available for comparison.
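Since the distance calculations above (the BOLD analyses, TaxonDNA and the NJ trees) all rest on the Kimura 2-parameter (K2P) model, a minimal sketch of that distance may help the reader. This is an illustration, not the authors' code, and it assumes pre-aligned sequences of equal length.

```python
# Minimal sketch of the Kimura 2-parameter (K2P) distance between two
# aligned DNA sequences. Gaps and ambiguity codes are skipped.
import math

def k2p_distance(seq1: str, seq2: str) -> float:
    """K2P distance: d = -0.5*ln(1 - 2P - Q) - 0.25*ln(1 - 2Q),
    where P and Q are the transition and transversion proportions."""
    purines, pyrimidines = {"A", "G"}, {"C", "T"}
    transitions = transversions = sites = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a in "ACGT" and b in "ACGT":   # ignore gaps/ambiguities
            sites += 1
            if a != b:
                if {a, b} <= purines or {a, b} <= pyrimidines:
                    transitions += 1      # A<->G or C<->T
                else:
                    transversions += 1    # purine <-> pyrimidine
    p, q = transitions / sites, transversions / sites
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)

# Toy example with two short aligned fragments (two transitions, no
# transversions): prints about 0.2027.
print(round(k2p_distance("ATGGCCATTGTA", "ATGGCTATCGTA"), 4))
```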
The rbcL and matK markers correctly identified genera 100 % of the time, while species identification with rbcL and matK reached 64 % and 85 %, respectively.
Intraspecific and interspecific relationships
Barcoding of the mangroves exhibited an absolute average interspecific divergence of 0.35 % for rbcL and 0.9 % for matK, while the average intraspecific variability was 0.24 % for rbcL and 0.20 % for matK (Table 2), with low species resolution in a few taxa. The intraspecific and interspecific analyses revealed a largest average pairwise distance of 0.68 for rbcL, while for matK the corresponding values were 2.05 and 2.32, respectively. The highest range of congeneric differentiation, 0 to 0.68, was observed with rbcL in Bruguiera and Avicennia, whereas for matK it ranged from 1.29 to 2.31 in Avicennia, further suggesting significant genetic divergence within Avicennia.
Barcode gap analysis
The barcode gap analysis revealed a high intraspecific distance (>2 %) in 9 specimens for rbcL and 6 specimens for matK, and a low intraspecific distance (<2 %) in 11 specimens for rbcL and 9 specimens for matK. A low divergence (<2 %) between a specimen and its nearest neighbor implies low species resolution and thus species overlap. With rbcL, the largest nearest-neighbor distance of 8.43 was observed in Avicennia alba, with a mean intraspecific distance of 0.11 (Fig. 1a). The maximum intraspecific distance of 0.68 was observed within three individuals each of Kandelia candel, Bruguiera gymnorrhiza, A. officinalis and Sonneratia caseolaris (Fig. 1b). With matK, a maximum intraspecific distance of 2.05 was observed in Excoecaria agallocha with three individuals per species (Fig. 1d), while the largest distance to the nearest neighbor, 24.65, was observed in A. officinalis, with a mean intraspecific distance of 0.12 (Fig. 1c). The overall average nearest-neighbor divergence observed among the mangroves was 1.39 % (S.E. = 0.17) for rbcL and 4.07 % (S.E. = 0.5) for matK (Fig. 1a).
Species identification and assignment
The species were assigned to their taxa based on three methods: a similarity-based method using TaxonDNA, BLAST-score-based single linkage (BLASTClust), and a tree-based method (NJ). To assess species assignment with the single regions and the multi-region combination, we used the 'Best Match' (BM) and 'Best Closest Match' (BCM) criteria from TaxonDNA. For the TaxonDNA analysis, a threshold (T) must be set, below which 95 % of all intraspecific distances fall. All matches above the threshold (T) were treated as 'incorrect'. If all matches of the query sequence were below the threshold (T), the barcode assignment was considered a correct identification. If the matches of the query sequence were equally good but corresponded to a mixture of species, the assignment was treated as ambiguous (Table 3). The species-specific clustering using match and mismatch criteria was evaluated in TaxonDNA and BLASTClust, where sequences with the highest similarity and identity were considered successfully identified. Species with a barcode sequence identical to an individual of another species were considered ambiguous, and sequences matching different species names were treated as failed identifications. Species with a single sample and a unique sequence were considered potentially distinguishable. The BLASTClust analysis yielded slightly different results from TaxonDNA, with lower rates of species resolution and cluster formation (Table 4).
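The 'Best Match'/'Best Closest Match' logic just described can be summarized in a few lines. The sketch below is a paraphrase of TaxonDNA's criteria as stated in the text, not its source code; `dist` and `label` are hypothetical helpers supplying pairwise distances (e.g., K2P) and the morphology-based species names.

```python
# Sketch of TaxonDNA-style 'Best Match' (BM) / 'Best Closest Match' (BCM)
# classification for one query specimen against a reference set.
from typing import Callable, Dict, List, Optional

def best_match(query: str, refs: List[str],
               dist: Callable[[str, str], float],
               label: Dict[str, str],
               threshold: Optional[float] = None) -> str:
    """Return 'correct', 'incorrect', or 'ambiguous' for one query."""
    d_min = min(dist(query, r) for r in refs)
    # BCM only: matches beyond the threshold T (below which 95 % of all
    # intraspecific distances fall) are scored as 'incorrect' per the text.
    if threshold is not None and d_min > threshold:
        return "incorrect"
    # Species names of all equally good nearest matches.
    hits = {label[r] for r in refs if dist(query, r) == d_min}
    if len(hits) > 1:
        return "ambiguous"  # equally good matches from a mixture of species
    return "correct" if hits == {label[query]} else "incorrect"

# Example usage with a precomputed distance matrix D (dict of dicts):
# verdict = best_match("GoaRA01", refs, lambda a, b: D[a][b], label, 0.02)
```

Omitting `threshold` gives the plain BM criterion; supplying it gives BCM.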
Species with multiple individuals forming a monophyletic clade in the NJ trees with a bootstrap value above 60 % were considered successfully identified (Kress et al. 2010). matK and rbcL + matK discriminated mangrove species in the NJ model tests, while rbcL alone failed to identify those species (Fig. 2a-c). Further analysis revealed similar rates of species resolution using both methods for matK as well as rbcL (Table 5). Species in the genera Rhizophora, Sonneratia and Avicennia could not be discriminated using the plastid markers rbcL, matK and rbcL + matK.
Discussion
To the best of our knowledge, the current study is the first attempt at a DNA barcoding-based assessment of mangroves from Goa using the plastid core markers rbcL and matK. A few reports on the molecular taxonomy and phylogeny of Indian mangroves are available, using nuclear, mitochondrial and plastid markers (ITS, rbcL, RFLP, RAPD, PCR-RAPD and AFLP) (Parani et al. 1997a, b; Lakshmi et al. 1997, 2000; Setoguchi et al. 1999; Schwarzbach and Ricklefs 2000). Besides these, there are many reports of mangrove identification based on morphological characters (Untawale 1985; Tomlinson 1986; Untawale and Jagtap 1992). The present study demonstrated discrimination of mangroves by DNA barcoding at the species level, excluding some taxa (Rhizophora, Sonneratia and Avicennia). The highest rate of PCR amplification and sequencing was observed for rbcL (97.7 %), while the amplification and sequencing rate for matK was 95.5 %. Similarly, the highest identification success rate was observed with matK (80.5 %) in the local temperate flora of Canada, and the combination rbcL + matK identified 93 % of that flora (Burgess et al. 2011). The sequence recovery rate for rbcL is generally high, ranging from 90 to 100 % (Little and Stevenson 2007; Ross et al. 2008; CBOL Plant Working Group 2009), whereas matK has shown difficulties in PCR amplification and sequencing. Fazekas et al. (2008) showed that matK markers can reach 88 % sequencing success with the use of 10 primer pair combinations; similar difficulties with matK amplification and sequencing have been noted elsewhere (Ford et al. 2009; Gonzalez et al. 2009; Kress et al. 2010; Hollingsworth et al. 2011). In contrast, CBOL reported that a single pair of matK primers successfully amplified and sequenced 84 % of angiosperm species (CBOL Plant Working Group, 2009). We faced many hindrances in the amplification and sequencing of the Rhizophora species R. apiculata using the universal matK primers. R. apiculata was amplified and sequenced using the universal rbcL marker, but for matK amplification we designed a reverse primer. A possible explanation for the trouble is that secondary metabolites hindered amplification of the target gene, or that the universal primers failed to anneal. Initially, species identification was performed by NCBI BLAST using the rbcL and matK sequence data; BLAST can yield accurate identification results (Hollingsworth et al. 2009; Kress et al. 2010; Kuzmina et al. 2012). In our analysis, however, BLAST showed the least efficacy in species-level identification.
[Table 3 caption: Identification success rates using the TaxonDNA (Species Identifier) program under the 'Best Match' and 'Best Closest Match' methods. TaxonDNA is an alignment-based method built on sequence distance matrices; the percentage of correct/incorrect/ambiguous assignments of a taxon is compared using molecular operational taxonomic units (MOTU), with species-specific clustering evaluated by match and mismatch criteria.]
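For the tree-based criterion, the study used MEGA 6 (K2P distances, 1000 bootstrap replicates). As a hedged illustration of the same workflow rather than the authors' procedure, the Biopython sketch below builds an NJ tree and scores bootstrap support; the input file name is hypothetical, Biopython's DistanceCalculator lacks a K2P model (an identity-based distance stands in), and only 100 replicates are drawn to keep the example fast.

```python
# Sketch: NJ tree with bootstrap support, mirroring the >=60 % support rule
# used to count a species (all its individuals in one clade) as identified.
from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_trees, get_support

# Hypothetical pre-aligned FASTA of the concatenated rbcL + matK barcodes.
alignment = AlignIO.read("mangrove_barcodes_aligned.fasta", "fasta")
constructor = DistanceTreeConstructor(DistanceCalculator("identity"), "nj")

target_tree = constructor.build_tree(alignment)                  # NJ tree to score
replicates = list(bootstrap_trees(alignment, 100, constructor))  # resampled trees
tree_with_support = get_support(target_tree, replicates)

# Print internal clades whose bootstrap support exceeds 60 %.
for clade in tree_with_support.get_nonterminals():
    if clade.confidence is not None and clade.confidence > 60:
        print(clade.name, round(clade.confidence, 1))
```

Checking that the individuals of each species fall inside one well-supported clade would then reproduce the success/failure tallies reported in Table 5.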
BLAST has been used for verification purposes in recent years and in comparisons based on test datasets (Ford et al. 2009). Parmentier et al. (2013) reported that species assignment using the BLAST method was reliable for genus identification of African rainforest trees (95-100 % success) but less so for species identification (71-88 %). It sometimes gave erroneous identifications, most often due to the limited number of available reference sequences. In the present study, BLAST with default parameters and rbcL successfully identified genera (100 %), and the species identification rate was 64.28 % for the 14 mangrove species. matK identified genera (100 %) and species with up to 85.71 % success. A possible reason for the erroneous assignment of some taxa at the species level is the limited number of sequences available in the BOLD and GenBank databases (Parmentier et al. 2013). Our results underscore the value of the BLAST method for assigning correct mangrove genus identifications (with rbcL and matK). Both Sonneratia alba and Avicennia marina were incorrectly identified at the species level using rbcL and matK. Some mangrove species, viz. R. apiculata, B. cylindrica and A. alba, were misidentified at the species level using rbcL. Species identification and taxon assignment were evaluated using TaxonDNA and BLASTClust for rbcL, matK and rbcL + matK. Overall, the matK marker performed well at the species and genus levels (Tables 3, 4). In contrast to matK, rbcL alone performed poorly in species-level identification. Combined, the rbcL + matK markers showed better performance in species- and genus-level identification (Tables 3, 4, 5). Accordingly, the plant CBOL group (2009) reported only 72 % species-level resolution using combined rbcL and matK, and a similar result for species-level resolution with combined rbcL and matK has been observed elsewhere. The lowest resolution was recorded in closely related groups of Lysimachia with the combination of the rbcL and matK universal markers (Zhang et al. 2012). However, the identification rates based on the TaxonDNA and phylogenetic tree methods (Tables 3, 5) were notably better with matK than with rbcL. Low resolution using DNA barcoding regions has been documented in many other plants, such as the genera Araucaria (32 %), Solidago (17 %) and Quercus (0 %) (Little and Stevenson 2007; Leon-Romero et al. 2012). In the TaxonDNA analysis, the threshold (T) for rbcL was observed to be 0 %; a similar result was recorded for rbcL in the Zingiberaceae family. The thresholds (T) for Indian members of the Zingiberaceae were recorded as 0.20 % for rbcL and 0 % for rpoB and accD (Vinitha et al. 2014). In BLASTClust, the rbcL and matK regions showed similar identification rates, while concatenation of these regions increased the efficiency of species resolution as well as cluster formation (Gonzalez et al. 2009; Blaalid et al. 2013). For the closest mangrove taxa, viz. Avicennia, Rhizophora and Sonneratia species, there is a need to explore new DNA barcode markers, which may lead to species-level resolution.
[Table 5 caption: Identification achieved by phylogenetic analysis using Neighbor-Joining (NJ) under various substitution models obtained from model tests; for each, bootstrap replicates = 1000. Model abbreviations: K2 + G, Kimura 2-parameter + Gamma distribution; GTR + I, General Time Reversible + proportion of invariable sites (I); T92 + I, Tamura 1992 model + proportion of invariable sites (I).]
2018-04-03T06:05:48.739Z
2016-09-13T00:00:00.000
{ "year": 2016, "sha1": "78da9e168bfc9f41978b8ee1bafe52e13d6b2653", "oa_license": "CCBY", "oa_url": "https://springerplus.springeropen.com/track/pdf/10.1186/s40064-016-3191-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "78da9e168bfc9f41978b8ee1bafe52e13d6b2653", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }