Nanotubes mediate niche-stem cell signaling in the Drosophila testis

Stem cell niches provide resident stem cells with signals that specify their identity. Niche signals act over a short range such that only stem cells, but not their differentiating progeny, receive the self-renewing signals [1]. However, the cellular mechanisms that limit niche signalling to stem cells remain poorly understood. Here we show that Drosophila male germline stem cells (GSCs) form previously unrecognized structures, microtubule-based (MT-) nanotubes, which extend into the hub, a major niche component. MT-nanotubes are observed specifically within GSC populations, and require IFT (intraflagellar transport) proteins for their formation. The BMP receptor Tkv localizes to MT-nanotubes. Perturbation of MT-nanotubes compromises activation of Dpp signalling within GSCs, leading to GSC loss. Moreover, Dpp ligand-Tkv receptor interaction is necessary and sufficient for MT-nanotube formation. We propose that MT-nanotubes provide a novel mechanism for selective receptor-ligand interaction, contributing to the short-range nature of niche-stem cell signalling.

[Figure 1 caption, partial: MT-nanotubes are indicated by arrowheads; a graphic interpretation is shown in the right-hand panel. c, Orientation of nanotubes towards the hub in GSCs versus gonialblasts/spermatogonia. The size of each vector represents the frequency of MT-nanotubes oriented in each direction; the indicated numbers of nanotubes (n > 30 testes) were scored from three independent experiments. d-g, Three-dimensional rendering images of MT-nanotubes (brackets) in fixed (d) or live tissue (e-g), with the indicated cell membrane markers; nos-gal4>GFP-αTub was used for d and e.]

By contrast, gonialblasts/spermatogonia showed only 0.44 MT-nanotubes per testis (or <0.002 per cell, n = 75 testes), without any particular orientation when present (Fig. 1c).
MT-nanotubes were sensitive to colcemid, a microtubule-depolymerizing drug, but not to the actin polymerization inhibitor cytochalasin B, suggesting that MT-nanotubes are microtubule-based structures (Extended Data Fig. 1a-d, f). MT-nanotubes were not observed in mitotic GSCs (Extended Data Fig. 1e, g), and GSCs form new MT-nanotubes as they exit from mitosis (Extended Data Fig. 1h and Supplementary Video 1). By contrast, MT-nanotubes in interphase GSCs were stably maintained for up to 1 h of time-lapse live imaging (Supplementary Video 2). Although the cell-cycle-dependent formation of MT-nanotubes resembles that of primary cilia [9,10], MT-nanotubes are distinct structures, in that they lack acetylated microtubules and are sensitive to fixation. Furthermore, a considerable fraction of GSCs form multiple MT-nanotubes per cell (54% of GSCs with MT-nanotubes, n = 251 GSCs), and MT-nanotubes are not always associated with the centrosome/basal body, as is the case for primary cilia (Extended Data Fig. 1i). To examine the geometric relationship between MT-nanotubes and hub cells further, we imaged MT-nanotubes in combination with various cell membrane markers, followed by three-dimensional rendering. Although MT-nanotubes are best visualized in unfixed testes that express GFP-αTub in germ cells, adding a low concentration (1 mM) of taxol to the fixative preserves MT-nanotubes, allowing immunofluorescence staining. First, Armadillo (Arm, β-catenin) staining, which marks adherens junctions formed at hub cell/hub cell as well as hub cell/GSC boundaries, revealed that adherens junctions do not form on the surface of MT-nanotubes (Fig. 1d and Supplementary Video 3). Using FM4-64 styryl dye, we found that MT-nanotubes are ensheathed by membrane lipids (Fig. 1e and Supplementary Videos 4 and 5). Furthermore, myristoylation/palmitoylation-site GFP (myrGFP), a membrane marker, expressed in either the germline (Fig. 1f) or hub cells (Fig.
1g) illuminated MT-nanotubes, suggesting that the surface membrane of an MT-nanotube is juxtaposed to the hub-cell plasma membrane. We examined genes that regulate primary cilia and cytonemes for their possible involvement in MT-nanotube formation (Fig. 2a). RNA interference (RNAi)-mediated knockdown of oseg2 (IFT172), osm6 (IFT52) and che-13 (IFT57), components of the intraflagellar transport (IFT)-B complex that are required for primary cilium anterograde transport and assembly [11], significantly reduced the length and frequency of MT-nanotubes (Fig. 2a, Extended Data Fig. 2b and Extended Data Table 1). Knockdown of Dlic, a dynein light intermediate chain required for retrograde transport in primary cilia [12], also reduced MT-nanotube length and frequency (Fig. 2a and Extended Data Table 1). Knockdown of klp10A, a Drosophila homologue of mammalian kif24 (an MT-depolymerizing kinesin of the kinesin-13 family, which suppresses precocious cilia formation [13]), resulted in abnormally thick/bulged MT-nanotubes (Fig. 2a, Extended Data Fig. 2c and Extended Data Table 1). We did not observe significant changes in MT-nanotube morphology upon knockdown of IFT-A retrograde transport genes, such as oseg1 and oseg3 (Fig. 2a and Extended Data Table 1). Endogenous Klp10A localized to MT-nanotubes both in wild-type testes and in GFP-αTub-expressing testes (Fig. 2b and Extended Data Fig. 2d, e). GFP-Oseg2 (IFT-B), GFP-Oseg1, GFP-Oseg3 (IFT-A) and Dlic also localized to MT-nanotubes when expressed in germ cells (Fig. 2c and Extended Data Fig. 2f-i). The localization of IFT-A components to MT-nanotubes, without detectable morphological abnormality upon mutation/knockdown, is reminiscent of the observation that most of the IFT-A genes are not required for primary cilia assembly [14-17].
Expression of a dominant negative form of Dia (Dia^DN) or a temperature-sensitive form of Shi (Shi^ts) in germ cells (nos-gal4>UAS-dia^DN or UAS-shi^ts), which perturbs cytoneme formation [18], did not influence the morphology or frequency of MT-nanotubes in GSCs (Fig. 2a and Extended Data Table 1). Taken together, these results show that primary cilia proteins localize to MT-nanotubes and regulate their formation. In search of the possible involvement of MT-nanotubes in hub-GSC signalling, we found that the Dpp receptor, Thickveins (Tkv), expressed in germ cells (nos-gal4>tkv-GFP), was observed within the hub region (Extended Data Fig. 3a), in contrast to GFP alone, which remained within the germ cells (Extended Data Fig. 3b). A GFP protein trap of Tkv (in which GFP tags Tkv at the endogenous locus) also showed the same localization pattern as Tkv-GFP expressed by nos-gal4 (Extended Data Fig. 3c). By inducing GSC clones that co-express Tkv-mCherry and GFP-αTub, we found that Tkv-mCherry localizes along MT-nanotubes as puncta (Fig. 3a). Furthermore, in live observation, Tkv-mCherry puncta were observed to move along MT-nanotubes marked with GFP-αTub (Extended Data Fig. 3d), suggesting that Tkv is transported towards the hub along the MT-nanotubes. It should be noted that, in the course of our study, we noticed that mCherry itself localized to the hub when expressed in germ cells, similar to Tkv-GFP and Tkv-mCherry (see Extended Data Fig. 3e, f and Supplementary Note 1). Importantly, the receptor for Upd, Domeless (Dome), predominantly stayed in the cell body of GSCs (Extended Data Fig. 3g), demonstrating the specificity/selectivity of MT-nanotubes in trafficking specific components of the niche signalling pathways. A reporter of ligand-bound Tkv, TIPF [19], localized to the hub region together with Tkv-mCherry (Fig. 3b), in addition to its reported localization at the hub-GSC interface [19].
Furthermore, Dpp-GFP expressed by hub cells co-localized with Tkv-mCherry expressed in the germline (Fig. 3c, dpp-lexA^ts>dpp-GFP, nos-gal4^ts>tkv-mCherry). These results suggest that ligand (Dpp)-receptor (Tkv) engagement and activation occur at the interface of the MT-nanotube surface and the hub-cell plasma membrane. Knockdown of IFT-B components (oseg2^RNAi, che-13^RNAi or osm6^RNAi), which reduces MT-nanotube formation, resulted in a reduction of the number of Tkv-GFP puncta in the hub area, concomitant with increased membrane localization of Tkv-GFP (Fig. 3d, f, g). Conversely, knockdown of klp10A, which causes thickening of MT-nanotubes, led to an increase in the number of Tkv-GFP puncta in the hub area (Fig. 3d, e, g). Taken together, these data suggest that Tkv is trafficked into the hub via MT-nanotubes, where it interacts with Dpp secreted from the hub. Knockdown of klp10A (klp10A^RNAi) led to elevated phosphorylated Mad (pMad) levels, a readout of Dpp pathway activation, in GSCs (Fig. 4a, b, d and Supplementary Note 2). By contrast, RNAi-mediated knockdown of oseg2, osm6 and che-13 (IFT-B components), which causes shortening of MT-nanotubes, reduced the levels of pMad in GSCs (Fig. 4c, d). Dad-LacZ, another readout of Dpp signalling activation, exhibited clear upregulation upon knockdown of klp10A (Extended Data Fig. 4a, b). GSC clones of che-13^RNAi, osm6^RNAi or oseg2^452 were lost rapidly compared with control clones (Fig. 4e, f), consistent with the idea that MT-nanotubes help to promote Dpp signal transduction [3,4]. Knockdown of oseg2, che-13 and osm6 did not visibly affect cytoplasmic microtubules (Extended Data Fig. 4d-g), suggesting that the GSC maintenance defects upon knockdown of these genes are probably mediated by their role in MT-nanotube formation.
Global RNAi knockdown of these genes in all GSCs using nos-gal4 did not cause a significant decrease in GSC numbers (data not shown), indicating that compromised Dpp signalling due to MT-nanotube reduction leads to a competitive disadvantage in GSC maintenance only when the cells are surrounded by wild-type GSCs. When klp10A^RNAi GSC clones were induced, pMad levels specifically increased in those GSC clones, indicating that Klp10A acts cell-autonomously in GSCs to influence Dpp signal transduction (Fig. 4g). Importantly, klp10A^RNAi spermatogonia (Fig. 4g, yellow line) did not show a significant elevation in pMad level compared with control spermatogonia (Fig. 4g, pink line), demonstrating that the role of Klp10A in regulation of the Dpp pathway is specific to GSCs. pMad levels did not change in spermatogonia upon manipulation of MT-nanotube formation (Extended Data Fig. 4c). GSC clones of klp10A^RNAi or a klp10A null mutant (klp10A^24) did not dominate in the niche, despite upregulation of pMad (Extended Data Fig. 5), possibly because of the known role of Klp10A in mitosis [20]. Importantly, these conditions did not significantly change STAT92E levels, which reflect Upd-JAK-STAT signalling in GSCs [2,21], revealing the selective requirement of MT-nanotubes in Dpp signalling (Extended Data Fig. 6). Together, these results demonstrate that MT-nanotubes specifically promote Dpp signalling and that their role in enhancing the Dpp pathway is GSC specific. Since cytonemes are induced/stabilized by the signalling molecules themselves [18], we explored the possible involvement of Dpp in MT-nanotube formation. First, we found that a temperature-sensitive dpp mutant (dpp^hr56/dpp^hr4) exhibited a dramatic decrease in the frequency of MT-nanotubes (0.067 MT-nanotubes per GSC, n = 244 GSCs), and the remaining MT-nanotubes were significantly thinner (Fig. 5a, b, Extended Data Fig. 7a, b and Extended Data Table 1).
Knockdown of tkv (tkv^RNAi) in GSCs also resulted in reduced length and frequency of MT-nanotubes (Fig. 5a, b, Extended Data Fig. 7c and Extended Data Table 1).

[Figure 3 caption, partial: d-f, Tkv-GFP expressed in control (d), klp10A^RNAi (e) and oseg2^RNAi (f) germ cells (nos-gal4^ts>UAS-tkv-GFP, UAS-RNAi). The black and white of the micrographs were inverted for better visibility of Tkv localization to the hub and plasma membrane. g, Average number and standard deviation of Tkv-GFP puncta within the hub area per testis for the indicated genotypes; n = 15 testes from at least two independent crosses were scored. P values from t-tests are provided as *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001. Scale bar, 10 µm.]

[Figure 4 caption, partial: ... (Extended Data Table 1) from at least two independent experiments were scored for each data point. g, A klp10A^RNAi GSC clone (72 h after clone induction, blue circle) with a higher pMad level compared with control GSCs (white circle); a klp10A^RNAi spermatogonial clone (yellow circle) and a control spermatogonial clone (pink circle) have similar pMad levels. Asterisk indicates hub. Scale bar, 10 µm. Average values and standard deviations are shown in each graph. P values from t-tests are provided as *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001.]

Conversely, overexpression of Tkv (tkv^OE) [22] in germ cells led to significantly longer MT-nanotubes (Fig. 5a, b, Extended Data Fig. 7d and Extended Data Table 1). Interestingly, expression of a dominant negative Tkv (tkv^DN), which has an intact ligand-binding domain but lacks the intracellular GS domain and kinase domain, resulted in thickening of MT-nanotubes rather than reducing their thickness/length (Fig. 5a, b and Extended Data Table 1). This indicates that ligand-receptor interaction, but not downstream signalling events, is sufficient to induce MT-nanotube formation. Strikingly, upon ectopic expression of Dpp in somatic cyst cells (tj-lexA>dpp), spermatogonia/spermatocytes were observed to have numerous MT-nanotubes (Fig. 5c, d and Extended Data Fig.
7e), suggesting that Dpp is necessary and sufficient to induce or stabilize MT-nanotubes in the neighbouring germ cells. In turn, MT-nanotubes may promote selective ligand-receptor interaction between the hub and GSCs, leading to spatially confined self-renewal (Fig. 5e). Our study shows that previously unrecognized structures, MT-nanotubes, extend into the hub to mediate Dpp signalling. We propose that MT-nanotubes form a specialized cell-surface area where productive ligand-receptor interaction occurs. In this manner, only GSCs can access the source of the highest ligand concentration in the niche via MT-nanotubes, whereas gonialblasts do not experience the threshold of signal transduction necessary for self-renewal, contributing to the short-range nature of niche signalling. In summary, the results reported here illuminate a novel mechanism by which the niche specifies stem cell identity in a highly selective manner.

RNAi screening of candidate genes for MT-nanotube morphology/function was performed by driving UAS-RNAi constructs under the control of nos-gal4 (see below for the validation method). Control crosses for RNAi screening were designed with matching gal4 and UAS copy number using TRiP background stocks (Bloomington Stock Center BDSC36304 or BDSC35787). Expression of Dpp under the dpp-lexA (LHG) driver or tj-lexA driver (Bloomington Stock Center BDSC54786) with tub-gal80^ts (denoted as dpp-lexA^ts or tj-lexA^ts, respectively) was performed by culturing flies at 18 °C to avoid lethality during development, then shifting to 29 °C upon eclosion for 24 h before analysis. For shi^ts expression, nos-gal4>UAS-shi^ts flies cultured at 18 °C were shifted to 29 °C upon eclosion for 24 h before analysis. The dpp^hr56/CyO; nos-gal4, UAS-GFP-αtub females were crossed with dpp^hr4/SM6 males at the permissive temperature (18 °C) and shifted to the restrictive temperature (29 °C) upon eclosion for 24 h before analysis.
We used nos-gal4^ts (see below for transgene construction) to achieve temporal control of UAS-tkv-mCherry, to obtain expression levels similar to dpp-lexA^ts>LexAop-dpp-GFP for co-localization analysis and quantification of Tkv-GFP puncta. The temperature shift (29 °C) was performed upon eclosion for 72 h before analysis. Expression of UAS-dome-EGFP [28] (a gift from S. Noselli) was performed at 18 °C with nos-gal4 without VP16 (see below for transgene construction). Other fly crosses were performed at 25 °C. Control experiments were conducted with matching temperature-shift schemes. For clonal expression of Tkv-mCherry and GFP-αTub, hs-FLP; nos-FRT-stop-FRT-gal4, UAS-GFP [29] females were crossed with UAS-tkv-mCherry, UAS-GFP-αtub males, heat-shocked at 37 °C for 20 min and observed 24 h after heat shock. A strong tkv RNAi line (TRiP.HMS02185, Bloomington Stock Center BDSC40937) led to complete loss of germ cells, while a weak knockdown line (TRiP.GL01538, Bloomington Stock Center BDSC43194) partly maintained germ cells and was used for this study. Other stocks used in this study are listed in Extended Data Table 1. Amplified fragments were sequenced for validation and subcloned into the BglII/XhoI sites of pUAST-EGFP-attB [29] or the pUAST-VenusN-attB vector (containing the amino (N)-terminal half of Venus instead of GFP). The pUAST-VenusN-attB vector was constructed as follows. The N-terminal half of Venus cDNA was amplified using the primers XhoI (underlined)-RSIAT (linker peptide, lower case)-Venus-F, 5′-AACTCGAGagatccattgcgaccATGGTGAGCAAGGGCGA-3′, and KpnI (underlined)-Venus-R, 5′-TCGGTACCTTAGGTGATATAGACGTTGTGGCTGATGTAGT-3′, and subcloned into the XhoI/KpnI sites of the pUAST-attB vector [30]. Transgenic flies were generated using strain BDSC24749 by PhiC31 integrase-mediated transgenesis (BestGene). UAS-dlic-VN and UAS-klp10A-VN were used when GFP fluorescence was not necessary or was undesirable.
The UAS-klp10A^RNAi (double-stranded RNA HMS00920) target sequence is within the 5′ untranslated region (UTR) of the gene and is not present in the UAS-klp10A-VN construct; thus this transgene was used to rescue RNAi-induced phenotypes (Extended Data Table 1). For time-lapse live imaging, testes were placed on a drop of medium on a microscope slide with coverslip spacers on both edges, and another coverslip was placed on top. Time-course images of the areas around the hub were taken once every minute or every 5 min for 60 min using a Zeiss LSM700 confocal microscope with a 40× oil immersion objective (numerical aperture = 1.4). Four-dimensional data sets (x, y, z, t) were processed using ImageJ [32]. Immunofluorescent staining. Testes were dissected in PBS and fixed in 4% formaldehyde in PBS for 30 min. To preserve MT-nanotubes during fixation, taxol (1 mM) was added to the 4% formaldehyde/PBS solution. For anti-α-tubulin staining, testes were fixed in 90% methanol, 3% formaldehyde for 10 min at −80 °C. Fixed testes were briefly rinsed three times and permeabilized in PBST (PBS + 0.3% Triton X-100) at room temperature for 1 h, followed by incubation with primary antibody in 3% bovine serum albumin (BSA) in PBST at 4 °C overnight. Samples were washed three times for 20 min in PBST, incubated with secondary antibody in 3% BSA in PBST at room temperature for 2-4 h and then washed for 60 min (three times 20 min) in PBST. Samples were then mounted using VECTASHIELD with 4′,6-diamidino-2-phenylindole (DAPI). The primary antibodies used were as follows: mouse anti-γ-tubulin (1:500; GTU-88, Sigma), rabbit anti-β-galactosidase (1:500, Abcam), rabbit anti-Klp10A (1:2,000, a gift from D. Sharp [33]), rabbit anti-Ser453/Ser455-phosphorylated Mad (1:1,000, a gift from E. Laufer), rat anti-Vasa (1:20; developed by A. Spradling and D. Williams), mouse anti-Fasciclin III (7G10, 1:40, developed by C. Goodman), mouse anti-Armadillo (N2 7A1, 1:20, developed by E.
Wieschaus) and mouse anti-α-tubulin 4.3 (1:50; developed by C. Walsh), obtained from the Developmental Studies Hybridoma Bank. Guinea pig anti-STAT92E was generated using the synthetic peptide Ac-CSGTPHHAQESMQLGNGDFGMADFDTITNFENF-amide (Covance) and used at a dilution of 1:2,000. The STAT92E antibody was validated by immunofluorescent staining of nos-gal4^ts>stat92E^RNAi (Bloomington Stock Center, BDSC35600 and BDSC33637, data not shown). Guinea pig anti-Klp10A was generated as described previously [33] (Covance) and used at a dilution of 1:2,000. AlexaFluor-conjugated secondary antibodies were used at a dilution of 1:400. Images were taken using a Zeiss LSM700 confocal microscope with a 40× oil immersion objective (numerical aperture = 1.4), or a Leica TCS SP8 confocal microscope with a 63× oil-immersion objective (numerical aperture = 1.4), and processed using ImageJ [32] and Adobe Photoshop software. Three-dimensional rendering was performed with Imaris software. Mosaic analysis and clonal knockdown. The oseg2^452 homozygous clones were generated by FLP/FRT-mediated mitotic recombination [34]. Adult hs-flp, tub-gal4, UAS-GFP;; tub-gal80, FRT2A/oseg2^452, FRT2A males were heat-shocked at 37 °C for 1 h twice a day for 3 days; hs-flp, tub-gal4, UAS-GFP;; tub-gal80, FRT2A/FRT2A flies were used as controls. Testes were dissected at the indicated times after clone induction. The number of testes containing any GFP-positive oseg2^452 homozygous clones was determined. For RNAi clonal analysis, hs-flp; nos-FRT-stop-FRT-gal4, UAS-GFP [29] flies with UAS-che-13^RNAi, UAS-osm6^RNAi or UAS-klp10A^RNAi were heat-shocked at 37 °C for 30 min. Testes were dissected at the indicated times after clone induction. The percentage of GFP-positive GSCs was determined. The means ± s.d. from two independent experiments were plotted on the graph. Quantification of pMad and STAT92E intensities.
For pMad quantification, the integrated intensity within the GSC nuclear region was measured for anti-pMad staining and divided by the area. To normalize for staining conditions, the data were further normalized by the average pMad intensity of four cyst cells in the same testis, and relative intensities were calculated as each GSC per average cyst cell. For STAT92E quantification, the intensity within the GSC nuclear region was measured for anti-STAT92E staining and divided by the area. Data were normalized by DAPI signal intensities. The means ± s.d. were plotted on the graph for each genotype. Quantitative reverse transcription PCR to validate RNAi-mediated knockdown of genes. Double-driver females (nos-gal4 and c587-gal4) were crossed with males of the indicated RNAi lines. Testes from 50 male progeny, age 0-2 days, were collected and homogenized by pipetting in TRIzol Reagent (Invitrogen), and RNA was extracted following the manufacturer's instructions. One microgram of total RNA was reverse transcribed to cDNA using SuperScript III First-Strand Synthesis SuperMix (Invitrogen) with an Oligo(dT)20 primer. Quantitative PCR was performed, in triplicate, using SYBR Green Applied Biosystems Gene Expression Master Mix on a CFX96 Real-Time PCR Detection System (Bio-Rad). The control primer pair for atub84B (5′-TCAGACCTCGAAATCGTAGC-3′/5′-AGCAGTAGAGCTCCCAGCAG-3′) and experimental primer pairs for oseg1 (5′-TGATCATTCAGCACCTGATCTC-3′/5′-CGCCAGTCGATTCCGATAAA-3′), oseg2 (5′-TCTGAACGAGCGAGGAAATG-3′/5′-CCACTGGTCATCCTGCTAATC-3′), oseg3 (5′-ACTGGTTCTCGCAGGTAAAG-3′/5′-TAATGCCTCGCCAAGTGATAG-3′), osm6 (5′-CTTCCATCCCAAGGAGTGTATC-3′/5′-CTTCTCGTCACTGAAATCGTAGT-3′), che-13 (5′-GATGGAGCAGGAGCTGAAA-3′/5′-GGTCGGTGGTTTGGTTCT-3′) and tkv (5′-GCCACGTCTCATCAACTCAA-3′/5′-CTTTGCACCAGCAATGGTAATC-3′) were used. Relative quantification was performed using the comparative CT method (ABI manual). Statistical analysis and graphing. No statistical methods were used to predetermine sample size.
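The relative quantification mentioned in the qPCR methods above (comparative CT method) can be sketched as follows. This is a minimal illustration of the standard 2^(-ΔΔCT) calculation, not the authors' analysis script; the CT values below are hypothetical, and the method assumes approximately 100% primer efficiency.

```python
def relative_expression(ct_target_kd, ct_ref_kd, ct_target_ctrl, ct_ref_ctrl):
    """Comparative CT (2^-ddCT) relative quantification.

    ct_target_*: CT of the gene of interest (e.g. oseg2) in the knockdown
    and control samples; ct_ref_*: CT of the reference gene (e.g. atub84B).
    """
    d_ct_kd = ct_target_kd - ct_ref_kd        # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_kd - d_ct_ctrl               # normalize to control sample
    return 2 ** (-dd_ct)                      # fold change versus control

# Hypothetical CT values: after normalization, the target amplifies two
# cycles later in the knockdown, i.e. a ~4-fold reduction in expression.
fold = relative_expression(26.0, 18.0, 24.0, 18.0)
print(round(fold, 3))  # 0.25
```

In practice each value would be the mean of the triplicate CT measurements described above.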
The experiments were not randomized. The investigators were not blinded to allocation during experiments and outcome assessment. Statistical analysis and graphing were performed using Microsoft Excel 2010 or GraphPad Prism 6 software. Data are shown as means ± s.d. The P value (two-tailed Student's t-test) is provided for comparison with the control, shown as *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001; NS, non-significant (P > 0.05). MT-nanotube orientation was measured using ImageJ [32] as the angle between two lines: one formed by connecting the germ cell centre to the hub centre, the other formed by connecting the tip and the base of the MT-nanotube. Each angle was plotted in a wind rose graph using Origin 9.1 software (OriginLab).

Figure 4 | Effect of RNAi-mediated knockdown of IFT components on Dpp signalling and cytoplasmic microtubules. a, b, Dad-LacZ staining was undetectable in control GSCs (a) but was enhanced in klp10A^RNAi GSCs (b). c, Quantification of pMad intensity in the two- or four-cell spermatogonia (SG) of the indicated genotypes. The graph shows average values ± s.d.; n = 30 GSCs were scored from at least ten testes from at least two independent crosses for each data point. d-i, Cytoplasmic microtubule patterns stained with anti-α-tubulin antibody upon RNAi-mediated knockdown of the indicated genes (d-h) or colcemid treatment for 90 min (i). In the control as well as upon knockdown of IFT-B components, cytoplasmic MTs, visible as fibrous cytoplasmic patterns, were not visibly affected, whereas colcemid treatment disrupted cytoplasmic MTs; h, klp10A knockdown led to hyperstabilization of cytoplasmic MTs. Asterisk indicates hub. P values from t-tests are provided as NS, non-significant (P > 0.05). Scale bar, 10 µm.
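The MT-nanotube orientation angle described in the statistics section above (the angle between the germ-cell-centre-to-hub-centre line and the base-to-tip line of the nanotube) can be computed from 2D point coordinates as in this sketch. It is a minimal stand-in for the ImageJ measurement; the coordinates are hypothetical.

```python
import math

def nanotube_angle(cell_centre, hub_centre, nt_base, nt_tip):
    """Angle (degrees, 0-180) between the cell->hub axis and the
    base->tip axis of an MT-nanotube, from 2D (x, y) coordinates."""
    ax = hub_centre[0] - cell_centre[0]
    ay = hub_centre[1] - cell_centre[1]
    bx = nt_tip[0] - nt_base[0]
    by = nt_tip[1] - nt_base[1]
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    # Clamp to [-1, 1] to guard against floating-point overshoot.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# A nanotube pointing straight at the hub scores 0 degrees:
print(round(nanotube_angle((0, 0), (10, 0), (2, 0), (5, 0)), 1))  # 0.0
# One perpendicular to the cell-hub axis scores 90 degrees:
print(round(nanotube_angle((0, 0), (10, 0), (2, 0), (2, 3)), 1))  # 90.0
```

Binning these angles by direction gives exactly the kind of frequency-per-sector data shown in the wind rose plots.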
Use of genetic markers of meat productivity in breeding of Hereford breed bulls

The aim of the study was to assess the genetic potential of Canadian Hereford sires using DNA markers, to identify complex genotypes and to assess their impact on the growth, development and meat productivity of their offspring. Groups of sons were formed taking into account the complex genotypes of the sires: group 1 (n = 28) – sons of bulls carrying in their genome a complex of genotypes with the desired alleles; group 2 (n = 30) – sons of bulls with a complex of genotypes lacking the desired alleles. The offspring of bulls carrying the "desirable" alleles in the combined CAPN1, GH, LEP and TG5 genotypes that met the conformation requirements exceeded their peers in live weight (P < 0.05), carcass weight (P < 0.05) and muscle tissue (P < 0.05). The maximum rate of conversion of feed protein into product protein was also established in the group of sons from the selected bulls. Thus, when creating a highly efficient beef cattle population, it is advisable to combine animal selection for body conformation type with herd genotyping for a complex of genotypes associated with different economically useful traits.

Introduction. The experience of developed countries shows that improvement of the genetic potential of beef cattle and the creation of competitive beef production are possible only through the use of molecular genetic methods. Thus, the leading national associations of the Angus [1], Hereford [2], Simmental and other breeds [3], which widely use DNA testing to predict and evaluate the expected productivity of animals, continuously monitor its effectiveness [4]. For Russia, the question of using the already known QTL-MAS models or choosing its own genomic assessment method adapted to the world genetic process is extremely relevant. The present study addresses individual tasks in this strategic direction for livestock breeding.
Calpain (CAPN1) is one of the genes associated with the marbling of meat, which determines its softness. Proteins of the calpain family are known to take an active part in the decomposition of muscle tissue that occurs after slaughter. The mechanism of action is that the calpain system, based on a calcium-dependent cysteine protease, degrades the Z-discs of skeletal muscle and weakens the bonds between muscle fibres, creating conditions for the even distribution of intramuscular fat between the fibres, which provides the softness, juiciness and marbling of meat [5-9]. Somatotropin (GH), produced by the anterior lobe of the hypophysis, is one of the most important regulators of somatic growth in animals. The gene controlling somatotropin synthesis regulates animal growth and also plays a key role in metabolic (carbohydrate and fat) processes [10-12]. Thyroglobulin (TG5) influences lipid metabolism, participates in the formation of fat cells and contributes to the "marbling" of muscle tissue [13]. Leptin (LEP), a hormone produced by the cells of adipose tissue, plays an important role in metabolism, in particular in the accumulation of fat in the body. In beef cattle breeding, polymorphism of the leptin gene is an important genetic factor affecting slaughter yield and meat quality [14-16]. The purpose of the study was to assess the genetic potential of Canadian-bred Hereford servicing bulls using DNA markers, to identify complex genotypes and to assess their impact on the growth, development and meat productivity of the offspring.

Materials and methods. Object of study: large Hereford breed sires of Canadian origin (n = 18), and bull calves (n = 58) obtained from servicing bulls with different complexes of genotypes for the studied SNP markers.
Animal care and experimental studies were carried out in accordance with Russian regulatory instructions, 1987 (Order No. 755 of 12.08.1977, the USSR Ministry of Health) and "The Guide for Care and Use of Laboratory Animals" (National Academy Press, Washington, D.C., 1966). In performing the research, efforts were made to minimize the suffering of animals and to reduce the number of samples used. To assess the genetic potential of the large Hereford servicing bulls of Canadian selection (n = 18), the polymorphism of the Calpain (CAPN1), Somatotropin (GH), Thyroglobulin (TG5) and Leptin (LEP) genes was studied in the herds of the stud farms of the APC (Kolkhoz) "Rodina" and OJSC "Belokopanskoye" in the Stavropol Territory. Taking into account the complex genotypes of the servicing bulls, groups of sons were formed and raised at the experimental fattening station in accordance with the requirements of specialized beef cattle breeding technology: group 1 (n = 28) – sons of bulls carrying in their genome a complex of genotypes with the desired alleles; group 2 (n = 30) – sons of bulls with a complex of genotypes lacking the desired alleles. Meat productivity and the synthesis of meat components were determined from the results of control slaughter, three head from each group, after a 24 h period of fasting. Biosubstrates obtained during slaughter of the young stock (longissimus dorsi muscle, fat, minced meat) were subjected to chemical and biochemical analyses. Statistical analysis of the results was conducted using the Statistica 10.0 software package (StatSoft Inc., USA). Comparison of results was carried out using the parametric Student's t-test, with P ≤ 0.05 taken as the limit of significance.

Results. Genotyping of the large Hereford servicing bulls of Canadian selection revealed features of the polymorphism of the genes (CAPN1, GH, TG5, LEP) that control meat productivity (Table 1).
The CAPN1 gene polymorphism in the studied population of large Canadian selection bulls is represented by two alleles (C and G), with a very low (0.06) frequency of the C allele and a high frequency (0.94) of the G allele. This produced a clear advantage of the homozygous (GG) genotype over the homozygous (CC) variant, 94.0% vs. 6.0%, in the absence of a heterozygous (CG) genotype. The polymorphism of the somatotropin (GH) allelic profile in the studied bull population is represented by two alleles (V and L), with frequencies of 0.36 and 0.64, respectively. The distribution of the homozygous (VV) and heterozygous (LV) GH genotypes among the studied bulls was relatively even (22.0 and 28.0%), with a clear advantage (50.0%) of the homozygous (LL) genotype. A study of the thyroglobulin (TG5) gene polymorphism, represented by two alleles (C and T), found significant variability in their frequencies, 0.14 (T allele) and 0.86 (C allele), which is reflected in the frequencies of the homozygous (CC, TT) and heterozygous (CT) genotypes: 78.0, 5.0 and 17.0%, respectively. The allelic leptin (LEP) profile in the studied beef livestock population is represented by two alleles (C and T). A distinctive feature of the polymorphism of this gene was the high (0.61) frequency of the C allele and the low (0.39) frequency of the T allele. This produced significant variability in the frequencies of the homozygous (CC, TT) and heterozygous (CT) genotypes: 33.0, 11.0 and 56.0%, respectively. Since complex marking of a selectively significant trait by several genes is more effective, one of the objectives of this study was to determine and compare the genetic structure of the large Canadian selection bulls using the complex genotypes of the CAPN1, GH, TG5 and LEP genes. Nine complex genotypes with different frequencies of the marker alleles were identified (Table 2).
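The allele and genotype frequencies reported above follow from simple genotype counting. The sketch below illustrates the calculation; the genotype counts are hypothetical, chosen for a sample of 18 sires so that the result reproduces the GH figures reported above (22% VV, 28% LV, 50% LL; allele frequencies 0.36 and 0.64).

```python
def allele_and_genotype_freqs(counts):
    """counts: dict mapping diallelic genotype strings (e.g. 'LV') to
    animal counts. Returns (genotype_freqs, allele_freqs)."""
    n = sum(counts.values())
    geno = {g: c / n for g, c in counts.items()}
    alleles = {}
    for g, c in counts.items():
        for a in g:                      # each genotype carries two alleles
            alleles[a] = alleles.get(a, 0) + c
    total = 2 * n                        # total allele copies in the sample
    return geno, {a: c / total for a, c in alleles.items()}

# Hypothetical GH genotype counts for 18 sires, matching the reported
# 22% VV / 28% LV / 50% LL distribution:
geno, alleles = allele_and_genotype_freqs({"VV": 4, "LV": 5, "LL": 9})
print({a: round(f, 2) for a, f in alleles.items()})  # {'V': 0.36, 'L': 0.64}
```

The same counting applied at each of the four loci yields the frequency tables summarized in Table 1.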
Complex genotypes whose allelic profile included eight desired alleles (CAPN1^CC GH^VV TG5^TT LEP^TT), six (GH^VV TG5^TT LEP^TT), or five (GH^VV TG5^TT LEP^TC and GH^VV TG5^TC LEP^TT) were found in four sires, whose progeny formed experimental group 1. Five bulls (27.8%) carried the complex genotype CAPN1^GG GH^LL TG5^CC LEP^CC, whose allelic spectrum contained no desired alleles; their sons formed experimental group 2. In the sons, the features of growth, development, formation of meat productivity, and quality of meat products were studied in relation to the presence of the desirable alleles in the genotype. Comparative analysis of live weight and average daily gain revealed the superiority of the bull calves of group 1 (Table 3). In the early period of ontogenesis (the first 3 months), group 1 calves exceeded their group 2 herd mates in live weight by 5.4 kg (5.3%, P < 0.05). This pattern persisted in subsequent age periods: at 7 months the advantage was 23.5 kg (12.3%, P < 0.05); at weaning, at 8 months, it was 20.8 kg (8.5%, P > 0.05); and it was most pronounced at 15 months of age (37.3 kg, P < 0.05). Analysis of the dynamics of average daily gains, the index that most fully reflects the productive qualities of young stock, showed a pattern common to all animals: an increase in average daily gain up to 7 months of age. However, the value of this index was higher in group 1 calves: 1089.5 ± 52.81 vs. 925.6 ± 42.82 g, a difference of 163.9 g (17.7%, P < 0.05). After weaning, growth rate declined in all animals as a result of stress, but the decline was more pronounced in group 2 calves: 806.1 ± 40.24 vs. 883.4 ± 49.87 g.
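The percentage superiorities quoted for live weight and daily gain are expressed relative to the second group's value; e.g., the 163.9 g difference in average daily gain:

```python
def pct_superiority(group1, group2):
    """Difference of group 1 over group 2, as a percentage of the group 2 value."""
    return 100 * (group1 - group2) / group2

adg_diff = 1089.5 - 925.6                 # 163.9 g absolute difference
adg_pct = pct_superiority(1089.5, 925.6)  # ~17.7% relative superiority
```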
A distinctive feature of the group 1 calves, sons of bulls carrying complex genotypes with the desired alleles, was a high growth rate throughout the observation period compared with the group 2 herd mates, sons of bulls carrying complex genotypes without the desired alleles, with superiority in all age periods: by 23.3 g (2.7%) from 8 to 12 months, by 83.6 g (9.4%) from birth to 15 months, and by 160.3 g (14.5%) from 12 to 15 months. Analysis of the control slaughter results showed that for the main traits (carcass weight, slaughter weight, slaughter yield) the group 1 calves held the advantage, by 14.1%, 12.4% and 1.7%, respectively, with a 3.6% lower content of adipose tissue (Table 4). Analysis of the morphological composition of the carcass, an important index of the value of meat as a food product, showed that the absolute amount of the flesh part was greater in the carcasses of group 1 calves, by 13.9 kg (14.9%, P < 0.05) (Table 5). The accumulation of muscle tissue in these animals was also more pronounced: 15.7 kg (20.1%, P < 0.05). The maximum ratio of edible to inedible parts of the carcass was observed in group 1 calves, exceeding their herd mates by 0.21 units. Comparative analysis of meat energy value showed that the average minced-meat sample from group 1 calves contained 0.87% more protein and 2.48% less fat than meat obtained from group 2 calves. Analysis of the chemical composition of the flesh indicates a more favourable protein-to-fat ratio in the meat of group 1 calves: 1:0.70 vs. 1:0.87 in group 2. These differences in nutrient accumulation in the carcasses of calves of different origin had a significant impact on the protein and energy conversion ratios (Table 6).
Group 1 calves had a somewhat lower feed energy conversion ratio, differing from their herd mates by 0.38%. By contrast, the feed protein conversion ratio in sons of bulls carrying the desired alleles was higher than in group 2 calves: 9.88% vs. 9.8%. This pattern indicates that the greater protein deposition in the bodies of group 1 calves supported a higher transformation of feed protein into the meat parts of their carcasses. It should be noted that the young stock of both experimental groups were characterized by a harmonious and proportional constitution typical of beef cattle. However, the indices of body weight, the dynamics of average daily gains, the values of the constitution indices, the quality of meat products, and the intensity of conversion of feed protein into the meat parts of the carcass indicate that the group 1 calves better meet the requirements for model Hereford animals of Canadian selection. The analyses conducted indicate that the genetic uniqueness of breeding animals is revealed most fully by genotyping for several genes involved in the formation of productivity traits and product quality. In particular, the high prepotency of the large Canadian-selection sires, whose genomes carry complexes of genotypes with a high concentration of alleles marking economically useful traits, was a fundamental factor in the stable genetic inheritance that contributed to the formation of the desired traits in the offspring.

Discussion

Breeding work with the Stavropol population of Herefords is aimed at the formation of large-framed herds. The mature herd of the population currently numbers 2,500 head.
At the present stage, selection for exterior traits is complemented by breeding with regard to molecular genetic markers associated with meat productivity and beef quality in cattle. First, carriers of desirable genotypes are identified and selected among the sires that are intensively used in herd reproduction. Population genotyping for marker-assisted selection (MAS) uses a number of DNA markers, including CAPN1, GH, LEP and TG5. Single nucleotide polymorphisms in these genes are associated with growth and development in animals and with the formation of marbling and tenderness in beef. The SNP at position C2141G in exon 5 of the growth hormone gene results in substitution of the coded amino acid leucine by valine; this substitution has a reliable effect on growth traits and meat productivity in beef cattle [11]. Individuals with the GH^TT genotype had a significant superiority in average daily gain over their peers. The polymorphic variant CAPN316 in exon 9 of the CAPN1 gene, located on bovine chromosome 29, is a non-synonymous C-G substitution in the nucleotide sequence that results in coding of alanine instead of glycine [17]. The role of this gene in the formation of meat tenderness at slaughter has been noted, and some studies report the superiority of CC-genotype carriers in hindquarter weight in the half-carcass [6]. At the same time, MAS based on only one gene, associated with a small number of economically useful traits, shows low efficiency. To optimize breeding programs for beef cattle aimed at improving herds for a complex of traits, it is advisable to select for several genes [18]. In the beef cattle populations examined, 7-8 of the nine possible paired bGH/RORC genotype variants were identified, at different frequencies [19], and the distribution of the complex bGH/RORC genotypes differed greatly among the populations.
In our research, a total of 18 Hereford sires were genotyped for the CAPN1, GH, LEP and TG5 genes. They carried 9 complex genotypes out of 81 possible variants. The highest frequency of "desirable" alleles was found in four bulls. Their intensive use in herd reproduction produced 28 descendant calves meeting the requirements for body type and exterior. Evaluation of growth and development of their sons showed a significant superiority (P < 0.05) in live weight at 15 months of age over peers obtained from bulls with a low frequency of "desirable" alleles. In addition, selection of sires with regard to the "desirable" alleles of the CAPN1, GH, LEP and TG5 genes contributed to production of descendants with more massive carcasses (P < 0.05), containing 13.9 kg (P < 0.05) more muscle tissue. At the same time, fat deposition was more intense in group 2. Breeding work with the composite MARC II population was likewise aimed at increasing the frequency of "desirable" alleles in the herd through the selection of sires with regard to genetic markers. A significant effect was established in a study of the association of 9 combined CSN1S1 × TG genotypes with fat thickness (P < 0.06) and meat tenderness (P < 0.04) in the composite MARC II population [20]. An association of polymorphism in the CAPN1/CAST gene combination with body type was revealed in a study of the exterior of Angus cattle: carriers of the CG/GG and CG/CG genotypes at CAPN1316 and CAST282 had a higher score for exterior development, and there was a positive correlation between the number of G alleles and harmonious body conformation (r = 0.78; P < 0.05).
At the same time, animals with the combined CC/CC genotype were characterized by more massive hind limbs [21].

Conclusion

Breeding work with the Stavropol population of Herefords is aimed at selecting animals for exterior type with regard to the genetic markers associated with meat productivity and beef quality. Offspring of bulls carrying the "desirable" alleles of the complex CAPN1, GH, LEP and TG5 genes that met the exterior requirements exceeded their peers in live weight (P < 0.05), carcass weight (P < 0.05) and muscle tissue (P < 0.05). The maximum conversion of feed protein into product protein was also established in the group of sons of the selected bulls. Thus, when creating a highly efficient beef cattle population, it is advisable to combine selection for body conformation type with herd genotyping for a complex of genotypes associated with different economically useful traits.
Is the risk of second primary malignancy increased in multiple myeloma in the novel therapy era? A population-based, retrospective cohort study in Taiwan Longer survival in patients with multiple myeloma (MM) after treatment with novel agents (NA) such as thalidomide, bortezomib, and lenalidomide may be associated with increased risks of developing second primary malignancies (SPM). Few data describe the risk of SPM in patients with MM in Asia. This population-based retrospective cohort study assessed the risk of SPM in MM using the Taiwan National Cancer Registry and National Health Insurance Research databases from 2000 to 2014. Among 4,327 patients with newly diagnosed MM initiated with either novel agents alone (NA), chemotherapy combined with novel agents (CCNA), or chemotherapy alone (CA), the cumulative incidence of SPM overall was 1.33% at year 3. The SPM incidence per 100 person-years (95% confidence interval [CI]) was 0.914 (0.745–1.123) overall, 0.762 (0.609–1.766) for solid tumours, and 0.149 (0.090–0.247) for haematological malignancies. We compared risks of SPM using a cause-specific Cox regression model considering death as a competing risk for developing SPM. After controlling for age, gender, Charlson Co-morbidity Index, and time-period, the risk of developing any SPM or any haematological malignancy was significantly reduced in patients initiated on NA (2010–2014 period) compared to chemotherapy alone (adjusted hazard ratio 0.24, 95% CI 0.07–0.85, and 0.10, 95% CI 0.02–0.62, respectively). Contemporary treatment regimens using NA (mainly bortezomib) were associated with a lower risk of SPM in comparison with CA.

Scientific RepoRtS | (2020) 10:14393 | https://doi.org/10.1038/s41598-020-71243-z www.nature.com/scientificreports/

period estimated a 2.3-fold higher risk of death in patients with MM who developed a SPM, and a 4.9-fold risk if the second tumour was a haematological malignancy 8 .
Survival in patients with MM who developed AML or MDS was lower than in patients who developed the malignancy de novo. Taiwan has a centralized health insurance system that reimburses healthcare for all residents. Using claims data from the National Health Insurance Research database (NHIRD) we showed that the unadjusted incidence of MM in Taiwan increased by 30% from 2007 to 2012, accompanied by a decrease in case fatality from 25.5 to 19.4% that coincided with the availability of novel agents for MM treatment in Taiwan 9 . There are concerns that the incidence of SPM will increase as survival from MM improves. In this retrospective cohort study, we conducted an in-depth study of SPMs in patients with MM, using the Taiwan Cancer Registry, covering the time period before and after the introduction of novel agents for first-line treatment of MM in Taiwan. Uniquely, we linked cancer registry data with patients' treatment data from NHIRD claims data to evaluate the risk of SPM in patients who received different MM treatments for first-line therapy. Methods Data source. The Taiwan Cancer Registry was founded in 1979 and is organized and funded by the Ministry of Health 10 . The registry is operated by the National Public Health Association, which maintains an expert advisory board to standardize definitions, coding, and procedures within the reporting system. All hospitals in Taiwan with at least 50 beds mandatorily report all newly diagnosed and confirmed malignancies to the registry 11 . Highly detailed information on provision of care, treatment and outcomes is collected from 80 hospitals covering more than 90% of all cancer cases diagnosed annually in Taiwan. Data in the registry are linked to death certificates, the National Health Insurance Catastrophic Illness Registry and cancer screening programs to identify missing cancer cases. 
Diagnoses are coded in the International Classification of Diseases for Oncology, 3rd Edition (ICD-O-3) format and data are validated through rigorous internal quality control 11 . Data from the Taiwan Cancer Registry were electronically linked to the NHIRD, which holds all data from the National Health Insurance system, which covers almost the entire population and records claims data on all medical services provided in Taiwan 12 . Primary and secondary diagnoses are coded using International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) format, and demographic information, date and type of service provided (physician services, drugs, prescriptions, laboratory and imaging examinations, and hospital ward) are recorded. Data quality is ensured by the National Health Insurance Administration [12][13][14] . Claims data from the NHIRD were used to identify specific treatment for MM using chemotherapy, novel therapy (bortezomib, thalidomide and lenalidomide) and steroids prescribed during outpatient visits or hospital admissions. Drugs were captured by Anatomical Therapeutic Chemical codes. Death and date of death were identified in the linked Death Registry. De-identified aggregated patient data were used for the analysis. The study was granted an exemption from ethical review by the Taipei Medical University-Joint Institutional Review Board, and an exemption from the need for patient consent. The study was conducted according to all applicable guidelines and regulations. Patients were excluded if they had a record of any primary cancer prior to the MM diagnosis date or if they had a death record before the MM diagnosis date. Patients were also excluded if they had any diagnosis of plasma cell leukaemia (203.1×) or other immunoproliferative neoplasms (203.8×) within 2 months of the first diagnosis of MM due to disease progression, if treatment was not initiated by 31 December 2014, or if initial treatment was with steroids alone. 
Patients were initially stratified into 3 cohorts according to the first-line treatment they received: novel agents alone, chemotherapy combined with novel agents, or chemotherapy alone. Patients were followed up from the initial treatment date for the occurrence of SPMs in the Cancer Registry. The study population was divided into three time-periods reflecting different phases in the advancement of MM treatment available in the formulary list of the NHI (Table 1). The period 2000-2004 (pre-novel agent period) was characterised by the exclusive use of chemotherapeutic agents for MM treatment. Novel agents started to be used during the 2005-2009 transition period: thalidomide was reimbursed for first-line therapy of MM in July 2009 and bortezomib was approved for third-line therapy in 2007. The majority of patients in 2005-2009 continued to be initiated on first-line chemotherapy but may have received novel agents, mainly thalidomide, in second or third line. The novel agent period from 2010-2014 was characterised by the availability of bortezomib, which was reimbursed for first-line treatment of patients < 65 years of age or eligible for ASCT in 2011 and was reimbursed without restriction in 2012. Lenalidomide was reimbursed for second-line treatment in January 2012. Outcome measures and statistical analysis. The primary study outcome was the occurrence of a SPM after initiating treatment for MM. SPMs were defined as new malignant tumours diagnosed more than 180 days after the MM diagnosis date to reduce the risk of including pre-existing undiagnosed malignancies in the analysis. The incidence rate of a SPM per 100 person-years was calculated with 95% confidence interval (CI) using a competing risk model where death without prior SPM was considered the competing risk.
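The crude incidence rate here is simply the event count divided by the accumulated person-time, scaled to 100 person-years. A minimal sketch with hypothetical counts (the paper's person-year denominators are not reported in this excerpt):

```python
def incidence_per_100py(n_events, person_years):
    """Crude incidence rate per 100 person-years of follow-up."""
    return 100 * n_events / person_years

# Hypothetical example: 9 SPMs observed over 985 person-years
rate = incidence_per_100py(9, 985)  # ~0.91 per 100 person-years
```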
We initially determined cause-specific hazard ratios (HR) of SPM between different treatment regimens after adjusting for age, gender, Charlson Co-morbidity Index (CCI) score and year of diagnosis (Table 2). However, we found that the interaction terms of treatment regimen and time period had significant effects on SPM, particularly haematological SPM (Table S1). Therefore, we re-grouped patients into 5 groups, dividing the novel agents alone and chemotherapy combined with novel agents treatment groups into two treatment periods: 2005-2009 and 2010-2014. Cause-specific HRs of SPM between different treatment regimens were re-assessed after adjusting for age, gender, CCI score and time-period, using the chemotherapy alone cohort as the reference group. Baseline demographic and disease characteristics were summarized using descriptive statistics. Frequencies and percentages were reported for categorical variables and the mean with standard deviation (SD) for continuous variables. An intention-to-treat analysis was performed for patients according to initial treatment cohort. The cumulative incidence for each treatment cohort was calculated using the cause-specific Cox model where death without prior SPM was considered the competing risk. All analyses were performed using SAS Version 9.4 (Cary, NC, USA). Patients with a pre-existing malignancy prior to the MM diagnosis were excluded (Fig. 1). A total of 4,327 eligible patients were included in the cohort analysis. The mean age at diagnosis was 66.3 years (SD 11.8 years), 57.8% of patients were aged ≥ 65 years, and 57.8% were men (Table 2). At diagnosis, 30.4% of patients had anaemia, 16.5% had bone fracture, 14.0% had renal impairment and 11.7% had pneumonia.
Results

A total of 23.7% of patients initiated treatment with novel agents alone, 21.9% with chemotherapy combined with novel agents, and 54.4% with chemotherapy alone. Patients treated with novel agents were younger and had higher rates of renal impairment at baseline than patients who received chemotherapy combined with novel agents or chemotherapy alone (p < 0.01 for both). The mean CCI score was lower in patients initiated with chemotherapy alone (p = 0.02). Significant differences between treatments in terms of index year reflect the different availability of novel agents over the study period and form the basis for the three time-periods used in the analysis. The histological distribution of SPM differed from the primary cancer types in the 340 patients later diagnosed with MM who were excluded because of the presence of a primary malignancy prior to the MM diagnosis date (Table 4). Haematological malignancies made up 5.8% (n = 21) of cancers diagnosed in the years prior to the MM diagnosis.

Table 1. Number and percentage of first-line treatments received by patients with multiple myeloma, by treatment period. Chemo refers to bendamustine, cisplatin, cyclophosphamide, doxorubicin, etoposide, vincristine. NA, not applicable. a To protect patient privacy, all non-zero counts that were less than three were suppressed.

Cumulative incidence of developing a SPM. The cumulative incidence of any SPM was 0.42% at year 1, 0.92% at year 2, and 1.33% at year 3 (Table 5). The 3-year cumulative incidence was 1.08% for developing a solid SPM, and 0.25% for a haematological malignancy (Table 5). The 3-year cumulative probability of developing any SPM was 0.59% in patients initiated with novel agents alone, 1.84% in patients initiated with chemotherapy combined with novel agents, and 1.42% in patients initiated with chemotherapy alone (Table 6).
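Cumulative incidence in the presence of the competing risk of death is conventionally estimated with the Aalen-Johansen (cumulative incidence function) estimator rather than 1 minus Kaplan-Meier. A minimal sketch on toy data (the study itself used SAS; this is only an illustration of the estimator):

```python
from itertools import groupby

def cif(times, events, cause=1):
    """Aalen-Johansen cumulative incidence for `cause`.

    events: 0 = censored, 1 = event of interest (e.g. SPM),
            2 = competing event (e.g. death without prior SPM).
    Returns the cumulative incidence at the last observed time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0       # all-cause event-free survival just before t
    incidence = 0.0
    for t, grp in groupby(data, key=lambda x: x[0]):
        grp = list(grp)
        d_cause = sum(1 for _, e in grp if e == cause)
        d_all = sum(1 for _, e in grp if e != 0)
        incidence += surv * d_cause / at_risk   # mass added at this time
        surv *= 1 - d_all / at_risk             # update overall survival
        at_risk -= len(grp)
    return incidence

# Toy data: SPM at t=1 and t=4, a death at t=2, censoring at t=3
ci = cif([1, 2, 3, 4], [1, 2, 0, 1])  # 0.75
```

Unlike 1 − Kaplan-Meier, the competing death at t = 2 here reduces the survival weight applied to later SPM events, so the estimate cannot exceed the probability actually attainable.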
Only 15 patients developed haematological malignancies over the study period, rendering the estimates of incidence imprecise, with wide 95% CIs (Table 6).

Table 2. Demographic and clinical characteristics of patients with multiple myeloma by treatment regimen. CCI, Charlson co-morbidity index; MM, multiple myeloma; N, number of patients; n (%), number (percentage) of patients with the indicated characteristic; SD, standard deviation; SPM, second primary malignancy. Novel agents refer to thalidomide and bortezomib. a January 2014 to June 2014. b To protect patient privacy, all non-zero counts that were less than three were suppressed.

The adjusted HR for developing solid malignancies was < 1 when comparing the novel agents 2010-2014 period with chemotherapy alone, but was not statistically significant. The risk of developing a haematological malignancy was also significantly reduced in the chemotherapy combined with novel agents 2010-2014 period compared to chemotherapy alone (adjusted HR 0.17, 95% CI 0.03-0.85, p = 0.031).

Discussion

This is the first population-based, retrospective cohort study using the Taiwan National Cancer Registry to estimate the cause-specific incidence of SPM in patients with newly diagnosed MM, and to link this to the initial treatment regimen. The National Cancer Registry captures the diagnosis and the date of occurrence of all cancer diagnoses, and the NHIRD captures complete treatment information for individual patients from inpatient, outpatient, and pharmacy sources. Linking patient treatment information with registry data enabled us to follow patients in different treatment exposure cohorts and to estimate the incidence of SPM for each treatment group. The NHIRD captures health-related data from almost the entire population (99.7%) of Taiwan, and all cancer cases are captured in the cancer registry database.
Therefore, the major strengths of our study are the completeness of the data capture achieved by linking two comprehensive, longitudinal, population-based databases, and the availability of baseline characteristics to identify potential confounding factors. Other strengths of the study are the long observation period for SPM development, and the selection of study years covering a period of fundamental change in MM treatment in Taiwan. In contrast to follow-up studies of clinical trial cohorts, this study used real-world data to compare different treatments in the entire MM population over a follow-up period of up to 15 years. The results can be considered representative, supported by a consistent distribution of cancer among the study population versus that reported for the general population in Taiwan 15 . In Kaplan-Meier methods used to estimate the time to development of SPMs, patients who die before experiencing a SPM are considered as censored observations, implying that a SPM could still occur over the remaining observation period even though the patient has died. This method is therefore prone to overestimating the probability of SPM, particularly in settings where the incidence of the competing event (death) is high. In the case of MM patients, the incidence of death is typically very high (case fatality rate of 18.3% in 2015 16 ), as the patient population is on average elderly (mean age of 66 years at MM diagnosis in Taiwan). We used a cause-specific Cox regression in which the occurrence of SPM was the outcome of interest and death without prior SPM was considered as the competing risk. All patients who died before developing SPM were censored and underwent no further follow-up. This model is appropriate for exploring questions of aetiology 17 , and in our study allowed evaluation of the cause-specific risk of SPM for each treatment cohort.
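In the cause-specific formulation described above, each subject contributes a follow-up time and an event indicator that is 1 only for the event of interest; deaths without a prior SPM are censored at the time of death. A minimal sketch of that recoding step, using a hypothetical record format of (id, time, outcome):

```python
def cause_specific_coding(records, cause="spm"):
    """Recode (id, time, outcome) records for a cause-specific analysis.

    Any outcome other than `cause` (e.g. death, administrative censoring)
    is treated as censored at its observed time.
    """
    return [(pid, t, 1 if outcome == cause else 0)
            for pid, t, outcome in records]

records = [("a", 2.0, "spm"), ("b", 1.5, "death"), ("c", 3.0, "censored")]
coded = cause_specific_coding(records)
# [("a", 2.0, 1), ("b", 1.5, 0), ("c", 3.0, 0)]
```

The recoded pairs can then be fed to any standard Cox regression routine; the resulting hazard ratios are cause-specific, as used in the study.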
MM treatment evolved over the study period, reflecting the approval of first-line reimbursement of thalidomide in 2009 and bortezomib in 2011. We divided patients into 5 treatment groups on the basis of evidence of interactions between treatment regimen and time period on SPM. Prior to 2004, all patients were treated with chemotherapy alone regardless of their clinical condition and disease severity. Thereafter, the availability of first thalidomide, and then increasingly bortezomib, allowed physicians to select the optimum treatment based on the patient's clinical picture, potentially introducing significant confounding by indication. Because of the high risk of confounding by indication from 2005, we used patients who had received chemotherapy alone as the reference treatment regimen for evaluating the risk of SPM associated with different treatments, thereby avoiding direct comparisons between treatments potentially subject to confounding. Supporting our approach is the observation that the median follow-up period of patients treated with chemotherapy alone was not longer than that of the other treatment groups (data not shown), suggesting that survival bias did not play a role in our findings. We observed that treatment with novel agents alone during 2010-2014 was associated with a significantly decreased risk of any SPM compared to chemotherapy alone. Treatment with novel agents alone during 2010-2014 and chemotherapy combined with novel agents during 2010-2014 both carried a significantly lower risk of haematological SPM in comparison with chemotherapy alone. The 2010-2014 period was associated with substantial use of bortezomib and reduced use of thalidomide for first-line treatment compared with the 2005-2009 time-period, in which thalidomide predominated.
Thalidomide has been linked to an increased risk of SPM 18,19 , whereas an International Myeloma Working Group (IMWG) consensus of the available literature concluded that the risk of SPM following treatment with bortezomib was low and, in some studies, consistent with background rates 3 . The distribution of malignancies diagnosed prior to MM in our cohort was broadly similar (albeit with some variation in the ranking) to that reported by a study that used the Taiwan Cancer Registry to assess cancer incidences in the whole population 15 . Seven of the 10 malignancies with the highest age-standardized incidence rates were shared, with thyroid cancer, uterine cancer and oral cancers potentially under-represented and skin, cervical and bladder cancer possibly over-represented in patients with a malignancy diagnosed prior to MM. However, in comparison with the distribution of SPMs, we observed an increase in haematological cancers, especially myeloid leukaemia, in patients with MM. Myeloid leukaemia ranked as the fourth most common SPM in patients with MM; by comparison, myeloid leukaemia ranked 44th in the general population of Taiwan (2012) 15 . Several studies have found that the incidence of SPM in patients with MM is similar to the risk of cancer in the general population, but that the site-specific incidence rates differ significantly. These include a population-based matched cohort study that also used the NHIRD 20 , and a SEER-based study in the United States 21 . In Taiwan, patients with MM had an 11.5-fold higher incidence of hematologic malignancies and a 2.1-fold lower incidence of solid tumours than patients without MM 20 . Similarly, the US SEER study found that the risk of developing a solid tumour was decreased in patients with MM (standardized incidence ratio [SIR] 0.94, 95% CI 0.89-0.99), whereas the risk of a haematological malignancy was increased (SIR 1.68, 95% CI 1.46-1.92) 21 .
A second SEER-based study confirmed a significantly lower risk of some solid tumours (prostate and breast) and an increase in haematological malignancies that was most significant for AML (SIR 6.51, 95% CI 5.42-7.83) 22 . The authors observed no change in the risk of solid or haematological SPM in patients with MM before or after the introduction of novel agents for MM treatment, but observed higher rates of SPM in younger patients, possibly due to the use of aggressive treatment regimens, although an 80% increased risk of AML in the first 2 years after the MM diagnosis points to other, non-treatment-related contributing factors 22 . In a population-based study in Sweden, patients with MM had an increased risk of developing a SPM compared to the general population (SIR 1.26, 95% CI 1.16-1.36), which was highest for AML/MDS (SIR 11.51, 95% CI 8.19-15.74) 4 . To protect patient privacy, all non-zero counts that were less than three were suppressed in the presentation of results; however, this had no effect on calculations of incidence or SPM risk. Other potential study limitations included a lack of clinical staging information, which did not allow us insight into patterns of SPM presentation in Taiwan. We conducted an intention-to-treat analysis in which the risk of SPM was assessed based on the initial treatment received; any potential impact of subsequent treatments that might have contributed to a SPM was not assessed. For example, lenalidomide was approved for reimbursement in patients with treatment failure on first-line therapy in December 2012, and has been associated with an increased risk of SPM, particularly when combined with melphalan 3 . Highlights of the IMWG consensus on SPM in MM were that the overall risk of SPM in patients with MM is low and the pathogenesis is likely to be multifactorial 3 . The potential risk of SPM should not alter the therapeutic decision-making process.
Nevertheless, the availability of novel agents appears to be favourable for the prognosis of MM in terms of SPM development, and previous estimates of SPM risk in MM are unlikely to be applicable in the new treatment environment. Our study showed that the incidence of SPM in MM is low but provides evidence of fewer SPM following first-line treatment with novel agents compared to chemotherapy or chemotherapy combined with novel agents. These data provide useful information for physicians selecting treatments that optimize long term safety of patients with MM. This study shows that linking high quality databases increases the breadth and depth of the knowledge that can be gained from analyses of real-world data and has important implications for future clinical research.
Naïve Realism Face to Face with the Time Lag Argument

Naïve realists traditionally reject the time lag argument by replying that we can be in a direct visual perceptual relation to temporally distant facts or objects. I first show that this answer entails that some visual perceptions—i.e., those that are direct relations between us and an external material object that has visually changed, or ceased to exist, during the time lag—should also count as illusions and hallucinations, respectively. I then examine the possible attempts by the naïve realist to tell such perceptions apart from illusions and hallucinations, and after showing the inadequacy of the answers relying on a mere counterfactual or causal criterion, I explain why the problem is solved by introducing a view of visual perception as temporally extended into the past of objects and, in particular, as consisting in the whole causal chain of events or states of affairs going from external material object x to subject S. But this solution is not immune from defects for the naïve realist. I show that this view of perception raises a number of significant concerns, hence leaving the issue of the time lag problem still open for naïve realism.

1. Direct realism is the thesis that "in veridical cases we directly experience external material objects, without the mediation of either sense-data or adverbial contents" (Bonjour 2013). As Le Morvan (2004, p. 221) puts it:

Direct Realists hold that perception is an immediate or direct awareness of mind-independent physical objects or events in the external world; in taking this awareness to be immediate or direct, Direct Realists deny that the perception of these physical objects or events requires a prior awareness of some […]

Indeed, this is not entirely true.
For, if when we are at t in a direct perceptual relation to an external object, we are at t in a relation to something that might have relevantly changed or even ceased to exist at t, then all our (direct and relational) perceptions are at risk of also counting as illusions and hallucinations. And, indeed, those perceptions that are direct relations to things that have actually visually changed or ceased to exist do count also as illusions or hallucinations, respectively.

Of course, the naïve realist can try a series of responses to this problem. Each of these responses can be seen as the proposal of a new view of what an illusion and a hallucination are, aimed at excluding from illusions and hallucinations, respectively, the (direct and relational) perceptions of things that have simply visually changed or ceased to exist through the time lag. After explaining why it is not possible for the new view to rely on a mere counterfactual or causal criterion, I examine one way of arranging such responses that solves the problem. The basic idea consists in conceiving a (direct and relational) visual perception as temporally extended and, in particular, as consisting in the whole causal chain of events or states of affairs going from external material object x to subject S. By relying on this view of (direct and relational) visual perceptions, it is possible to solve the problem raised by the time lag argument. However, this view of visual perceptions is not immune from defects and raises not insignificant concerns. In the second part of the paper, I critically examine those that seem to me the most significant of these concerns. If I am right, those supporting naïve realism or the disjunctivist position should not consider even the response to the time lag argument which is based upon this view of visual perceptions as completely satisfactory. This means, in turn, that they should not take the threat coming from the time lag argument as totally thwarted yet.

2.
Consider first the traditional time lag argument against naïve realism. The argument was originally developed by Russell (1912, 1948):

[…] It takes about eight minutes for the sun's light to reach us; thus, when we see the sun we are seeing the sun of eight minutes ago. So far as our sense-data afford evidence as to the physical sun they afford evidence as to the physical sun of eight minutes ago; if the physical sun had ceased to exist within the last eight minutes, that would make no difference to the sense-data which we call 'seeing the sun'. This affords a fresh illustration of the necessity of distinguishing between sense-data and physical objects. (Russell 1948, ch. 3)

The argument generalises to all cases of visual perception, since the light from every object takes time to affect us in order for us to perceive the object.[2] We always visually perceive objects as they were in the past rather than as they are in the present time. But—the argument goes—whatever is in a direct perceptual relation with the subject perceiving it at t should be temporally located at t rather than in the past of t.[3] We should conclude that whatever it is that we are in a direct perception relation to while we perceive, if any, it is not the external objects, and naïve realism is false.

The time lag argument has seldom appeared particularly serious to direct realists.[4] For instance, Le Morvan (2004) says that the argument only points out how we directly perceive mind-independent material objects in the visual modality: given the laws of physics, we can only directly visually perceive them with a time lag, however minute. Many direct realists have similarly replied that we do directly perceive mind-independent material objects in the visual modality, provided we accept that we always perceive them in the past (see, e.g., Snowdon 1992).
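Russell's "eight minutes" figure, and the more general point that look-back time scales with distance, follow from simple arithmetic with the finite speed of light. The following is a quick sanity check of the magnitudes involved; the astronomical constants are standard values, not figures taken from the paper itself:

```python
# Look-back times implied by the finite speed of light.
# Constants are standard astronomical values, not taken from the paper.

C = 299_792_458          # speed of light in m/s (exact, by definition)
AU = 1.495978707e11      # mean Earth-Sun distance in metres (1 au)
LIGHT_YEAR = 9.4607e15   # metres light travels in one Julian year

def look_back_seconds(distance_m: float) -> float:
    """Time elapsed between light leaving an object and reaching us."""
    return distance_m / C

# The Sun: Russell's "about eight minutes".
sun_lag_min = look_back_seconds(AU) / 60
print(f"Sun: seen as it was {sun_lag_min:.1f} minutes ago")   # ~8.3 minutes

# A star n light-years away is seen as it was n years ago,
# by the definition of the light-year.
star_lag_years = look_back_seconds(100 * LIGHT_YEAR) / (365.25 * 24 * 3600)
print(f"Star at 100 ly: seen as it was {star_lag_years:.1f} years ago")
```

The same arithmetic underwrites the paper's later examples: a star one hundred million light-years away is seen as it was one hundred million years ago, whether or not it still exists at the time of viewing.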
In short, the naïve realist can concede that:

(1) Every direct and relational visual perception of x is a visual experience of x as it was in the past.

Nonetheless, change being ubiquitous, we must acknowledge that:

(2) It is possible for a visual experience of x as it was in the past to be a visual experience of x as having a particular visible quality that x has ceased to have at the time at which the visual experience occurs.

(1) and (2) jointly entail that:

(3) It is possible for a direct and relational visual perception of x occurring at t to be a visual experience of x as having a particular visible quality that x has ceased to have at t.[5]

Similarly, we cannot ever exclude the possibility that x has ceased to exist within the time lag. In other words:

(4) It is possible for a visual experience of x as it was in the past to occur at a time at which x has ceased to exist.

[5] Of course, each visual experience of x is not instantaneous, but has a duration, at least because of the finite speed of sensory information-processes in our brains. I will continue to refer to the time at which a visual experience occurs as an instant t for simplicity's sake, but what I will always intend is that it has a duration, however short. This will not affect my line of thought. For example, what I mean here by (3) is that it is possible for a direct and relational visual perception of x occurring in a certain interval of time T to be a visual experience of x as having a particular visible quality that x no longer possesses at each instant t_i that is included in T.

[3] In Moran's (2019) terms, if you hold that perceiving x is being in a direct relation to x where x is a constituent part of the perception of x, then you seem to have no choice but adhering to the Existence Principle, according to which "It is not possible to see something that no longer exists".

[4] Moran (2019) being a notable exception.
(1) and (4) jointly entail that:

(5) It is possible for a direct and relational visual perception of x to occur at t even though x has ceased to exist at t.

So, the naïve realist must embrace (3) and (5). Consider, however, what we normally take as a sufficient condition for illusion and hallucination to occur (e.g., Smith 2002, pp. 23 and 191):

(6) A visual experience of x at t counts as an illusion if x visually appears at t as having a particular visible quality that x has not at t.

(7) A visual experience of x at t counts as a hallucination if x at t is not there where it visually appears to be at t.[6]

For example, visually experiencing a Christmas decoration as a dagger hanging in the air counts as an illusion; and visually experiencing a dagger hanging in the air in case there is actually nothing in front of us counts as a hallucination.

Now it seems that, given what we have said so far, the same visual experience will count as both a direct and relational perception and an illusion. In fact, from (3) and (6) it follows that:

(8) A particular visual experience of x at t which (i) counts as a direct and relational perception, and (ii) is such that x visually appears at t as having a particular visible quality that x has ceased to have at t, counts as both a direct and relational perception and an illusion.

Similarly, it seems that the same visual experience will count as both a direct and relational perception and a hallucination. In fact, from (5) and (7) it follows that:

[6] Normally, necessary or sufficient conditions for illusion and hallucination to occur do not mention time: but I assume we should agree that they implicitly do, so that we can reformulate classic sufficient conditions such as those offered by Smith (2002) as (6) and (7). Note that Smith's conditions are at best only sufficient because they do not cover veridical hallucinations.
A visual experience of x at t counts as a veridical hallucination if (a) x at t is there where it visually appears to be at t, and (b) x would visually appear to be there also in case x were not there. Veridical hallucinations are commonly thought of as scoring a point for a causalist theory of (visual) perception and against perceptual disjunctivism. However, disjunctivists can account for veridical hallucinations by saying that for a subject S to (visually) perceive x requires for x to look some way to S; a case in which x visually appears to be in a specific location also if x were not there is not a case in which x can be said to look some way to S, because what actually goes on would go on whether the object were present or not; therefore, this case is not a case of (visual) perception. See Snowdon (1980-81, p. 42) on this point.

(9) A particular visual experience of x at t which (i) counts as a direct and relational perception, and (ii) is such that x has ceased to exist at t, counts as both a direct and relational perception and a hallucination.[7]

3. No doubt that admitting (8) and (9) constitutes a serious problem for the naïve realist. For one thing, postulating sensory experiences that at the same time both fully are, and fully are not, direct perceptual relations to external material objects seems to be an inadmissible contradiction. Moreover, if we consider the disjunctivist position, it seems to be equally fatal to it that the ultimately disjunctive characterisation of a visual experience be inclusive rather than exclusive. This is so because disjunctivists are committed to denying that whatever fundamental kind of mental event occurs when one is veridically perceiving the world, that kind of event can occur also when one is having a hallucination (or, an illusion) (Martin 2006).
Yet (9) entails that a particular occurrence of a visual experience can both be a hallucination and—in virtue of its also being a perception—fall under the fundamental kind which perceptions fall under.[8]

[8] Note that this does not mean denying that "there is nothing inconsistent in the idea that a single event (a veridical perceptual episode) can be an instance of two different kinds—veridical and hallucinatory" (Nudds 2013, p. 275). The latter claim can be true because, according to Martin, "whatever the most specific kind of mental event that is produced when having a causally matching hallucination, that same kind of event occurs when having a veridical perception" (Martin 2006, p. 369), while of course "no instance of the most specific kind of mental event that occurs when having a veridical perception occurs when having a (causally matching) hallucination" (ibid., p. 361). Therefore, it is admissible for a disjunctivist to hold that a veridical perception also falls under the most specific kind of mental event that is produced when having a hallucination, provided that only for hallucinations is this their most fundamental kind. What a disjunctivist cannot hold is that a single event falling under both the most specific kind of mental event that occurs when having a veridical perception and the most specific kind of mental event that occurs when having a hallucination, has (also) the latter as its most fundamental kind. If an event counts as a perception, it cannot also count as a hallucination—and vice versa.

On a stronger conception of disjunctivism that Byrne & Logue (2008) name "metaphysical disjunctivism," there is no common mental element to perceptions and hallucinations. Yet (9) clearly entails that a common element exists, and that metaphysical disjunctivism is false. The naïve realist may try to organise a defence by rejecting (6) and (7) as proper sufficient conditions for an illusion and a hallucination to occur.
But what is wrong in (6) and (7), and how should we modify them? One possibility is this:

(10) A visual experience of x at t counts as an illusion if x visually appears at t as having a particular visible quality that x has not at t, unless x has had that quality in the past of t.

(11) A visual experience of x at t counts as a hallucination if x at t is not there where it visually appears to be at t, unless x has been there in the past of t.

Nonetheless this strategy seems doomed from the very beginning. Accepting (11) would require accepting that it is sufficient for a visual experience normally classified as a hallucination to occur in the right place for it to count as a perception. Imagine, for example, that a person living in New York happens to frequently hallucinate Julius Caesar. (11) entails that, if this person flies to Rome and waits for having her usual visual experience in Campus Martius, this Roman occurrence will count as a perception of Julius Caesar. This seems absurd.

We may wish to further amend (10) and (11) by introducing a counterfactual requirement:

(12) A visual experience of x at t counts as an illusion if x visually appears at t as having a particular visible quality that x has not at t, unless (i) x has had that quality in the past of t, and (ii) x would not have visually appeared at t as having that quality, had x not had that quality in the past of t.

(13) A visual experience of x at t counts as a hallucination if x at t is not there where it visually appears to be at t, unless (i) x has been there in the past of t, and (ii) x would not have visually appeared as being there at t, had x not been there in the past of t.

Such reformulations accommodate the idea that directly and relationally perceiving x entails x (as well as that directly and relationally perceiving x as having certain properties entails x instantiating those properties), while the same is not true about hallucination and illusion respectively.
Yet (12) and (13) are not completely satisfactory on deeper analysis. In fact, it is possible to imagine the case of a person who only hallucinates Julius Caesar in Campus Martius in Rome, because she knows from history books that Julius Caesar lived in ancient Rome and frequented Campus Martius, and these (true) beliefs psychologically affect her so as to make her hallucinate Julius Caesar no other than in Campus Martius. What we need is not simple counterfactual dependence between the visual experience and the object, but the right kind of grounding for it.

So, we may decide to change (10) and (11) by directing attention to proper causal connection. The idea is that a visual experience of Julius Caesar in 2020 counts as a perception of Julius Caesar only if there is an adequate causal pathway connecting the real mind-independent Julius Caesar to the visual experience of Julius Caesar—just as a visual experience of a star in 2020 counts as a perception of that star only if there is an adequate causal pathway connecting the real mind-independent star to the visual experience of the star, no matter if that star is gone at the time at which the visual experience of it occurs. After all, if I could travel faster than light through the cosmic space and, say, get in just one year to a spot located 2065 light-years away from Earth, I could be directly and relationally visually aware of Julius Caesar in Campus Martius from that spot, no matter that Julius Caesar died long ago—as acknowledged by (1). Here is an attempt:

(14) A visual experience of x at t counts as an illusion if x visually appears at t as having a particular visible quality that x has not at t, unless (i) x has had that quality in the past of t, and (ii) x's having had that quality in the past of t is adequately causally responsible for the occurring of the visual experience at t as well as for x's visually appearing at t as having that quality.
(15) A visual experience of x at t counts as a hallucination if x at t is not there where it visually appears to be at t, unless (i) x has been there in the past of t, and (ii) x's being there in the past of t is adequately causally responsible for the occurring of the visual experience at t as well as for x's visually appearing at t as being there.

Now, (15) has the merit of allowing a visual experience of a dead star to count only as a perception rather than also as a hallucination—as required by common sense. The time lag argument no longer seems a threat to naïve realism (provided that we are confident that we can easily distinguish between "adequate" and "non-adequate" ways for an object x to be causally responsible for the occurring of a visual experience as well as for that visual experience to have specific qualities).

But a different kind of problem arises for naïve realism if we accept (14) and (15). As a version of direct realism, naïve realism is, among other things, the rejection of indirect realism. Indirect realism is the thesis that in veridical cases we indirectly experience external material objects with the mediation of some directly perceived mind-dependent tertium quid like, for instance, sense-data. According to indirect realism, what makes the difference between veridical and delusive visual awarenesses is that veridical visual awarenesses are appropriately caused by some external mind-independent object located in the scene before the eyes. It is thanks to this causal connection that the visual experience can only be classified as a perception, and the external mind-independent object counts as an indirect object of experience. Johnston (2004) refers to this conception as "the Conjunctive Analysis" of seeing and opposes it to the Disjunctive Analysis.
On the Conjunctive Analysis of seeing, there is a common mental element among perceptions and hallucinations, perceptions and hallucinations fall under the same fundamental psychological kind, and the difference between the two states is due to an indirect causal connection to some external mind-independent object:

[On the Conjunctive Analysis of seeing] the act of awareness involved in seeing must simply be the common act of awareness "augmented" in a certain way, namely by being causally connected to external particulars. […] In seeing there is a single act of awareness whose direct objects are exhausted by what one could be aware of even if one were hallucinating. But the act of awareness has external particulars as its "indirect" objects in just this sense: it is appropriately caused by those external particulars. (Johnston 2004, p. 211)

It seems that (14) and (15) rely on "the Conjunctive Analysis" of seeing. According to (14) and (15), the difference between veridical and delusive visual awarenesses of x is that the former are appropriately caused by x, while the latter are not. But having a mind-independent object as the appropriate cause cannot be the factor for distinguishing between perceptions and hallucinations a naïve realist makes appeal to. For according to the naïve realist, external mind-independent objects constitute perception rather than causing it. Recall that naïve realism takes perceptual experiences to be constitutively relational states whose relata are external mind-independent objects (e.g., Martin 1997, 2006). This means that a state of visually perceiving x is constituted, among other things, by there being x in front of the subject's eyes. The perceptual state cannot be caused by the external object, because in this case it would be an effect induced by, and hence separate from, the object itself.
This is why naïve realists such as Snowdon (1990) and Johnston (2004) reject a causalist theory of perception: a causal relation between an object and an inner experience is not the kind of relation we can refer to if we want to claim that we are in a direct and immediate relation to external mind-independent objects when we perceive them.[9] Snowdon argues that the reasons for doubting that a causal theory of vision is correct are at least in part identical to the reasons for doubting that "there is a visual non-world-involving experience common to both hallucinations and perceptions" (Snowdon 1990, p. 55), which is the core claim disjunctivism rejects; and as a disjunctivist he denies that the concept of seeing "is a causal concept with a separable experience required as the effect end" (Snowdon 1990, p. 61).[10]

[9] Moran (2018) has proposed to incorporate elements of a causal theory of perception within a naïve realist framework. In particular, he has maintained that there are causal constraints that must be met if a hallucinatory experience is to occur that are never met in perceptual cases. This would solve our problem. Unfortunately, these causal constraints merely amount to the fact that "it lies in the nature of [hallucinations] to be generated by deviant or non-standard causation" (p. 375). The first problem is that standard and deviant causation are very vague notions, as Moran himself acknowledges. The second and more fundamental problem is that this way of characterising hallucinations presupposes characterising genuine perceptions by appealing to standard causation—a view that, in fact, Moran explicitly endorses. But, again, thinking of perceptions as (standardly) caused by the perceived external objects seems incompatible with thinking of them as constituted by those objects.

4. Of course, disjunctivists agree that for a subject S to visually perceive object x, x must causally affect S some way.
Only, they deny that we can characterise the perceptual visual experience as an effect of these causal processes. The idea, as said, is that the perceptual experience is constituted by x, among other things. So, it can be said to be also constituted by the whole causal chain going from x to S:

The mental state of affairs, o's looking F to S, is not a state or event at the end of a causal chain of events initiated by o; it is, rather, a (larger-sized) event or state of affairs which itself consists in the whole chain of physical events (not merely events within S) by which o causally affects S. The experience is the complete state of affairs, o causally affecting S. The ultimate effect in this causal state of affairs—the state or event which lies at the end of the causal chain which starts with o—is something physical in S; but that ultimate effect is neither identical with nor constitutive of the experience itself (Child 1992, p. 309).

On this basis, we could try to reformulate (14) and (15) as such:

(16) A visual experience of x at t counts as an illusion if x visually appears at t as having a particular visible quality that x has not at t, unless (i) x has had that quality in the past of t, and (ii) the visual experience of x consists of a causal chain of events starting from x in the moment of the past of t when x had that quality, continuing and largely consisting in the travel of the light originally diffusely reflected or emitted by x, and ending with some neural events in the visual cortex of S at t.

[10] Child (1992) has tried to show that the causal theory of vision and the disjunctivist position can coexist. In his view, the disjunctivist holds that "there is a single sort of characterization which can be applied to cases of vision and to cases of hallucination: in both, it looks to S as if something is F.
But there is no common type of state of affairs, no common ingredient; rather, any case of its looking to S as if something is F will be characterizable, more fundamentally and specifically, either as a case of something's looking F to S, or else as a case of its merely being for S as if something looked F to him" (p. 300). This said, Child claims that there is nothing incoherent in a disjunctivist holding that object o causes o's looking F to S. Still, he says that "the effect in vision […] can be described in mental terms as an experience—o's looking F to S" (p. 308). So his proposal amounts to admitting that o causes the most specific kind of mental event that occurs when having a veridical perception. This being the case, however, o, as a cause of this kind of event, cannot be a constituent of it. Child is aware of this difficulty. He rebuts, first, that disjunctivists can accept to drop the idea that o constitutes the perception of o, because what really matters is the epistemological achievement that the most fundamental mental characterization of a perceptual experience is world-dependent in that it involves mentioning a worldly object as a cause. Secondly, Child says that—following David Lewis's (1986) idea of piecemeal causation, where a whole can possibly be said to cause one of its later parts—we could perhaps make sense of the idea of an external object causing the relational state which it is a component of. One could be forgiven for remaining sceptical about the compatibility between the causal theory of vision and the disjunctivist position even after considering Child's attempt.
(17) A visual experience of x at t counts as a hallucination if x at t is not there where it visually appears to be at t, unless (i) x has been there in the past of t, and (ii) the visual experience of x consists of a causal chain of events starting from x in the moment of the past of t when x was there, continuing and largely consisting in the travel of the light originally diffusely reflected or emitted by x, and ending with some neural events in the visual cortex of S at t.

I assume that (16) and (17) do solve the time lag problem in a satisfying way. In particular, there seems to be no difficulty in having a direct perceptual relation with an object temporally located in the past. For, since the perception of x by S consists in the whole causal chain going from x to S, it does not matter at all whether the causal chain is long or short, and how temporally distant from S x is. Inasmuch as there is a causal chain of the right kind going from x to S and consequently S perceives x, S necessarily is in a direct relation to x, simply because the perception that S is having is the causal chain, and the causal chain, as a whole, is obviously in a direct relation with its origin, i.e., x; and we can assume that if the perception that S is having consists in something that is in a direct relation to x, then so is S.

To see it another way, let us resort to how a direct realist like Snowdon (1992) puts it. First, Snowdon offers a definition of what exactly it is to perceive something "directly": S directly perceives x if and only if S stands, in virtue of her perceptual experience, in such a relation to x that, if S could make demonstrative judgements, then it would be possible for S to make the true nondependent demonstrative judgement "That is x" (a dependent true demonstrative judgement being a demonstrative judgement that is true only in dependence on the truth of a more basic demonstrative judgement not depending on the first one).
The thesis of direct realism can therefore be paraphrased as the thesis that "in ordinary perception we are so related to external objects that we can (nondependently) demonstratively pick them out" (Snowdon 1992, p. 61; see also Campbell 2002). In Snowdon's terms, the time lag argument threatens direct realism in holding that the external objects cannot be the things, if any, that our nondependent demonstrative judgements are true of. Here is the time lag argument opportunely reformulated:

(18) That (the nondependently demonstratively graspable and visually presented thing, whatever it is) exists now.

(19) The dead star D, which would be the appropriately placed external object, does not exist now.

It follows that:

(20) That is not identical with the dead star D.

The argument can be easily generalised to all external objects, because all of them are located in the past, however slightly, because of the finite speed of light, and might have ceased to exist at the time we visually experience them. The solution that Snowdon provides to this difficulty consists in denying (18) and conceding that we can nondependently demonstratively grasp things into the past. We can accept as true:

(21) That (the nondependently demonstratively graspable and visually presented thing, whatever it is) does not exist now.

This is the equivalent of conceding that we can directly perceive into the past. If we can nondependently demonstratively pick out things into the past in virtue of our perceptual experiences, then there is no difficulty in identifying these things with external objects like dead stars—which, in Snowdon's conception, is equivalent to saying that we can directly perceive external objects like dead stars. In Snowdon's own words:

It is possible to treat the truth of [(21)] both as enabling us to avoid the time lag problem, and also as revealing that the dogma about the impossibility of truth for such sentences should not be accepted.
Thus I am suggesting we should regard the finite speed of light as enabling us at t to [nondependently] demonstratively think about items which no longer exist at t. That is why a sentence like [(21)] could express a truth. (Snowdon 1992, p. 77)

Now, if we interpret a perception by S of an object temporally located in the past as consisting in the whole causal chain starting from the object in the moment of the past of t when it was there, continuing and largely consisting in the travel of the light originally reflected or emitted by it, and ending with some neural events in the visual cortex of S at t, we have a metaphysical account of how (21) can be true. Accordingly, we can accept:

(22) That (the nondependently demonstratively graspable and visually presented thing) is identical with the dead star D.

This amounts to saying that we can directly visually perceive D. If conceiving perceptions as consisting in the whole causal chains going from x to S solves the problem raised by the time lag argument, however, we should not take it for granted that such a conception is immune from troubles, or at least that no further philosophical work is needed to clarify some obscure points in it. In the remaining part of the paper, I will try to highlight some possibly problematic aspects of this view of perceptions.

5. Perceptions are commonly taken to be mental states with a certain phenomenal character. Naïve realists do not normally distance themselves from this assumption. The naïve realist's thesis is that we can visually perceive into the past, or better, we always visually perceive into the past; and visual perceptions should be regarded as consisting in the causal chains going from the material objects in the past to us now. But should we then regard the mental states that perceptions are as so temporally extended? Should we take the mental state corresponding to a perception of a dead star as something temporally extended from one hundred million years ago to now?
Should we accordingly say that such a mental state supervenes not merely on a neural state, but on a causal chain temporally extended from one hundred million years ago to the present, only one of whose final components is a neural state of S? Is this the right way to account for the naïve realist's thesis that veridical perceptions are constitutively relational mental states, where the relation is to be understood as one between the perceiver and a worldly subject-matter? There are many difficulties in such a view. For example, it seems to entail that necessarily my perception of the dead star started one hundred million years ago. But in what sense, then, can I account for my belief that I only started to perceive the dead star seven seconds ago, when I first looked in the telescope? As Moran (2019) notes, it seems absurd to hold that, when I look at the star for just a few seconds, it is literally true that my visual experience of the star goes on for millions of years. It also seems absurd that my own perceptual experiences began long before I was born. The naïve realist could try to arrange an answer by saying that, when we want to state that the perception started just seven seconds ago, we are referring to that part of the whole causal chain the perception consists in that happens within me. It is only this part of the perception that started seven seconds ago, while the whole perception started one hundred million years ago. But if that is so, would it not be preferable to describe the situation by saying that perceptions are relational states extended into the past, and that they are partly constituted by a neural state of S, the final state of the causal chain they consist of, on which the perceptual mental state of S supervenes?
This way, we would be differentiating perceptions (which would be temporally extended from the position in time of x to that of S) and the mental states they necessarily include (which would be just one component of the causal chain the perception consists in and would be as temporally extended as the neural event in S they supervene on). But if this is the case, the naïve realist's fundamental idea that "no instance of the most specific kind of mental event that occurs when having a veridical perception occurs when having a (causally matching) hallucination" (Martin 2006, p. 361) is in danger, because only perceptions, and not also their constitutive mental states, would now be essentially relational; and although it would still be possible to argue that, also for mental states so characterised, external objects "figure non-causally as essential constituents of them" (Martin 2004, p. 56), this move may appear difficult to defend. As Papineau (2021, p. 19) puts it, since these mental states are conscious sensory experiences that start when the subject starts experiencing and end when the subject stops experiencing, events occurring outside this time interval cannot make a constitutive contribution to them, although they can of course make a causal contribution; but causation is not constitution. In the same spirit, Moran (2019) acknowledges that it seems plausible that, if an event takes place in a specific time interval, then its constituent parts must be temporally located in this time interval, too; perhaps naïve realists could try to deny that it is absurd for an experience to have a constituent that is not compresent with the experience itself, but it is not easy to imagine how this could be done, and the burden of proof is definitely on them.
11 More basically, considering perceptions as not being mental states, and, in addition, as consisting in items even 99.99995% of which (as in the dead star case) had already occurred or happened before my birth, would frustrate naïve realism's ambition of being the least revisionary among all theories of perception (Martin 2002, p. 421). Even if the naïve realist decides to go for the thesis that perceptions are mental states, and these mental states, in virtue of their being nothing other than perceptions, are temporally extended from the position in time of x to that of S, there is another problem to be solved. For, since these mental states consist of the whole causal chain going from x to the neural events in the visual cortex of S, they would include these neural events as well. We may take these mental states to supervene on the whole causal chain; in this case, they would supervene on a physical process partly constituted by a neural state in the visual cortex of S. But consider, now, the neural state in the visual cortex of S in isolation. It would be difficult to hold that it subvenes to no mental state. For, the same neural state (at least as to its intrinsic properties) does subvene to a mental state in a causally matching hallucinatory case. 12 It would be strange if, in the perceptual case, no mental state (irrespective of its being the same or not as in the hallucinatory case) supervened on this neural state in isolation. But if a mental state supervenes on the neural state in the visual cortex of S taken in isolation, we implausibly have two distinct mental states occurring in the veridical perceptual case, one supervening on the neural state of S and another supervening on the whole causal chain including, among other things, the neural state of S itself (and, arguably, its supervening counterpart, too). 11 Moran (2019, p.
222) says that it seems absurd to hold that, say, a violinist could be a constituent of a musical performance without existing at the same time as the performance; nonetheless, he concedes, perhaps ironically, that "perhaps objects of perception are not constituents of experiences in the same way that violin players are constituents of performances". Gu (2021, p. 11243) tries to argue that when a constituent is a necessary rather than contingent component of the constituted, as the external object must be with regard to the act of perceiving it, then the constitutive relation is nontemporal, as one can deduce by examining the case of the father-and-son relationship, which "is such a constitutive relation" and in fact holds also if the father has passed away. There are many flaws to be found in Gu's argument, but one is surely that the father-and-son relationship is not a relation between a constituent and a constituted, because an individual cannot be seen as constituted by his father. In fact, Gu says that "any actual pair of a father and a son constitute a special father-and-son relationship", thus confusing a particular father's constituting a particular instantiation of the father-and-son relationship with that particular father's constituting his son. In general, Gu provides no convincing argument for thinking that if something is a necessary constituent of an experience, then it can be non-existent for all the time at which that experience takes place. 12 Visual hallucinations are today thought to have neural correlates involving the activation of specialised functional units also serving normal visual perception (see ffytche (2013)). See Moran (2018) for an attempt to deny that hallucinatory experiences occur whenever the right kind of brain state is produced, based on the idea that they are essentially (and so necessarily) caused in a certain way and that these causal conditions cannot be met in perceptual cases.
For an explanation of why Moran's attempt is unconvincing, see footnote no. 9. It is no consolation that the nested mental state is not a perception itself. The problem, here, is that a visual perception is a mental state possibly mostly occurring at times when S is non-existent (or at least, mostly temporally located at times where S is not), and including in its supervenience base (the supervenience base of) another mental state of S. Moreover, since the supervenience base of this latter mental state is the same, at least as to its intrinsic properties, as that of a causally matching hallucinatory mental state of S (and notably, a mental state consisting in S's hallucinating x), we have the embarrassing view that all perceptions of x are mental states supervening on a set of states one of which is (the supervenience base of) another mental state belonging to the kind that is the most fundamental kind of a hallucination of x (by S), but which cannot be a perceptual mental state by definition. In short, all perceptual mental states have another mental state nested in them that is subjectively indistinguishable from themselves and has a shorter duration. Needless to say, this would be a very disconcerting conclusion for the naïve realist, who would have to determine, in addition, which of the two states is the primary bearer of the phenomenal character of the perception. Whatever strategy the naïve realist adopts for settling the metaphysical relation between perceptions (regarded as consisting of the whole causal chains of events from x to S) and the corresponding mental states, it seems that many troubles appear. True, the naïve realist might be tempted to identify both the perception of x and the corresponding mental state with the psychological state supervening on the neural state in the visual cortex of S.
To avoid falling into the Conjunctive Analysis of seeing, however, the naïve realist would be barred from characterising this state in causal terms: its being appropriately caused by external particulars, in fact, would amount to saying that it, as an act of awareness, has those external particulars as its indirect objects (Johnston 2004, p. 211). The naïve realist should rather try to say that external objects "figure non-causally as essential constituents of them" (Martin 2004, p. 56). But if the external object in question is a dead star, which is temporally located one hundred million years ago, it seems that the perceptual relation between S and the star would remain indirect inasmuch as this state is not temporally extended into the past. For, it seems that S could only be perceptually directly aware of something that is temporally located at least in part where its act of perceiving is temporally located. But the psychological state supervening on the neural state in the visual cortex of S is not temporally extended into the past enough to be in touch with the dead star. It falls short of the star. As previously said, it seems that whatever is to be a constituent of this psychological state must be temporally located in the time interval in which the psychological state occurs (Moran 2019; Papineau 2021), which in turn seems to coincide with the time interval in which the neural state in the visual cortex of S subvening to the psychological state occurs. But the star is not temporally located in this time interval. This is why the naïve realist needs to identify the perceptions of external objects temporally located in the past with the whole causal chains of events starting with these objects and ending in the visual cortex of S. But then this rules out the possibility of identifying both the perception of x and the corresponding mental state with the psychological state supervening on the neural state in the visual cortex of S. 6.
On the view that perceptions consist of the whole causal chains going from x to me, another problematic point concerns my being the subject of what is considered to be an experience of mine, and how this is supposed to be transmitted into the past. Consider a causal chain of events C going from star D one hundred million years ago to my telescope now. Suppose that C is adequate for becoming a perception of D if only I look in the telescope. Nonetheless, of course, C is not a perceptual experience of mine until it is adequately "completed" by me. Let us call "CC" the causal chain "completed" by me, and "CI" the same causal chain that remains forever incomplete. What are, exactly, the differences between CC and CI? Sure, CC "ends" in a neural state in my visual cortex (indeed, it continues indefinitely also after this step: but we are interested in taking it as an "end" here), while CI goes on, let us suppose, without ever including a neural or psychological state. The external world is populated not just by material objects and their properties, but also by a plethora of potential (visual) perceptions that never become actual. But can we say that CC and CI have C in common? From a physical standpoint, the answer is "yes." But in a metaphysical sense, C must be somehow changed by its becoming CC in a way in which C clearly is not when it becomes CI. For, when C is "transformed" into a perception of D by S, it is necessary that S becomes the subject not only of an event or a state which either is identical to or supervenes on {CC minus C}, but of an event or a state which either is identical to or supervenes on CC as a whole. The reason is this: the perception of D by S, intended as consisting in the whole causal chain going from D to S, is supposed to put S in a direct relation to D.
For this to happen, S, which is never temporally located where D is, and a fortiori is not so temporally located while it is visually perceiving D, must be the subject of all of the parts of the temporally extended perception, including those that "touch" or even "include" D. If this does not happen and S remains the subject of an event or a state which either is identical to or supervenes on {CC minus C} only, that is, if S remains the subject of the sole part of the perception occurring within S, then we can hardly say that the whole perception is a perception of S. Either the events or states belonging to C cannot belong to what an act of S's perceiving D consists in, or, at best, S would be in direct contact only with what is "touched" by or included in the proper parts of (the supervenience base of) the perception that S is a subject of, and in indirect contact with the rest of the things "touched" or included in (the supervenience base of) the perception. Either way, regarding perceptions as consisting of the whole causal chain going from x to S would not vindicate naïve realism. The naïve realist needs the property of 'being something that S is the subject of' to be transmitted backwards along the temporal parts of the perception that correspond to the events or states that are the components of the causal chain that by hypothesis the perception as a whole consists in, down to D, in virtue of C's merely becoming CC. And not only that: the transmission should be instantaneous, otherwise the relation between S and D would not be direct at least for part of the duration of the (alleged) perception (indeed, the perception could become direct at a time at which it has already ceased to occur; but I assume this sounds weird to many of us).
It is not clear, however, how the mere occurring now of some neural events in my visual cortex and, arguably, of their supervening counterpart, can instantaneously and effectively expand my 'being the subject of this' through millions of years. Consider that the events or states of affairs that are the supervenience bases of the temporal parts of the perception that must be so affected have already fully occurred subjectlessly along the causal chain. Of course, we should not presuppose that the transmission of the property 'being something that S is the subject of' is tantamount to the transmission of a physical perturbation. As previously said, from a physical standpoint C is one and the same in CC and CI. Nor should we question the bare fact that a mere physical event or state (such as, for example, a neural state under a physical description) can be the supervenience base of a higher-level state instantiating the property of 'being something that I am the subject of'. Indeed, we should be careful not to deny that a physical state occurring outside my body (like, for example, a physical state which my computer is in) can also be part of the supervenience base of a higher-level state of which I am the subject (according to the extended mind thesis; see Clark and Chalmers 1998). But in the latter case it is not difficult to tell a story according to which the physical or functional state which my computer is in enters into a relationship with one or more physical states which are either identical to or the supervenience base of states that I am the subject of, and in virtue of this becomes a proper part of a larger-sized physical state that is either identical to or the supervenience base of an extended state I am the subject of.
In any case, it seems that all the parts of the so-conceived extended experience, including those consisting of, or corresponding to, the physical states occurring outside my body, must instantiate the property of 'being something that I am the subject of'. In the case of perceptions of objects as temporally extended into the past, however, it is not at all clear how the equivalent story could be told. How could the part of my perception of the star that occurs outside me one hundred million years ago instantiate the property of 'being something that I am the subject of' now? And in virtue of what? The naïve realist must claim that, while whatever is identical to or supervenes on C before its "completion" is subjectless, C instantaneously becomes the base of a state which I am the subject of when it becomes a part of CC, that is, when it is "completed" into CC. But how can this be accounted for? 13 One anonymous reviewer objected that there is nothing particularly odd in an item instantaneously gaining a property, because items do not always gain properties by causal means. Suppose you are the only object in the world with a certain set of intrinsic properties. You have the property of being the unique bearer of those properties. But then God creates an intrinsic duplicate of you. You instantaneously lose the property of being the unique bearer of those properties, and instantaneously gain its negative counterpart (the property of not being the unique bearer of those properties). We can modify the story so that a property is instantaneously transmitted back in time. Suppose that it is true at t1 that you are the only thing in the universe with F; then at t2 a new F is created, and hence it is no longer true, on certain views of time, even at t1 that you are uniquely F. I concede that there is nothing wrong in these examples.
In the case at issue, however, the difficulty comes from the particularity of the property that is supposed to travel instantaneously back in time, that is, the property of 'being something that I am the subject of'. One could be forgiven for being reluctant to accept that an hour-long event which occurred subjectlessly one hundred million years ago, and has never ceased to have occurred subjectlessly until the present day, can now become something I am the subject of. Perhaps naïve realists should not be asked to come up with a complete explanation of what it is to be the subject of an experience, since this is an open question for many if not all theories of perception. But unlike the others, their position requires that whatever it is to be the subject of an experience, it can swim upstream instantaneously through time. We can at least expect them to take responsibility for exploring the issue of how such an immediate and instantaneous colonisation is supposed to occur. 7. According to Martin (2002, p. 421), the disjunctivist position scores a point against the rival theories of perception in that it does not convict common sense of any fundamental error about perception. Martin says that "the idea that introspection will lead us into error about how things seem to us is hardly an attractive one," and notes that "in contrast to the kind of global errors in introspection posited by sense-datum theories and intentional accounts, the disjunctivist can claim that veridical perceptual experiences are exactly as they seem to us to be: states in which parts of how the world is are manifest to us." Sure, the disjunctivist position is also forced to concede that we may be in error when we have a hallucination or an illusion, since we may be unable to distinguish these experiences from a perception. But when it comes to perceptions, the disjunctivist position is not an "error-theory" like the other main theories of perception (the term 'error-theory' originating with J.L.
Mackie's views of secondary qualities and of moral properties). 14 See Mackie (1975, 1977). If being not revisionary is a point in favor of the disjunctivist position and of naïve realism, which the disjunctivist position is supposed to vindicate, it should be considered as a point against both these theories that they posit that we always perceive things into the past and that our perceptions are temporally extended. For, it is true that under these theories what we actually perceive are the worldly material objects, just as it seems to us; but it is also true that these objects are temporally located in the past (sometimes in a very distant past), while they seem to us to be temporally located in the present (sense-datum theories and intentional accounts being, by contrast, error-theories with respect to the kind of things that are presented to us, but not to their temporal positions). 15 It is perhaps worth adding that under naïve realism or disjunctivism, while no veridical perception is a perception of something temporally located exactly where it seems to me to be (the speed of light being finite), there might at the same time be many different potential perceptions 16 of the same object, each of which only differs from the others as to the temporal distance between the object and me. This is possible because it seems that seeing an object through one or more plain mirrors, rather than without them, should not prevent the visual experience from being a genuine direct perceptual relation with that object, especially so if the object appears as spatially located where it really is. Taking a different position would mean for the naïve realist being forced to admit that seeing a star in a reflecting telescope (i.e., one that uses mirrors to gather and focus incoming light) is no directly relational perception: and I assume that no naïve realist would accept such a conclusion.
Indeed, it seems reasonable to agree that introducing one or more plain mirrors along the light's travel from x to S should not prevent the visual experience of x by S from being a genuine directly relational perception of x by S, provided that these two important requirements continue to hold:

Light Identity condition: the light hitting the foveas of S and causing the subsequent adequate neural events in the visual cortex of S at t is the same light that was diffusely reflected or emitted by x at the time in the past of t when x originated the causal chain ending with the neural events in S at t. 17

Spatial Location Identity condition: the spatial location at which x appears to be to S at t is the same where x was at the time in the past of t when x originated the causal chain ending with the neural events subvening to that visual appearance in S at t.

Now, a skillful use of a relevant number of plain mirrors can delay at will the arrival of light from the object to my foveas. 18 As a result, we have the possibility of creating a huge number of (potential) direct and relational perceptions of the same object x, all making x appear as temporally located in the present, each differing from the others as to the real temporal location of x. Ironically, then, we seem to be able, at least in principle, to genuinely directly and relationally visually perceive at t an external material object in an enormous number of temporal positions 19 except the temporal position it seems to us to be in (that is, the very same temporal position we occupy while we are perceiving it). This is not the same as for the spatial position of an object.
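The delay-by-mirrors point of footnote 18 admits a simple worked form. If the direct path from the object to the eye has length $d$ and the mirrors add further legs of total length $\ell$, then the object is seen as it was

$$t_{\text{lag}} = \frac{d + \ell}{c}$$

years ago. With the footnote's figures, $d = 10^{8}$ light years and $\ell = 2 \times 10^{8}$ light years, we get $t_{\text{lag}} = 3 \times 10^{8}$ years, that is, $\ell/c = 2 \times 10^{8}$ years more than the $d/c = 10^{8}$ years of the unmirrored view, while the star's apparent direction is unchanged.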
Also if we concede that in a hall of mirrors like the one we can see in the climactic shootout of the film The Lady from Shanghai (1947) all the visual appearances of Rita Hayworth had by Orson Welles are genuine direct and relational perceptions of Rita Hayworth, nonetheless at least one of them, i.e., the one that does not involve the presence of any mirror along the light travel from Rita Hayworth to Orson Welles, is such that she appears to be spatially located where she really is. Indeed, some mirror-involving appearances may also have the very same property. 17 In spite of the photons coming out of a transparent glass being not the same that went into it, I assume that we would all agree that the light is the same. This appears to me a reasonable assumption about the criteria for light identity. 18 Imagine that light coming from a star located one hundred million light years away from us enters the Earth's atmosphere without being "completed" into a visual perception by any subject S; some of it passes just a few miles from a mountain chain, leaves the terrestrial atmosphere again and travels another one hundred million light years until a mirror reflects it back towards Earth, where, after entering and leaving our atmosphere a second time, a second mirror placed on an artificial satellite finally reflects it into our foveas, making us have a visual appearance of the star as located in the same place (or, at least, in the same direction) as if we had had a visual appearance of it two hundred million years before. 19 In case time is infinitely divisible, this number is obviously infinite. If time is not infinitely divisible, this number is as high as the number of different temporal positions the object has occupied in the past, provided that the object is very close to our eyes. In general, the object can be genuinely directly and relationally visually perceived now as occupying each of the temporal positions it has occupied in the past that is more distant in time from me than the time taken by the most direct light travel starting from the spatial position the object occupied when it occupied that temporal position and ending in the spatial position we are occupying now. But if we come to the temporal position of a material object, no perception of it is such that it appears to be temporally located where it really is. It is as if we were in a hall of mirrors where, if we got rid one after another of all the perceptions of x such that x appears to be temporally located where it is not (as Orson Welles does in the movie with all his perceptions involving an incorrect appearance of the spatial location of Rita Hayworth), unlike Orson Welles we would be left with nothing. That being the case, we may start to doubt that naïve realists and disjunctivists have any right to flaunt the fact that they alone "can claim that veridical perceptual experiences are exactly as they seem to us to be," or even to use the expression "naïve realism" to refer to their own positions. As much as they might be the least revisionary positions among all the possible responses to the problem of perception, they are nonetheless revisionary to some degree. 20 8. One final point concerns the relation between naïve realism and the metaphysical conceptions of time. The very idea that we can directly and relationally perceive external objects in the past, and that there can be a direct relation between the perceiver and a merely past worldly subject-matter, is only acceptable if presentism, as a thesis about the reality of time, is rejected (Power 2013). According to presentism, only what is present is real (Sider 1999; Crisp 2004).
But if only present items exist, and by contrast there are no such things as merely past and merely future items, then a necessary condition for an experience to take something real as its object is for it to take something present as its object. Since a defining feature of a perception is that it takes something real as its object, it follows from presentism that it is impossible to perceive into the past. Even if we want to leave room for someone to argue that, under presentism, we can indirectly perceive into the past, the principle that presentism rules out the possibility of direct relational perceptions of merely past things seems unassailable. Consider the view of perceptions as the whole causal chains of events going from the objects which existed in the past to S now. If presentism is true, then it is not possible for such a causal chain to be wholly real at one time: only the state or the event in the chain that is presently occurring is real, all the others being presently non-existent. Thus, either a perception is something that occurs wholly now, in which case we cannot identify it with a temporally extended causal chain, or we must concede that the merely past things are necessarily non-existent now while S is having a visual appearance of them. In either case, then, taking a perception of a merely past thing by S as a temporally extended causal chain going from the thing to S cannot ground the fact that S is able to perceive that thing directly and relationally (we are reasonably assuming it to be impossible for someone to be in a direct relation with something which is not real). We must consider, however, that presentism is not the only position on offer in the current metaphysical debate over the philosophy of time.
Most important, some researchers argue that, irrespective of how strongly our pre-scientific intuitions may recommend it, presentism is incompatible with relativity theory, which is widely accepted today (Savitt 2000; Sider 1999, 2001; Wüthrich 2013). Other scholars have stressed that presentism can be criticised from many other points of view, and in particular can be accused of getting into trouble when it comes to accounting for the different kinds of true propositions we endorse, or for cross-temporal relations and the passage of time itself (Ingram and Tallant 2018). Under eternalism, which is the position opposing presentism and holding that past and future things also exist, the time lag argument is commonly seen to lose all of its force against realism. Take, for example, Houts' point against (1). According to Houts, (1) should not be considered an overwhelming answer to the time lag argument. The problem with (1) is that the past things I am supposedly having a visual perception of are presently non-existent; and presently non-existent things cannot be at any spatial distance from presently existent things. Since I am (or, if you prefer, my body is) a presently existent thing, (1) implies that the things I perceive cannot be at any spatial distance from me. But this is absurd, because it entails that they cannot even be external to me. What does perception amount to if it is no longer perception of external things? Moreover, since most arrays of things I am supposedly having a visual perception of are at different times, they cannot even be spatially related to each other. This means that (1) undermines any perception of three-dimensional space (Power 2013). Yet, this is only true if we assume that presentism is true.
By contrast, if we assume that presentism is false, these objections no longer stand, and the time lag argument seems rather inoffensive against realism, and indeed also against naïve realism, provided that one agrees to see perceptions as temporally extended (a conception that, as I have shown, has some independent defects). What I want to highlight, however, is that naïve realism can only make the time lag argument inoffensive if it takes a specific position in the debate over the metaphysical nature of time. It has not been stressed enough so far that naïve realism must take a specific stand about the metaphysics of time in order to strive for immunity from the time lag argument. 21 Again, with the notable exception of Moran (2019). Presentism, in particular, is lethal to naïve realism, because it is lethal to naïve realism's defensive strategy against the time lag argument. Indeed, this is just one of a number of interesting ways in which the answers to the most important metaphysical questions about time are intertwined with those to the most important metaphysical questions about perception (Le Poidevin 2015). We should add that presentism has traditionally been considered by both presentists and non-presentists as the most intuitive position (e.g., Sider 2001; Markosian 2004). This being the case, we have another reason for refusing the title of "naïve realism" to naïve realism, and for believing it to be a more revisionary position than it may seem at first impression (especially if we consider, as remarked by Moran (2019), that the various rival 'conjunctive' theories of perception do seem to be able to adopt the common-sense presentist view).
True, some scholars (e.g., Torrengo 2017) have recently claimed that it is not true that common sense favors presentism over eternalism, and that certain intuitions we all share that do seem to support presentism are actually either neutral or counterbalanced by rival shared intuitions supporting eternalism. But even if this were the case, presentism would remain a position whose intuitiveness is second to none. The fact that naïve realism and the disjunctive position must embrace non-presentism means, at worst, that they must reject the most intuitive position in the metaphysics of time and, at best, that they must embrace a position in the metaphysics of time that cannot be defended as the most natural and intuitive. In either case, there are further reasons to doubt that the overall intuitiveness of naïve realism and of the disjunctive position is as high as their supporters claim. Indeed, we should not forget that they appeal intensely to intuitiveness in order to defend their positions. Such a defense must be regarded as partly undermined by the necessity of embracing non-presentism.

Conclusion

I have tried to show that the time lag argument hits naïve realism and the disjunctivist position more significantly than has been thought so far. Both those supporting naïve realism and those opposing it have generally considered that a very simple answer by the naïve realist, i.e., that "we can have a direct perceptual relation with ordinary objects in the past," could settle things once and for all. I have argued that this is not the case. First, I have shown that even if we can have a direct perceptual relation with things in the past, we must solve the problem that perceptions, being direct relations to things that have visually changed or ceased to exist during the time lag, seem to count also as illusions or hallucinations, respectively.
To prevent these experiences from counting both as perceptions and non-perceptions, the naïve realist needs to refine the sufficient conditions for an illusion and a hallucination to occur, so that they exclude the perception of the Sun or that of a dead star. Nonetheless, the new formulations cannot rely on a mere counterfactual or causal criterion. Characterising the perception of a dead star as adequately caused by the star, in particular, is not an option for the naïve realist, because it would mean relying on "the Conjunctive Analysis" of seeing, a view of visual perception that the naïve realists themselves regard as denying both the directness of the perceptual relation and "the Disjunctive Analysis" of seeing on which the disjunctivist position is based. The problem can be solved if the naïve realist agrees to consider (direct and relational) visual perceptions as temporally extended into the past and, in particular, as consisting in the whole causal chain of events or states of affairs going from an external object x to a subject S. Indeed, many naïve realists have agreed to see (direct and relational) visual perceptions this way. Again, both parties seem to have considered this answer to be the last word regarding the challenge the time lag argument poses to naïve realism. I have tried to argue, however, that this conception of visual perception can be called into question. I have examined some problems arising from it. A first problem concerns the metaphysical relation between the so-conceived perceptions and the corresponding mental states. A second problem has to do with the explanation of how my being the subject of my visual experiences is supposed to be transmitted into the past in the perceptual case.
A third problem arises when the idea that "the disjunctivist can claim that veridical perceptual experiences are exactly as they seem to us to be" is set against the fact that, among the limitless temporal positions that a material object can occupy when we visually perceive it, none can ever be the one it seems to us to be in. Finally, a fourth problem regards the fact that so-called "naïve realism" can only stand if it takes a specific position in the debate over the metaphysical nature of time; indeed, one that many have considered less intuitive than its rival. Since the only view of visual perception that we know can solve the problem of some experiences counting as both perceptions and non-perceptions cannot be considered completely satisfying, and on the contrary seems to raise a number of significant concerns, I believe the conclusion to be this: the question concerning the challenge launched by the time lag argument to naïve realism is not settled yet, and, contrary to the common view, the threat the time lag argument poses to naïve realism and the disjunctivist position is still standing.

Funding: Open access funding provided by Università degli Studi di Sassari within the CRUI-CARE Agreement.

Conflict of Interest: The author declares no competing interests.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
TTT and PIKK Complex Genes Reverted to Single Copy Following Polyploidization and Retain Function Despite Massive Retrotransposition in Maize

The TEL2, TTI1, and TTI2 proteins are co-chaperones for heat shock protein 90 (HSP90) to regulate the protein folding and maturation of phosphatidylinositol 3-kinase-related kinases (PIKKs). Referred to as the TTT complex, the genes that encode them are highly conserved from man to maize. TTT complex and PIKK genes exist mostly as single copy genes in organisms where they have been characterized. Members of this interacting protein network in maize were identified and synteny analyses were performed to study their evolution. Similar to other species, there is only one copy of each of these genes in maize, which was due to a loss of the duplicated copy created by ancient allotetraploidy. Moreover, the retained copies of the TTT complex and the PIKK genes tolerated extensive retrotransposon insertion in their introns, which resulted in increased gene lengths and gene body methylation, without apparent effect on normal gene expression and function. The results raise an interesting question of whether the reversion to single copy was due to selection against deleterious unbalanced gene duplications between members of the complex, as predicted by the gene balance hypothesis, or due to neutral loss of extra copies. Uneven alteration of dosage, either by adding extra copies or by modulating gene expression of complex members, is proposed as a means to investigate whether the data support the gene balance hypothesis or not.

INTRODUCTION

Maize has undergone a whole-genome duplication event due to allotetraploidization approximately 4.8 million years ago and was domesticated only about 10,000 years ago (Doebley, 2004). The whole-genome duplication event created 20 pairs of chromosomes, which were later reduced to 10 after diploidization, mostly by chromosome fusions and deletions of centromeres (Wei et al., 2007; Salse et al., 2008).
The maize genome also underwent extensive retrotransposition, gene movement, chromosome contraction and fractionation, and the loss of homoeologous gene copies (Bruggmann et al., 2006; Schnable et al., 2011). Several hypotheses explain this fractionation process. The gene balance hypothesis calls for an ideal range of stoichiometric balance between members of protein complexes, because a disruption due to imbalance in the concentrations of the components can have harmful effects (Birchler and Newton, 1981; Veitia, 2002; Papp et al., 2003). This dosage-dependent function can also apply to the interaction of positive and negative regulatory effectors (Birchler et al., 2005), and involves many genes of different functions (Birchler et al., 2001). Indeed, genes that are thought to be dose-sensitive, such as those encoding components of proteasome/protein modification complexes, signal transduction machinery, ribosomes, and transcription factor complexes, mendelized in Arabidopsis after diploidization from its tetraploid ancestor (Thomas et al., 2006). The same study used the term "connected genes" to describe loci that seemed to be co-regulated and dependent on each other. In addition, there seems to be preferential removal of genes from one of the homoeologs, the same as in maize (Woodhouse et al., 2010). This process is probably a common occurrence in eukaryotes with whole-genome duplication histories (Birchler et al., 2001; Sankoff et al., 2010).

[Figure 2 caption: Yellow genes indicate the copy that was retained after polyploidization in maize, and the orthologous gene in sorghum. Sorghum was used as a reference to align the maize syntenic regions (middle genomic segment with gray genes). The top (blue genes) and bottom (pink genes) genomic segments indicate syntenic regions from maize subgenomes 1 and 2, respectively. The coordinates and sizes of the syntenic regions are indicated to the right of the alignments.]
Preferential removal of genes from one homoeolog also explains the uneven expansion and contraction of syntenic blocks in maize (Bruggmann et al., 2006). On the other hand, loss of a duplicated copy could simply be due to neutral loss of an extra copy over evolutionary time. Although genome-wide studies support the fractionation bias of connected genes, we thought to investigate this at the level of specific examples of interacting genes that are parts of functional units during development. We recently described the first TTT co-chaperone complex in plants (Garcia et al., 2017). The TTT complex is composed of Telomere maintenance 2 (Tel2), Tel2-interacting protein 1 (Tti1), and Tel2-interacting protein 2 (Tti2) and functions as co-chaperones for the maturation and stability of phosphatidylinositol 3-kinase-related kinases (PIKKs) (Hurov et al., 2010; Takai et al., 2010). The PIKK family, on the other hand, is involved in cell signaling related to growth in response to nutrients (TOR), DNA damage response (ATM, ATR, and DNA-PKcs), epigenetic transcriptional regulation (TRRAP), and nonsense-mediated RNA decay (SMG-1) (Abraham, 2004; Lovejoy and Cortez, 2009). Mutations in the TTT complex and PIKK members are lethal in many organisms (Brown and Baltimore, 2000; Benard et al., 2001; Menand et al., 2002; Takai et al., 2007; Stirling et al., 2011; Yamamoto et al., 2012). Deregulated expression has been implicated in many diseases including cancer (Populo et al., 2012; Weber and Ryan, 2015; Rodina et al., 2016), which underscores the essential function of these proteins.

MATERIALS AND METHODS

The human TEL2 (UniProt Q9Y4R8), TTI1 (UniProt O43156), and TTI2 (UniProt Q6NXR4) protein sequences were used to identify orthologs in maize (version 3 assembly) and other animal and fungal species using BLASTP at default settings. These proteins are well-characterized in yeast and mammals and have unique conserved domains that can be used to identify orthologs.
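The BLASTP searches described here take the best hit in one direction; a common complementary check for orthology (not stated to have been used by the authors, and shown here only as an illustration) is the reciprocal-best-hit criterion. A minimal sketch, with hypothetical protein IDs and bitscores:

```python
def reciprocal_best_hits(hits_ab, hits_ba):
    """Return (a, b) pairs that are each other's best-scoring BLASTP hit.

    hits_ab maps each query in proteome A to a list of (subject in B, bitscore)
    tuples; hits_ba is the reverse search. Only mutual best hits are kept.
    """
    best_ab = {q: max(hits, key=lambda h: h[1])[0] for q, hits in hits_ab.items() if hits}
    best_ba = {q: max(hits, key=lambda h: h[1])[0] for q, hits in hits_ba.items() if hits}
    return sorted((a, b) for a, b in best_ab.items() if best_ba.get(b) == a)
```

With made-up scores, a query whose best hit does not point back (e.g., because a paralog scores higher in the reverse search) is dropped, which is the point of the reciprocal filter.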
The selected maize sequences were then used to identify other plant TEL2, TTI1, and TTI2 homologs using a BLASTP search in the Phytozome database. Sequences from representative organisms were then selected and aligned using ClustalW. A maximum likelihood phylogenetic tree was then created using the JTT model as implemented in the software package MEGA 6 (Tamura et al., 2007), with 500 bootstrap replications to test for clade confidence. The accession numbers for the sequences used are listed in Table 1. For the PIKKs, Arabidopsis TOR (UniProt Q9FR53), ATM (UniProt Q9M3G7), ATR (UniProt Q9FKS4), and human DNA-PKcs (UniProt P78527), SMG-1 (UniProt Q96Q15), and TRRAP (UniProt Q9Y4A5) were used to find their orthologs in maize, sorghum, and rice using BLASTP at default settings. Maize syntenic homologs and their subgenome assignments were obtained from the sorghum-referenced Pan-Grass Syntenic Gene Set (Schnable et al., 2012, 2016). Transposable element (TE) insertions in introns were identified based on RepeatMasker annotations. Previous whole-genome DNA methylation studies in B73 from West et al. (2014) were used to identify the CG, CHG, and CHH methylation of these TEs. To estimate the insertion time of the LTR retrotransposons in maize, we used RepeatMasker to retrieve the left and right LTR sequences and aligned them using ClustalW. Nucleotide substitution rates between the two LTRs were then calculated using the Distance Estimation option in MEGA 6 with the Kimura 2-parameter method. Uniform rates were assumed among sites and gaps were deleted from the analysis. Insertion time was then calculated using the reported estimate for the LTR nucleotide substitution rate of 1.3 × 10−8 per site per year.

RESULTS AND DISCUSSION

Our analysis of the genomes of several animals, fungi, and plants indicates that Tel2, Tti1, and Tti2 are single copy genes, as they returned single BLASTP hits.
Phylogenetic analyses of their sequences conform to the predicted evolutionary relationships between the species (Figure 1). Likewise, the PIKK genes in maize and rice are single copy (Supplementary Table 1), just as in Arabidopsis (Templeton and Moorhead, 2005) and humans (Bosotti et al., 2000). Synteny analysis in maize using sorghum as a reference confirmed that PIKK and TTT complex genes became single copy because of the removal of the duplicated copy from one of the homoeologs (Figure 2). A dataset from a previous study identified the two maize subgenomes as remnants of the two progenitors of maize, termed maize1 and maize2 (Schnable et al., 2011). In this study, it was hypothesized that fractionation in maize is based on a pattern of overexpression of genes of maize1 over the maize2 subgenome, referred to as genome dominance. It was further suggested that the copy from maize1 was favorably retained because it contributed more to total gene expression relative to its duplicate pair. However, here we can show that in the case of the TTT complex, Tti1 is located on maize2, whereas the rest of the PIKK and TTT complex genes had copies retained on maize1. Such exceptions would indicate that selection for retention of a gene copy depends on the local pattern of transposition events rather than on a particular subgenome. Indeed, the local chromosomal structural organization appeared to be required for the removal of gene copies because of historic homologous recombination events via unequal crossing over, as shown for maize and foxtail millet compared to sorghum (Xu et al., 2012). Therefore, it is not so much the dominance of one subgenome over the other, but rather the mendelization of critical functions, which could explain the reversion to single copy of the genes investigated here. The interaction of TTT co-chaperones amongst themselves, as well as their interaction with PIKK proteins as co-chaperones, could be dosage-sensitive.
Therefore, the genes coding for these proteins needed to evolve together as a unit (i.e., either all remain duplicated or all lose one duplicate) to attain the best stoichiometric balance needed to maintain fitness. However, it is also important to point out that none of the TTT complex and PIKK mutants displayed haplo-insufficiency, as heterozygotes seemed to be normal. It is therefore possible that expression of these genes is subject to dosage compensation. If normal phenotypes in heterozygotes are conditional, they would display increased sensitivity compared to the wild type when subjected to certain levels of stress. For example, downregulated Tti2 expression in yeast is still sufficient for normal growth. However, this downregulated strain was more sensitive to PIKK-related stresses compared to the wild type (Hoffman et al., 2016). Another example is the Steroidogenic factor-1 (Sf-1) gene in mouse, wherein the heterozygote displayed mutant phenotypes only when subjected to stress (Bland et al., 2000). Aside from their role in protein folding, the TTT complex is also needed for the assembly of PIKK complexes (Horejsi et al., 2010; Kaizuka et al., 2010; Takai et al., 2010). Therefore, we extended our investigation to some components of TOR and ATR complexes, which are well-characterized in several organisms. Two proteins called RAPTOR (also known as KOG1) and LST8 (also known as GβL) are integral components of the TOR complex (Loewith et al., 2002; Kim et al., 2003). ATR, on the other hand, closely associates with ATRIP (LCD1 in yeast) (Cortez et al., 2001). Like the TTT complex and PIKKs, RAPTOR, LST8, and ATRIP are conserved in many eukaryotes and are also encoded by single copy genes (Cortez et al., 2001; Loewith et al., 2002). In addition, all three display haplo-insufficient phenotypes in yeast, which is evidence of their dosage-sensitive nature (Pir et al., 2012; Shimada et al., 2013).
Homologs of these three genes have been cloned in Arabidopsis and, in contrast to humans and yeast, both RAPTOR and LST8 exist as two copies (Deprost et al., 2005; Moreau et al., 2012). However, these investigations revealed that one of the copies was barely expressed, and that mutation in the other copy is enough to cause a mutant phenotype. Moreover, disruption of the barely expressed copy of RAPTOR did not result in a mutant phenotype. Therefore, this is still consistent with the gene balance hypothesis. Analysis of many presence-absence variations (PAVs) in a diverse set of maize and teosinte lines suggests that fractionation might still occur in maize, and that it remains biased (Schnable et al., 2011). Possibly, epigenetic gene silencing is a prelude to the deletion of a gene copy and could reflect an ongoing fractionation in Arabidopsis. Given the potential critical dosage dependence of members of the TTT complex, their coding regions are sensitive to the random transposon insertions that have taken place in maize, which also lead to epigenetic silencing. Based on a high confidence gene set, it was previously estimated that 11.6% of genes in maize contain TEs in their introns (Haberer et al., 2005). This estimate is close to the >10% estimate made by West et al. (2014) by surveying gene bodies for CHG DNA methylation and accompanying H3K9Me2 epigenetic modifications that mark the TEs. We found that six of the eight genes in our study contain TEs in introns (Table 2 and Supplementary Figure 1). For example, there is a 6 Kb Gypsy retrotransposon in intron 3 of ZmTti2, and a 2 Kb hAT element in intron 12 of ZmTti1, which are absent in their putative sorghum orthologs. Estimation of the insertion times of five LTR retrotransposons indicated that three of them transposed less than 4.8 million years ago (Supplementary Table 2). This signifies that some of these TEs are recent transpositions that occurred after the split of maize and sorghum.
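The LTR dating used here (Kimura 2-parameter divergence between an element's two LTRs, divided by twice the 1.3 × 10⁻⁸ substitutions per site per year rate, since both LTRs are identical at insertion and then diverge independently) reduces to a few lines. A minimal sketch, assuming pre-aligned, gap-free LTR sequences; the toy sequences in the test are invented for illustration:

```python
import math

PURINES = frozenset("AG")
PYRIMIDINES = frozenset("CT")

def kimura2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned, gap-free DNA sequences."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    n = len(seq1)
    transitions = transversions = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == b:
            continue
        if {a, b} <= PURINES or {a, b} <= PYRIMIDINES:
            transitions += 1   # A<->G or C<->T
        else:
            transversions += 1
    p, q = transitions / n, transversions / n
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

def ltr_insertion_time(left_ltr, right_ltr, rate=1.3e-8):
    """Years since insertion: T = K / (2 * rate), with K the inter-LTR distance."""
    return kimura2p_distance(left_ltr, right_ltr) / (2 * rate)
```

For example, two 100-bp LTRs differing by two transitions and one transversion give K of about 0.031 and an insertion time of roughly 1.2 million years at the rate above.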
The maize PIKKs ZmATM, ZmATR, ZmTOR, and ZmSMG1 also have many TE insertions in their introns that expanded the gene size relative to their sorghum counterparts (Table 2 and Supplementary Figure 1). For example, the ZmATM genic region expanded to about 131 Kb relative to 71.5 Kb in its sorghum ortholog. However, this is not to imply that the TE insertions were selected for in these genes; they are most likely the result of random transposon insertion. Because TEs are silenced by DNA methylation and associated histone modifications to prevent their expression (Slotkin and Martienssen, 2007), we investigated the methylation states of these TEs using datasets from a previous DNA methylation study in maize (West et al., 2014). As expected, the TEs in the introns are heavily methylated in the CG and CHG contexts, as shown for ZmTti2 and ZmATM in Figure 3. Nevertheless, these genes are still well expressed in many tissues and at many times during development, as shown by the gene expression database available in MaizeGDB. It has even been shown that insertion of a TE into an exon can permit proper gene expression as long as splicing is not affected (Wessler et al., 1987). To ensure that proper splicing occurred, we experimentally validated the expression of ZmTti1 and ZmTti2, enabling us to clone their full-length coding sequences using RT-PCR (Garcia et al., 2017). All these data are indications of strong expression of these genes despite TE insertions in their introns. In a genome-wide study, West et al. (2014) observed that about 10% of maize genes have CHG methylation in gene bodies, which can be partially attributed to retrotransposon insertions in the introns. The CG and CHG methylation is also mostly confined to the introns where the TEs are located, indicating a mechanism to stop the spread of methylation to exons. This very accurate marking of boundaries between TE and genic sequences likely enables the genes to be expressed at the right dosage.
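The CG, CHG, and CHH contexts referred to here are defined by the two bases following each cytosine (H = A, C, or T). A minimal forward-strand-only counter, for illustration; real methylation pipelines also scan the reverse complement:

```python
def count_methylation_contexts(seq):
    """Count forward-strand cytosines by sequence context: CG, CHG, CHH (H = A, C or T)."""
    seq = seq.upper()
    counts = {"CG": 0, "CHG": 0, "CHH": 0}
    for i, base in enumerate(seq):
        if base != "C" or i + 1 >= len(seq):
            continue  # skip non-cytosines and a trailing C with no context
        if seq[i + 1] == "G":
            counts["CG"] += 1
        elif i + 2 < len(seq) and seq[i + 1] in "ACT":
            if seq[i + 2] == "G":
                counts["CHG"] += 1
            elif seq[i + 2] in "ACT":
                counts["CHH"] += 1
    return counts
```

For instance, the toy sequence "CGACAGCTTC" contains one cytosine in each context on the forward strand.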
Methylation in TEs promotes the formation of heterochromatin to suppress transposition (Cedar and Bergman, 2009; Zemach et al., 2013), and methylation in promoters is correlated with transcriptional repression (Weber and Schubeler, 2007). In contrast, the role of gene body methylation in the regulation of gene expression is still not resolved: although some studies found that gene body methylation is associated with transcriptional activation (Zhang et al., 2006; Wang et al., 2015), others dispute this (Bewick et al., 2016). Nevertheless, future studies in maize that probe the role of gene body methylation in gene expression, whether caused by TE insertions in introns or not, will help determine whether it has a role in changing the expression level of a "connected" gene, and hence its potential impact on dosage-sensitive functions. The reversion to single gene copy after genome duplication could be due to neutral loss of extra copies, or to selection against unbalanced gene duplications. It is likely that both mechanisms were involved in the diploidization of maize from its allotetraploid ancestor, depending on whether the genes belong to a dosage-sensitive protein-protein network or not. Although our data on the TTT complex and PIKK genes are consistent with the gene balance hypothesis, the alternative hypothesis of neutral loss of extra copies cannot be discounted. To test this, uneven alterations in gene copy number (or uneven levels of gene expression) should be made to see whether they result in protein complex destabilization and thus potential negative fitness effects. This approach can be extended to other genes in maize to see whether unbalanced duplications in members of protein complexes are deleterious in general.

AUTHOR CONTRIBUTIONS

NG designed the experiments, performed the analysis, and wrote the manuscript. JM designed the experiments and wrote the manuscript.

FUNDING

This work was supported by the Selman A.
Waksman Chair in Molecular Genetics to JM.
Improving outcomes in co-morbid diabetes and COVID-19: A quasi-experimental study

Background: High-risk people living with diabetes (PLWD) have increased risk for morbidity and mortality. During the first coronavirus disease 2019 (COVID-19) wave in 2020 in Cape Town, South Africa, high-risk PLWD with COVID-19 were fast-tracked into a field hospital and managed aggressively. This study evaluated the effects of this intervention by assessing its impact on clinical outcomes in this cohort.

Methods: A retrospective quasi-experimental study design compared patients admitted pre- and post-intervention.

Results: A total of 183 participants were enrolled, with the two groups having similar demographic and clinical pre-COVID-19 baselines. Glucose control on admission was better in the experimental group (8.1% vs 9.3% [p = 0.013]). The experimental group needed less oxygen (p < 0.001), fewer antibiotics (p < 0.001) and fewer steroids (p = 0.003), while the control group had a higher incidence of acute kidney injury during admission (p = 0.046). The median glucose control was better in the experimental group (8.3 vs 10.0; p = 0.006). The two groups had similar clinical outcomes for discharge home (94% vs 89%), escalation in care (2% vs 3%) and inpatient death (4% vs 8%).

Conclusion: This study demonstrated that a risk-based approach to high-risk PLWD with COVID-19 may yield good clinical outcomes while making financial savings and preventing emotional distress.

Contribution: We propose a risk-based approach to guide clinical management of high-risk patients, which departs significantly from the current disease-based model. More research using randomised control trial methodology should explore this hypothesis.

Introduction

In November 2019, the world was informed of the first cases of coronavirus disease 2019, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which rapidly escalated into a full-blown pandemic in early 2020.
In Italy, it was noted that diabetes mellitus (DM) was three times more prevalent in patients with severe COVID-19 than in the general population. 1 It was thus anticipated that DM would also predispose people to increased severity of COVID-19 in South Africa. 1 South Africa is a lower-middle-income country that lacks data on the role of intermediate care services in the health system. 2 In preparation for the COVID-19 pandemic, the government set up field hospitals to ensure that patients infected with COVID-19 would be adequately treated in an already strained health system. In Cape Town, the first field hospital, the 862-bed Hospital of Hope (HoH), was erected in the Cape Town International Convention Centre (CTICC). It was set up in May 2020 and admitted the first patient on 08 June 2020. The hospital offered inpatient intermediate care which included oxygen support, intravenous fluid and medical management, access to mobile x-rays, physiotherapists, dieticians, social workers, an onsite pharmacy and support from a nearby laboratory. 3 Diabetes mellitus is a global public health problem and is one of the leading causes of morbidity and mortality worldwide. 1 This is especially concerning in South Africa, where the healthcare system is not only overwhelmed by the escalating prevalence of noncommunicable diseases (NCDs) but carries an additional burden of disease as a result of the tuberculosis and human immunodeficiency virus (HIV) epidemics. 1 Diabetes mellitus can be regarded as a chronic inflammatory condition characterised by multiple metabolic and vascular abnormalities. There is also a dysregulated immune response increasing the diabetic patient's risk of infections. 4 This, together with an augmented inflammatory process, may contribute to the underlying mechanism that leads to a higher propensity to infections with worse outcomes. 
4 Evidence suggests that SARS-CoV-2 induces a vigorous innate immune response, leading to a 'cytokine storm' which is thought to play a critical role in the high mortality of patients with COVID-19. 5 The 'cytokine storm' is a crucial cause of acute respiratory distress syndrome (ARDS), a systemic inflammatory response, and multiple organ failure. 6 The onset of dyspnoea and ARDS usually occurs at a median of 5 and 8 days, respectively. 7 Recently, the pulmonary pathology of SARS-CoV-2 infection was shown to be diffuse alveolar damage, alveolar oedema with proteinaceous exudates, thickening of alveolar walls, desquamation of pneumocytes and hyaline membrane formation, all indicative of ARDS. 8 The presence of Type 2 DM (T2DM) with chronic inflammation and other associated comorbidities may allow unrestricted viral replication and trigger heightened levels of inflammation and hyperimmune reaction, greatly exacerbating the response to SARS-CoV-2. 5 A retrospective multicentre study in Hubei province, China, investigated 952 patients with pre-existing T2DM who were also diagnosed with COVID-19. This study suggested that people living with diabetes (PLWD) required more medical interventions and had a significantly higher mortality (7.8% vs 2.7%; adjusted hazard ratio [aHR], 1.49) and multiple organ injuries than those without DM. 9 This study also found that if glycaemic variability was maintained between 3.9 mmol/L and 10 mmol/L, there was a significant reduction in medical interventions, major organ injuries and all-cause mortality. 9 Glycaemic variability has been shown to be an important indicator and a possible risk predictor for death and other complications in individuals with T2DM. 10 Efforts to ensure good inpatient glycaemic control are therefore the cornerstone in the management of PLWD who are diagnosed with COVID-19. 
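The 3.9-10 mmol/L glycaemic band reported in the Hubei study lends itself to a simple "fraction of readings in range" summary. A minimal sketch; the function name and the sample readings are illustrative assumptions, not from the study:

```python
def fraction_in_range(readings, low=3.9, high=10.0):
    """Fraction of glucose readings (mmol/L) falling inside [low, high].

    Defaults follow the 3.9-10 mmol/L band associated with fewer
    interventions and lower mortality in the Hubei cohort.
    """
    if not readings:
        raise ValueError("need at least one reading")
    return sum(low <= g <= high for g in readings) / len(readings)

# Illustrative use with made-up readings:
# fraction_in_range([5.2, 6.8, 11.4, 3.5, 8.3]) -> 0.6
```

A higher fraction corresponds to lower glycaemic variability around the target band, the quantity the Hubei study linked to outcomes.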
To obtain good glycaemic control in the context of field hospitals with rapid turnover of patients and inexperienced staff, the use of clinical protocols assumes increased importance. In a systematic review and meta-analysis that included 475 publications, the weighted prevalence of mortality in hospitalised COVID-19 patients with DM (20.0%, 95% confidence interval [CI]: 15.0-26.0; I² = 96.8%) was 82% (1.82 times) higher than that in non-DM patients (11.0%, 95% CI: 5.0-16.0; I² = 99.3%). The prevalence of mortality among DM patients was highest in Europe (28.0%; 95% CI: 14.0-44.0), followed by the United States (20.0%, 95% CI: 11.0-32.0) and Asia (17.0%, 95% CI: 8.0-28.0). The weighted prevalence of DM among hospitalised COVID-19 patients was 20% (95% CI: 15-25; I² = 99.3%). 23 National guidelines and standards of care for DM are now available in many countries. 11 Translation of practice recommendations from developed countries to the practical care of PLWD living in developing countries is challenging, as there is differential access to various aspects of care. 11 Data from the Western Cape Department of Health showed that patients with COVID-19 and DM comorbidity had dramatically higher mortality rates than COVID-19 patients without DM. 12 Local data similarly demonstrated increased mortality in patients diagnosed with COVID-19 who were older and had DM, hypertension and renal impairment. 13 It was therefore decided to offer high-risk PLWD an elective admission to the CTICC HoH, with the hypothesis that this would prevent increased morbidity and mortality. Because there was a lack of robust scientific data, a consensus document on inpatient DM management in the form of inpatient practice guidelines was adapted from a nearby tertiary hospital and implemented at the HoH (named the high-risk diabetes-COVID-19 protocol, HRDCp; Appendix 1).
The aim of this study was to assess the impact of early elective admission of high-risk PLWD diagnosed with COVID-19, and the application of a clinical practice guideline (HRDCp), on clinical outcomes in a generalist-run intermediate care facility. The following objectives were fulfilled: a description of the demographics and baseline characteristics of participants; a description of the inpatient clinical course of this cohort, comparing the control and experimental groups; and a description of the clinical outcomes of this cohort, comparing the control and experimental groups.

Study setting

The district health system in the Western Cape comprises six districts, five rural and one located in urban Cape Town. The Cape Town Metro district is further subdivided into eight subdistricts paired to form four substructures. It is to the substructure level that governance powers are decentralised. Hospitals in all these Metro substructures, and tertiary hospitals in Cape Town, referred patients to the HoH according to agreed-upon referral criteria (Appendix 2). The HoH medical staff comprised seven teams. Each team had a team leader, who was a senior clinician in family medicine (five teams), internal medicine (one team) or emergency medicine (one team). The HRDCp was implemented in the treatment of the experimental cohort of patients for inpatient management. Patients were preferably admitted for at least 8 days to ensure that they did not decompensate during the time period in which the 'cytokine storm' was expected to occur.

Study design

This was a retrospective quasi-experimental study. This study design was used to assess a real-world intervention, retrospectively, between two predefined groups of PLWD.

Study population

High-risk PLWD satisfying the inclusion criteria were identified by using a database from the Western Cape Data Centre.
Code-named VECTOR (Virtual Emergency Care Tactical OpeRation), a group of medical officers obtained data from the data centre, which ran an algorithm to generate a list of high-risk PLWD with a COVID-19 diagnosis in a 10-day window. 13 These patients were allocated to the medical officers, who contacted them telephonically to offer them elective admission to the HoH. All 61 patients who accepted admission to the HoH via the telemedicine (VECTOR) community group were included as the experimental group, while 122 purposively selected patients matching the inclusion criteria below in a 2:1 ratio were identified from those admitted prior to the introduction of the intervention (HRDCp) to make up the control group. The two groups were matched for age, gender and renal function. Inclusion criteria were Type I or II diabetes mellitus with COVID-19 (polymerase chain reaction [PCR] test or clinical diagnosis) and renal impairment (creatinine of more than 100) or age of 65 years or older. Exclusion criteria were age younger than 65 years with normal renal function; for controls, additional exclusion criteria were admission after the HRDCp was introduced and being part of the VECTOR cohort. Renal impairment was defined using the RIFLE (risk, injury, failure, loss of kidney function and end-stage kidney disease) criteria, based on a rise in creatinine (see Appendix 3). 24

Data collection
Data were extracted from the HoH clinical database (described earlier) using a data extraction tool that was designed by the research team, piloted on five PLWD who were not part of this study, and modified accordingly. The study period covered the entire duration of the facility's operation, from June 2020 to August 2020. (Figure 1 legend: Esc, escalated care to higher level; D/C, discharged home.) The data extraction tool is attached as Appendix 4.
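The 2:1 matched-control selection described above can be sketched as follows. This is a hypothetical illustration only: the field names, the age tolerance and the candidate records are assumptions, not the study's actual matching procedure; the source states only that controls were matched on age, gender and renal function.

```python
# Hypothetical sketch of 2:1 matched-control selection.
# Field names and the age tolerance are illustrative assumptions.

def select_controls(case, candidates, n=2, age_tol=5):
    """Pick up to n controls matching a case on gender, renal status
    and age (within age_tol years)."""
    matches = [
        c for c in candidates
        if c["gender"] == case["gender"]
        and c["renal_impairment"] == case["renal_impairment"]
        and abs(c["age"] - case["age"]) <= age_tol
    ]
    return matches[:n]

case = {"id": 1, "age": 68, "gender": "F", "renal_impairment": True}
pool = [
    {"id": 10, "age": 70, "gender": "F", "renal_impairment": True},
    {"id": 11, "age": 66, "gender": "F", "renal_impairment": True},
    {"id": 12, "age": 50, "gender": "M", "renal_impairment": False},
]
controls = select_controls(case, pool)
print([c["id"] for c in controls])  # [10, 11]
```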
Statistical analysis
Data were analysed using Stata 14 (StataCorp LLC, College Station, Texas, United States). Patient characteristics, comorbidities and outcomes were compared using χ² or Fisher's exact tests for categorical data and Wilcoxon rank-sum (Mann-Whitney) tests for continuous data. All statistical tests were two-sided, with significance set at α = 0.05.

Results
Figure 1 shows the care pathways of participants in the study. Sixty-one patients accepted the offer of elective admission via the telemedicine group, forming the experimental group. For the control group, 122 PLWD were identified from the dataset. The baseline characteristics and demographics of both populations are shown in Table 1. There was no significant difference in any comorbidity between the groups (Table 1), with hypertension, chronic kidney disease, overweight or obesity, and cardiac disease prevalent in both groups. On admission, the experimental group had significantly better diabetes control than the control group (HbA1c 8.1%). Table 2 shows the clinical interventions that were administered during admission. The majority of participants in the experimental group (73%) required only room air, in contrast to the control group, where the majority required oxygen. Fifty-six percent (11) required nasal cannula oxygen, and 28% required some type of face mask oxygen. In the experimental group, the highest level of oxygen required was a non-rebreather face mask (2%), versus 12% in the control group (p = 0.041), while some matched controls additionally required double-barrel oxygen (3%) as their highest level of oxygen requirement. Antibiotics were more commonly used in the control group (35% vs 9%; p < 0.001). Corticosteroids (oral and intravenous) were also more commonly used in the control group (55% vs 15%; p < 0.005).
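The statistical comparisons described in the analysis section can be sketched with SciPy. The data values below are invented for illustration; only the choice of tests (χ²/Fisher's exact for categorical data, Wilcoxon rank-sum/Mann-Whitney for continuous data, two-sided, α = 0.05) comes from the text — the study itself used Stata 14, not Python.

```python
# Minimal sketch of the group comparisons described above, using SciPy.
# All data values here are made up for illustration.

from scipy import stats

# Categorical outcome (e.g. antibiotic use) as a 2x2 table:
# rows = (experimental, control), cols = (yes, no)
table = [[5, 56], [43, 79]]
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)  # preferred for small counts

# Continuous outcome (e.g. admission glucose, mmol/L) per group
experimental = [7.9, 8.3, 8.0, 9.1, 7.5]
control = [9.8, 10.2, 11.0, 9.5, 10.4]
u_stat, p_mw = stats.mannwhitneyu(experimental, control,
                                  alternative="two-sided")

alpha = 0.05
print(p_chi2 < alpha, p_fisher < alpha, p_mw < alpha)
```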
Additionally, participants in the experimental group had significantly lower admission finger-prick glucose results than those in the control group (8.3 mmol/L vs 10.0 mmol/L, p = 0.006). Discharge glucose levels, insulin requirements and glucose-related adverse events were similar in both groups. Table 3 indicates the adverse events and clinical outcomes in the two groups. Participants in the experimental group had a shorter hospital stay than those in the control group (5 days vs 6 days, p = 0.04), less occurrence of acute kidney injury (AKI) (p = 0.046), less escalation of care (p = 0.286) and less mortality (p = 0.508), although the latter two indicators were not statistically significant.

Discussion
The key findings of this study relate to the clinical course and outcomes in two groups of PLWD. High-risk PLWD in the experimental group were managed with good outcomes at a field hospital. This should improve confidence in the future down-referral of high-risk PLWD to intermediate care facilities. These clinical outcomes were achieved without needing admission to an acute hospital first, implying savings in cost and possibly an improved patient experience, although these were not measured in this study. Although the control and experimental cohorts were similar in pre-COVID-19 status, their COVID-19 clinical parameters were markedly different, possibly explaining the significant differences in oxygen need, steroid use, antibiotic administration and glycaemic control. It is known that hyperglycaemia, an increased coagulation rate and elevated release of pro-inflammatory cytokines all contribute to the severity of COVID-19 in PLWD. 14 The authors suspect that the intervention of early elective admission, followed by tight glycaemic control, may have prevented the cytokine storm in this cohort. Further studies would be useful to evaluate this hypothesis.
Acute kidney injury was significantly more prevalent in the control group; this is likely to be multifactorial but ultimately highlights that these participants had more severe COVID-19. In a study performed in Turkey that included 578 patients, the incidence of AKI at admission was higher in patients with chronic kidney disease (CKD) than in those without CKD (52.1% vs 39.3%, p = 0.006). 15 In a study of 4020 consecutively hospitalised patients in Wuhan, China, 285 were identified as having AKI and had an increased risk of inpatient mortality. 16 Acute kidney injury, regardless of the cause, remains a key adverse event that clinicians should be wary of. This study demonstrated that the most common comorbidity among PLWD with COVID-19 at the HoH was hypertension. This is similar to other studies. In New York, a study among 5700 hospitalised patients with COVID-19 showed that 1808 patients (33.8%) had diabetes and 3026 (56.6%) had systemic hypertension, while 1737 (41.7%) were obese. 17 Although body mass index (BMI) was recorded as raised in the current study where it was measured, exact values were not recorded for all patients, and this was therefore difficult to interpret accurately. However, the link between obesity and diabetes is clear, and this remains an important clinical risk to be observed in this population. 18 Using a tertiary-level clinical guideline proved to be a feasible option in this study, resulting in safe and effective care. A previous study reported a significant correlation between well-controlled blood glucose and lower levels of inflammatory markers. 9 In Michigan, tailored protocols and algorithms were developed to improve glycaemic control for 200 patients admitted with COVID-19, which allowed clinicians to react to surges in glucose levels driven by disease activity and reduce the burden on the primary teams. 19 An important finding is that the clinical outcomes of the two groups were similar.
Based on their pre-COVID-19 morbidity profile, participants in the experimental group had a high likelihood of requiring admission, needing critical care and dying. 12,13 In this group, 94% were discharged home without having to endure the anxieties associated with admission to an acute hospital, most often via an emergency centre. While the authors did not include anxiety or depression measurements in this study, several Chinese studies make the link between anxiety disorders and acute hospital admission for COVID-19. 20,21,22 These findings suggest a significant missed opportunity in this context for learning about the impact of acute hospital admission on the mental health of PLWD, and this should inform future research. Limitations of the study include the small sample size, missing clinical data and the fact that the study compared PLWD with differing levels of severity of COVID-19. A further limitation is that the study did not include longer-term follow-up data. The quasi-experimental design has implicit limitations in that it does not use random sampling in constructing the experimental and control groups and has low internal validity. Using non-uniform comparison groups can limit generalisation of the findings, because uncontrolled variables may have influenced the results. While useful for health systems research, the strength of the evidence is not equal to that of a randomised controlled trial because of uncontrolled confounding factors in real life.

Conclusion
This study compared a novel approach to managing the risk of adverse outcomes, through early admission and tight diabetes control, with the usual practice of waiting for severe disease to arise followed by emergency admission. While showing similarity to usual care in terms of clinical outcomes in the context of a field hospital, it is suggested that savings were made in terms of medical complications and acute admission costs (financial and emotional).
Further studies should look at how digital innovations could enhance the coordination of care across all levels of the health system, the role of clinical risk factors as criteria for elective escalation of healthcare and ways to enhance interdisciplinary, interfacility and vertical collaborations. Specifically, attention should be paid to the cost-effectiveness of novel interventions and the psychosocial impact of these interventions.
BATU CAVE: PREHISTORIC OCCUPATION OF MERATUS MOUNTAINS, SOUTH KALIMANTAN GUA BATU: HUNIAN PRASEJARAH DI PEGUNUNGAN MERATUS, KALIMANTAN SELATAN

Preliminary studies in the karst hills of the Meratus Mountains in Kotabaru Regency found rock-shelters and caves with indications of prehistoric occupation. One of them is Batu Cave, in Batangkulur village, Kelumpang Barat district. This article discusses the results of excavations carried out in Batu Cave in 2018. The research questions concern the evidence of occupation and how humans lived at Batu Cave in the past. Archaeological data were obtained from excavations using test pits at two different locations. The excavation finds were analyzed quantitatively and qualitatively. Quantitative analysis was carried out to determine the quantity of finds. The qualitative analysis included an initial classification, dividing the archaeological data according to type, form and style. The results show that Batu Cave was a dwelling cave, with living activities that relied on the surrounding resources. The exploitation of environmental resources is shown in the use of several types of terrestrial and aquatic fauna as main food sources. Various types of tools were made from stone, as well as bone and shell.

INTRODUCTION
This research aims to reveal more details about the life of the humans who once lived at this site. It also tries to present evidence of occupation and to reconstruct the life that took place in Batu Cave. Archaeological research in Kotabaru Regency has succeeded in finding indications of human activity in several caves in the Kotabaru karst hills. This research began with a cave survey in three districts, namely Kelumpang Barat, Kelumpang Hulu and Hampang. The three districts are close to Mantewe in Tanah Bumbu Regency, which had previously been researched (Figure 1).
Intensive archaeological research in Mantewe district has identified several occupation sites, namely Liang Bangkai 1, Liang Bangkai 10, Ceruk Bangkai 3, Gua Sugung, Gua Landung, Gua Harimau, Gua Pembicaraan, Liang Ulin 2, and Gua Payung (Fajari and Kusmartono, 2013; Fajari and Oktrivia, 2015; Oktrivia et al., 2013; Sugiyanto, 2015). In general, the dwelling caves in Mantewe show human subsistence depending on terrestrial ecosystems and aquatic resources. Evidence of occupation at Liang Bangkai 1 is shown by finds in the form of stone artifacts, bone tools, pottery fragments, animal bone fragments, and shells. Liang Bangkai 1 was a residential site indicated as a location for the production of stone artifacts, based on the abundant stone artifact data from the excavation of Liang Bangkai 1. Apart from Liang Bangkai 1, traces of activity were also found at Liang Ulin 2, which has three terraces. On the upper terrace of Liang Ulin 2, archaeological data were found in the form of stone artifacts, pottery and bone tools, as well as fragments of animal bones, shells, and human skeletons and teeth. Burning activities had been carried out to process food, as indicated by several finds of burnt bones (Oktrivia et al., 2013). Types of tools made to meet daily needs include earthenware, stone artifacts, and bone tools. Making tools was one of the human efforts to exploit natural resources in order to survive (Fajari and Oktrivia, 2015). Not far from Liang Ulin 2, another dwelling site is located at Gua Payung. Finds of stone artifacts and bone ornaments, as well as shells and bone fragments, are evidence of occupation at Gua Payung. The occupation of Gua Payung is known to have a neolithic cultural style, with a chronology of 3007-3013 calBP (Fajari and Kusmartono, 2013). Evidence of occupation was also found in the northern Meratus Mountains.
Intensive research at Bukit Batubuli, Muara Uya district in Tabalong Regency, found Gua Babi, Gua Tengkorak, and Gua Cupu. The human culture at Batubuli had a pre-neolithic character at 5050 ± 100 BP, or around 5688-5898 calBP (Widianto and Handini, 2003).

METHODS
This research is a descriptive study carried out in several stages, namely data collection, analysis, and interpretation, to answer the problems raised. Data collection was performed by surveying caves to determine their archaeological potential, followed by excavation at Batu Cave, which showed indications of being a dwelling cave. Excavation was carried out by opening two 2 x 2 m test pits (TP). The technique applied was a lot system, which combines spits and layers: the spit interval serves as an arbitrary guide to depth, while changes in the soil layers in the excavation box are taken into account. The excavation finds were analyzed quantitatively and qualitatively. Quantitative analysis aims to calculate the quantity and percentage of finds. The qualitative analysis involves classification, formal analysis, and context analysis. Classification aims to construct the boundaries of the observed groups (Clarkson and O'Connor, 2006). Ecofacts were grouped based on their types, namely bones, teeth, shells and charcoal. Artifacts were classified based on their material, namely stone artifacts, bone artifacts, shell artifacts, and pottery. The formal analysis of stone artifacts is based on the scheme prepared by Andrefsky, who divides stone artifacts into two groups, namely tools and non-tools (Andrefsky, 1998). Observations of bone artifacts and shell artifacts were based on the traces of working found on them. Bone artifacts were found in the form of spatulas and spatulates. The shell artifacts are generally crescent-shaped and have traces of working on their edges. Pottery fragments were analyzed by grouping them into rim, body and base sherds.
Further observations of the pottery were carried out to determine the technology and decorative motifs applied. Meanwhile, context analysis, which emphasizes the relationships among the archaeological data, was carried out by observing the matrix, location, and spatial and temporal distribution (Tim Penulis, 1999). The results of the analysis then become the basis for the interpretation of the data, leading to an overview of the prehistoric settlement of Batu Cave.

RESEARCH RESULTS
An archaeological survey found a quantity and variety of surface finds indicating human occupational activity in Batu Cave. Batu Cave has physical conditions suitable for a residential location, namely flat and dry floors, as well as good light intensity and humidity. Excavation was carried out to test this assumption. Two test pits measuring 2 x 2 m were dug at different locations, referred to as TP 1 and TP 2. TP 1 is located in the western part of the cave, on the highest floor surface, while TP 2 is in the eastern part, on a gentler floor surface (Figure 2). The excavation of TP 1 aimed to find evidence of human activity in Batu Cave and its vertical distribution in the soil layers. The excavation of TP 2 aimed to determine the horizontal distribution of archaeological data, as well as the transformation processes that occurred in Batu Cave. TP 1 was excavated to a depth of 60 cm below the surface. The TP 1 stratigraphy consists of three layers, namely A, B, and C (Figure 3). Layer A is a gray, fine clay soil; in this layer, ash features were found, as well as a number of burnt bone fragments and shells. Layer B is a reddish-brown sandy clay at a depth of 25-45 cm. Layer C is a red sandy clay at a depth of 30 to 60 cm; its base is bedrock, which forms part of the cave floor. The excavation of TP 1 found a variety of archaeological data, with a total of 17,938 pieces.
Most of the finds came from layer C (52%), while layer A yielded 11% and layer B 37%. The results of the excavation of TP 1 can be seen in Table 2. The excavation of TP 2 revealed two soil layers, namely A and B (Figure 4). TP 2 does not have a layer of soil mixed with ash like TP 1. The soil in layer A of TP 2 is brown, with loose sand and a moist texture, and is densely packed with irregular fragments of shell and bone. Layer B has a sandy clay texture and a reddish color, and contains both large and small limestone boulders. Most of the finds came from layer B, at 58.1% (n = 10,991), while layer A yielded 41.9% (Table 3). In general, the finds from the excavation of Batu Cave are divided into two groups, artifactual and ecofactual. The types of artifacts found were stone artifacts, pottery, bone artifacts, and shell artifacts. There are two types of stone artifacts, namely tools and non-tools, a grouping based on the scheme compiled by William Andrefsky (Andrefsky, 1998). Types of stone tools include bifacial or monofacial tools, flake tools, and core tools, while the non-tool group consists of flakes and chips (debitage) (Andrefsky, 1998). Stone tools account for 10.1% of the total stone artifacts found at Batu Cave. The stone tools consist of core tools and flake tools. Core tools are the remaining rock material, with surfaces formed by flaking during stone tool production (Poesponegoro and Notosusanto, 2010). Core tools were the most common type of stone tool (63.4% of the total). The analysis shows that the core tools are of two types, namely unidirectional and multidirectional. Unidirectional core tools show flaking in one direction from a single striking platform; this is the dominant core tool type at Batu Cave.
One of the core tools found has the characteristics of Hoabinhian technology, namely complete flaking on one face, producing a sharp edge around the entire circumference (Wiradnyana, 2017) (Figure 5). Meanwhile, multidirectional core tools are the result of flaking towards two or more flaking planes. Non-tool stone artifacts were the most common type, at 89.9% of the total. Andrefsky divides them into two, namely flakes and pièces esquillées (Andrefsky, 1998). The proximal flake is a non-tool stone artifact that has a complete flake morphology without any cracks or other signs of working (Andrefsky, 1998). Proximal flakes are the most common type of stone artifact found and are quite easy to recognize compared to other types. Apart from the tool and non-tool groups, hammerstones and a trimmed crust were also found. The hammerstone is oval, with scars on one side that arose from collisions with other stones when it was used to break and shape stone tools. Only one trimmed crust was found at Batu Cave; it tends to have an irregular and very simple form. The pottery from Batu Cave consists of rim and body sherds. Concave traces on the surface and some irregular striation lines indicate that the pottery was made using the paddle-and-anvil technique and a slow wheel (Sharer and Ashmore, 2003; Tim Penulis, 1999). Most of the pottery fragments are plain, without decorative motifs; only a few are decorated. The types of decorative motif found are trellis patterns and irregular geometric lines. The bone artifacts of Batu Cave were made from long bones, simply modified by splitting them down the middle to produce a concave cross-section. Further working was done at the edges to create a sharper form. The types of bone artifact found were spatulates and spatulas, which have different morphologies.
The spatulate is characterized by a sharp tip resulting from trimming or grinding (Prasetyo, 2004). Intensive trimming on both sides results in a symmetrical pointed end, while trimming on one side only produces an asymmetrical taper (Prasetyo, 2004). The spatulate found in Batu Cave has an asymmetrical tip. Meanwhile, the spatula is characterized by a flat, wide working edge, formed by grinding the inside or outside of the bone to produce a sloping tip (Prasetyo, 2004). The spatulas from Batu Cave were generally not ground intensively, as indicated by the concave and uneven centers of the bone tools. The shell artifacts in Batu Cave were made from bivalve shells, either whole or in pieces. The shell tools were made in simple forms, without complicated modification; working was done by slightly abrading the posterior edge without changing the overall shape. The abrasion marks can sometimes be seen with the unaided eye, but on some shell tools the abrasion is barely visible. The proportion of shell tools at Batu Cave indicates that shell was one of the most widely used alternative raw materials (from food refuse), compared to bone. Shells were the most common ecofact, at 83% (n = 24,863). Other types of ecofact include bone fragments, teeth and charcoal. Most of the shells and bone fragments are small and cannot be identified. The shells from the excavation of Batu Cave were grouped into three types, namely gastropods, bivalves, and unidentifiable shells. Gastropods are the most common type, at 40% (n = 20,242). The identified families include Thiaridae, Viviparidae, Neritidae, Pleuroceridae, Amnicolidae, Conidae, Trochidae, Planorbidae, and Cypraeidae. The most dominant are Thiaridae and Viviparidae, known locally as 'ketuyung', which were one of the sources of animal protein consumed by humans.
Most Thiaridae shells have the apex cut off, which indicates cutting to remove the flesh. Bivalves are the second largest class after gastropods, characterized by a pair of shells with a hinge that can be closed and opened (Reitz and Wing, 2008). There are not as many types of bivalve as gastropod. The bivalve finds that could be identified include the families Corbiculidae and Veneridae. Corbiculidae are shellfish that live in brackish-water mangrove habitats, while Veneridae are marine shells that live in salt water. Freshwater and saltwater shellfish were among the main sources of human consumption at that time. Apart from shellfish, food was obtained from animals. Bone fragments are the second largest category of ecofact after shells. The bone fragments were classified based on their form and type, namely long bones, condylar bones, vertebrae, and unidentifiable fragments. Initial analyses identified turtle plastron, the remains of fish spines, fragments of crab claws, bear claws and pig vertebrae. The vertebral structure has a distinctive shape, with the neural canal and neural arch forming a butterfly-like outline. The bone fragment collection at Batu Cave is assumed to be the remains of human food. The teeth found at Batu Cave consisted of human and animal teeth. The human tooth fragments found were molars, generally with worn occlusal surfaces; the cusp and dentin morphology is difficult to analyze. Some teeth show traces of attrition, the damage to the hard tissues of the teeth caused by friction between the teeth. Dental attrition generally occurs in the elderly, as a result of the physiological process of chewing over a long period and the habit of chewing betel nut (pinang) (Noviyanti, 2014). However, the condition of the molars from Batu Cave did not show any traces of such chewing. The animal teeth have not yet been identified.
DISCUSSION
The excavation results show that Batu Cave was used by humans as a place to live. Evidence of dwelling activity comes from tools produced from stone, shell and bone, as well as pottery. In terms of quantity, stone artifacts are the most common type. This shows that the inhabitants of Batu Cave tended to use rock to make everyday tools. They also used food refuse, in the form of clam shells and bone fragments, to make tools. The use of clam shells as a raw material for tools is an interesting choice. Some of the literature on Asia-Pacific archaeology states that shell was used for toolmaking only when sources of rock material were insufficient; the shells served as a 'substitute' material, and tools produced from stone and shell had the same function (Szabo, Brumm, and Bellwood, 2007). This was not the case at Batu Cave: shell and stone were both used as toolmaking materials, and the resulting shell tools had a different shape from the stone tools. The number of pottery fragments found indicates that the use of pottery for containers or daily needs was not intensive. Comparing the quantities of finds, the use of pottery at Batu Cave was not as intensive as at other sites such as Payung Cave and Liang Ulin 2, where the quantity of pottery found is quite large. In addition to quantity, the pottery from Batu Cave has simpler decorative motifs than that from Payung Cave. Payung Cave pottery has various decorative motifs, including a round hole motif made with a puncture technique and the addition of a red slip to the outer surface (Fajari, 2010). Similar to Payung Cave, the pottery from Liang Ulin 2 also has a red slip and a round hole motif (Oktrivia et al., 2013).
Although they have different decorative techniques and motifs, the technology used in the manufacture of pottery at Batu Cave, Payung Cave, and Liang Ulin 2 is similar, namely the paddle-and-anvil technique and the slow wheel. The ecofacts found in Batu Cave provide an overview of the exploitation of natural resources for food. The main food sources were obtained from the aquatic environment, in both freshwater and brackish-water habitats. The species used were Polymesoda erosa and Polymesoda expansa. 1 Meanwhile, the land fauna consumed by humans included Suidae, or pigs; the pig bone fragments found comprised an upper jaw and several parts of the spine. Radiocarbon dating analysis was carried out on shell samples of Polymesoda erosa and Polymesoda expansa from TP 1 and TP 2. The analysis was carried out in the radiocarbon dating laboratory at the University of Waikato; the calibration is based on Ramsey (2017) and Reimer et al. (2013), and the results are displayed in Table 3. An issue arose with the results of the 14C radiocarbon dating analysis: the chronology obtained from the TP 2 shell samples appears reversed, with layer A yielding an older date than layers B and C. This is because the sampled Polymesoda erosa shells were affected by both the marine and the terrestrial environment during their lives. These clams live in brackish-water habitats in mangrove forests, which are constantly affected by the tides. Research by Fiona Petchey et al. on Polymesoda erosa from the archaeological site at Caution Bay, PNG, shows that extreme environmental conditions with high salinity and temperature can affect the isotope levels in shellfish living in these habitats (Petchey et al., 2013). Analysis using Polymesoda as a sample can give results several hundred years older or younger. The resulting chronology can also be an indication of environmental change at a certain time.
For example, a decrease in sea level between the deposition of layers A and B could affect the carbon conditions of the environment in which the Polymesoda lived (Petchey, personal communication via email, 29 February 2019). Dating analysis using samples other than Polymesoda is necessary to confirm the occupation chronology of Batu Cave. The results obtained at Batu Cave add new data on cave occupation in the southeastern Meratus karst. The evidence of occupation from Batu Cave shares several characteristics with other sites, as seen in the variety of equipment produced and the choice of available food sources. Dwelling cave sites found in the Meratus Mountains include Liang Bangkai 1, Liang Bangkai 10, Ceruk Bangkai 3, Gua Sugung, Gua Landung, Gua Harimau, Gua Pembicaraan, Liang Ulin 2, and Gua Payung (Fajari and Kusmartono, 2013; Fajari and Oktrivia, 2015; Oktrivia et al., 2013; Sugiyanto, 2015). Liang Bangkai 1 is known as a residential site with abundant stone artifact finds. Liang Bangkai 1 and Batu Cave have something in common, namely artifactual data dominated by stone artifacts, although the types of stone artifact found at Batu Cave are not as complex as those from Liang Bangkai 1. The difference lies in the use of shell for tools, which has so far not been found in other dwelling caves. The geographic location of Batu Cave, close to the coast, provides a usable source of raw material. As discussed earlier, the Batu Cave shell artifacts are made from the shells of brackish-water bivalves. Finds of brackish- or saltwater shells are not common at cave sites some distance from the coastline, such as in Mantewe and Hampang. The limestone hills in Hampang are currently 22.5 km from the nearest coastline (Fajari et al., 2018). Similar to Hampang, the Mantewe karst hills have not yielded as many brackish- or saltwater shells as Batu Cave.
The saltwater shells found belong to the Cypraeidae, known as cowries (kauri). Two Cypraeidae shells were found at Batu Cave, in TP 2. This type of shell was also found at Liang Bangkai 1. The presence of Cypraeidae at one cave site can be taken as an indication of a relationship with occupation in other areas. Suroto's research states that in the 1900s, people living in the high mountains of Papua still used this type of shell as a medium of exchange. Some community groups, such as the Kapauku and Mee who live around Lake Wissel, used Cypraea moneta shells as a medium of exchange, called kapaukumege (Suroto, 2009). This type of shell is also often used as jewelry. The occupation of Batu Cave reflects daily activities such as food gathering, food processing, toolmaking, and living. So far, no human skeleton has been found to indicate burial activity; several human teeth from Batu Cave were found scattered, without any human skeletal context. An explanation of the supporting human culture still requires further research. Based on the thickness of the Batu Cave sediments, the potential for finding human remains is quite large. Further research is needed to find human remains and confirm the role of Batu Cave in the settlement of the Meratus karst area in the past. Previous discoveries of human skeletons have been reported at several cave sites in the Mantewe karst. Human skeletal fragments were first found in excavations at the Liang Ulin 2 site (Oktrivia et al., 2013). The finds were teeth and skull bones in very fragmentary condition. The analysis shows that the skeletal remains from Liang Ulin 2 come from six individuals, consisting of three adults and three children. Individual identification revealed information about population affinity and particular characteristics.
The skeletal remains from Liang Ulin 2 are identified as Mongoloid; the individuals experienced malnutrition, practiced the pangur (tooth-filing) tradition, and chewed betel nut (Sugiyanto et al., 2016). Human remains were also found at Liang Bangkai 10. Analysis of the skeletal remains at Liang Bangkai 10 identified at least four individuals, consisting of two adults buried almost intact, one further adult, and one child whose grave arrangement is unknown. The bearers of the culture at Liang Bangkai 10 are likewise identified as Mongoloid, with the cultural characteristics of pangur and betel chewing (Sugiyanto et al., 2016).

CONCLUSION

The archaeological finds from Batu Cave are evidence of human dwelling at the location. Occupation at Batu Cave is shown by activities related to meeting daily needs. The occupation at Batu Cave shows the same characteristics as other dwelling caves in the southeastern Meratus Mountains. This can be seen in the choice of materials for making tools, namely stone and bone, with stone used more widely than bone. What distinguishes Batu Cave from other sites is the use of shell as a raw material for tools; so far, shell artifacts have not been found at any other residential site in southeast Meratus. The dating analysis at Batu Cave has not yet yielded reliable figures. As a comparison, data were obtained from Liang Bangkai 1, where dating of layer 1 yields a calibrated date of 5920-6045 calBP. This period is equivalent to the Batu Cave date for layer A at TP 1, with a chronology ranging from 6065 to 5650 calBP. The occupation of Batu Cave is therefore likely to fall within the same timeframe as Liang Bangkai 1, an assumption supported by several shared characteristics of the finds. Further dating analysis is needed to obtain a reliable chronology.
In this research, no evidence has yet been found to establish who the inhabitants of Batu Cave were. Excavation has not yet reached a sterile layer.
Crowdsourced cycling data applications to estimate noise pollution exposure during urban cycling

This research demonstrates a methodology for integrating freely available datasets to understand the relationship between road traffic noise and cycling experiences in a medium-sized city. An illustrative example of the methodology is drawn from data for Dublin, Ireland. We aggregate local environmental data with 81,403 Strava cycle trips, contextualised by feedback from 335 cyclists, to estimate exposure levels and infer impacts on experiences and behaviours. Results demonstrate that cyclists recognise that they are subjected to increased noise levels and experience negative psychophysical consequences as a result, but they tend to downplay the impact of noise as merely a minor annoyance. Noise also impacts behaviour, most noticeably through temporal and spatial detours. Geospatial mapping was used to visualise the relationship between noise pollution and cycling activity. Estimating traffic noise levels across two cycle routes, direct versus popular detour, revealed a +10 dB(A) increase in exposure for a saving of approximately 4 min on the direct route compared to the detour. Spatial inequities in exposure levels may have serious health consequences for cyclists in a city such as Dublin. The methodology is demonstrated as suitable for policy-level interventions and planning purposes.
Introduction

An essential aspect of urban life is mobility [1], and with one-third of the world's population expected to reside in cities by 2030 [2], decisions regarding how we move in cities are pressing. Cycling is the most energy-efficient mode of transport, with unmistakable individual and societal health and well-being benefits due to physical exercise and space efficiency, and no production of greenhouse gas emissions, air pollution, or noise pollution [3]. In contrast, motorised vehicles exert grossly negative consequences on society, including but not limited to traffic congestion, severe accidents, and air and noise pollution [4]. Transitioning from driving a car to cycling can amount to a 9-fold benefit to the health of an individual [5]; in the Irish context, active travel saves the Health Service €29.2 m every year [6]. Importantly, greater numbers of people cycling and walking bring greater safety and enjoyment, unlike vehicular transport [7]. Not surprisingly, there are renewed drives towards increasing the numbers of people choosing to cycle as urban transportation [8]. However, a risk factor to achieving this modal shift may be noise pollution, of which motorised road traffic is the most severe source in cities [9]. The World Health Organisation recommends a limit of 53 dB Lden average exposure, with levels exceeding this associated with harmful health effects. Others argue that these WHO limits underestimate the detriments to health and wellbeing that can result from much lower levels [10]. In contrast to other stressors (such as exposure to second-hand smoke), which are falling, noise exposure is rising in Europe [11]. With traffic noise frequently exceeding 60 dB(A), exposure levels are highest among active travel users due to their proximity to traffic and lack of protection [12,13].
The detrimental and far-reaching impacts of noise on public health are becoming increasingly hard to ignore [14]. While researchers have long been aware of the dangers of noise-induced hearing loss, it has more recently been shown that elevated or prolonged exposure can result in serious psychological and physiological impacts [15]. Physiologically, this can involve the release of stress hormones, arousal of the endocrine and autonomic nervous systems, increased blood pressure, and heart rate fluctuations [16,17]. As a result, people experience feelings of irritability, anxiety, nervousness, and mood swings [18]. Far from being a mere irritant, traffic noise is independently associated with increased morbidity and mortality risks, including obesity [19], immune system dysfunction [20], cognitive impairment [21], attention disorders in children [22], and diabetes [23], among a myriad of others [10]. According to one survey, 44% of Europeans recognise the large impact noise exerts on people's health. However, Ireland featured the lowest recognition rate of any country at 16%, indicating that Irish citizens are significantly less informed than their EU peers on this matter [24].
While cycling facilitates building cardiovascular fitness, mental health, and improvements to quality of life [25,26], average noise levels experienced by cyclists have reportedly reached 71-75 dB, effectively acting as a "barrier effect", harming and further deterring people from riding bicycles [13]. Research into cyclist noise exposure has begun to gain ground but is far from mature or mainstream. The majority of research on this subject has been published since 2015, suggesting an increase in concern for environmental factors related to cycling, perhaps due to renewed global drives to increase the modal share [8] or advances in the development of low-cost, increasingly smart mobile technologies in the past decade [27]. Indeed, many studies have used portable noise sensors, for example [14,28,29], carried by research staff of the respective projects on predefined routes [12,13,29,30]. This reflects the frequent objective of simply monitoring city-level noise pollution rather than examining the experiences of people who choose to cycle. Indeed, fewer studies have been concerned with measuring the exposure of those who choose active mobility [18,31-33]. However, a recurring finding is that objective noise levels do not predict human perception of noise and reactions such as annoyance and stress [13,14,34]. Gössling et al.
[18] highlighted several coping behaviours used by cyclists, demonstrating that the majority of cyclists detoured from the most direct path to avoid heavy traffic situations, increasing distances travelled by an average of 6.4%, at an extra cost of €0.24/km per person. Despite the time and monetary costs, cyclists may substantially reduce their noise exposure by taking these different routes [35]. To develop healthy, sustainable, and inclusive cities, both objective and subjective measures of environmental noise must be considered in the development of cycle networks. Urban planners and transportation engineers alike need to be informed by cyclists' views, in other words, the end-users [36]. Crowdsourced and open-access initiatives can assist in identifying city streets that cyclists use or avoid, thereby uncovering those they deem to be unsafe or unpleasant, while further matching of environmental data such as noise heatmaps could clarify why specific routes are or are not chosen. In this way active travel users can provide insight, participate in, and inform decision-making [37]. Further, as people often report ignoring or even failing to perceive environmental risks [38], examining cyclists' behaviours could provide an alternative way of measuring experiences of noise and urban cycling.
One popular fitness-tracking smartphone application is Strava. Cyclist records from Strava have been widely used, for example, to understand where and how people cycle in cities [39], the barriers determining the gender gap in cycling [40], spatial and temporal variations in cycling volumes [41], cyclist interactions with new developments [42,43], as well as air pollution exposure during cycling [44]. One Canadian study monitored noise exposure of cyclists on pre-defined routes; however, the authors used Strava solely to validate cyclists' GPS locations rather than to inform any insights regarding impacts of noise on cyclists [45]. Additionally, Strava data is utilised by a number of transportation bodies across the world [46,47]. This work will utilise such crowdsourced data to obtain insight into behavioural practices in relation to environmental health risks such as noise. We explore the potential of integrating multiple sources of data to understand the relationship between road traffic noise and cycling experiences. Firstly, there is a lack of knowledge of the problem of cyclist noise exposure, particularly in Ireland, and it is therefore currently difficult to incorporate noise considerations into transport policies and developments to assure gains in health and wellbeing. Secondly, in general, there is little research into the relationship between cyclist exposure and cyclist behaviour. This research aims to develop a methodology to gain insight into real behaviour and subsequently inform data-driven policy. One way for city planners and policy makers to incorporate cycling into their planning process is by utilising a straightforward and inexpensive method. Herein we demonstrate one such feasible methodology, prior to any rigorous implementation of the technique. This methodology will answer the following questions.

1. Do cyclists report that noise impacts their experiences cycling in Dublin?
2. Is there evidence of impacts of noise on their behaviour?
3. Is there also evidence of noise impacts on behaviours in the crowdsourced data?
4. Can these datasets be used to estimate the exposure of an urban cyclist to noise?

Materials and methods

The illustrative study location is Dublin City, the capital city of Ireland, with a population of 588,233 [48]. The entire Dublin Metropolitan Area hosts a reported 95 km of "traffic-free" cycle routes and 118 km of lanes physically separated from traffic [6]. Dublin has a cycling modal share of 6% as of 2019 [49], compared to 10-30% in many other cities [18]. Every day, the people walking and cycling in Dublin remove up to 330,000 cars from the roads. The yearly individual and societal benefits of these behaviours include 3207 serious health conditions avoided, €1.1 billion generated for the economy, and reductions in greenhouse gas emissions equivalent to 340,000 flights from Dublin to Heathrow [6]. This study specifically considered the inner aspect of the city, composed of the 5 local administrative areas of Dublin City Council (see Fig. 1) [50].

Data collection

At present, monitoring the modal share of cycling in Dublin involves conducting manual counts of individuals commuting towards the city centre at specific locations along the canal cordon, on certain dates throughout the year [52]. Several datasets and procedures were used herein, including crowdsourced cycling data, municipal noise data, as well as one of the largest surveys conducted on cyclists in Dublin, all described below. These were synthesised to gain insights into the sound environment, while the surveys operated as a check of the validity of cyclists' current experiences cycling in Dublin. See Fig. 2 for a visualisation of the data collection sources and analysis pipeline.
Cyclist experiences

This project received ethical approval from the relevant Trinity College Dublin ethics committee prior to commencement. Drawing on previous research, survey questions were drafted enquiring into participants' general experiences cycling in Dublin city, perceptions of traffic noise, and the potential impact of noise on their behaviour [18,34]. In total, 19 questions were composed across 5 categories; for an outline see Appendix A Table A1. Six questions asked about (a) participants' general cycling habits, the length and breadth of their participation in cycling. Five questions asked about (b) their experience of cycling in Dublin, including the perceived affective quality of the soundscape of Dublin streets. Four questions enquired about (c) cycling behaviours, and three about (d) general demographics regarding age, gender, and life situation. One item consisted of (e) a noise sensitivity assessment. The noise sensitivity assessment included items from a reduced Weinstein Noise Sensitivity Scale (WNSS) [53], as previous research has shown that a limited number of items can still accurately define profiles of users' noise sensitivity [54]. Noise sensitivity refers to an individual's degree of aversion or reactivity to noisy situations [54]. People with high noise sensitivity pay more attention to noise, find it more threatening, and show slower reactions to noise [55]. High sensitivity has many adverse effects, including elevated heart rate [56], greater annoyance [57], and increased risk of psychological ill-health in noisy road traffic environments [58]. Sensitivity profiles can be used to predict who is more likely to experience negative health effects from noise, as well as who stands to benefit most from efforts to reduce their exposure [54].
Participants were recruited to complete the survey via university email, Twitter, and subsequent snowballing. Data collection was conducted over the month of June 2022. Questions were presented to participants using the online statistical survey platform LimeSurvey, an open-source application.

Cyclist behaviours

Strava is a smartphone app that allows users to record the GPS location and timing of their physical activities, such as running and cycling, and share them publicly or privately on their profile. Strava provides free access to this data to external partners who seek to improve active transport infrastructure. A Research Partnership Request was made to Strava for access to Dublin cycling data. Strava Metro, Strava's data service, aggregates the billions of data points of public cycling activities across street segments and origin and destination polygons, which are mapped to OpenStreetMap. Cycling counts are rounded to increments of five; fewer than three trips on a segment is rounded to zero to protect user privacy. This data is de-identified from Strava users and is provided to research partners in the form of counts of trips along street segments. Street segments within the Dublin city region over the period from May 1st to 31st 2022 were extracted.
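As a minimal sketch, the privacy rounding described above (counts below three suppressed to zero, others rounded to increments of five) might look like the following; this illustrates the policy as described in the text, not Strava's actual implementation, and the tie-breaking behaviour is an assumption:

```python
def privacy_round(count: int) -> int:
    """Suppress small counts, then round to the nearest multiple of five.

    A sketch of the rounding policy described for Strava Metro counts;
    exact tie-breaking is an assumption, not Strava's documented code.
    """
    if count < 3:
        return 0
    return 5 * round(count / 5)

print([privacy_round(n) for n in [0, 2, 3, 7, 13]])
```

Applied to raw segment counts, this is why low-traffic segments appear as zero in the extracted data and all remaining counts fall on multiples of five.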
Noise

To examine which sources of information could be used to estimate a level of exposure for cyclists, this study considered spatial patterns of noise across the city using predicted noise data from the Environmental Protection Agency (EPA). Dublin City Council's (DCC) strategically placed real-time noise monitors were used to validate the EPA noise maps. These noise maps are made under the framework of the European Environmental Noise Directive [59,60]. The directive has led to increasingly fine-resolution noise modelling data for urban agglomerations, and further refinement can be expected in future rounds of noise mapping. Eventually these static noise maps could be replaced with a live data feed through a smart cities approach.

EPA strategic noise maps. In global efforts to reduce and protect public health from noise pollution, every 5 years European Member States are legally required to generate strategic noise maps for major roads based on assessments or predictions of noise exposure in the given areas. In Ireland, these maps are created by the National Roads Authority in conjunction with the relevant local authorities. Noise levels in decibels (dB(A)) along all roads exceeding 3 million passages per year are represented in the dataset. Eq. (1) was used to convert from hourly averages to Lden, where Levening and Lnight are subject to additional weighting to reflect the increased annoyance of noise during the evening and night periods:

Lden = 10·log10[(12·10^(Lday/10) + 4·10^((Levening+5)/10) + 8·10^((Lnight+10)/10))/24]    (1)

In this case study, both the day and evening time periods are relevant for cyclist noise exposure. Because Lday/eve is not a standard metric, it was decided to use the Lden value. For the monitoring station nearest to the location used for this case study, the Lden was 59.8 dB(A).
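The Lden aggregation of Eq. (1) can be sketched directly, assuming the hourly averages have already been combined into day, evening, and night levels under the standard 12/4/8-hour split used by the Environmental Noise Directive:

```python
import math

def l_den(l_day: float, l_evening: float, l_night: float) -> float:
    """Day-evening-night level (Eq. (1)): energy-average the three
    period levels over 24 h, penalising evening by 5 dB and night by
    10 dB to reflect increased annoyance at those times."""
    return 10 * math.log10(
        (12 * 10 ** (l_day / 10)
         + 4 * 10 ** ((l_evening + 5) / 10)
         + 8 * 10 ** ((l_night + 10) / 10)) / 24
    )

# With an identical 60 dB(A) level in all three periods, the evening
# and night penalties push Lden several dB above 60.
print(round(l_den(60.0, 60.0, 60.0), 1))
```

Because the averaging is energetic rather than arithmetic, the night penalty dominates: quiet nights lower Lden far more than equally quiet daytime hours would.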
To compare the noise exposure of cyclists on direct versus detour routes, Strava segments surrounding these noise monitoring stations were screened for the presence of alternative cycle route options; in other words, cases where Strava records both a direct route and a less direct, more often used route for one origin-destination pairing. One suitable segment was found for this analysis in the vicinity of DCC Rowing Club (for location, see Fig. 7a). Along this segment, noise values were extracted and an average equivalent noise exposure for each route was calculated using the time-varying noise equation in Eq. (2), where ti is the time spent exposed to level Li:

LAeq = 10·log10[(Σi ti·10^(Li/10)) / Σi ti]    (2)

Data analysis

2.2.1. Python

The survey data was processed and analysed in Python (version 3.8.8). Several pre-processing steps were completed before analyses were conducted. Responses to the majority of questions are reported as percentages of the entire sample. Two questions involving categorical answers on a five-point Likert scale required assigning numerical values. Firstly, to calculate the metric for ratings of the soundscape of Dublin, the following coding scheme was applied: "Strongly disagree": 1, "Disagree": 2, "Neutral": 3, "Agree": 4, "Strongly agree": 5. The negative adjectives ("annoying" and "chaotic") were reverse-coded. Each participant's scores across the 5 adjectives were summed and divided by the number of items (5) to produce a soundscape score. Soundscape scores less than 3 were coded as negative; in this way, neutrality towards any adjective was considered a positive rating. Secondly, items of the noise sensitivity assessment were similarly assigned numerical scores, with "I get used to most noises without much trouble" reverse-coded. These scores were summed and divided by the number of items. Sensitivity scores above 3 were coded as high. Soundscape scores and noise sensitivity scores were then used in statistical analyses.
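The Likert coding and reverse-coding just described can be sketched as follows; the two negative adjectives are from the survey, while the other adjective names are placeholders (the full list is in Appendix A Table A1):

```python
LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}
REVERSE_CODED = {"annoying", "chaotic"}  # the survey's negative adjectives

def soundscape_score(ratings: dict) -> float:
    """Mean of the adjective ratings, reverse-coding the negative
    adjectives (1 <-> 5, 2 <-> 4); scores below 3 are read as a
    negative appraisal of the soundscape."""
    total = 0
    for adjective, answer in ratings.items():
        value = LIKERT[answer]
        if adjective in REVERSE_CODED:
            value = 6 - value  # reverse-code negative adjectives
        total += value
    return total / len(ratings)

# "calm" appears in the survey; "pleasant" and "exciting" are placeholders.
example = {"calm": "Disagree", "pleasant": "Agree", "exciting": "Neutral",
           "annoying": "Agree", "chaotic": "Strongly agree"}
print(soundscape_score(example))
```

The noise sensitivity score follows the same pattern, with "I get used to most noises without much trouble" as the reverse-coded item and a cut-off above 3 coded as highly sensitive.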
GIS

To perform a visual analysis of the EPA noise contours and Strava data, the data was imported into ArcGIS Pro (Esri Inc., 2020, version 2.9.0). Both datasets were clipped to include only information within the boundaries of the 5 administrative areas of Dublin city [50] and used to create a heat map of total trip counts on each road segment across Dublin. In creating a map of the predicted noise levels across the city, the contour value bands were adjusted for equal subdivisions and to better reflect the WHO health limits, under which levels greater than 53 dB Lden present problems to humans [9]. Symbology was changed across visualisations to facilitate a clearer view of insights from the datasets. Reflecting the evidence of detouring behaviours from the survey, we attempted to estimate the levels of noise exposure across routes selected by cyclists where an alternative, more popular route diverges from the most direct route. For this, the Strava Dashboard was used to identify segments between one origin and destination for which there was a direct route and a longer, yet more frequently chosen, route. The four noise monitoring stations were used as reference points to begin the search process.
Survey responses

Three hundred and thirty-five cyclists completed the survey (men = 65%, women = 34%, non-binary = 1%). Most respondents had been cycling in Dublin for more than 3 years (75.2%); 54.9% cycle several times a week, while 28.4% cycle every day (Table 1). Interestingly, 15.5% stated they had started cycling during the pandemic restrictions. As the dissemination of the survey was partly communicated through university email, Trinity College Dublin's faculty and students could have contributed significantly to the sample. Trinity College Dublin's demographic composition is likely to be more varied than that of a typical corporate entity, and its central location makes it highly accessible to commuters from various locations across the city. For more information regarding the demographic attributes of the sample, see Table 1. Weekdays are divided into rush-hour and non-rush-hour time periods. Due to traditional work patterns, weekends in Dublin are not considered to have a rush hour. To avoid confusing survey respondents, the phrase "rush hour" was avoided for the weekends and replaced with morning, daytime, and evening periods. The most common time of day to cycle was weekday rush hours (80%), followed by weekday non-rush hours (55.2%) (Fig. 4a). The ubiquity of cycling during rush hours was reflected in 83.9% of respondents stating that a purpose of their cycling was commuting, while approximately half of all respondents acknowledged cycling as a recreational activity in itself, to reach leisure, or to engage in household responsibilities such as shopping or transporting dependents. Only one person stated that cycling is part of their employment (Fig. 4b). When asked about their motivations for choosing to cycle, the majority stated time efficiency (89.6%). This was followed closely by environmental (81.2%) and health reasons (81.5%). Seventy-one percent expressed choosing to cycle because they enjoy it (Fig. 4c).
When asked to describe their general experience cycling in Dublin, 66% reported a positive experience, that is, having chosen the descriptor "excellent", "good", or "acceptable". The remaining 34% described their experience as "bad" or "terrible" (Appendix B Fig. B1). However, when asked to appraise the sound environment while cycling, 77% of the sample expressed a negative experience, defined as agreement with negative adjectives and disagreement with positive adjectives (see Appendix A Table A1 for the list of adjectives). Further, the descriptions of the soundscape eliciting the highest agreement and disagreement from cyclists were "chaotic" and "calm", respectively. Bombardment of the senses can evoke different behavioural coping strategies among cyclists of varying skill levels [61]. In this vein, participants were asked to state whether they engage in specific protective practices to negotiate cycling when they are in loud environments. In terms of behaviours, 30% of the sample said they cycle closer to the kerb, 25% take the centre of the lane, and 22% wear headphones. Regarding psychological reactions, 60% report they are bothered by the noise, 42% actively try to ignore loud noise, and 57% feel their body tense, while 47% report getting used to the noise easily. In terms of the weighting of different environmental safety considerations, more people worry about air pollution in these contexts than about noise: 82% worry about exhaust fume exposure compared to 22% worrying about noise exposure. Interestingly, 7% reported feeling safe in loud environments.
This can exert an emotional toll. Almost half of the sample, 49%, responded that noise has affected their well-being while cycling (Table 2). Closer investigation into specific emotional states revealed that a majority of the sample have been left feeling irritable/angry (91%), anxious/nervous (79%), and unhappy (67%) after cycling in Dublin. Further, 32% responded that traffic noise impacts the time of day they choose to cycle (Table 2). While this is a minority, it is noteworthy that it is unlikely that bus or car users would factor noise into their choice of time to travel.

Detouring

For many transport planners and engineers, success in transport is the fastest route from A to B. However, there is compounding evidence that it is not uncommon for cyclists to choose to sacrifice time in return for increased safety or pleasantness [61]. In this sample, 39% answered positively when asked whether they take detours explicitly to avoid noisy routes (Table 2). These detours ranged in length from 4 to 40 min per day, with some reported detours amounting to 10-20% extra time. Many participants mentioned choosing paths through parks: "I'd generally cycle through the Phoenix Park as a detour to avoid traffic build-up and noise, adding 20 min to journey" and "I take the war memorial gardens route instead, it's more enjoyable". One person wrote of the impermeability of cyclable routes, "It takes knowing the short cuts though, so I only do this in my locality", while some mentioned they would like to detour but have few choices available to them, more pleasant, quiet or otherwise: "No suitable detours available but if I could, I would". The presence of children was mentioned as an incentive to detour, explicitly to be heard while cycling: "with children, I chose routes with less traffic and less noise, so they can hear me if I'm guiding them". Some stated they detour for safety reasons rather than noise pollution: "It's more about avoiding busy routes. Noise is of less concern than being run over" and "I always try to go for a safer route but if protected cycle route is noisy, I may still choose it". Simultaneously, others reported detouring and recognised a previous lack of consideration for noise: "I hadn't thought about noise in this way before, but it does affect enjoyment and tension". Traditional transport philosophies have ascribed transport to a temporary state, devoid of sociality [61]. However, urban infrastructure use is inherently social, as the life of a city is spoken through what is and is not facilitated [62]. In this regard, participants were asked whether they can hear a companion speak while cycling in Dublin. Forty-four percent stated, "I don't talk to someone while cycling", perhaps based on pre-emptive safety concerns due to their close proximity to dense motor traffic. Thirty-five percent responded "no", that is, they may try to cycle with a companion but are physically unable to hear them, while 21% responded "yes" (Table 2).
Table 2. Percentage of cyclist responses (N = 335) to 4 survey items stating their level of dis/agreement with the influence of traffic noise on their experiences and behaviours.

Noise sensitivity

In general, higher objective noise levels tend to be associated with higher perceived noise levels, but this relationship is not always linear. The relationship between objective and perceived noise levels can be complex and influenced by a range of factors. One hypothesis is that cyclists may experience streets differently according to self-reported noise sensitivity. In this sample, 73% scored as highly noise sensitive and 27% as low. Six statistical tests were conducted at a Bonferroni-corrected α-level of 0.008 to assess the following relationships. The results of a Mann-Whitney U test indicated that ratings of the soundscape do not differ significantly between high (Md = 2.4) and low noise sensitive cyclists (Md = 2.4, U = 11842, p = 0.345); we conclude that there is no significant difference between low and high noise sensitive people in their ratings of the Dublin soundscape. Furthermore, five Chi-Square Tests of Independence were performed to assess the relationship between categorical variables, that is, between noise sensitivity and various cycling experiences and behaviours. Firstly, no significant relationship was found between noise sensitivity score and reports of detouring, χ²(40, N = 335) = 16.33, p = 0.999. That is, scoring high in noise sensitivity had no association with reports of taking detours to avoid noisy routes. A second Chi-Square test revealed no significant relationship between sensitivity score and reporting that noise influences the time of day one chooses to cycle, χ²(40, N = 335) = 23.14, p = 0.985. In an examination of whether high noise sensitivity was related to increased psychological distress, no significant relationship was found, χ²(40, N = 335) = 25.63, p = 0.962. That is, scoring high in noise
sensitivity had no association with noise affecting one's wellbeing while cycling. Finally, there was a significant relationship between reports that noise influences one's wellbeing while cycling and both detouring, χ²(4, N = 335) = 16.04, p = 0.003, and the time of day one chooses to cycle, χ²(4, N = 335) = 36.36, p < 0.001. That is, in feeling distress, people tend to alter their behaviours around the road traffic to mitigate the negative consequences of noise rather than ignoring or persevering through it. What matters for behavioural change, then, appears to be not whether you are more sensitive to noise but whether noise noticeably affects your feelings of wellbeing.

GIS-based mapping

This research correlated crowdsourced cycling activities with municipal noise maps through geospatial mapping. Noise and cycling activity were visually assessed individually and compared using a GIS framework to gain further insight into potential patterns and relationships between cyclist route selection and road traffic noise. A total of 81,403 Strava trips were recorded by users in May 2022, with volumes greatest from 8 am to 9 am, followed by 5 pm to 6 pm. First, a heat map of routes taken by cyclists was created, revealing those most commonly cycled: arterial routes serving the city centre, the banks of the River Liffey, and along the cordon formed by the canals (see Fig. 5). See Table 3 for a brief summary of Strava cycling records for May 2022 and the previous May of 2021. In Table 3, there is a marked difference in the share of commuting and leisure activities between 2022 and 2021, likely reflecting the governmental measures to work from home where possible during the COVID-19 restrictions. The total number of users and activities recorded was lower in 2022, but the average number of trips recorded per user remained similar (6 in 2022 and 5.5 in 2021). Perhaps this reflects a waning popularity of the Strava app, fitness trackers in general, or growing awareness of data privacy concerns.
Secondly, from the geospatial mapping of EPA noise data across Dublin (Fig. 6) we can see that city noise levels are often predicted to exceed 60 dB (orange to dark red contours), particularly in very central areas and on outer routes near the motorway. Over the month of May 2022, the average Lden results calculated from the real-time noise monitoring stations were as follows: Dolphins Barn, 62.52 dB; DCC Rowing Club, 59.84 dB; Ballymun, 67.68 dB; and Strand Road, 73.19 dB, somewhat exceeding the strategic map estimations.

Detour case study

From the Strava trip counts, there was some evidence that cyclists deviate from the most direct routes. This was echoed in the survey responses, with 39% stating they detour. Further insights from the survey highlighted that many choose to detour through parks rather than mix with loud traffic. To examine the variations in levels of noise exposure of these detours taken by cyclists, this analysis focused on one route along the south of Phoenix Park near the DCC Rowing Club noise monitor. The origin and destination can be seen in Fig. 7a. The direct segment on the route of focus is a major urban arterial route, with a speed limit of 50 km/h and painted bike lanes on the footpaths; it is featured in blue in Fig. 7a (4.9 km, elevation 17 m). The detour features a shared-use road with a recently updated 30 km/h speed limit through green spaces and forested surroundings, in red in Fig. 7a (5.8 km, elevation 34 m). We overlaid the cyclist records with the EPA noise contours (see Fig.
7b).Values of the noise contours along both the popular and direct route were extracted.The mid-point of each contour band was taken as a proxy to estimate average exposure.For example, 52 dB was taken as the mid-point for the 50-54 dB contour band (see Table 4).As the EPA does not map predicted noise levels below 50 dB, some areas along the routes had no noise values, therefore, a value had to be substituted.The choice of substitute was quite trivial.For ease, 0 dB was chosen.This triviality is outlined in Table 5, where we can see that a choice of 0 dB, 35 dB, or 50 dB makes little difference to exposure The distances of each route and an average urban cycling speed, taken to be 13.5 km/h [63], were used to calculate a noise profile of time exposed to each dB level, for each route (Table 4).To be able to compare exposures for routes of differing lengths, the exposure for a cyclist on an equivalent 20-min journey was calculated using each route's noise profile and Eq. ( 2).This noise exposure estimation focused on the deviating sub-segment, where the routes deviate to where they re-join, to gauge the different exposure such a detour would equate to for a person.Noise levels were found to be 65.4 dB(A) for the direct route and 55.1 dB(A) for the popular detour route (see Table 5).Therefore, a cyclist's exposure to noise was 10.3 dB(A) higher on the direct route, saving 4 min 5 s compared to someone detouring through the park.A validity check of the nearby real-time noise monitor data was encouraging with an average of 59.8 dB L den , the slightly lower level likely reflecting its displaced position from the roadside.The average L Aeq1hr values for the monitoring station during the period of the study are reported in Fig. 8.The above is an estimate of the difference in exposure of cyclists who do and do not take detours.Noise exposure of a cyclist on a journey across the city would differ. 
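The equivalent-exposure comparison above rests on energy-averaging decibel levels over time. The paper's Eq. (2) is not reproduced in this excerpt, so the sketch below assumes the conventional time-weighted L_Aeq form, with illustrative (not measured) profile values. It also demonstrates why substituting 0 dB versus 50 dB for unmapped segments barely moves the result: a decibel is a logarithm of energy, so low-level segments contribute almost nothing to the sum.

```python
import math

def leq(levels_db, durations_min):
    """Time-weighted equivalent continuous sound level (L_Aeq).

    Energy-averages the levels over the total duration:
    L_Aeq = 10 * log10( sum(t_i * 10^(L_i/10)) / sum(t_i) )
    """
    total = sum(durations_min)
    energy = sum(t * 10 ** (l / 10)
                 for l, t in zip(levels_db, durations_min))
    return 10 * math.log10(energy / total)

# Illustrative profile (not the paper's data): 15 min at 67 dB(A)
# plus 5 min at 55 dB(A) on a 20-min journey.
overall = leq([67, 55], [15, 5])  # roughly 65.8 dB(A)

# Substitution check: 2 unmapped minutes filled with 0 dB vs 50 dB
# changes the 20-min average by well under 0.1 dB.
delta = leq([65, 50], [18, 2]) - leq([65, 0], [18, 2])
```

This is why Table 5 shows the choice of substitute value for unmapped areas to be practically irrelevant.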
Discussion

Traditionally, spaces of mobility were considered temporary, transient, and not constitutive of places in which meaning is experienced, a concept owed to non-embodied forms of mobility [61]. However, cycling is embodied, sensorily open to the environment, and socially oriented. The findings herein demonstrate that road traffic noise impacts cyclists negatively. These impacts are often underestimated, with excessive exposure levels potentially detrimental to cyclists' health and wellbeing as well as to sustainable transport goals. The geospatial mapping and route noise-profiling showed that having the option to detour may help individuals avoid the negative health consequences of excessive traffic noise exposure. However, insights from the questionnaire revealed that many cyclists are not afforded alternative routes. The implications of these findings, related recommendations, and policy implications are highlighted in the following sections.

The majority of this sample reported a positive holistic experience cycling in Dublin but a negative appraisal of the sound environment. In policy and research, urban sounds are frequently taken into account only as psychophysical stressors and, indeed, almost half the sample reported that their wellbeing is affected, predominantly being left to feel irritable, angry, anxious, and unhappy after cycling. However, to comprehensively address noise effects, this research drew from soundscape approaches, which consider sounds as resources [31]. With the soundscape most frequently described as chaotic and not calm, this speaks to the character impressed on people by the streets of Dublin and is not favourable for a city hoping to increase its cycling modal share from 6 to 13% [49]. As Ingold [64] outlines, the character of a place is ascribable to the "experiences it affords to those who spend time there to the sights, sounds and indeed smells". Local interventions providing alternative urban experiences are needed and are being implemented in
comparable European cities, such as low traffic neighbourhoods, car-free centres, and greening strategies [65,66].

The future prospects for the urban noise environment are also influenced by the transition to electric vehicles. Research has shown that there may be a beneficial effect on people's perceptions of traffic noise due to the absence of engine tones and a reduction in noise level from electric vehicles [67,68]. These benefits are potentially offset by an increase in accident risk at low speeds due to the quiet nature of electric vehicles [69]. The methodology proposed in this paper will incorporate this changing road traffic soundscape through regular updates to the END noise maps. The CNOSSOS modelling adopted in EU legislation has open categories for future electric vehicles and is already adapting to the changing noise environment [10,60]. Recent research has demonstrated that the CNOSSOS-EU framework is more accurate than previous methods such as CRTN-TRL [70] and RTN-96 [71]. New data collection strategies, such as the use of UAVs [72], are leading to increased spatial and temporal resolutions for model input data. Such efforts by researchers and policy makers are likely to increase the reliability of strategic noise maps over time. In the interim, this research has demonstrated that the combination of existing datasets can bridge the gap and enable current policy makers to make more informed decisions without the need for high-cost data-gathering exercises.
Table 4. Noise profile for the direct and popular routes, consisting of the time spent in each noise contour band when cycling along that route.

It was also found that a majority of these regular cyclists are unable to communicate with a travel companion. This is likely a barrier to enjoyment for many people and points to city inhabitants being let down by the current provision of infrastructure. Being able to communicate appears particularly important for people when cycling with children, for their safety and comfort. Cycling in childhood can influence propensity to cycle in later life [73], but parental behaviours and attitudes can be very influential, with perception of safety a widely-held priority [74]. To be effective, bicycle infrastructure must prioritise the users' desires and needs [75], rather than solely considering traffic flow and cost. This lack of provision for communication may seriously hinder efforts to increase the uptake of cycling. It has been highlighted how the sociality of cycling is integral to the maintenance of Amsterdam as a "cycling city" in its function of attracting non-Dutch groups with little experience cycling [76]. Further, experiencing a socially engaging area has been shown to compensate for negative influences of high noise pollution [14]. Almost 40% of this sample reported that they take detours (spatial detours) to avoid noisy routes, sacrificing what are often substantial amounts of time for reduced unpleasantness. Additionally, approximately one third of the sample reported that noise influences the time of day they cycle (temporal detours). Traditionally, urban mobility is construed and designed for in terms of rapid connections between origins (A) and destinations (B), defined by the push and pull factors of A and B [61]. However, this research highlights how practitioners must redefine their conceptualisation of urban mobility to incorporate not only
push and pull factors, user demand, or network capacity but also the auditory quality of the travel experience. Indeed, the Dutch bicycle traffic manual, considered by many to be the gold standard, highlights attractiveness as one of the five crucial components of safe and effective cycle networks, referring to the desirability of one's surrounding environment [77]. Additionally, in terms of successful urban mobility, the temporal and energy efficiency of cycling has been previously acknowledged by municipal decision-makers and cyclists alike. However, similar to this study, cyclists acknowledge more embodied benefits, such as health and enjoyment, than are considered by decision-makers [36], highlighting the need to incorporate user motivations in bicycle network planning and policy.

Annoyance in response to noise is itself associated with increased risk of high blood pressure [78], and cyclists who are highly sensitive to noise have a worse perception of the auditory environment [34]. To avoid reductions in sustainable mobility use, cycling infrastructure must be accessible and comfortable for everyone, not only the most fearless or environmentally insensitive. Our results demonstrated no associations between noise sensitivity and wellbeing, spatial detours, temporal detours, or ratings of the street soundscape. There was a relationship between reports of noise influencing wellbeing and noise-induced temporal and spatial detours. This could suggest that noise noticeably affecting one's wellbeing is more significant for behaviour change than a general sensitivity to noise; however, the nature of this relationship was not directly assessed. Ultimately, attempts to mitigate noise impacts would do better to focus on wellbeing in general rather than assigning resources to focus on the experiences of noise-sensitive groups.
These results also highlighted that cyclists are bothered by the noise and feel physical and psychological tension, but their concerns regarding noise pale in comparison to those regarding physical traffic or traffic-fume inhalation, with noise relegated to a "nuisance". Thus, perceptions of a route as more "pleasant" may implicitly be referring to the absence of bombardment of the senses. This supports both Bickerstaff's [38] finding that the public frequently overlooks or downplays perceptions of environmental threats and earlier findings that the Irish populace gives noise pollution less consideration than is typical of other European citizens [24].

Patterns of cycling can be difficult to monitor; however, 'big data' can help researchers and decision-makers make informed estimations. This research visualised the historical behaviours of 13,519 people who regularly choose to cycle in a medium-sized city, across 81,403 journeys. Despite the local authority having a smart city team supporting mobility projects, current monitoring of cycling modal share in Dublin involves manual counts of people travelling towards the city centre at discrete locations along the canal cordon and on discrete dates each year. These numbers are then used to inform cycling infrastructure developments [52]. However, from our geospatial map, a concentration of cycling within the bounds of the canal cordon is evident. Indeed, the small size, flat gradient, and traffic congestion of Dublin facilitate choosing a bike for short journeys within the city centre. These spatial patterns of cycling attest to the likely underestimation of the City Council's current measurements of cycling demand and, therefore, under-investment in infrastructure provision. Indirect measurements, such as the temporally detailed Strava data, should be taken advantage of to fill existing knowledge gaps. Indeed, using Strava data to gain insight into cycling is becoming increasingly common in research and transport practices
[46,79]. In visualising Strava fitness activity, it is also remarkable that experienced social inequities can become discernible on a map. Constructing the Strava heatmap of Baltimore, USA, led to the stark realisation that the segregation felt in the streets and opportunities of the city was also apparent in the density of running activities across the city [80]. The cycling map of Dublin shows no such clear-cut inequities, at least when considered in isolation. When integrated with the strategic noise maps and reports of cyclist experiences, a picture begins to form of inequity in Dublin. Thirty-nine percent of the sample reported that they take detours, many of whom mentioned that these involve routes through parks, gardens, or along the canals. Others reported a desire to avoid traffic but have no detour options. Therefore, integrating multiple datasets can unearth unexpected insights, such as the spatially unequal distribution of green routes or public parks. For example, high-income urban areas have been shown to possess more green spaces [81,82] as well as to facilitate greater and safer access to cycling provisions [83,84]. Alternatively, a lack of knowledge of alternative routes could speak to a need for more public sharing of area-specific knowledge, for example hidden trails, and the importance of integrating this within active travel technologies. It should also be noted that research has shown that vegetation is able to improve environmental noise perception, which may provide wider benefits [85].
This analysis unexpectedly revealed that the number of users and activities recorded in Dublin declined from 2021 to 2022. This could reflect a waning popularity of the Strava app, or fitness trackers in general, although this seems unlikely as reports attest to the continually growing popularity of self-tracking fitness apps [86]. Alternatively, it may reflect a growing awareness of data privacy concerns, although it has been noted that while internet users worry about their privacy, they often do not go to the effort of actively determining how they present themselves online [87]. This is mere speculation; however, changing trends in society that might be reflected in changes in technology usage must be investigated if technological tools within smart city initiatives are to serve the interests of citizens. For instance, what public needs are these apps no longer meeting?

As cycling is an activity strongly embodied and highly interactive with the surrounding environment, conditions of the surroundings such as noise pollution exert a much stronger influence on experience than for other transport modes [61]. This analysis examined exposure over a direct and an indirect segment, amounting to 65.4 dB(A) and 55.1 dB(A), respectively. These levels are lower than but comparable to objective measurements of cyclist noise exposure in larger cities such as Copenhagen, Paris, Montreal, and Ho Chi Minh City, which have demonstrated levels between 68.4 dB(A) and 78.8 dB(A) [30,35]. This is an approximate exposure level for one area of Dublin and would differ if estimates were conducted over a greater expanse of the city.
As environmental noise rarely exceeds 70 dB for extended periods, the harmful effects of noise on human health are typically non-auditory [10]. Research has shown that noise exposure in traffic can dominate a person's daily noise exposure [88]. Additionally, research has shown that an increase in noise exposure of 5 dB(A) leads to measurable changes in blood pressure and heart rate [89,90]. For every 10 dB L den rise in transportation noise exposure, the risk of ischaemic heart disease is increased by 6% [91]. These increases in daytime noise exposure can also lead to night-time sleep disturbance, with even small increases in occupational noise exposure leading to sleep disturbance [92]. Considering a total daily commute of 45 min, there can be considerable differences in the L den value for a cyclist due to traffic noise exposure. For a person who had an otherwise quiet day, with the workplace noise exposure of an office [93], the L den calculation can vary by between 2 and 10 dB due to the cycling noise exposures reported here.
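The claim that a 45-min commute can shift a quiet day's L den by several decibels can be checked against the standard L den definition from the EU Environmental Noise Directive: a 24-h energy average with a +5 dB penalty on the 4-h evening and a +10 dB penalty on the 8-h night. The sketch below uses illustrative levels (a hypothetical 45 dB(A) office day and a 65 dB(A) commute, not the paper's measurements).

```python
import math

def l_den(l_day, l_evening, l_night):
    """Day-evening-night noise indicator (Directive 2002/49/EC):
    12 h day, 4 h evening (+5 dB penalty), 8 h night (+10 dB penalty),
    energy-averaged over 24 h."""
    energy = (12 * 10 ** (l_day / 10)
              + 4 * 10 ** ((l_evening + 5) / 10)
              + 8 * 10 ** ((l_night + 10) / 10))
    return 10 * math.log10(energy / 24)

# Hypothetical quiet day: 45 dB(A) daytime, 40 evening, 35 night
# (the penalties make all three periods contribute equally here).
quiet = l_den(45, 40, 35)  # exactly 45.0 by construction

# Same day, with a 45-min commute at 65 dB(A) energy-averaged into
# the 12-h daytime period.
day_with_commute = 10 * math.log10(
    (11.25 * 10 ** 4.5 + 0.75 * 10 ** 6.5) / 12)
busy = l_den(day_with_commute, 40, 35)
# busy - quiet comes to roughly 6 dB, inside the 2-10 dB range
# the text attributes to cycling noise exposure.
```

The size of the shift depends strongly on how quiet the rest of the day is, which is why the text reports a range rather than a single value.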
In other words, choosing to cycle poses little risk of damage to one's hearing, but other risks to health, wellbeing, and society remain significant. Noise levels experienced by cyclists in Dublin are likely having measurable consequences on their health and should inform planning actions to curb noise attributable to motor traffic. Traffic planners must address lowering cyclists' exposure to noise pollution from traffic as well as their risk of collision and exposure to other pollutants. Using the methodology outlined here, noise estimation information can be incorporated into local authorities' procedures for infrastructure developments as well as public communication of healthy and pleasant routes. For example, data could be used to inform cyclists of upcoming loud traffic situations and provide information on crowdsourced detours. A similar endeavour is undertaken by the app Waze, which provides motorists with real-time updates on situational conditions such as road works, accidents, or heavy traffic.

A limitation of this study is the degree of error in the datasets. First, in the interest of user privacy, Strava rounds activities on segments to counts of 5, removing those that have fewer than 3 trip counts. This was not important for this analysis but is important to note for future fine-grained analyses. Secondly, the road traffic noise contours are estimations by the EPA; therefore, the exposure calculations herein are estimations. Further, averaging noise levels, while facilitating ease of exposure estimation, operates within the implicit assumption of a stationary nature of exposure [94]. Finally, there is no temporal overlap between the time spans covered by the Strava data and the noise maps; this temporal misalignment can be mitigated by considering the noise monitoring station. The noise monitoring station data can be aligned with the time period of the Strava data and used as a check on the validity of the noise mapping data for that period.
In addition, previous qualitative research has noted the utility of traffic noise during cycling to predict when there is a need to be more vigilant or defensive [61]. This advantage of noise was not included in the survey, nor was it mentioned by the participants. This may suggest a degree of priming in the questions towards regarding only the negative consequences of noise; however, 7% of the sample did report feeling safe in loud environments. A comment section on this question could have provided the opportunity for deeper insight into participants' responses. Further, in asking participants about their wellbeing while cycling, the concept was not defined and was assessed via only one question. Therefore, it may have had different implications for different people, for example, considering it in more hedonic or eudaimonic terms. Previous studies have proposed multifaceted measurements [95]. However, for the purposes of this research, participants' conception of the term was deemed less important than assessing whether participants felt impacted by noise. Additionally, due to time constraints, this study extracted data from Strava and the noise monitors for only one month, May 2022. Further work should consider a seasonal analysis, as both cycling patterns and noise levels may vary throughout the year with changes in weather and daylight availability [96]. Finally, future research could filter subsets of the Strava data, such as by trip purpose or demographics [79], to investigate whether specific cohorts show more behavioural aversion to high-traffic routes in efforts to inform more equitable transport policy and planning.
Conclusion

This study has demonstrated a mixed-method approach integrating multiple datasets to estimate urban cyclist noise exposure and gain insight into the influences of traffic noise on cyclists' experiences and behaviours. The intention of this study was to demonstrate the utility of existing datasets to provide evidence for informed policy decisions. It was demonstrated that the weakness of the noise mapping dataset could be partially offset by the noise monitoring stations: the noise map is spatially detailed data, and the noise monitoring stations are temporally detailed data. In combination, these datasets can provide a valid estimate of noise levels experienced by cyclists. This work shows that useful conclusions can be drawn for active travel planning from noise mapping data, and in future policy makers could integrate active travel planning into their noise action plans under noise mapping frameworks, such as the EU's Environmental Noise Directive.

It is important to consider the detrimental health effects of noise exposure in the planning of future cycling infrastructure. Spatial visualisations of exposure data and cycling activity using geographic information system (GIS) approaches can be used in communications with and by decision-makers. Using this method, this study provided evidence that cyclists are detouring and in doing so are reducing their exposure to noise and, therefore, noise-induced health risks. Transport practitioners must adopt practices that include noise exposure to reduce the time and costs associated with the preventative detouring cyclists engage in to mitigate stressful situations. This study has also highlighted the value of direct cyclist experiences in understanding individual cycling behaviours. As cyclists are not one homogenous group, this may be crucial to encourage cycling and address the concerns of cyclists at greatest risk of being perturbed by high noise levels. Ultimately, this study has shown that crowdsourcing for exposure
estimation is feasible and can therefore be automated and scaled up for use within local governments to inform cycling developments.

R. Wogan and J. Kennedy

Fig. 1. An overview of the region of interest featuring the five local administrative areas of Dublin City Council [51].

Fig. 2. Data collection and analysis pipeline. EPA, Environmental Protection Agency; GIS, Geographical Information Systems; ROI, Region of interest.

Fig. 3. Locations of the 17 noise monitoring stations in the Dublin City Council region.

Fig. 4. Summary of responses to survey questions enquiring into the time of day usually cycled, purpose, and reasons behind choosing to cycle as their means of transport. Multiple answers were possible.

Fig. 5. Strava cycling trips across Dublin City administrative areas for the month of May 2022.

Fig. 6. Geospatial map of the 2019 strategic noise data of the Environmental Protection Agency. Colours represent noise contour ranges pertaining to differences in dB(A) levels. Blue place-markers note the locations of noise monitoring stations. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

Fig. 7. (a) Screenshot of the route on Strava Metro with origin (O) at a Southwest entrance of Phoenix Park at Lower Lucan Road and destination (D) at the Southeast entrance of Phoenix Park at Parkgate Street. The blue line (4.9 km) represents the most direct route, and the red line (5.8 km) shows the most popular route from O to D according to trips logged on Strava. The blue place marker represents the location of the nearest real-time noise monitor, located at the DCC Rowing Club along the banks of the River Liffey. (b) Visualisation of Strava cycling records overlaid with contours of the strategic noise map. The area shown features the direct and popular routes of origin (O) and destination (D) visualised in Fig. 7a. Dark grey regions represent areas for which there was no noise data. The blue marker representing the DCC Rowing Club monitor can be used as a reference. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

Table 1. Summary of cyclist demographics obtained from the surveys.

Table 3. Strava cycling trip summary within the Dublin city region for the period May 1st-31st.

Table 5. Average noise exposure calculated for a cyclist on the direct and popular routes, showing the difference in exposure level across noise level substitution options for areas where no noise contour data is available.

2.1.3.2. Dublin City Council strategic noise monitors

DCC operates and maintains a network of 24-h environmental sound monitors at 17 locations across the city where quantifying the real-time sound quality is deemed valuable to the community. This network is shown in Fig. 3. Many local authorities lack the resources for extensive data-collection campaigns on cyclist noise exposure, and this work attempts to demonstrate what can be achieved with existing datasets informed by stationary noise monitors. The ensuing data already informs environmental policy decisions and has potential for informing active travel policies. Four noise monitoring stations across the Dublin City region were chosen for their proximity to major urban arterial roads. Sound level data from these stations across the month of May 2022 were obtained from the DCC Dublin City Air and Noise website. The following equation,
The meaning of being young with dementia and living at home

Studies that explore the subjective experiences of younger people with dementia living at home are rare. Therefore, the aim of this study was to gain an understanding of the lived experience of younger persons (<65 years) who lived at home and suffered from early-onset dementia, and the meanings that might be found in those experiences. The researchers conducted a qualitative study using a phenomenological hermeneutic approach. Data were collected through narrative interviews with four informants. Two men and two women ages 55 to 62 participated. Three of the informants lived with their spouses, and one lived alone, close to his children. The informants' subjective experiences revealed the following four key themes: entrapment by circumstances, loss of humanity, the preservation of hope and willpower, and the desire to ensure one's quality of life. These themes provide a deeper understanding of the experiences of younger people with dementia who live at home. The theme of preserving hope and willpower rebuts prejudicial contentions that life with Alzheimer's syndrome does not have anything more to offer, which may be seen as diminishing a patient's humanity. Patients' autonomy and self-determination should not be ignored. In all phases of the progression of dementia, the person in charge of giving care to the relative with dementia should be ethically aware of and reflective about the progression of his/her illness.

Introduction

In Scandinavia, approximately 350,000 people suffer from dementia. About 3 percent of these people are less than 65 years old and have been diagnosed with early-onset dementia. In Norway, it is estimated that 1400 citizens suffer from early-onset dementia, but no prevalence studies have been carried out, and the number of unreported cases is unknown.
1 When studying the experiences of people with early-onset dementia, a number of social challenges should be taken into consideration, such as the changes in relationships between dementia clients and their immediate families, 2,3 awareness of life changes, 4 feelings of being socially isolated and useless, 5 emotional and mental affliction in the form of depression, 6,7 and inactivity and lack of meaningful tasks in the absence of an active working life. 8 However, Beattie et al. 9 pointed out that most of the conclusions of research on younger people with dementia were based on the perspectives of health professionals and their relatives, and that little attention has been given to the experiences of the affected people themselves. A few recent studies use affected persons' own experiences to provide insight into how younger people with dementia experience living with this disease, but more are needed. In Johannessen and Möller's 1 study, the informants described their progress towards a medical diagnosis of dementia, detailing memory losses, failures to cope with the tasks of daily living, and struggles upon receiving the diagnosis. They also described the difficulties of developing dementia at a relatively young age, relating how this affected their self-image as well as their social interactions with others. In another recent study, 10 similar results indicated that the diagnosis of dementia was a particular area of concern, as well as coping with others' reactions when receiving the diagnosis. Following the Norwegian government's national plan for dementia, 11 most people who suffer from early-onset dementia live at home. Living at home can offer the security of familiar rooms and items; 12 it may increase opportunities to engage in a familiar community and meaningful relationships, 13 but it may also lead to boredom and inactivity. 
8 Therefore, the aim of this study was to gain an understanding of the lived experiences of younger persons (<65 years) who lived at home and suffered from early-onset dementia, and the meanings that could be found in their experiences.

Materials and Methods

In order to understand the lived experiences of people with early-onset dementia, the study employed an inductive qualitative approach. 14 This method focuses on individual meaning and the importance of understanding the complexity of the situation at hand. 15 The present study used a method of phenomenological hermeneutic analysis inspired by Lindseth and Norberg. 16 The interpreted data represented personal stories about each informant's experiences; that is, these experiences are unique and not to be generalized.

Research context and participants

The inclusion criteria for the informants were: aged below 65 years and diagnosed with dementia, living in his or her own home, able to understand the study and answer questions for the interview, and able to give informed consent. Four people, two women and two men aged 55 to 62 years and living in rural Norway, agreed to participate; written consent was obtained. Three of the informants lived with their spouses, and one lived alone. One of the informants had received the diagnosis recently, while the others had been diagnosed two to four years earlier. They were classified by medical staff in the healthcare centers as having mild to moderate dementia. The main characteristics of the participants are summarized in Table 1. Recruitment was performed by a health coordinator who had prior contacts with dementia groups. This was done to ensure the informants' integrity and to prevent unnecessary contact. The health coordinator contacted people who met the inclusion criteria with a request for participation and gave them both written and oral information about the study.
The researcher, who had experience working with dementia patients, contacted the potential participants to discuss the study further only when interest, opportunity, and access were granted. Once these criteria were met, new requests for participation were offered, and consent forms were signed. Interviews were performed only when consent from the participants had been obtained.

Data collection

Data were collected through narrative interviews with broad open-ended questions. 17 Narrative interviews are especially suited to research studies investigating the importance of identity, lifestyle, culture, and life history, and also more specifically when dealing with people's lived experiences and attempting to understand their health conditions. 18 The informants were encouraged to speak freely and in as much detail as possible about their subjective experiences in their everyday lives. The intention was to create a normal, everyday conversation; the opening question was "Can you tell me about an ordinary day in your life?" Each informant was interviewed once. Two informants were interviewed at their own homes; one informant was interviewed at a welfare center; and one informant at a daycare center. All interviews, except one, were recorded and transcribed verbatim. The interviews were performed in 2010, lasted about 40 to 60 min, and were conducted in the Norwegian language.

Data analysis

The narrative interviews were analyzed using a phenomenological hermeneutic approach inspired by the philosopher Paul Ricoeur. 16 This approach engaged the phenomenological philosophy of hermeneutic interpretation in a dialectical process. This meant that the interpretation of the text consisted of a dialogue of questions and answers, arguments, and counter-arguments, with the purpose of finding the content of each term. Interpretation moved back and forth between understanding and explanation, allowing a more comprehensive understanding to be achieved.
17 The goal was to interpret, explain, and understand the importance of phenomena. 16,19 According to Lindseth and Norberg, 16 the analysis process included three phases. The first step involved a naïve (passive) reading of the text. In this study, the texts from which the initial understanding was derived were read several times. This created a preliminary interpretation of the experiences of younger people with dementia who lived at home. The next phase, structural analysis, consisted of a thematic analysis of the text. This step is regarded as the methodological part of the interpretation, offering an opportunity to confirm or reject the naïve understanding. The texts were read several times and then categorized into meaning units based on different stories. The goal was to find topics that might be understood to convey essential meanings about the informants' worlds. The structural analysis phase was necessary to move from the naïve understanding to a critical interpretation, progressing from a superficial to a deeper form of interpretation. 20 In the final step, the texts were interpreted as a whole in light of the understanding obtained from the naïve reading, the structural analysis, and the researcher's prior understanding. 21

Methodological considerations

The first author (DR) carried out the interviews. To ensure the trustworthiness of the findings, all three authors read the interview transcripts individually, followed the path of analysis from naïve reading to structural analysis, and then interpreted the whole. Finally, the authors discussed the analysis together until consensus was established.

Ethical considerations

The study was approved according to the Health Research Act § 10 by the Regional Ethics Committee for Medical and Health Research South East D (2010/1114-1, Living at home). Participants were informed ahead of time, both orally and in writing, of the purpose and procedure of the study.
The participants were granted confidentiality, and written consent was established. If they wished, the informants had the opportunity to consult trained personnel when their story-telling created a considerable level of distress.

Naïve reading

The informants described their existence as chaotic. They found themselves in a contradictory state in which they were losing contact with some aspects of the human condition, while still struggling to be human. The disease's progression passed unnoticed over time. Getting the diagnosis felt wrong, and they felt lonely and powerless. This situation reflected the feeling of being trapped by circumstances. The informants spent a lot of energy maintaining a normal situation in daily life and preserving their hope and willpower. Their statements also revealed their feelings about their dependence on professional assistance. Being able to stay at home was essential in order to continue experiencing a good life, particularly security and a sense of well-being. At the same time, they said that staying at home gave them not only a sense of meaning but also frustration and anger at the experience of being idle. They often found themselves in a situation where they sat and waited for something, but they did not know what they were waiting for.

Structural analysis

The following four themes are presented in the analysis: i) entrapment by circumstances; ii) loss of humanity; iii) the preservation of hope and willpower; iv) the desire to ensure one's quality of life.

Entrapment by circumstances

All the informants expressed the feeling that they had been placed into a situation that they were unable to change. They did not notice the progress of their disease cognitively, but they experienced multiple negative emotional reactions before and after getting the diagnosis. Although the informants experienced great trials as a result of this serious illness, they tried to adjust to and accept their situations.
Sub-theme: lack of cognitive consciousness of the disease's progress

None of the informants noticed the progress of their disease. The paths to their current situations faded, as the disease's process was gradual and they were not able to trace its contours. It happened over time, so that I didn't notice (case 2).

Sub-theme: experiencing emotional stress

However, the participants described a number of reactions upon their discovery that they suffered from a serious and progressive disease. A certain doubt was expressed when they described their experiences: … this is unreal because I'm not so old (case 1). … it was almost horrible (case 2). ... damn, now it's done (case 4). Now the world has come apart, and I was shocked when it happened (case 3).

Sub-theme: being able to live with the diagnosis

Participants indicated that they had reached an understanding, attempted to control the situation, and tried not to focus on the negative emotional reactions, instead focusing on a sense of acceptance. They tried to adjust to their own situations. There was no way to overcome the disease; they had to carry on, bear the situation, and take each day as it comes. But that is how life is, and one just has to live this way (case 3).

Loss of humanity

Informants experienced life as a state of chaos, and their explanations highlighted the complex range of losses that they experienced, including loss of self-identity, self-esteem and self-respect, and self-determination, as well as the loss of social relationships. Controlling and coming to terms with these losses took a lot of energy and was accompanied by a series of emotional outcomes. The informants' experiences of being human were affected. A lack of knowledge about the symptoms and progression of the disease was also a common experience.

Sub-theme: losing self-determination and becoming dependent

Informants' stories revealed a reduced existence and a minimized world; coping with their situations took a lot of energy.
The informants described the entrance into a phase of life where others took control of their health and treatment as a major assault against their ability to participate in society and make their own decisions. At the same time, they knew and/or remembered little about their situations. They also remembered little about the information they had been given and how to handle it. The decline was visible, and it affected their self-determination and independence. It is my wife who forced the diagnosis upon me; it was only my wife the doctor listened to (case 4). Another man described his changed situation: It is wrong; I am the same as before (case 3). Several of the informants described the loss of their driver's license, expressing sadness at their increased dependency. One of the women claimed: The fact that they took my driver's license is the biggest change. To lose the license makes me dependent on others, and the loss of my license worsens the situation (case 1).

Sub-theme: reduced activities and social interactions

Several informants wished to have a good life, and they expressed this desire clearly. However, the loss of their social networks made this a challenge. Difficulty in establishing and sustaining social interactions contributed to their experiences of no longer being part of society. Progressive reduction of functioning as the disease took its toll highlighted those declines even further. Several informants emphasized the importance of having something to do and expressed their feelings of passivity and ineffectiveness. I feel a lot of meaninglessness, depression, and anger. I sit a lot and wait. Not being able to succeed in doing things I prefer to do makes me upset (case 1).

The preservation of hope and willpower

Almost every informant exhibited a strong feeling of determination and hope for survival. None of them had given up; in struggling day after day, they demonstrated impressive strength.
Having dreams and interests appeared to play a significant role in keeping their spirits alive. Although they seemed to be aware of the fact that their disease could not be cured, their dreams and interests were perceived to offer an opportunity to have a good life.

Sub-theme: maintaining an identity and self-reliance

The informants' descriptions revealed a strong desire to prove that they could cope with normal life. Although they were aware of the changes in their life situation and the fact that their independence was endangered by the disease, they related stories of taking initiative to preserve their faith and control their lives. These actions had already given them a certain sense that they could preserve their self-identities and maintain some forms of self-reliance. But I'll still be energetic and not give up… I am not going to just sit down (case 3). I have decided; I have made [up] my mind, I will go as far as I will myself to (case 2). Receiving help from others made them feel a strong desire to remain self-reliant and independent. A woman said about her meeting with home care personnel: I had a home care service before, but it was just chaos. The personnel came and she/he ordered me to go to the bathroom, an experience I found quite uncomfortable and anxiety-inducing. I found out that I could manage better by myself without this service (case 1).

Comprehensive understanding

The informants' stories revealed a number of complex experiences relating to the changes in their life situations and experiences of being human. Our interpretation suggested that younger people with dementia who lived at home felt like they were in a situation characterized by frustration, despair, and meaninglessness as a result of the loss of their humanity. However, they still showed courage and struggled to maintain their humanity and self-value.
Discussion

The meaning of the informants' subjective experiences revealed the following four key themes: entrapment by circumstances, loss of humanity, the preservation of hope and willpower, and the desire to ensure their quality of life. The theme of entrapment by circumstances is strongly linked to the experience of being bound by an incurable and serious progressive disease. Dementia and other diseases related to cognition involve a series of complex changes in a person's ability to live her/his daily life. The informants' stories reveal a wide range of reactions and actions due to these circumstances, both before and after the diagnosis. Researchers have suggested that younger people with dementia are more aware of the fact that something is wrong than older people with the disease. 22 However, none of the informants in this study were able to give an explicit description of the progress of their own dementia. Instead, they described a range of emotional reactions to the knowledge of their diagnosis. Questions about what a person with dementia wants to be told remain largely unanswered. 23 Undergoing an extensive medical examination and receiving a correct diagnosis are the keys to understanding one's diagnosis in the early stages. This will activate further choices in terms of independence, planning, and adequate follow-up. 23 The meaning of the informants' narratives also demonstrates that people suffering from sudden and/or incomprehensible events try to regain some balance through their actions. Antonovsky 24 and Lazarus 25 relate the experience of stressful situations to individuals' previous experiences. Depending on how a person understands what has happened, the coping process might entail reconstructing meaning in life, trying to regain social roles and functions, or trying to manage one's life despite the illness.
Even with a serious illness, people can transform the experience of chaos into something organized to help them to deal with their situations. 26-28 Corbin and Strauss 29 described how experiences and assigned meanings affect social relationships and practical matters, which are both separate from the disease and, at the same time, part of it. Several studies describe coping strategies in people with dementia. 30,31 However, in this study, informants' narratives reflect conflicting meanings, from entering a recovery phase after learning of the diagnosis to being irritated at the experience of personal loss. In order to survive this emotional conflict, individuals transform reality. 32 The theme of the loss of humanity was derived from the informants' descriptions of a series of incidents that weakened their experiences of being human. As in studies by Macquarrie 33 and Steeman et al., 31 the results show that younger people with dementia experience losses in mastery of functions and abilities. Those experiences affect the informants' self-determination and independence. Several of the informants described the loss of their driver's license with great sadness. This increased their dependence, limited their self-determination, and gave them the feeling of being trapped in their homes. The informants' descriptions also highlight the experience of lacking meaningful activity. This experience is described as frustrating and meaningless and is seen to cause social isolation and loneliness. Inability to work any longer reduces social interaction, alienates one from the community, and produces a sense of worthlessness. Clare 34 and Menne et al. 35 describe similar findings. Steeman et al. 31 found that social interactions and a sense of being part of a community are of great importance in determining one's understanding of oneself as a valuable human.
Having a disease related to dementia does not prevent one from being hurt and feeling annoyed at one's inability to function as before. As in Phinney's 36 study, the informants in this study experienced feelings of anger and frustration at this lack. Eriksson 37 interpreted and discussed the losses caused by dementia diseases, concluding that the problems that follow in the wake of such declines and suffering are those most central to the human experience. People affected by chronic diseases find that certain life histories are interrupted, their sense of coherence is undermined, and the future becomes uncertain and unpredictable. 38 The theme of the preservation of hope and willpower is expressed in the informants' stories that discuss their desire to be acknowledged and valued as humans. The theme defies all preconceptions that life with dementia offers no rewards and runs counter to human dignity. As younger people affected by dementia will lose most of their characteristic life roles, 39 the opportunity to fulfill their interests and hobbies can offer a means of preserving their sense of self and creating new roles. Kielhofner 40 noted that when young people with dementia are devoted to activities that are personally useful, these activities provide a positive effect in the form of independence and preserve individuals' personalities. Conversely, hope and willpower are considered to be forces that help cope with the situation and the experience of conflicting emotions through the cultivation of sacred moments and plans. The informants' stories are similar to findings in other studies in which hope is defined as an inner resource and mechanism for coping. 41,42 According to Frankl, 43 humans' search for meaning in life can help them to cope with loss. Erikson 44 found that hope is essential for human life and ego maintenance. When one is affected by dementia, good moments can create nice days.
By possessing hope and willpower, one may comprehend one's situation and experience it as gaining something, have great fun, and in many ways enjoy a good quality of life. Negative experiences with professional assistance increase the desire to be independent and self-reliant. Other studies describe similar negative experiences and judgments due to the influx of many aides; 45 criticisms related to a lack of time, a lack of continuity, and a lack of available services; 46 and feelings of worthlessness. 45 In addition to the difficulties they experienced as a result of practical functional impairment, informants' difficulties reflect their need to preserve their identities and participate in society. Such assistance is available through state auxiliary home services, a term that covers several services, offering either an auxiliary at one's own home, or at day care centers or institutions. 47 Auxiliary services have been promoted around the world for more than 20 years; they have offered positive experiences to many and provide a rational basis for maintaining quality of life, identity, socialization, security, and activity levels. 47 However, Chaston 48 found that younger people with dementia are more physically active and have different needs and interests than older people. In general, judging from their stories, the informants have different individual needs and functional levels; professional help is experienced very differently from one person to another. To a very large extent, they depend on their caregivers and their caregivers' ability to be present to meet and fulfill their needs. 49 Care and support must be given in a way that complies with the wishes and needs of each individual. This means that health professionals have to listen to people with dementia and learn from them in order to understand how they perceive their situation and what kind of support they need.
49

The desire to ensure one's quality of life

This theme refers to the informants' subjective experiences of confidence as well as the permanent and secure perspective gained through comfort with one's home surroundings. Therefore, the phrase being at home refers to more than the physical properties involved. On the one hand, the informants describe the experience of being able to live in their own accommodations as a positive one. Living at one's own home offers informants feelings of positivity, happiness, and security. The home can create a feeling of security and act as a place where a person experiences recognition. 12 The importance of feeling at home may also help people experience peace of mind and the sense of being at ease with themselves. People can experience contact with a familiar community as a meaningful relationship. 13 If a younger person with dementia is recognized as an important person, this could make that person feel like a part of the community and offer meaning to that person's life, in spite of the disease. When attachment to a place such as a home or a room becomes stronger, humans begin to identify themselves with that place. 50 On the other hand, changes in environment create negative experiences of the disease. The sense of missing familiar and known surroundings, as well as dissatisfaction with new surroundings, was palpable. Dementia also affects people's perception of their environment. Particularly for younger people with dementia, their surroundings should be familiar and represent something known, understandable, and manageable. As a result of increasing cognitive disability, a person's perception and assessment of the environment changes during the course of the disease. 51,52 If it is not possible to adjust their accommodations, younger people with dementia will experience isolation at home; as a result of the disease's progression, a move to an institution might be necessary.
47 However, it is generally preferable for people to remain in contact with the community, and when this is not too challenging, emotional rewards will accrue not only for younger people with dementia but also for many other people. 47

Methodological considerations

The results of this research study must be seen as a unique product of the personal narratives used as texts and the theoretical apparatus employed; they also reflect our own personal experiences to some degree. Each of the informants' unique and personal lived experiences contributed to the process of interpreting the results. The main restriction affecting the study is the small number of informants. It was difficult to recruit informants, since only a few registered cases of dementia in younger people are diagnosed and known of by health care centers in this rural area. In addition, the ethical obligation to be cautious about causing patient distress may have reduced the persistence of recruitment efforts. Moreover, since persons with early-onset dementia are usually diagnosed in the later stages of the disease, 1 we can assume that there are a number of unknown cases. Nevertheless, the participants' characteristics vary, as reflected in their information-rich narratives. The purpose was not to generalize, but to contribute to a process of generating meanings and teasing out nuances that can amplify the quality of informants' lived experiences and enrich our knowledge about those experiences, which is currently scarce. The knowledge that emerges will be as important and meaningful as any other knowledge. 53

Conclusions

When a person is suffering from dementia, it is assumed that the person lacks an awareness of his or her situation. This can mean that the person's autonomy and self-determination are ignored or not considered, and that they experience a lack of respect.
Our interpretations of the informants' experiences, however, show that despite experiencing complex problems in their daily lives with remembering and maintaining their own needs, they desire to be independent and have self-determination. Therefore, in all phases of progression of dementia, the person in charge of caring for the person with dementia should reflect on and maintain ethical awareness about the care recipient's expectations and needs. The fact that people with dementia who live at home may not want to receive help also creates a complex ethical dilemma for further investigation and discussion.
A Unifying Generator Loss Function for Generative Adversarial Networks

A unifying α-parametrized generator loss function is introduced for a dual-objective generative adversarial network (GAN) that uses a canonical (or classical) discriminator loss function such as the one in the original GAN (VanillaGAN) system. The generator loss function is based on a symmetric class probability estimation type function, L_α, and the resulting GAN system is termed L_α-GAN. Under an optimal discriminator, it is shown that the generator's optimization problem consists of minimizing a Jensen-f_α-divergence, a natural generalization of the Jensen-Shannon divergence, where f_α is a convex function expressed in terms of the loss function L_α. It is also demonstrated that this L_α-GAN problem recovers as special cases a number of GAN problems in the literature, including VanillaGAN, least squares GAN (LSGAN), least kth-order GAN (LkGAN), and the recently introduced (α_D, α_G)-GAN with α_D = 1. Finally, experimental results are provided for three datasets (MNIST, CIFAR-10, and Stacked MNIST) to illustrate the performance of various examples of the L_α-GAN system.

Introduction

Generative adversarial networks (GANs), first introduced by Goodfellow et al.
in 2014 [10], have a variety of applications in media generation [21], image restoration [29], and data privacy [14]. GANs aim to generate synthetic data that closely resembles the original real data with (unknown) underlying distribution P_x. The GAN is trained such that the distribution of the generated data, P_g, approximates P_x well. More specifically, low-dimensional random noise is fed to a generator neural network G to produce synthetic data. Real data and the generated data are then given to a discriminator neural network D scoring the data between 0 and 1, with a score close to 1 meaning that the discriminator thinks the data belongs to the real dataset. The discriminator and generator play a minimax game, where the aim is to minimize the generator's loss and maximize the discriminator's loss.

Since their initial introduction, several variants of GAN have been proposed. Deep convolutional GAN (DCGAN) [30] utilizes the same loss functions as VanillaGAN (the original GAN), combining GANs with convolutional neural networks, which are helpful when applying GANs to image data as they extract visual features from the data. DCGANs are more stable than the baseline model, but can suffer from mode collapse, which occurs when the generator learns that a select number of images can easily fool the discriminator, resulting in the generator only generating those images. Another notable issue with VanillaGAN is the tendency for the generator network's gradients to vanish. In the early stages of training, the discriminator lacks confidence, assigning generated data values close to zero. Therefore, the objective function tends to zero, resulting in small gradients and a lack of learning. To mitigate this issue, a non-saturating generator loss function was proposed in [10] so that gradients do not vanish early on in training.
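The vanishing-gradient effect can be seen with a two-line derivative check; in this sketch, d stands for the discriminator's output D(G(z)) on a fake sample, and we compare the saturating generator loss log(1 - d) of [10] against the non-saturating alternative -log d:

```python
# Gradients of the two generator losses w.r.t. the discriminator's output
# d = D(G(z)) on a fake sample. Early in training, d is close to 0.

def saturating_grad(d):
    # d/dd of log(1 - d): bounded in magnitude near d = 0, so learning stalls
    return -1.0 / (1.0 - d)

def nonsaturating_grad(d):
    # d/dd of -log(d): blows up in magnitude near d = 0, so learning proceeds
    return -1.0 / d

d = 1e-3  # the discriminator confidently rejects the fake sample
print(abs(saturating_grad(d)))     # ≈ 1.001: weak learning signal
print(abs(nonsaturating_grad(d)))  # 1000.0: strong learning signal
```

This is why, with the saturating loss, a confident discriminator starves the generator of gradient precisely when the generator most needs to improve.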
In the original (VanillaGAN) problem setup, the objective function, expressed as a negative sum of two Shannon cross-entropies, is to be minimized by the generator and maximized by the discriminator. It is demonstrated that if the discriminator is fixed to be optimal (i.e., as a maximizer of the objective function), the GAN's minimax game can be reduced to minimizing the Jensen-Shannon divergence (JSD) between the real and generated data's probability distributions [10]. An analogous result was proven in [5] for RényiGANs, a dual-objective GAN using distinct discriminator and generator loss functions. More specifically, under a canonical discriminator loss function (as in [10]), and a generator loss function expressed in terms of two Rényi cross-entropies, it is shown that the RényiGAN optimization problem reduces to minimizing the Jensen-Rényi divergence, hence extending VanillaGAN's result. Nowozin et al. generalized VanillaGAN by formulating a class of loss functions in [27] parametrized by a lower semicontinuous convex function f, devising f-GAN. More specifically, the f-GAN problem consists of minimizing an f-divergence between the true data distribution and the generator distribution via a minimax optimization of a Fenchel conjugate representation of the f-divergence, where the VanillaGAN discriminator's role (as a binary classifier) is replaced by a variational function estimating the ratio of the true data and generator distributions. The f-GAN loss function may be tedious to derive, as it requires the computation of the Fenchel conjugate of f. It can be shown that f-GAN can interpolate between VanillaGAN and HellingerGAN, among others [27].
More recently, α-GAN was presented in [19], where the aim is to derive a class of loss functions parameterized by α > 0, expressed in terms of a class probability estimation (CPE) loss between a real label y ∈ {0, 1} and predicted label ŷ ∈ [0, 1] [19]. The ability to control α as a hyperparameter is beneficial to be able to apply one system to multiple datasets, as two datasets may be optimal under different α values. This work was further analyzed in [20] and expanded in [35] by introducing the dual-objective (α_D, α_G)-GAN, which allowed the generator and discriminator loss functions to have distinct α parameters with the aim of improving training stability. When α_D = α_G, the α-GAN optimization reduces to minimizing an Arimoto divergence, as originally derived in [19]. Note that α-GAN can recover several f-GANs, such as HellingerGAN, VanillaGAN, WassersteinGAN and Total Variation GAN [19]. Furthermore, in their more recent work which unifies [19,20,35], the authors establish, under some conditions, a one-to-one correspondence between CPE-loss-based GANs (such as α-GANs) and f-GANs that use a symmetric f-divergence; see [34, Theorems 4-5 and Corollary 1]. They also prove various generalization and estimation error bounds for (α_D, α_G)-GANs and illustrate their ability in mitigating training instability for synthetic Gaussian data as well as the Celeb-A and LSUN Classroom image datasets. The various (α_D, α_G)-GAN equilibrium results do not provide an analogous result to the JSD and Jensen-Rényi divergence minimization for the VanillaGAN [10] and RényiGAN [5] problems, respectively, as they do not involve a Jensen-type divergence.
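For concreteness, one commonly used member of this CPE family is the α-loss of [19,35]; the exact form below is recalled from that line of work and should be treated as illustrative. Writing p for the probability the predictor assigns to the true label y, the loss is α/(α-1)·(1 - p^((α-1)/α)), with the log-loss of VanillaGAN as the α → 1 limit:

```python
import math

def alpha_loss(y, y_hat, alpha):
    """Alpha-loss (as recalled from the alpha-GAN literature): p is the
    probability assigned to the true label y; log-loss is the alpha -> 1 limit."""
    p = y_hat if y == 1 else 1.0 - y_hat
    if abs(alpha - 1.0) < 1e-12:
        return -math.log(p)  # limiting case: the usual log-loss
    return alpha / (alpha - 1.0) * (1.0 - p ** ((alpha - 1.0) / alpha))

# alpha -> 1 recovers the log-loss used by VanillaGAN's cross-entropy objective
print(round(alpha_loss(1, 0.8, 1.0), 4))         # 0.2231, i.e. -log(0.8)
# the loss is continuous in alpha around 1
print(round(alpha_loss(1, 0.8, 1.0 + 1e-6), 4))  # 0.2231
# symmetry in the CPE sense: L(y, y_hat) = L(1 - y, 1 - y_hat)
print(alpha_loss(0, 0.25, 2.0) == alpha_loss(1, 0.75, 2.0))  # True
```

The last line checks the symmetry property that the present paper will impose on its generator loss class.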
The main objective of our work is to present a unifying approach that provides an axiomatic framework to encompass several existing GAN generator loss functions so that the GAN optimization can be simplified in terms of a Jensen-type divergence. In particular, our framework classifies the set of α-parameterized CPE-based loss functions L_α, generalizing the α-loss function in [19,20,34,35]. We then propose L_α-GAN, a dual-objective GAN that uses a function from this class for the generator, and uses any canonical discriminator loss function that admits the same optimizer as VanillaGAN [10]. We show that under some regularity (convexity/concavity) conditions on L_α, the minimax game played with these two loss functions is equivalent to the minimization of a Jensen-f_α-divergence, a Jensen-type divergence and another natural extension of the Jensen-Shannon divergence (in addition to the Jensen-Rényi divergence [5]), where the generating function f_α of the divergence is directly computed from the CPE loss function L_α. This result recovers various prior dual-objective GAN equilibrium results, thus unifying them under one parameterized generator loss function. The newly obtained Jensen-f_α-divergence, which is noted to belong to the class of symmetric f-divergences with different generating functions (see Remark 1), is a useful measure of dissimilarity between distributions as it requires a convex function f with a restricted domain given by the interval [0, 2] (see Remark 2) in addition to its symmetry and finiteness properties.
The rest of the paper is organized as follows. In Section 2, we review f-divergence measures and introduce the Jensen-f-divergence as an extension of the Jensen-Shannon divergence. (Footnote 1: Given a divergence measure D(p∥q) between distributions p and q (i.e., a positive-definite bivariate function: D(p∥q) ≥ 0 with equality if and only if (iff) p = q almost everywhere (a.e.)), a Jensen-type divergence of D is given by (1/2) D(p ∥ (p+q)/2) + (1/2) D(q ∥ (p+q)/2).) In Section 3, we establish our main result regarding the optimization of our unifying generator loss function (Theorem 1), and show that it can be applied to a large class of known GANs (Lemmas 2-4). We conduct experiments in Section 4 by implementing different manifestations of L_α-GAN on three datasets: MNIST, CIFAR-10 and Stacked MNIST. Finally, we conclude the paper in Section 5.

Preliminaries

We begin by presenting key information measures used throughout the paper.

Definition 1 [1,7,8] The f-divergence between two probability densities p and q with common support R ⊆ R^d on the Lebesgue measurable space (R, B(R), µ) is denoted by D_f(p∥q) and given by

D_f(p∥q) = ∫_R q f(p/q) dµ,

where we have used the shorthand ∫_R g dµ := ∫_R g(x) dµ(x), where g is a measurable function; we follow this convention from now on. Here, f is referred to as the generating function of D_f(p∥q).

We require that f is strictly convex around 1 and that it satisfies the normalization condition f(1) = 0 to ensure positive-definiteness of the f-divergence, i.e., D_f(p∥q) ≥ 0 with equality holding iff p = q (a.e.). We present examples of f-divergences under various choices of their generating function f in Table 1. We will be invoking these divergence measures in different parts of the paper.
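On a finite support the integral becomes a sum, which makes Definition 1 easy to check numerically. The sketch below (with hypothetical discrete distributions) recovers the Kullback-Leibler divergence from its generating function f(u) = u log u and verifies positive-definiteness:

```python
import math

def f_divergence(p, q, f):
    # D_f(p||q) = sum_x q(x) f(p(x)/q(x)) on a common finite support
    return sum(qx * f(px / qx) for px, qx in zip(p, q))

kl_gen = lambda u: u * math.log(u)   # generates the Kullback-Leibler divergence
tv_gen = lambda u: 0.5 * abs(u - 1)  # generates the total variation distance

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

kl_direct = sum(px * math.log(px / qx) for px, qx in zip(p, q))
print(abs(f_divergence(p, q, kl_gen) - kl_direct) < 1e-12)  # True
print(f_divergence(p, p, kl_gen))         # 0.0: D_f(p||p) = 0 since f(1) = 0
print(f_divergence(p, q, tv_gen) >= 0.0)  # True: positive-definiteness
```

Swapping in a different convex generating function with f(1) = 0 yields the other divergences of Table 1 with no change to the summation.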
Table 1: Examples of f-divergences. [12,22,32]

The Rényi divergence of order α (α > 0, α ≠ 1) between densities p and q with common support R is used in [5] in the RényiGAN problem; it is given by [31,33]

R_α(p∥q) = (1/(α-1)) log ∫_R p^α q^(1-α) dµ.

Note that the Rényi divergence is not an f-divergence; however, it can be expressed as a transformation of the Hellinger divergence H_α (which is itself an f-divergence):

R_α(p∥q) = (1/(α-1)) log(1 + (α-1) H_α(p∥q)).

We now introduce a new measure, the Jensen-f-divergence, which is analogous to the Jensen-Shannon and Jensen-Rényi divergences.

Definition 2 The Jensen-f-divergence between two probability distributions p and q with common support R ⊆ R^d on the Lebesgue measurable space (R, B(R), µ) is denoted by JD_f(p∥q) and given by

JD_f(p∥q) = (1/2) D_f(p ∥ (p+q)/2) + (1/2) D_f(q ∥ (p+q)/2).

We next verify that the Jensen-Shannon divergence is a Jensen-f-divergence.

Lemma 1 Let p and q be two densities with common support R ⊆ R^d, and consider the function f : [0, ∞) → (-∞, ∞] given by f(u) = u log u. Then we have that JD_f(p∥q) equals the Jensen-Shannon divergence between p and q.

Proof. As f is convex (and continuous) on its domain with f(1) = 0, we have that D_f(p∥q) = ∫_R q (p/q) log(p/q) dµ = ∫_R p log(p/q) dµ, the Kullback-Leibler divergence KL(p∥q); hence JD_f(p∥q) = (1/2) KL(p ∥ (p+q)/2) + (1/2) KL(q ∥ (p+q)/2), which is the Jensen-Shannon divergence. ■

Remark 1 (Jensen-f-divergence is a symmetric f-divergence) Note that JD_f(p∥q) is itself a symmetric f-divergence (with a modified generating function). Indeed, given the continuous convex function f that is strictly convex around 1 with f(1) = 0, consider the functions

f_1(u) = ((1+u)/2) f(2u/(1+u)) and f_2(u) = ((1+u)/2) f(2/(1+u)),

which are both continuous convex, strictly convex around 1, and satisfy f_1(1) = f_2(1) = 0. Now direct calculations yield that

D_{f_1}(p∥q) = D_f(p ∥ (p+q)/2) and D_{f_2}(p∥q) = D_f(q ∥ (p+q)/2).

Thus

JD_f(p∥q) = D_{f̂}(p∥q), where f̂(u) = (1/2)(f_1(u) + f_2(u))

is also continuous convex, strictly convex around 1 and satisfies f̂(1) = 0. Since JD_f(p∥q) = JD_f(q∥p) by Definition 2, we conclude that the Jensen-f-divergence is a symmetric f-divergence.
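Definition 2 and Lemma 1 can likewise be checked on a finite support. The sketch below (with hypothetical distributions) confirms that JD_f with f(u) = u log u coincides with the Jensen-Shannon divergence, and that JD_f is symmetric as claimed in Remark 1:

```python
import math

def f_div(p, q, f):
    return sum(qx * f(px / qx) for px, qx in zip(p, q))

def jensen_f_div(p, q, f):
    # JD_f(p||q) = (1/2) D_f(p||m) + (1/2) D_f(q||m), with mixture m = (p+q)/2
    m = [(px + qx) / 2 for px, qx in zip(p, q)]
    return 0.5 * f_div(p, m, f) + 0.5 * f_div(q, m, f)

f = lambda u: u * math.log(u)  # Lemma 1: this choice gives the JSD

p = [0.6, 0.3, 0.1]
q = [0.2, 0.5, 0.3]
m = [(a + b) / 2 for a, b in zip(p, q)]
jsd = 0.5 * sum(a * math.log(a / c) for a, c in zip(p, m)) \
    + 0.5 * sum(b * math.log(b / c) for b, c in zip(q, m))

print(abs(jensen_f_div(p, q, f) - jsd) < 1e-12)        # True (Lemma 1)
print(jensen_f_div(p, q, f) == jensen_f_div(q, p, f))  # True (symmetry, Remark 1)
```

Note that only the ratios 2p/(p+q) and 2q/(p+q), both in [0, 2], are ever fed to f, which anticipates the domain restriction discussed next.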
Remark 2 (Domain of f) Examining (4), we note that the Jensen-f-divergence between p and q involves the f-divergences between either p or q and their mixture (p + q)/2. In other words, to determine JD_f(p∥q), we only need f(2p/(p + q)) and f(2q/(p + q)) when taking the expectations in (1). Thus, it is sufficient to restrict the domain of the convex function f to the interval [0, 2].

Main Results

We now present our main theorem, which unifies various generator loss functions under a CPE-based loss function L_α for a dual-objective GAN, the L_α-GAN, with a canonical discriminator loss function that is optimized as in [10]. Under some regularity conditions on the loss function L_α, we show that under the optimal discriminator, our generator loss becomes a Jensen-f-divergence.

Let (X, B(X), µ) be the measure space of n × n × m images (where m = 1 for black-and-white images and m = 3 for RGB images), and let (Z, B(Z), µ) be a measure space such that Z ⊆ R^d. The discriminator neural network is given by D : X → [0, 1], and the generator neural network is given by G : Z → X. The generator's noise input is sampled from a multivariate Gaussian distribution P_z : Z → [0, 1]. We denote the probability distribution of the real data by P_x : X → [0, 1] and the probability distribution of the generated data by P_g : X → [0, 1]. We also let p_x and p_g denote the densities corresponding to P_x and P_g, respectively. We begin by introducing the L_α-GAN system.

Definition 3 Let α ∈ R and let L_α : {0, 1} × [0, 1] → R be a CPE loss function such that the map ŷ ↦ ŷ L_α(1, ŷ/2) (for ŷ ∈ [0, 2]) is continuous and convex (resp., concave), with strict convexity (resp., strict concavity) around ŷ = 1, and such that L_α is symmetric in the sense that

L_α(y, ŷ) = L_α(1 − y, 1 − ŷ).    (7)

Then the L_α-GAN system is defined by (V_D, V_{L_α,G}), where V_D : X × Z → R is the discriminator loss function, and V_{L_α,G} : X × Z → R is the generator loss function, given by

V_{L_α,G}(D, G) = E_{x∼P_x}[L_α(1, D(x))] + E_{z∼P_z}[L_α(0, D(G(z)))].    (8)

Moreover, the L_α-GAN problem is defined by

sup_D V_D(D, G),    (9)
inf_G V_{L_α,G}(D, G).    (10)

We now present our main result about the L_α-GAN optimization problem.
Theorem 1 Let (V_D, V_{L_α,G}) be the loss functions of the L_α-GAN, and consider the joint optimization in (9)-(10). If V_D is a canonical loss function in the sense that it is maximized at D = D*, where

D*(x) = p_x(x) / (p_x(x) + p_g(x)),    (11)

then (10) reduces to

inf_G V_{L_α,G}(D*, G) = inf_G (2/a) [JD_{f_α}(P_x∥P_g) − b],    (12)

where JD_{f_α}(·∥·) is the Jensen-f_α-divergence, and f_α : [0, 2] → R is a continuous convex function, strictly convex around 1, given by

f_α(u) = a u L_α(1, u/2) + b u,    (13)

where a and b are real constants chosen so that f_α(1) = 0, with a > 0 (resp., a < 0) if ŷ L_α(1, ŷ/2) is convex (resp., concave). Finally, (12) is minimized when P_x = P_g (a.e.).

Proof. Under the assumption that V_D is maximized at D* = p_x/(p_x + p_g), substituting D* into V_{L_α,G} and invoking the symmetry property (7) gives

V_{L_α,G}(D*, G) = ∫_X p_x L_α(1, p_x/(p_x + p_g)) dµ + ∫_X p_g L_α(1, p_g/(p_x + p_g)) dµ,

where the second term follows from (7) with u = p_x/(p_x + p_g). Solving for L_α(1, u) in terms of f_α(2u) in (13) (with u = p_x/(p_x + p_g) in the first term and u = p_g/(p_x + p_g) in the second term), and recognizing the resulting integrals as f_α-divergences with respect to the mixture (p_x + p_g)/2, yields (12). The constants a and b are chosen so that f_α(1) = 0. Finally, the continuity and convexity of f_α (as well as its strict convexity around 1) directly follow from the corresponding assumptions imposed on the loss function L_α in Definition 3 and from the condition imposed on the sign of a in the theorem's statement. ■

Remark 3 Note that D* given in (11) is not only an optimal discriminator for the (original) VanillaGAN discriminator loss function; it also optimizes the LSGAN/LkGAN discriminator loss function when the discriminator's labels for real and fake data, γ and β, respectively, satisfy γ = 1 and β = 0 (see Section 3.3).

We next show that the L_α-GAN of Theorem 1 recovers as special cases a number of well-known GAN generator loss functions and their equilibrium points (under the optimal classical discriminator D*).
VanillaGAN

The VanillaGAN [10] uses the same loss function V_VG for both the generator and the discriminator:

V_VG(D, G) = E_{x∼P_x}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))],    (14)

and the problem can be cast as a saddle-point optimization:

inf_G sup_D V_VG(D, G).    (15)

It is shown in [10] that the optimal discriminator for (15) is given by D* = p_x/(p_x + p_g), as in (11). When D = D*, the optimization reduces to minimizing the Jensen-Shannon divergence:

inf_G V_VG(D*, G) = inf_G [2 JSD(P_x∥P_g) − log 4].    (16)

We next show that (16) can be obtained from Theorem 1.

Lemma 2 Consider the optimization of the VanillaGAN given in (15). Then (16) follows from Theorem 1 with

L_α(1, ŷ) = log ŷ,  L_α(0, ŷ) = log(1 − ŷ).

Proof. For any fixed α ∈ R, let the function L_α in (8) be as defined in the statement. Note that L_α is symmetric, since for ŷ ∈ [0, 1] we have that L_α(0, 1 − ŷ) = log(1 − (1 − ŷ)) = log ŷ = L_α(1, ŷ). Instead of showing the continuity and convexity/concavity conditions imposed on ŷ L_α(1, ŷ/2) in Definition 3, we implicitly verify them by directly deriving f_α from L_α using (13) and showing that it is continuous convex and strictly convex around 1. Setting a = 1 and b = log 2, we have that

f(u) = u log(u/2) + u log 2 = u log u.

Clearly, f is convex (indeed strictly convex on (0, ∞) and hence strictly convex around 1) and continuous on its domain (where f(0) = lim_{u→0} u log u = 0). It also satisfies f(1) = 0. By Lemma 1, we know that under the generating function f(u) = u log u, the Jensen-f-divergence reduces to the Jensen-Shannon divergence. Therefore, by Theorem 1, we have that

inf_G V_VG(D*, G) = inf_G 2 [JSD(P_x∥P_g) − log 2],

which finishes the proof. ■

α-GAN

The notion of α-GANs was introduced in [19] as a way to unify several existing GANs using a parameterized loss function. We describe α-GANs next. The α-loss ℓ_α(y, ŷ) between a label y ∈ {0, 1} and a prediction ŷ ∈ [0, 1] is the map defined in [19]; it recovers the log-loss as α → 1.

Definition 5 [19] For α > 0, the α-GAN loss function V_α(D, G) is given by (18). The joint optimization of the α-GAN problem is given by

inf_G sup_D V_α(D, G).    (19)

It is known that the α-GAN recovers several well-known GANs by varying the α parameter: notably, the VanillaGAN (α = 1) [10] and the HellingerGAN (α = 1/2) [27]. Furthermore, as α → ∞, V_α recovers a translated version of the WassersteinGAN loss function [4]. We now present the solution to the joint optimization problem in (19).
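As a quick numerical aside, the VanillaGAN identity (16) can be verified for discrete distributions; the sketch below (with arbitrary stand-in densities, purely illustrative) plugs the optimal discriminator of (11) into the VanillaGAN value function (14) and compares the result against 2·JSD − log 4.

```python
import math

def jsd(p, q):
    # Jensen-Shannon divergence between discrete distributions p and q.
    m = [(a + b) / 2 for a, b in zip(p, q)]
    kl = lambda a, b: sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    return 0.5 * (kl(p, m) + kl(q, m))

p = [0.7, 0.2, 0.1]  # stand-in for the data density p_x
q = [0.3, 0.4, 0.3]  # stand-in for the generated density p_g

# Optimal discriminator (11): D*(x) = p_x(x) / (p_x(x) + p_g(x)).
d_star = [a / (a + b) for a, b in zip(p, q)]

# Value function (14) evaluated at D = D*.
v = sum(a * math.log(d) for a, d in zip(p, d_star)) \
    + sum(b * math.log(1 - d) for b, d in zip(q, d_star))

# Identity (16): V_VG(D*, G) = 2 * JSD(P_x || P_g) - log 4.
assert abs(v - (2 * jsd(p, q) - math.log(4))) < 1e-12
```

Since JSD is at most log 2, the value at the optimal discriminator is never positive, attaining −log 4 exactly when the two densities coincide.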
Proposition 1 [19] Let α > 0, and consider the joint optimization of the α-GAN presented in (19). The discriminator D* that maximizes the loss function is given by

D*(x) = p_x(x)^α / (p_x(x)^α + p_g(x)^α).    (20)

Furthermore, when D = D* is fixed, the problem in (19) reduces to minimizing an Arimoto divergence (as defined in Table 1) in (21), and a Jensen-Shannon divergence in (22) when α = 1, where (21) and (22) achieve their minima iff P_x = P_g (a.e.).

Recently, the α-GAN was generalized in [35] to implement a dual-objective GAN, which we describe next.

Definition 6 [35] For α_D > 0 and α_G > 0, the (α_D, α_G)-GAN's optimization is given by

sup_D V_{α_D}(D, G),    (23)
inf_G V_{α_G}(D, G),    (24)

where V_{α_D} and V_{α_G} are defined in (18), with α replaced by α_D and α_G, respectively.

Proposition 2 [35] Consider the joint optimization in (23)-(24), and let the parameters α_D, α_G > 0 satisfy condition (25). The discriminator D* that maximizes V_{α_D} is given by

D*(x) = p_x(x)^{α_D} / (p_x(x)^{α_D} + p_g(x)^{α_D}).    (26)

Furthermore, when D = D* is fixed, the minimization of V_{α_G} in (24) is equivalent to an f-divergence minimization, stated in (27) with generating function given in (28).

We now apply the (α_D, α_G)-GAN to our main result in Theorem 1 by showing that (12) can recover (27) when α_D = 1 (which corresponds to a VanillaGAN discriminator loss function).

Lemma 3 Consider the (α_D, α_G)-GAN given in Definition 6, and let α_D = 1 and α_G = α > 1/2. Then the solution to (24) presented in Proposition 2 is equivalent to minimizing a Jensen-f_α-divergence: specifically, if D* is the optimal discriminator given by (26), which is equivalent to (11) when α_D = 1, then (24) reduces to (12) with L_α(y, ŷ) = ℓ_α(y, ŷ) and f_α obtained from (13).

Proof. We show that Theorem 1 recovers Proposition 2 by setting L_α(y, ŷ) = ℓ_α(y, ŷ). Note that ℓ_α is symmetric in the sense of (7). As in the proof of Lemma 2, instead of proving the conditions imposed on ŷ L_α(1, ŷ/2) in Definition 3, we derive f_α directly from L_α using (13) and show that it is continuous convex and strictly convex around 1. From Lemma 2, we know that when α = 1, f_α(u) = u log u (which is strictly convex and continuous). For α ∈ (0, 1) ∪ (1, ∞), choosing the constants a and b in (13) so that f_α(1) = 0 yields the generating function f_α.
Furthermore, for α ≠ 1, the second derivative f''_α is positive for α > 1/2, so f_α is convex for α > 1/2 (as well as continuous on its domain and strictly convex around 1). Thus, by Theorem 1, minimizing V_α(D*, G) over G is equivalent to minimizing the Jensen-f_α-divergence JD_{f_α}(P_x∥P_g).

We now show that the above Jensen-f_α-divergence is equal to the f_{1,α}-divergence originally derived for the (1, α)-GAN problem of Proposition 2 (note from Proposition 2 that when α_D = 1, the admissible range of α_G = α concurs with the range α > 1/2 required above for the convexity of f_α). For any two distributions p and q with common support X, a direct calculation shows that the two divergences coincide. ■

Note that this lemma generalizes Lemma 2; the VanillaGAN is a special case of the (1, α)-GAN for α = 1.

Shifted LkGANs and LSGANs

The Least Squares GAN (LSGAN) was proposed in [24] to mitigate the vanishing-gradient problem of the VanillaGAN and to stabilize training performance. The LSGAN's loss function is derived from the squared-error distortion measure: the aim is to minimize the distortion between the data samples and a target value that we want the discriminator to assign to those samples. The LSGAN was generalized to the LkGAN in [5] by replacing the squared-error distortion measure with the absolute-error distortion measure of order k ≥ 1, thereby introducing an additional degree of freedom into the generator's loss function. We first state the general LkGAN problem; we then apply the result of Theorem 1 to the loss functions of LSGAN and LkGAN.

Definition 7 [5] Let γ, β, c ∈ [0, 1] and let k ≥ 1. The LkGAN loss functions, denoted by V_LSGAN,D and V_k,G, are given by (31) and (32), respectively. The LkGAN problem is the joint optimization

sup_D V_LSGAN,D(D, G),  inf_G V_k,G(D, G).    (33)

We next recall the solution to (33), which is a minimization of the Pearson-Vajda divergence |χ|^k(·∥·) of order k (as defined in Table 1).
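For reference, the Pearson-Vajda divergence |χ|^k just mentioned is the f-divergence generated by f(u) = |u − 1|^k; for k = 2 it reduces to the familiar Pearson χ²-divergence. A small discrete sketch (illustrative only):

```python
def pearson_vajda(p, q, k):
    # |chi|^k(p||q): f-divergence with generating function f(u) = |u - 1|^k,
    # i.e., sum_x |p(x) - q(x)|^k / q(x)^(k-1).
    return sum(qx * abs(px / qx - 1) ** k for px, qx in zip(p, q))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

# k = 2 recovers the Pearson chi-squared divergence: sum (p - q)^2 / q.
chi2 = sum((px - qx) ** 2 / qx for px, qx in zip(p, q))
assert abs(pearson_vajda(p, q, 2) - chi2) < 1e-12

# Positive-definiteness: zero iff p = q.
assert pearson_vajda(p, p, 2) == 0
```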
Proposition 3 [5] Consider the joint optimization for the LkGAN presented in (33). The optimal discriminator D* that maximizes V_LSGAN,D in (31) is given by

D*(x) = (γ p_x(x) + β p_g(x)) / (p_x(x) + p_g(x)),    (35)

where γ and β are the discriminator's labels for real and fake data, respectively. Furthermore, if D = D* and γ − β = 2(c − β), then the minimization in (33) reduces to minimizing the Pearson-Vajda divergence |χ|^k(P_x∥P_g).

Note that the LSGAN [24] is a special case of the LkGAN, recovered when k = 2 [5]. By scrutinizing Proposition 3 and Theorem 1, we observe that the former cannot be recovered from the latter. However, we can use Theorem 1 by slightly modifying the LkGAN generator's loss function. First, for the dual-objective GAN proposed in Theorem 1, we need D* = p_x/(p_x + p_g); by (35), this is achieved for γ = 1 and β = 0. Then, we define the intermediate loss function

L_k(y, ŷ) = y |ŷ − c_1|^k + (1 − y) |ŷ − c_2|^k.    (37)

Comparing this loss function with (8), we note that setting c_1 = 0 and c_2 = 1 in (37) satisfies the symmetry property of L_α. Finally, to ensure that the generating function f_α satisfies f_α(1) = 0, we shift each term in (37) by 1. Putting these changes together, we propose a revised generator loss function, denoted by Ṽ_k,G, given by

Ṽ_k,G(D, G) = E_{x∼P_x}[(D(x))^k − 1] + E_{z∼P_z}[(1 − D(G(z)))^k − 1].    (38)

We call a system that uses (38) as a generator loss function a Shifted LkGAN (SLkGAN). If k = 2, we have a shifted version of the LSGAN generator loss function, which we call the Shifted LSGAN (SLSGAN). Note that none of these modifications alters the gradients of V_k,G in (32), since the first term is independent of G, the choice of c_1 is irrelevant, and translating a function by a constant does not change its gradients. However, from Proposition 3, for γ = 1, β = 0, and c = 1, we do not have that γ − β = 2(c − β); as a result, this modified problem does not reduce to minimizing a Pearson-Vajda divergence. Consequently, we can relax the condition on k in Definition 7 to just k > 0. We now show how Theorem 1 can be applied to the L_α-GAN using (38).

Lemma 4 Let k > 0.
Let V_D be a discriminator loss function, let Ṽ_k,G be the generator loss function defined in (38), and consider the joint optimization of (V_D, Ṽ_k,G). If V_D is canonical, i.e., maximized at the discriminator D* in (11), then minimizing Ṽ_k,G under D = D* is equivalent to minimizing the Jensen-f_k-divergence JD_{f_k}(P_x∥P_g), where f_k is given by

f_k(u) = u^{k+1} − u.

Examples of V_D(D, G) that satisfy the requirements of Lemma 4 include the LkGAN discriminator loss function given by (31) with γ = 1 and β = 0, and the VanillaGAN discriminator loss function given by (14).

Proof. Let k > 0. We can restate the SLkGAN generator loss function (38) in terms of V_{L_α,G} in (8) by taking

L_k(1, ŷ) = ŷ^k − 1  and  L_k(0, ŷ) = (1 − ŷ)^k − 1.

We have that L_k is symmetric, since L_k(0, 1 − ŷ) = ŷ^k − 1 = L_k(1, ŷ). We derive f_α from L_α via (13) and directly check that it is continuous convex and strictly convex around 1. Setting a = 2^k and b = 2^k − 1 in (13), we have that f_k(u) = u^{k+1} − u. We clearly have that f_k(1) = 0 and that f_k is continuous. Furthermore, f''_k(u) = k(k + 1) u^{k−1}, which is non-negative for u ≥ 0. Therefore f_k is convex (as well as strictly convex around 1). As a result, by Theorem 1, minimizing Ṽ_k,G at D = D* is equivalent to minimizing JD_{f_k}(P_x∥P_g). ■

We conclude this section by emphasizing that Theorem 1 serves as a unifying result that recovers existing loss functions in the literature and, moreover, provides a way of generating new ones. Our aim in the next section is to demonstrate the versatility of this result through experimentation.

Experiments

We perform two experiments on three different image datasets, which we describe below.

Experiment 1. In the first experiment, we compare the (α, α)-GAN with the (1, α)-GAN, controlling the value of α. Recall that α_D = 1 corresponds to the canonical VanillaGAN (or DCGAN) discriminator. We aim to verify whether replacing an α-GAN discriminator with a VanillaGAN discriminator stabilizes or improves the system's performance, depending on the value of α. Note that the result of Theorem 1 only applies to the (α_D, α_G)-GAN for α_D = 1.

Experiment 2. We train two variants of SLkGAN, with the generator loss function as described in (38), parameterized by k > 0.
We then utilize two different canonical discriminator loss functions to align with Theorem 1. The first is the VanillaGAN discriminator loss given by (14); we call the resulting dual-objective GAN the Vanilla-SLkGAN. The second is the LkGAN discriminator loss given by (31), where we set γ = 1 and β = 0 so that the optimal discriminator is given by (11); we call this system the Lk-SLkGAN. We compare the two variants to analyze how the value of k and the choice of discriminator loss impact the system's performance.

Experimental Setup

We run both experiments on three image datasets: MNIST [9], CIFAR-10 [17], and Stacked MNIST [23]. The MNIST dataset consists of 28 × 28 × 1 black-and-white images of handwritten digits between 0 and 9. The CIFAR-10 dataset is an RGB dataset of small (32 × 32 × 3) images of common animals and modes of transportation. The Stacked MNIST dataset is an RGB dataset derived from MNIST, constructed by taking three MNIST images, assigning each to one of the three colour channels, and stacking the images on top of each other; the resulting images are then padded so that each has size 32 × 32 × 3. For Experiment 1, we use α values of 0.5, 5.0, 10.0, and 20.0. For each value of α, we train the (α, α)-GAN and the (1, α)-GAN; we additionally train the DCGAN, which corresponds to the (1, 1)-GAN. For Experiment 2, we use k values of 0.25, 1.0, 2.0, 7.5, and 15.0; note that when k = 2, we recover the LSGAN. For the MNIST dataset, we run 10 trials with the random seeds 123, 500, 1600, 199621, 60677, 20435, 15859, 33764, 79878, and 36123, and train each GAN for 250 epochs. For the RGB datasets (CIFAR-10 and Stacked MNIST), we run 5 trials with the random seeds 123, 1600, 60677, 15859, and 79878, and train each GAN for 500 epochs. All experiments use the Adam optimizer for the stochastic gradient descent algorithm, with a learning rate of 2 × 10⁻⁴ and parameters β₁ = 0.5, β₂ = 0.999, and ϵ = 10⁻⁷ [16]. We also experiment with the addition of a
gradient penalty (GP); we add a penalty term to the discriminator's loss function to encourage the discriminator's gradient to have unit norm [11]. The MNIST experiments were run on one 2.1 GHz 6130 node with one V100 GPU, 8 CPUs, and 16 GB of memory. The CIFAR-10 and Stacked MNIST experiments were run on one 2.8 GHz Epyc 7443 node with 8 CPUs and 16 GB of memory. For each experiment, we report the best overall Fréchet Inception Distance (FID) score [13], the best average FID score amongst all trials and its variance, and the average epoch at which the best FID score occurs and its variance. The FID score for each epoch was computed over 10,000 images. For each metric, the lowest numerical value corresponds to the model with the best metric (indicated in bold in the tables). We also report how many trials we include in our summary statistics, as it is possible for a trial to collapse and not train for the full number of epochs. The neural network architectures used in our experiments are presented in Appendix A. The training algorithms are presented in Appendix B.

Experimental Results

We report the FID metrics for Experiment 1 in Tables 2, 3, and 4, and for Experiment 2 in Tables 5, 6, and 7. We report only on those experiments that produced meaningful results. Models that utilize a simplified gradient penalty have the suffix "-GP". We display the output of the best-performing (α_D, α_G)-GANs in Figure 1 and the best-performing SLkGANs in Figure 3. Finally, we plot the trajectory of the FID scores throughout the training epochs in Figures 2 and 4.
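The FID compares Gaussian fits to Inception features of real and generated images via FID = ∥µ₁ − µ₂∥² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}) [13]. As a minimal illustration, here is the diagonal-covariance special case, where the trace term collapses to a sum of squared differences of standard deviations (this is a sketch, not the full matrix square-root computation over Inception activations used in practice):

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    # FID between N(mu1, diag(var1)) and N(mu2, diag(var2)):
    # ||mu1 - mu2||^2 + sum_i (sqrt(var1_i) - sqrt(var2_i))^2
    d_mu = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    d_cov = sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(var1, var2))
    return d_mu + d_cov

# Identical Gaussians have FID 0; a shifted mean or scaled variance increases it.
assert fid_diagonal([0, 0], [1, 1], [0, 0], [1, 1]) == 0.0
assert fid_diagonal([1, 0], [1, 1], [0, 0], [1, 1]) == 1.0
```

Lower FID thus indicates that the generated feature distribution is closer (in mean and spread) to the real one, which is why the tables rank models by their lowest scores.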
Experiment 1

From Table 2, we note that 37 of the 90 trials collapse before 250 epochs have passed without a gradient penalty. The (5,5)-GAN collapses in all of its trials and hence is not displayed in Table 2. This behaviour is expected, as the (α,α)-GAN is more sensitive to exploding gradients when α does not tend to 0 or +∞ [19]. The addition of a gradient penalty could mitigate the divergence of the discriminator's gradients in the (5,5)-GAN by encouraging gradients to have unit norm. Using a VanillaGAN discriminator with an α-GAN generator (i.e., the (1,α)-GAN) produces better-quality images for all tested values of α, compared to when both networks utilize an α-GAN loss function. The (1,10)-GAN achieves excellent stability, converging in all 10 trials, and also achieves the lowest average FID score. The (1,5)-GAN achieves the lowest FID score overall, marginally outperforming the DCGAN. Note that when the average best FID score is very close to the best FID score, the resulting best-FID-score variance is quite small (of the order of 10⁻³), indicating little statistical variability over the trials.
Likewise, for the CIFAR-10 and Stacked MNIST datasets, the (1,α)-GAN produces lower FID scores than the (α,α)-GAN (see Tables 3 and 4). However, both models are more stable on the CIFAR-10 dataset. With the exception of the DCGAN, no model converged to its best FID score in all 5 trials on the Stacked MNIST dataset. Comparing the trials that did converge, both the (α,α)-GAN and the (1,α)-GAN performed better on the Stacked MNIST dataset than on CIFAR-10. For CIFAR-10, the (1,10)- and (1,20)-GANs produced the best overall FID score and the best average FID score, respectively. On the other hand, the (1,0.5)-GAN produced both the best overall FID score and the best average FID score for the Stacked MNIST dataset. We also observe a tradeoff between speed and performance for the CIFAR-10 and Stacked MNIST datasets: the (1,α)-GANs arrive at their lowest FID scores later than their respective (α,α)-GANs, but achieve lower FID scores overall.

Comparing Figures 2c and 2d, we observe that the (α,α)-GAN-GP provides more stability than the (1,α)-GAN for lower values of α (i.e., α = 0.5), while the (1,α)-GAN-GP exhibits more stability for higher α values (α = 10 and α = 20). Figures 2e and 2f show that the two α-GANs trained on the Stacked MNIST dataset exhibit unstable behaviour earlier in training when α = 0.5 or α = 20. However, both systems stabilize and converge to their lowest FID scores as training progresses. The (0.5,0.5)-GAN-GP system in particular exhibits wildly erratic behaviour for the first 200 epochs, then finishes training with a stable trajectory that outperforms DCGAN-GP.

A future direction is to explore how the complexity of an image dataset influences the best choice of α. For example, the Stacked MNIST dataset might be considered less complex than CIFAR-10, as images in the Stacked MNIST dataset contain only four unique colours (black, red, green, and blue), while the CIFAR-10 dataset utilizes significantly more colours.
Experiment 2

We see from Table 5 that all Lk-SLkGANs and Vanilla-SLkGANs have FID scores comparable to the DCGAN. When k = 15, the Vanilla-SLkGAN and Lk-SLkGAN arrive at their lowest FID scores slightly earlier than the DCGAN and the other SLkGANs.

The addition of a simplified gradient penalty is necessary for the Lk-SLkGAN to achieve overall good performance on the CIFAR-10 dataset (see Table 6). Interestingly, the Vanilla-SLkGAN achieves lower FID scores without a gradient penalty for lower k values (k = 1, 2), and with a gradient penalty for higher k values (k = 7.5, 15). When k = 0.25, both SLkGANs collapsed in all 5 trials without a gradient penalty.

Table 7 shows that the Vanilla-SLkGANs achieve better FID scores than their respective Lk-SLkGAN counterparts. However, the Lk-SLkGANs are more stable, as not a single trial collapsed, while 10 of the 25 Vanilla-SLkGAN trials collapsed before 500 epochs had passed. While all Vanilla-SLkGANs outperform the DCGAN with gradient penalty, Lk-SLkGAN-GP only outperforms DCGAN-GP when k = 15. Except for k = 7.5, we observe that the Lk-SLkGAN system takes fewer epochs to arrive at its lowest FID score. Comparing Figures 4e and 4f, we observe that the Lk-SLkGANs exhibit more stable FID score trajectories than their respective Vanilla-SLkGANs. This makes sense, as the LkGAN loss function aims to increase the GAN's stability compared to the DCGAN [5].
Conclusion

We introduced a parameterized CPE-based generator loss function for a dual-objective GAN, termed the L_α-GAN, which, when used in tandem with a canonical discriminator loss function that achieves its optimum in (11), minimizes a Jensen-f_α-divergence. We showed that this system can recover the VanillaGAN, the (1, α)-GAN, and the LkGAN as special cases. We conducted experiments with the three aforementioned L_α-GANs on three image datasets. The experiments indicate that the (1, α)-GAN exhibits better performance than the (α, α)-GAN for α > 1. They also show that the devised SLkGAN system achieves lower FID scores with a VanillaGAN discriminator than with an LkGAN discriminator.

Future work consists of unveiling more examples of existing GANs that fall under our result, as well as applying the L_α-GAN to novel, judiciously designed CPE losses L_α and evaluating the performance (in terms of both quality and diversity of generated samples) and the computational efficiency of the resulting models. Another interesting and related direction is to study the L_α-GAN within the context of f-GANs, given that the Jensen-f-divergence is itself an f-divergence (see Remark 1), by systematically analyzing different Jensen-f-divergences and the role they play in improving GAN performance and stability. Other worthwhile directions include incorporating the proposed L_α loss into state-of-the-art GAN models, such as, among others, BigGAN [6], StyleGAN [15], and CycleGAN [2], for high-resolution data generation and image-to-image translation applications; conducting a meticulous analysis of the sensitivity of the models' performance to different values of the α parameter; and providing guidelines on how best to tune α for different types of datasets.

Table 9: Discriminator architecture for the MNIST dataset.

Table 10: Generator architecture for the MNIST dataset.
Relationship between polymorphisms in homologous recombination repair genes RAD51 G172T, XRCC2, and XRCC3 and risk of breast cancer: A meta-analysis

Background Genetic variability in DNA double-strand break repair genes such as the RAD51 gene and its paralogs XRCC2 and XRCC3 may contribute to the occurrence and progression of breast cancer. To obtain a complete evaluation of the above association, we performed a meta-analysis of published studies.

Methods Electronic databases, including PubMed, EMBASE, Web of Science, and Cochrane Library, were comprehensively searched from inception to September 2022. The Newcastle-Ottawa Scale (NOS) checklist was used to assess all included non-randomized studies. Odds ratios (OR) with 95% confidence intervals (CI) were calculated in STATA 16.0 to assess the strength of the association between single nucleotide polymorphisms (SNPs) in these genes and breast cancer risk. Subsequently, between-study heterogeneity, sensitivity, and publication bias analyses were performed. We downloaded data from The Cancer Genome Atlas (TCGA) and used univariate and multivariate Cox proportional hazards (CPH) regression models to validate the prognostic value of these related genes in the R software.

Results The combined results showed a significant correlation between the G172T polymorphism and susceptibility to breast cancer in the homozygote model (OR = 1.841, 95% CI = 1.06-3.21, P = 0.03). Furthermore, ethnic analysis showed that this SNP was associated with the risk of breast cancer in Arab populations in the homozygous model (OR = 3.52, 95% CI = 1.13-11.0, P = 0.003). For the XRCC2 R188H polymorphism, no significant association was observed. Regarding the XRCC3 T241M polymorphism, a significantly increased cancer risk was observed only in the allelic genetic model (OR = 1.05, 95% CI = 1.00-1.11, P = 0.04).
Conclusions In conclusion, this meta-analysis suggests that the RAD51 G172T polymorphism is likely associated with an increased risk of breast cancer, significantly so in the Arab population. The relationship between the XRCC2 R188H polymorphism and breast cancer was not obvious. The T241M polymorphism in XRCC3 may be associated with breast cancer risk, especially in the Asian population.

Introduction

In all countries around the world, cancer is a leading cause of death and an important obstacle to improving life expectancy. Female breast cancer (BC) overtook lung cancer as the leading cause of global cancer incidence in 2020, with an estimated 2.3 million new cases, representing 11.7% of all cancer cases (1). The mechanism of breast carcinogenesis is not yet fully understood. Breast cancer is considered a polygenic disease and has a component of inheritance due to low-penetrant and common genetic variants. The steady repair of DNA damage is very important for the survival of cells and the maintenance of genetic stability (2). Over the years, it has been increasingly recognized that variations in the genetic background of individuals, combined with environmental exposure, can ultimately lead to the occurrence and progression of cancer. DNA repair genes have been considered considerable factors in the prevention of genomic damage; they continuously monitor chromosomes to correct injuries caused by exogenous agents such as ultraviolet light or by endogenous mutagens (3,4). Aberrant double-strand break (DSB) repair leads to genomic instability, a hallmark of malignant cells. Double-strand breaks are repaired by two pathways: homologous recombination (HR) and non-homologous end joining (NHEJ).
Previous analysis has revealed several important features of DSB repair in breast cancer cells: (i) HR is evidently increased in breast cancer cells compared with normal cells; (ii) non-homologous end joining (NHEJ) repair is the major DSB repair route in both normal and malignant breast epithelial cells; (iii) NHEJ efficiency does not differ significantly between normal and cancerous cells (5). The two pathways of DSB repair are independently controlled, and only HR is increased in breast cancer cells compared with normal breast epithelial cells.

RAD51 is a homolog of the E. coli RecA protein, which is essential for processes such as meiotic and mitotic recombination and also plays a critical role in homologous recombination repair (HR) of DNA double-strand breaks (DSBs) (6)(7)(8). Researchers recently discovered that the RAD51 promoter is on average 840-fold more active in cancer cells than in normal cells, and that a fusion of the RAD51 promoter and the diphtheria toxin gene selectively kills cancer cells. Transcriptional targeting therapy exploiting up-regulated HR gene expression can thus effectively eliminate cancer cells without toxicity to normal tissues. The human RAD51 gene, located on chromosome 15q15.1, is considered to participate in a common DSB repair pathway and to be involved in the development of breast cancer (9). RAD51 functions by assembling on single-stranded DNA, inducing homologous pairing, and in turn mediating strand invasion and exchange between homologous DNA and the damaged site (10). In recent years, RAD51 gene polymorphisms have attracted a great deal of attention. The RAD51 family of genes, including RAD51 and the five RAD51-like genes, are known to have crucial non-redundant roles in this pathway. Recently, researchers have revealed that the RAD51 paralogs (RAD51B, RAD51C, RAD51D, XRCC2, XRCC3) serve as central proteins during the HRR process.
The function of the RAD51-like genes is to transduce DNA damage signals to effector kinases that promote break repair. A central player in homologous recombination is the RAD51 recombinase, which binds to single-stranded DNA at break sites; the XRCC2 and XRCC3 genes are structurally and functionally related to the RAD51 genes (11). Two commonly studied polymorphisms of the RAD51 gene are G135C (rs1801320), a G-to-C transversion at position +135, and G172T (rs1801321), a G-to-T transversion at position +172, both of which are located in the 5′ untranslated region (5′UTR) and appear to be functional polymorphisms. The 135G/C and 172G/T variants may affect mRNA stability or translational efficiency, resulting in altered levels of polypeptide products, altering the function of the encoded RAD51 protein, and in some way influencing DNA repair capacity and malignancy (12). RAD51 interacts with BRCA1 and BRCA2, acting through HR and NHEJ. For example, downregulation or mutation of DNA DSB repair proteins involved in the NHEJ pathway was shown to be associated with both BC risk and increased chromosomal radiosensitivity (CRS) (13)(14)(15). In addition, RAD51 overexpression is acknowledged to be associated with therapeutic antagonism, aggressiveness, metastatic behavior, and poor prognosis.

The X-ray repair cross-complementing group 2 (XRCC2) gene, located at 7q36.1, is an essential part of the homologous recombination repair pathway and a functional candidate for involvement in cancer progression. Its XRCC2 protein product, together with other RAD51 family proteins such as RAD51L3, forms a complex that plays a critical role in chromosome segregation and the apoptotic response to DSBs (16,17). As a member of the RAD51 family of proteins, XRCC2 is widely acknowledged to mediate HRR (18). However, the exact function of SNPs in the XRCC2 gene in the response to different DNA-damaging agents still remains unclear.
There is a G-to-A polymorphism located in exon 3 of the XRCC2 gene resulting in a substitution of histidine (His) for arginine (Arg). Known as Arg188His (R188H, rs3218536), this polymorphism has been widely investigated to explore its potential impact on cancer susceptibility. Furthermore, the repair of DNA damage caused by anticancer drugs and radiation has been documented to require XRCC2 in mammalian cells (19)(20)(21)(22). Several pieces of evidence stress that high levels of expression of the X-ray repair cross-complementing group 3 (XRCC3), another member of the RAD51 family of proteins, are correlated with radioresistance and cytotoxic resistance in human tumor cell lines, suggesting that XRCC2 could also play a relevant role in the effects of oncotherapy (23)(24)(25). XRCC3, as we know, is localized on human chromosome 14q32.3. A coding SNP (T241M, rs861539) has been reported at the 18,067th nucleotide in exon 7 of the XRCC3 gene, resulting in a substitution of methionine (Met) for threonine (Thr) (25). The XRCC3 protein is involved in the joining of single-strand DNA breaks as well as double-strand DNA breaks (26). As a member of the RAD51 DNA repair gene family, it functions in the HRR pathway by repairing double-strand breaks. XRCC3 aids the assembly of the nucleoprotein filament and its selection of, and interaction with, the appropriate recombination substrates (12). Likewise, XRCC3 controls HR fidelity and is essential for stabilizing heteroduplex DNA in HRR. Furthermore, mutation of XRCC3 generates severe chromosomal instability. The XRCC2 and XRCC3 genes are necessary for HRR and are required for the formation of RAD51 foci (27,28). In recent studies, common variants of XRCC2, particularly the coding SNP in exon 3 (Arg188His), have been identified as potential cancer susceptibility loci, although the association with breast cancer susceptibility remains unclear.
Earlier studies have long regarded the XRCC3 Thr241Met polymorphism as a risk factor for many cancers. We examined whether polymorphisms in these three genes involved in homologous recombination repair of DSBs were associated with the risk of breast cancer.

Search strategy and data extraction

All studies investigating the association between polymorphisms in the RAD51 gene and its paralog genes, such as XRCC2 and XRCC3, and the risk of breast cancer were identified by comprehensive computer-based searches of the PubMed, Embase, Web of Knowledge, and Cochrane Library databases (last search update: September 2022). The search was carried out using various combinations of keywords such as ('RAD51 gene' OR 'RAD51 recombinase gene' OR 'XRCC3 polymorphism' OR 'XRCC3 Thr241Met polymorphism' OR 'XRCC2' OR 'XRCC2 Arg188His polymorphism') AND ('polymorphism' OR 'variant' OR 'variants').

Eligibility criteria and selection process

Inclusion criteria: Studies included in our meta-analysis had to meet the following criteria: 1) published with full text available; 2) case-control design; 3) sufficient data (genotype distributions for cases and controls) to calculate an odds ratio (OR) with its 95% CI; 4) published in English; 5) genotype distribution of the control population consistent with the Hardy-Weinberg equilibrium (HWE).

Data extraction

Two authors independently extracted information from all eligible publications according to the inclusion criteria listed above. Disagreements were resolved by consulting a third reviewer and discussing until a consensus was reached. The following characteristics were collected from each study: first author, year of publication, country, ethnicity, experimental methods, source of control groups, genotype frequencies in case and control groups, and the HWE value. Duplicated primary studies were deleted and only one version of duplicated documents was kept.
Data collection

The transcriptome data and clinical information of BC patients were obtained from The Cancer Genome Atlas (TCGA) database (https://cancergenome.nih.gov/). In total, 903 patients with BC were selected from the TCGA cohort. For the transcriptome data from TCGA-BRCA, we downloaded the series files. Important clinical characteristics, including age, pathologic stage (I, II, III, IV, V, and NA), and pathology stage (T, N, M), are available. The datasets listed in Table 6 were used to discover and verify prognostic factors for BC patients. We assessed the association of each gene with overall survival by univariate and multivariable Cox proportional hazards regression analysis. All statistical tests were two-sided. The Cox proportional hazards model, including several important factors, was employed to estimate the hazard ratio (HR) and 95% CI for each gene for breast cancer survival. Normalized P values of <0.05 were used to define statistical significance. These statistical tests were performed using the R software.

Statistical analysis

We first tested HWE in the controls of each study using a goodness-of-fit test (chi-square or Fisher's exact test); departure from HWE among control subjects was defined as P < 0.05. Crude odds ratios (OR) with 95% confidence intervals (CI) were used to assess the strength of the association between the RAD51 gene and its paralog polymorphisms and breast cancer susceptibility. The pooled ORs for the RAD51 G172T polymorphism were calculated under the dominant model (GG vs. TT+GT), recessive model (TT vs. GG+GT), homozygote model (TT vs. GG), and allelic genetic model (T vs. G), where T and G represent the minor and major alleles, respectively. The same methods were applied to the other polymorphisms. Stratified analyses were performed by ethnicity and source of controls. A Q-test was performed to assess statistical heterogeneity among studies.
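The HWE check and the crude OR computation described above are simple enough to sketch directly. The following Python snippet is illustrative only (function names and the z = 1.96 normal approximation are our choices, not the authors' code): it computes the HWE chi-square statistic from control genotype counts and a crude OR with its 95% CI from a 2x2 table, as would be formed under any of the genetic models.

```python
import math

def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit statistic for Hardy-Weinberg equilibrium
    from observed genotype counts (e.g. in the control group)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)           # estimated major-allele frequency
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    # compare against a chi-square distribution with 1 df
    # (3 genotype classes - 1 - 1 estimated allele frequency)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude OR and 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf's log-OR SE
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)
```

For a recessive model (TT vs. GG+GT), for instance, `a` would be the count of TT cases and `b` the count of GG+GT cases, with `c` and `d` the corresponding control counts.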
If the Q-test P value was >0.1, indicating that heterogeneity between studies was not significant, the pooled OR was calculated using a fixed-effect model, in line with previous practice (Davey and Egger, 1997) (29, 30). Otherwise, a random-effects model was used. Given the potential heterogeneity among studies with different ethnicities and sources of controls, the random-effects model was adopted where appropriate (30). Sensitivity analysis was carried out by removing one study at a time to evaluate the stability of the results under both the genotypic models and the allelic model. In addition, Begg's test and Egger's linear regression test, together with visual inspection of the funnel plot, were carried out to address potential publication bias, with P < 0.05 considered an indicator of significant publication bias (30,31). Cox regression was used to analyze the impact of genes on the prognosis of BC patients and their value in prognostic diagnosis. The Newcastle-Ottawa Scale (NOS) was applied to assess the quality of all studies. The NOS checklist includes three quality parameters: (i) selection of the study population, (ii) comparability of groups, and (iii) assessment of either the exposure or the outcome of interest for case-control studies. Studies scoring 7 or higher were considered high-quality articles.

Studies included in the meta-analysis

Our first database search identified 272 items (Figure 1). The initial literature search of the PubMed, Embase, Web of Science, and Cochrane Library databases yielded 265 published articles after duplicates were removed. On review of titles and abstracts, 187 records did not meet the inclusion criteria, leaving 88 potentially relevant studies that were reviewed in full text. Figure 1 shows the flow diagram of the literature search and study selection.
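The fixed-effect versus random-effects choice above can be illustrated with a minimal inverse-variance pooling sketch (a simplified DerSimonian-Laird estimator of our own, not the STATA routines actually used in the study). It pools per-study log-ORs and reports Cochran's Q, whose P value against a chi-square distribution with k-1 degrees of freedom drives the model choice described in the text.

```python
import math

def pool_log_or(log_ors, ses):
    """Pool per-study log odds ratios by inverse-variance weighting.
    Returns the fixed-effect and random-effects (DerSimonian-Laird)
    pooled ORs, Cochran's Q, and the between-study variance tau^2."""
    w = [1 / s ** 2 for s in ses]                       # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    # DerSimonian-Laird between-study variance estimate
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    w_re = [1 / (s ** 2 + tau2) for s in ses]           # random-effects weights
    random_ = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    return {"fixed": math.exp(fixed), "random": math.exp(random_),
            "Q": q, "df": df, "tau2": tau2}
```

When the studies are homogeneous (Q near zero, tau^2 = 0), the two estimates coincide; heterogeneity inflates tau^2 and shifts weight toward smaller studies under the random-effects model.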
Among the remaining 88 articles, 2 were reviews, 16 were meta-analyses, and 2 were conference abstracts; these publications were excluded. Of the 58 publications that remained, 2 had insufficient data, 4 had overlapping data, and 8 were not in HWE (Tables 1-3). Finally, a total of 44 publications were included in the meta-analysis: 9 case-control studies from 9 publications with 4111 cases and 2669 controls for the RAD51 G172T polymorphism, 20 case-control studies from 11 publications with 20183 cases and 20321 controls for the XRCC2 R188H polymorphism, and 47 studies from 38 publications with 26667 cases and 27912 controls for the XRCC3 T241M polymorphism. We checked the symmetry of the Begg funnel plot and the results of Egger's test to assess publication bias. All statistical analyses were performed with STATA version 16.0.

Meta-analysis results

Nine case-control studies from 9 publications with 4111 cases and 2669 controls were analyzed for the RAD51 G172T polymorphism (32)(33)(34)(35)(36)(37)(38)(39)(40). The combined results showed no significant correlation between the G172T polymorphism and breast cancer susceptibility in any genetic model except the homozygote model. Additionally, ethnicity-based analysis showed that this SNP was associated with breast cancer risk in Arab populations under the homozygote model (OR=3.52, 95% CI=1.13-11.0, P=0.003) (Figure 3). This suggests that the G172T polymorphism may be associated with an increased risk of breast cancer in the Arab population in some settings. When stratified by source of controls, we found evidence of an association between cancer risk and the G172T polymorphism in population-based controls under the recessive model (OR=0.25, 95% CI=0.07-0.85, P=0.027), suggesting a marginal association in the population-based group.
For the XRCC3 Thr241Met polymorphism, a total of 47 studies from 38 articles with 26667 cases and 27912 controls were eventually included in our meta-analysis (32, 37, 38, 44, 45, 47, 48, …). A significant increase in cancer risk was observed only in the allelic genetic model (homozygote model: OR = 1.08, 95% CI = 0.98-1.20; dominant model: OR = 1.05, 95% CI = 0.99-1.12; recessive model: OR = 0.92, 95% CI = 0.84-1.01; allelic genetic model: OR = 1.05, 95% CI = 1.00-1.11) (Figure 4). In addition, ethnicity-based analysis showed that this SNP was associated with breast cancer risk in Asian populations under the dominant (OR = 1.36, 95% CI = 1.11-1.66, P = 0.003) and allelic genetic models (OR = 1.32, 95% CI = 1.07-1.64, P = 0.01) (Tables 4, 5). Table 6 depicts the pooled results of the univariable and multivariable analyses of OS in BC patients (HR). Univariate and multivariate Cox regression analyses were performed to determine whether gene expression is an independent prognostic factor for OS in breast cancer patients. As shown in Figure 5, the P values for T, N, M, stage, and age were less than 0.05. The univariate Cox regression analysis of OS showed that pathology stage, age, and stage could effectively predict survival in BC patients. We then entered these factors into the multivariate Cox regression analysis. After the multivariate analyses (Figure 6), stage (HR = 2.15; 95% CI, 1.42-3.26) and age (HR = 1.04; 95% CI, 1.02-1.05) remained independent prognostic factors with an adjusted P value < 0.0001.

Sensitivity analysis

Given the significant heterogeneity between studies for the polymorphisms, the random-effects model was used to calculate the pooled results when heterogeneity was significant. Meanwhile, we also performed a sensitivity analysis to assess the effect of each study on the pooled ORs by omitting individual studies one at a time.
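The univariate Cox screening applied to the TCGA cohort can be sketched for a single covariate. The snippet below is a from-scratch Newton-Raphson fit of the Cox partial likelihood, ignoring tied event times; it is our own simplified sketch, not the R survival routines the study used, and it returns only the hazard ratio exp(beta).

```python
import math

def cox_univariate(time, event, x, iters=25):
    """Fit a single-covariate Cox proportional hazards model by
    Newton-Raphson on the partial likelihood (no tie handling).
    time: follow-up times; event: 1 = event, 0 = censored; x: covariate.
    Returns the hazard ratio exp(beta)."""
    order = sorted(range(len(time)), key=lambda i: -time[i])  # descending time
    beta = 0.0
    for _ in range(iters):
        s0 = s1 = s2 = 0.0          # running risk-set sums of r, r*x, r*x^2
        grad = hess = 0.0
        for i in order:
            r = math.exp(beta * x[i])
            s0 += r
            s1 += r * x[i]
            s2 += r * x[i] ** 2
            if event[i]:
                grad += x[i] - s1 / s0
                hess += s2 / s0 - (s1 / s0) ** 2
        if hess == 0:               # flat likelihood (e.g. constant covariate)
            break
        beta += grad / hess         # Newton step on the concave log-likelihood
    return math.exp(beta)
```

With a binary covariate, a returned hazard ratio above 1 indicates that the x = 1 group experiences events earlier, mirroring the HR > 1 results reported for stage and age above.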
The sensitivity analysis showed that, for each polymorphism, no single study qualitatively changed the pooled ORs, suggesting that the results of this meta-analysis are statistically stable and reliable.

Publication bias diagnostics

We further assessed potential publication bias using Egger's test and funnel plots. No funnel plot asymmetry was found in any of the studies. Egger's test for the RAD51 G172T polymorphism showed no evidence of publication bias: for the homozygote model, Begg's funnel plot P value was 0.47 and Egger's test P value was 0.185. In the dominant model for R188H, Begg's test P value was 0.67 and Egger's test P value was 0.319. For the allelic genetic model of XRCC3 T241M, Begg's test gave P = 0.65 and Egger's test gave P = 0.52. All P values were > 0.05, suggesting no publication bias.

Discussion

Screening for some frequent polymorphisms has improved our understanding of the critical roles that inheritance plays in BC susceptibility. To date, associations between genetic variants in HRR genes and BC development have been investigated, but the results remain inconclusive. Nevertheless, new discoveries in drug research targeting these gene mutations continue to emerge. Some experiments suggest that inhibition of HR is selective against breast tumor cells. Inhibitors of HR proteins can be used in combination with radiotherapy or chemotherapy to sensitize the cells [5]. A more intriguing possibility would be to use anti-HR agents alone, avoiding the toxicity of DNA-damaging agents. Such a strategy has been applied to selectively kill BRCA2-deficient cells using poly(ADP-ribose) polymerase (PARP) inhibitors.
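Egger's linear regression test reported above can be reproduced in a few lines: regress the standardized effect (effect/SE) on precision (1/SE) and test whether the intercept differs from zero. This stdlib-only sketch is our own simplification (the study used STATA); it returns the intercept and its standard error, from which a t statistic with n - 2 degrees of freedom is formed.

```python
import math

def egger_test(effects, ses):
    """Egger's regression asymmetry test. effects: per-study log-ORs;
    ses: their standard errors. Regresses effect/SE on 1/SE by ordinary
    least squares; a non-zero intercept suggests small-study effects."""
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1 / s for s in ses]                    # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    # standard error of the intercept; t = intercept / se_int, df = n - 2
    se_int = math.sqrt(sum(r * r for r in resid) / (n - 2) * (1 / n + mx ** 2 / sxx))
    return intercept, se_int
```

When effects are identical across studies (a perfectly symmetric funnel), the regression passes through the origin and the intercept is exactly zero.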
The first phase III clinical study of a PARP inhibitor for adjuvant treatment of early breast cancer, the OlympiA study, aims to evaluate the efficacy and safety of olaparib compared with placebo as adjuvant treatment of clinically and pathologically high-risk, HER2-negative, BRCA1/2-mutated early breast cancer. The randomized phase II GeparOLA study evaluated olaparib plus paclitaxel (PO) in early HER2-negative breast cancer with homologous recombination deficiency (HRD); it concluded that germline BRCA1/2 status and HRD predict a higher pathological complete response (pCR) rate in the neoadjuvant setting (81).

FIGURE 2: Meta-analysis of the RAD51 G172T polymorphism and risk in breast cancer (homozygote model, TT vs. GG).

The molecular mechanism of breast cancer is very complex. Therefore, in the post-PARP-inhibitor era, there is a great clinical need to find therapeutic targets and analyze prognostic factors to benefit patients, which is conducive to drug development, expansion of new indications, and individualized treatment for breast cancer. Our analysis demonstrated the importance of recombination repair processes for the fidelity of chromosome segregation and reinforces the functional connection between genes involved in HRR and those that predispose to breast cancer. We also found that patients in our prediction models tended to be older, to have advanced-stage disease, and to have a poorer prognosis. The current literature varies widely in experimental methods, stage of disease, family history of cancer, type of tumor therapy, and duration since cancer diagnosis, all of which can lead to inconsistent results in case-control studies. Additionally, most of the studies did not specify the immunohistochemical indicators of breast cancer relevant to determining which factors exert a dominant effect.
Research data indicate that double-strand break damage is the most lethal lesion observed in eukaryotic cells because it can cause cell death or pose a serious threat to cell viability and genome stability; it has the potential to permanently arrest cell cycle progression and endanger cell survival [10]. Because DNA repair mechanisms are crucial to preserving genomic stability and functionality, DNA repair defects can result in chromosomal aberrations that lead to increased susceptibility to cancer (4,82,83). A Japanese study found RAD51 gene polymorphisms in two patients with bilateral breast cancer (10), suggesting that germline variation in the RAD51 gene may modulate the risk of breast cancer. Previous meta-analyses evaluated the effect of the RAD51 G135C polymorphism on the risk of breast cancer and other cancers. Some of these analyses concluded that the RAD51 G172T polymorphism may play a protective role in the development of head and neck cancer, but found no significant correlation between the RAD51 G172T polymorphism and breast or ovarian cancer (84). This is inconsistent with our conclusion; we hypothesize that the discrepancy relates to inadequate sample sizes and neglect of gene-gene and gene-environment interactions. Earlier reports suggested a connection between the XRCC2 R188H polymorphism and the risk of breast cancer, but this has not been confirmed in two population studies in the United States and Poland or in several case-control studies (39, 42-44, 50, 51, 68). Moreover, a finding by Rafii S was hardly replicated in the latest BCAC study (41). Several studies describe a marginally protective effect for rare allele carriers (188His) (64,85).
Interestingly, Silva suggested that the potential protective role of the variant allele of XRCC2 in women who have never breastfed could be related to more efficient DNA repair activity (37). On the other hand, Han described a protective effect in women with high plasma α-carotene levels. However, current evidence shows that in most studies the XRCC2 R188H polymorphism has little relationship with the risk of breast cancer. In our meta-analysis of breast cancer, we did not find a significant association between this polymorphism and breast cancer susceptibility, which is consistent with the previous meta-analysis. Among previous studies, one reported results with significant unexplained heterogeneity (Ph = 0.014) (86).

(Table note: only one sample carried a mutation in XRCC3, but it had no survival information and was omitted from the analysis. A P value < 0.05 was considered significant: *, p < 0.05; **, p < 0.01; ****, p < 0.0001.)

Furthermore, studies that depart from the Hardy-Weinberg equilibrium (HWE) were included in the meta-analysis, which may lead to potential bias. Current evidence suggests that in most studies the XRCC2 R188H polymorphism has a weak protective effect against breast cancer development, but the association did not reach statistical significance. As mentioned above, since this effect is very weak and R188H may serve as a positional marker for other potentially functional SNPs or haplotypes, it is not surprising that this SNP is not associated with breast cancer, or is even inversely related. Therefore, limited by the above factors, the results of previous research should be interpreted with caution.
A common polymorphism in the XRCC3 gene, a C/T change at nucleotide 18,067, results in the substitution of methionine for threonine at codon 241 (Thr241Met) in exon 7 of the XRCC3 gene, which may affect the function of the encoded enzyme and/or its interaction with other proteins involved in DNA repair. Inheritance of functional polymorphisms in DNA repair genes may influence DNA repair capacity, thus increasing cancer risk. The Thr241Met substitution, arising from the C18067T transition in exon 7 of the XRCC3 gene, is functionally active, as it is associated with an increase in the number of micronuclei in human lymphocytes exposed to ionizing radiation (59,67,72,87,88). The variant allele (241Met) is associated with high levels of DNA adducts in lymphocyte DNA, which could reflect reduced DNA repair capacity (88). A case-control study in Pakistan found that the homozygous (TT) and heterozygous (TM) genotypes of the T241M polymorphism were associated with an increased risk of breast cancer compared to controls (47). Similar results have previously been observed in other studies, suggesting an association between Met allele variants and breast cancer in Caucasian and Asian populations (63, 65).

Figure 5: Univariate Cox regression analyses of OS in BC patients. Figure 6: Multivariate Cox regression analyses of OS in BC patients.

Interestingly, regarding the role of the T241M variation in XRCC3, Rajagopal found that the heterozygous (TM) and homozygous mutant (MM) genotypes were not significantly associated with breast cancer risk (48). Chai performed a meta-analysis of 23 case-control studies on the association of XRCC3 SNPs with the risk of breast cancer in the general population and the Asian population under both the recessive and homozygote models (89).
Our ethnicity-stratified results are consistent with the correlations they observed in Asian populations, although not under the same genetic models. Although they found an association between this SNP and the risk of sporadic breast cancer, based on the results obtained we believe that this association is not yet conclusive. Other studies have shown no association between the T241M polymorphism and the risk of breast cancer (52,54). Therefore, more studies are needed to confirm these associations. Compared with previous studies, our study has some improvements. First, our study had the advantage of including higher numbers of cases and controls. Second, these polymorphisms in RAD51 and its paralog genes were analyzed in association with the risk of a specific cancer, breast cancer. Third, we provided a more comprehensive analysis of the data by calculating four different genetic models and performing subgroup analyses by ethnicity and source of controls (population- or hospital-based). Finally, we excluded studies in which the genotype distribution of the control group was inconsistent with HWE, because they might influence the results. The results of this study further clarify the correlation between polymorphisms in these genes and the occurrence and development of breast cancer, providing a direction for future study of the molecular mechanisms of cancer. The main limitations of our meta-analysis are: 1) It only searched studies published in English, ignoring unpublished studies or studies in other languages that might also meet the inclusion criteria. 2) Some studies did not provide enough clinical data, such as patient family history, ER/PR and HER-2 hormone receptor status, tissue type, and tumor grade, preventing a comprehensive subgroup analysis to explore the sources of heterogeneity. 3) Gene-gene and gene-environment interactions were not considered in the current meta-analyses.
Possible gene-gene and gene-environment interactions between RAD51 gene polymorphisms and cancer susceptibility need further study. 4) Some patients were recruited from hospital-based groups, and these women may have had benign breast disease, corresponding to a potentially increased risk of breast cancer. 5) Most of the patients in our study were Caucasian, which may limit the generalizability of our results.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
BEAM: The Modeling Framework for Behavior, Energy, Autonomy & Mobility

This report outlines the concepts, mechanisms, and inner dynamics of the BEAM (Behavior, Energy, Autonomy, and Mobility) modeling framework. BEAM is an open-source, large-scale, high-resolution transportation model that harnesses the principles of the actor model of computation to build a powerful and efficient agent-based model of travel behavior. It allows a detailed microscopic view of how people make travel choices and interact with the transportation system, enabling more accurate simulations of human mobility and urban transport networks. It also allows the analysis of numerous spatially defined but interacting layers and integrates them into a cohesive representation of a regional transportation system. This integrated picture provides invaluable insights to policy makers and other stakeholders about how changes to the transportation system result in changes to traffic congestion, mode share, energy use, and emissions throughout a modeled region. These capabilities are demonstrated with a case study of New York City that showcases BEAM's application in a very large and intricate urban transportation system, without relying on existing travel demand models. BEAM's unique ability to simulate individual behaviors, integrate with other models, and adapt to different real-world scenarios underscores its importance in the rapidly evolving field of transportation and emphasizes its potential as a valuable proof-of-concept tool to contribute to more informed and effective policy and planning decisions.

Executive Summary

This report provides a comprehensive description of the BEAM modeling framework, focusing on its design and structure, as well as a case study of its implementation in New York City.
Conceptual Overview

BEAM (Behavior, Energy, Autonomy, and Mobility) is a large-scale, high-resolution transportation modeling framework that harnesses the principles of the actor model of computation to build a powerful and efficient agent-based model of travel behavior. It allows a detailed microscopic view of how people make travel choices and interact with the transportation system, enabling more accurate simulations of human mobility and urban transport networks. BEAM allows the analysis of numerous spatially defined but interacting layers (Figure ES-1) and integrates them into a cohesive representation of a regional transportation system. This integrated picture provides invaluable insights to policy makers and other stakeholders about how changes to the transportation system result in changes to traffic congestion, mode share, energy use, and emissions throughout a modeled region. BEAM is built on the MATSim framework (Horni, Nagel, & Axhausen, 2016), with extensive modifications to allow for multithreaded within-day simulation of interacting agents. This report details the required and optional model inputs that can be used to define a scenario, including the road network, transit system, synthetic population, and input agent plans. Many of these inputs can be exported from existing MATSim models and read directly into BEAM, or they can be provided in simpler and more flexible formats. The report then describes the mechanics of BEAM's within-day AgentSim simulation, including features such as on-demand mobility, mode choice, parking selection, and discretionary trip planning. Finally, it summarizes BEAM's adaptation of MATSim's traffic network simulation and between-iteration replanning capabilities, the combination of which allows BEAM to approximate dynamic user equilibrium across all of the choice dimensions offered to its agents. BEAM has always been designed as open-source software intended to benefit from an evolving ecosystem of related models and products.
This report describes several key linkages to other, related models in the transportation and energy domains. These include activity-based travel demand models, which allow for a more sophisticated treatment of agents' pre-day travel planning; vehicle energy consumption models; metrics for summarizing a transport system's accessibility and efficiency; and integration with power grid models. While BEAM is designed to flexibly and efficiently link with other models of transportation system components, it also includes the capability to build a BEAM implementation from scratch relying only on free and publicly available data processed through several widely used, open-source tools. This report describes these input data sources in detail and the steps required to turn them into a BEAM model that can be run locally or on cloud computing resources.

Case Study

Finally, a New York City case study is provided to showcase BEAM's application in a very large and intricate urban transportation system. Starting from scratch, without relying on existing travel demand models, BEAM was employed to simulate the travel behavior of about 13 million inhabitants across five boroughs and nine counties. Utilizing tools like SynthPop and OpenStreetMap, the study carefully calibrated and validated the model to create a baseline scenario of typical weekday travel patterns in the New York City metro region. The model is responsive to real-world scenarios, such as adapting to the unprecedented changes in travel behavior and demand during various recovery phases of the COVID-19 pandemic. The New York City study shows BEAM's ability to produce consistent and realistic outcomes given limited input data and fine-tuning, producing a flexible testbed for exploring major mobility shifts and policy impacts in complex urban environments.
Conclusion

BEAM's varied applications, flexibility, and integration capabilities make it a significant and valuable tool for various stakeholders in urban planning and transportation system analysis. Its unique ability to simulate individual behaviors, integrate with other models, and adapt to different real-world scenarios underscores its importance in the rapidly evolving field of transportation and emphasizes its potential as a valuable proof-of-concept tool to contribute to more informed and effective policy and planning decisions.

Context

This report describes the BEAM model, an open-source agent-based transportation system model developed at Lawrence Berkeley National Laboratory. BEAM simulates the travel patterns of up to millions of individuals in a metropolitan area. The model simulates the travel of each agent across any travel mode available in a regional transportation system, including personal vehicles, public transit, new shared mobility services (such as ridehail and shared bikes and e-scooters), walking, and personal bikes. It simulates realistic interactions between agents and approximates a user-equilibrium outcome, in which the decisions made by each agent reflect both their own personal preferences and a realistic accounting of the impacts of all other agents' choices on system outcomes. BEAM is designed to allow users and policymakers to understand the detailed operational and systemwide outcomes of different behavioral assumptions and scenarios in a richly detailed and realistic simulated transportation system. As a platform, BEAM is structured to balance detail, performance, and customizability in a behaviorally and operationally realistic manner.
Despite the complexity of the integrated mobility systems being simulated, BEAM is designed with user-friendliness and accuracy in mind, allowing users to run meaningful simulations of tens of thousands of individual travelers (i.e., agents) on a personal computer, or to deploy models of millions of agents in an HPC environment. BEAM offers a pipeline that allows users to build a model using publicly available data, and it also allows users to run scenarios defined by widely used open-source transportation modeling software. In simulating an integrated multimodal transportation system, BEAM models a wide range of interacting systems as individual travelers progress through their day. It explicitly considers the daily travel patterns and preferences of individual travelers and how their choices affect the performance of the road and transit networks that comprise a regional transportation system. In addition to conventional private vehicle travel, BEAM also simulates ride-hail fleet operations using human-driven or centrally managed automated vehicles (or both), personal or shared connected and automated vehicle (CAV) scheduling and use, and electric vehicle (EV) refueling requirements. BEAM incorporates these factors into a model of transportation network performance that can incorporate validated mesoscopic traffic congestion modeling, the impacts of cooperative adaptive cruise control (CACC) and similar technologies on road network capacity, and traffic-dependent vehicle efficiency.
By simulating all of these interacting factors at once, BEAM approximates an equilibrium outcome that captures the complicated constraints associated with personal schedules and preferences, the transportation network and congestion on it, and the operational realities of different modes. This produces a picture that allows evaluation of the feasibility of different potential futures and a better understanding of the directionality and relative strength of the relationship between different technology and policy developments and systemwide outcomes. By focusing on the travel of individual agents, BEAM also allows analysis of the distributional impact of different scenarios on subgroups of the overall population, for example differentiating by housing location, demographic characteristics (such as income), and employment industry.

Other Models

For decades, researchers and planners have relied on the four-step modeling framework for simulating spatially and temporally resolved transportation networks (McNally, 2007). The four steps in this model (trip generation, trip distribution, mode choice, and typically static trip assignment) provide a robust and widely used system for accounting for the expected impacts of small or gradual changes to the transport system, such as population growth, highway expansion, or extension of a public transit system. The four-step model can be reasonably well suited to modeling changes that are focused on commute trips and do not deviate far from assuming privately owned cars as the primary mode of transport, although it has proven less successful at capturing more nuanced behavioral factors and non-car-based modes. Several integrated models and modeling frameworks have emerged that capture both travel demand (in terms of trip, mode, and route choices) and network supply characteristics (in terms of car travel times and features of other modes, such as wait time, crowding, and number of transfers).
In particular, the SimMobility Midterm (Lu, et al., 2015) and POLARIS (Auld, et al., 2016) models both offer a high-performance, agent-based representation of supply and demand that integrates a pre-day activity-based travel demand model with dynamic traffic assignment and traffic simulation, plus a within-day replanning module for unexpected discrepancies not accounted for in the pre-day choices. These models have been used to study the impacts of transit disruptions (Adnan, et al., 2017) and shared autonomous vehicles (Gurumurthy, de Souza, Enam, & Auld, 2020) (Nahmias-Biran, et al., 2019), and they offer a great deal of promise for future work. However, the integrated and closed structure of these models (they are often written in the C language, prioritize performance over modularity, and use closed-source code) means that they are typically not used, and almost never modified or adapted, without the participation of the initial model developers. In contrast, the MATSim modeling framework (Horni, Nagel, & Axhausen, 2016) is designed to achieve maximum uptake via easy extensibility and modularity. MATSim, developed in Java, integrates easily with many open-source libraries and boasts an active developer community that regularly introduces new features, extensions, and plugins built around a core, open-source codebase. The core MATSim functionalities include modules to define and load a population of agents, containing their input "plans" that determine the activities and trips each agent intends to participate in over the course of a day. MATSim uses an iterative co-evolutionary algorithm to approximate stochastic user equilibrium: every iteration, agents seek to improve their whole-day utility by either choosing a plan that has performed well for them in previous iterations or by following a new plan with different behaviors.
Through several extensions, including mode and destination choice, the equilibrium plans in MATSim can approximate the constraints and feedbacks of activity-based modeling without relying on a formal, utility-based pre-day planning module (Horni, Scott, Balmer, & Axhausen, 2009) (Rakow & Nagel, 2023). MATSim also contains extensions that allow for the simulation of on-demand mobility (Zwick & Axhausen, 2020) and electric vehicles (Waraich & Bischoff, 2016). The BEAM model seeks to take advantage of many of the innovative aspects of MATSim, while making structural changes that allow BEAM to be better applied at scale, both to questions that can currently only be answered using integrated travel demand models and to additional questions that cannot be answered at all using existing tools. In particular, BEAM focuses on improving the computational efficiency of MATSim, allowing simulations with more actors to be completed using fewer iterations and less run time. BEAM also directly incorporates within-day traveler choices and system responses into its behavioral simulations, rather than accounting for these responses in between-iteration replanning; this allows for greater fidelity and faster convergence when modeling features, such as on-demand pooled mobility and shared vehicles, that require substantial interactions between the actors in the transportation system. Finally, BEAM is designed to ease direct integration with activity-based travel demand models, while still retaining the co-evolutionary structure that allows its agents to incorporate iteration-to-iteration learning that cannot be captured in such models.
In addition, BEAM is designed so that its open-source modeling code can be implemented quickly in other metropolitan regions using only publicly available data, and so that users have substantial flexibility in defining nuanced behavioral and policy scenarios, such as teleworking, capacity limitations on public transit, and the impacts of the COVID-19 pandemic on mode choice behavior.

BEAM Overview

Individual travelers, or "agents", in BEAM seek to maximize their experienced travel utility both by making decisions on-the-fly during a simulated travel day and by using an evolutionary algorithm to iteratively seek better transportation options over the course of several model iterations. This evolutionary algorithm (and much of the underlying Java codebase) is based on the one used in the popular MATSim software (Horni, Nagel, & Axhausen, 2016). In fact, BEAM has been developed on top of MATSim, while replacing most of its components with a new software architecture, discrete event simulation engine, and operational models. After usually 10 to 20 iterations, BEAM reaches an approximate user equilibrium, where each agent's experienced travel outcomes roughly match her expectations, and where no agent can expect to significantly improve her outcomes by making a different choice. This equilibrium tends to be reached faster than in base MATSim because agents have access to more real-time information when making their within-day choices, and (as in some MATSim extensions, e.g., Rakow & Nagel, 2023) agents can use a utility-maximizing approach when making choices rather than relying on uninformed or simplified choice structures. As shown in Figure 1-1, BEAM is composed of several sub-components, three of which are considered the cornerstones of the framework: AgentSim, PhysSim, and the Replanning Module.
AgentSim loads a synthetic population of households and individuals, with home/work locations drawn from US Census data and daily travel patterns derived from the National Household Travel Survey, and then models their behavior as they make their travel decisions at different times throughout the day and locations across the transportation network. In addition, AgentSim hosts the operational models of different ridehail service providers, which match drivers to trip requests in order to minimize wait time, as well as fleets of Autonomous Vehicles (AVs). This capability can also be leveraged to model autonomous on-demand shuttles as first- and last-mile solutions for public transit (Poliziani et al., 2023). It also simulates micromobility and carsharing services that place and reposition shared bikes, scooters, and cars at locations to meet expected demand at different times of the day. AgentSim relies on a Router, typically the R5 routing engine (Pereira, Saraiva, Herszenhut, Braga, & Conway, 2021), or alternatively the GraphHopper routing engine (Graphhopper, 2023), to produce a set of possible modes and routes for each traveler to choose among, based on the dynamic state of the transport network. PhysSim simulates travel times on the road network using the Java Discrete Event Queue Simulator (Waraich, Charypar, Balmer, & Axhausen, 2009) from MATSim. The Replanning Module revises the choices made by travelers from the previous day, offering them the opportunity to engage in a variety of optional activities, experiment with new modes of transport, or repeat selections that proved successful in past iterations.

Actor System

BEAM relies on asynchronous computation performed in parallel across CPU cores. BEAM is written primarily in the Scala language and uses the Akka library (Agha, 1986).
Akka implements the Actor Model of Computation, which simplifies the process of utilizing high-performance computing resources, in our case for deploying transportation simulations at full scale. In BEAM, most aspects of the transportation system are defined as actors. These actors can represent individuals, such as Person Agents (travelers) and Ride Hail Agents (drivers of ride hail vehicles); they can represent resource managers that control aspects of the system, such as the Ride Hail Manager (which controls matching and repositioning of ride hail agents) and the Charging or Parking Manager (which matches vehicles with specific charging points or parking spots); and they can represent abstractions such as the Router (which provides agents with directions) and the Scheduler (which centralizes the management of the simulation clock). These actors operate independently, relying on no shared memory, and they communicate with each other through a tightly controlled message-passing interface (Barker, 2015). Special trigger-type messages have been defined, which are often sent by the Scheduler to notify agents that it is time for them to perform a new action. An actor can also send a request message to another actor to ask for a specific set of information, such as a route from origin to destination, which is provided through a response message. Some types of actors, specifically Person Agents and Ride Hail Agents, are BeamAgent actors. In the interest of conciseness, we will likewise refer to them as "agents". BeamAgent inherits the Akka FSM trait, which provides a domain-specific language for programming agent actions as a finite state machine (Biermann & Feldman, 1972). These actors can exist only in a pre-specified set of states, and they transfer between states based on messages they receive, such as a request to pick up a passenger, as seen in Figure 2-1.
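As an illustration, the finite-state-machine pattern can be sketched as follows. The state names, messages, and transition table are simplified stand-ins written in Python rather than Scala/Akka, and are not BEAM's actual BeamAgent API:

```python
# Minimal finite-state-machine sketch of a BeamAgent-style actor.
# States, messages, and transitions here are illustrative only.

class RideHailAgent:
    """An agent that exists in a pre-specified set of states and
    transfers between them in response to received messages."""

    TRANSITIONS = {
        ("Idle", "PickupRequest"): "DrivingToPickup",
        ("DrivingToPickup", "ArrivedAtPickup"): "DrivingToDestination",
        ("DrivingToDestination", "ArrivedAtDestination"): "Idle",
    }

    def __init__(self):
        self.state = "Idle"

    def receive(self, message):
        key = (self.state, message)
        if key not in self.TRANSITIONS:
            # An unhandled message signals an unplanned state (a bug).
            raise ValueError(f"Unhandled message {message!r} in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state
```

An unhandled (state, message) pair raises immediately, mirroring the idea that a mismatched message indicates an unexpected agent state.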
Because actors in BEAM can operate asynchronously, the rules for transitions between states and message handling must be robust to misaligned operations, such as two household members choosing the same vehicle at the same time. When misalignment occurs, the agent logic in BEAM is designed to adapt to changing circumstances; for example, if an agent chose to use a bus but the bus is full, BEAM requires the agent to repeat the mode choice process and either wait for the next scheduled bus or choose a different mode. It is important to note that BeamAgent actors do not have direct access to any internal clock. Instead, they rely on triggers passed to and from a BeamAgentScheduler actor to ensure a correct sequencing of events. For instance, if an agent begins participating in a planned activity that has an end time of 5:00 pm, it sends a message to the BeamAgentScheduler that schedules an ActivityEndTrigger, which the BeamAgentScheduler will send back to the agent when the main simulation clock reaches 5:00 pm. The BeamAgent will eventually send a CompletionNotice back to the BeamAgentScheduler when it has completed all of its relevant actions in response to the trigger, in this case participating in mode choice and departing on the trip. The Scheduler advances the simulation clock in a rolling fashion: it maintains a time window of open triggers and only advances the time window forward when it has received CompletionNotices for all triggers in the portion of the time window being closed. When a sequence of events happens such that a trigger never gets completed, the BEAM simulation becomes stuck and cannot progress forward. This indicates that some actor has entered an unplanned or unexpected state, a sign of a bug in the code that must be fixed.
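The trigger/completion protocol can be sketched with a toy scheduler. The ActivityEndTrigger naming follows the text, but the queue logic below is a deliberately simplified illustration, not BEAM's BeamAgentScheduler implementation:

```python
import heapq

class ToyScheduler:
    """Toy rolling-clock scheduler: triggers wait in a priority queue,
    and the clock cannot move past a sent trigger until a
    CompletionNotice for it has arrived."""

    def __init__(self):
        self.queue = []       # (time_seconds, trigger_id), ordered by time
        self.pending = set()  # triggers sent but not yet completed
        self.now = 0

    def schedule(self, time_seconds, trigger_id):
        heapq.heappush(self.queue, (time_seconds, trigger_id))

    def complete(self, trigger_id):
        """Receive a CompletionNotice for a previously sent trigger."""
        self.pending.discard(trigger_id)

    def advance(self):
        """Send the next trigger; refuses to advance while any earlier
        trigger is still open (a never-completed trigger = stuck)."""
        if self.pending:
            raise RuntimeError(f"simulation stuck: open triggers {self.pending}")
        time_seconds, trigger_id = heapq.heappop(self.queue)
        self.now = time_seconds
        self.pending.add(trigger_id)
        return trigger_id
```

In this toy version the "window" has width zero (one open trigger at a time); the real scheduler keeps a configurable window of concurrently open triggers.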
The version of BEAM released with this report has been tested thoroughly on large simulations and is believed to be less likely to contain such bugs, but the asynchronous nature of BEAM's actor system means that stuck simulations and other unexpected behaviors are common during the development and testing of new features. What makes BeamAgents special is that they exhibit agency; i.e., they don't just change state but have some degree of control or autonomy over themselves or other agents. For example, a Ride Hail Manager can manage a fleet of actors as non-human drivers of autonomous vehicles. While travelers, fleet managers, and the Charging/Parking Network Managers are all actors in BEAM, individual charging points and parking stalls are treated as tools used by the BeamAgents, and are not considered agents or actors in BEAM. Vehicles can be personal vehicles, bikes, shared cars, shared bikes/scooters, transit vehicles, or ride hail vehicles.

Preferred Format

The typical format for importing the various BEAM inputs, such as population (households, persons, plans), vehicles (and vehicle types), and parking infrastructure, is CSV. The exceptions are the street network, which preferably relies on the Protocolbuffer Binary Format (PBF), and transit lines and schedules, which use the General Transit Feed Specification (GTFS).

Legacy MATSim Format

BEAM can also ingest input population data in the same xml format typically used by MATSim. MATSim population inputs require two xml files: a households file and a persons file. The MATSim households file lists the income, location, and vehicles for each household (this format allows for a listing of the specific vehicle IDs for each household, rather than just the number of vehicles). The MATSim persons file contains information about both the person's attributes, such as age and sex, and their plans. Information on the storage and scoring of plans is given in Section 2.5 below.
Street Network

Spatial events in BEAM are associated with a street network. The street network in BEAM is represented by a graph where road links are modeled as edges and road intersections as nodes. Each street can be accessed by a specified set of modes (walk, car, bike, or bus). Furthermore, each link in the street network is associated with a capacity (its maximum throughput in terms of vehicles per time unit), a length, and a free-flow speed. The preferred method for loading a street network into BEAM is through the Protocolbuffer Binary Format (PBF) typical to OpenStreetMap (OSM) 1 . OpenStreetMap networks can also be downloaded and simplified using the OSMnx tool 2 , which improves the computational performance of routing algorithms by removing unnecessary nodes and links. For compatibility with MATSim, BEAM also offers utilities to convert a road network in the MATSim ".xml" format to a compatible PBF file that can be read into BEAM. In addition to the road network, BEAM represents the rail and ferry network for public transit vehicles that do not traverse the road network but operate on dedicated right-of-way (e.g., subway, commuter rail, light rail, etc.). The combined transportation network of links, intersections, and transit stations is aggregated into Traffic Analysis Zones (TAZs); in many implementations of BEAM, TAZs are associated with census block groups, which are typically home to around 1,000 residents, spanning as few as several city blocks in dense urban locations and entire neighborhoods in less dense areas.

Transit Schedules

Transit operations in BEAM are simulated to match schedules that are input in the General Transit Feed Specification (GTFS) format. A GTFS archive contains a representation of every transit vehicle "trip" scheduled in a day, including the location, sequence, and timing of every stop along each trip, as well as information on fares and the locations of transfers.
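As a small illustration of how a GTFS archive encodes a scheduled trip, the sketch below reads stop-level rows; the field names follow the GTFS stop_times specification, while the trip and stop values are invented:

```python
import csv
import io

# Hypothetical three-stop trip in GTFS stop_times.txt form (values invented).
STOP_TIMES = """trip_id,arrival_time,departure_time,stop_id,stop_sequence
bus_42_am,08:00:00,08:00:00,stop_a,1
bus_42_am,08:07:00,08:08:00,stop_b,2
bus_42_am,08:15:00,08:15:00,stop_c,3
"""

def trip_stops(gtfs_text, trip_id):
    """Return (stop_id, arrival_time) pairs for one scheduled trip,
    ordered by the GTFS stop_sequence field."""
    rows = [r for r in csv.DictReader(io.StringIO(gtfs_text))
            if r["trip_id"] == trip_id]
    rows.sort(key=lambda r: int(r["stop_sequence"]))
    return [(r["stop_id"], r["arrival_time"]) for r in rows]
```

Each row is one stop event of one vehicle trip; a full archive contains such rows for every scheduled trip of the day.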
The R5 router in BEAM automatically synchronizes the OSM street network with the GTFS transit network, associating transit stations with walk access links and associating on-road mode legs, such as bus, with road links.

Population

Three types of population data, households, persons, and activity plans, are imported into BEAM to create a synthetic population that matches the aggregate socio-economic characteristics of the population in the modeling domain. The location, income, and number of vehicles for each household; the age, gender, and employment industry for each person; and the trips and activities that each person has planned for the travel day are all fed into BEAM. Person and household ids are maintained so that activity plans can be assigned to each person, and persons can be aggregated into households. Each person's plan alternates between elements of type "activity" and type "leg." Activities have a type (such as "work" or "shopping"), are associated with specific locations, and have an end time. Legs do not need any additional information, but they can specify a trip mode, in which case the specified mode choice is treated as fixed in the first iteration of AgentSim. These input files can be generated using census survey data such as the National Household Travel Survey (NHTS) or from an existing implementation of ActivitySim (Galli, et al., 2009) in the area being studied. The spatial resolution depends on the granularity of the available data; it can be defined as an input to a BEAM simulation and can be at the Census Block Group (CBG) or Tract level, a Traffic Analysis Zone (TAZ), an H3 index (Uber, 2023), or a customized spatial resolution.
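The activity/leg alternation described above can be checked with a small sketch; this is a hypothetical helper for illustration, not part of BEAM's input pipeline:

```python
def validate_plan(elements):
    """Check that a plan alternates "activity" / "leg" elements and
    starts and ends with an activity, as the plans input expects.

    `elements` is a list of (kind, payload) pairs, e.g.
    ("activity", "home") or ("leg", "car")."""
    kinds = [kind for kind, _ in elements]
    if not kinds or kinds[0] != "activity" or kinds[-1] != "activity":
        return False
    # Adjacent elements must differ in kind (strict alternation).
    return all(k != kinds[i + 1] for i, k in enumerate(kinds[:-1]))
```

A simple home-work-home day with car legs passes; two consecutive activities, or a plan that starts with a leg, fails.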
Vehicles and Vehicle Types

Each vehicle simulated in BEAM must be associated with a given vehicle type, which determines various physical characteristics of that vehicle such as fuel type, energy storage capacity/range, passenger capacity, maximum speed, and energy consumption parameters, each of which is defined in a mandatory vehicle types input file. The type of individual vehicles can either be assigned probabilistically (with uniform probabilities, or probabilities that vary by household income), or the type of each vehicle can be enumerated directly through an input file called "vehicles.csv" that maps each vehicle to the corresponding vehicle type and initial state of charge. If the latter is not specified, BEAM relies on a configuration parameter (meanPrivateVehicleStartingSOC) that indicates the mean of the state-of-charge distribution. The Vehicle Types input file, whose fields are detailed in Table 2-1, contains a vehicle type identifier and all relevant characteristics about the vehicle's size, energy consumption characteristics, and automation level, including:

Primary/Secondary Fuel Consumption In Joule Per Meter: average fuel consumption in joules per meter; used most of the time for estimation, but can also be used for actual consumption.
Primary/Secondary Fuel Capacity In Joule: tank or battery size, expressed in joules.
Primary/Secondary Vehicle Energy File: relative path to a CSV file describing fuel consumption.
Automation Level: levels from 1 to 5; all vehicles at level 3 and above affect traffic flow using a CACC model.

Optionally, users can also define an exhaustive list of all of the household vehicles to be simulated in BEAM in a "vehicles.csv" input file (Table 2-2), which is a useful way of defining the specific vehicle types owned by individual households as well as the initial state-of-charge distribution of the fleet.
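Since consumption is expressed in joules per meter and capacity in joules, the nominal range implied by a vehicle type row is simply their ratio. A tiny sketch, with made-up example values:

```python
def range_meters(fuel_capacity_joules, consumption_joules_per_meter):
    """Nominal range implied by a vehicle type's energy fields:
    stored energy divided by average per-meter consumption."""
    return fuel_capacity_joules / consumption_joules_per_meter

# Example: a hypothetical battery-electric vehicle type with a 60 kWh
# pack and 600 J/m average consumption (both values invented).
KWH_IN_JOULES = 3.6e6
ev_range_m = range_meters(60 * KWH_IN_JOULES, 600)  # meters
```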
If this vehicles file is not included, vehicles are assigned to households with probabilities defined in the configuration file. Transit vehicle types are assigned based on GTFS route type (e.g., bus, subway, rail), while agency-specific information about vehicle capacity and per-mile fuel use can be input, if available, using the "TransitVehicleTypesByRoute.csv" file, which allows users to specify, for example, which routes use higher- or lower-capacity buses or trains, or to assign vehicles with different fuel types and emissions characteristics to different routes.

On-Demand Fleets

On-demand fleets can either be defined explicitly or procedurally. When defining a fleet explicitly, the user uploads a .csv input containing one row for each on-demand vehicle. Each row contains a vehicle ID, a vehicle type, a starting location, a starting state of charge, and optionally a geofence beyond which the vehicle cannot operate. Currently, geofences can only be defined as circles with a predefined center and radius. Procedural input requires the user to specify a fleet size, defined as a ratio to the number of household vehicles defined in the input population; thus, entering a relative on-demand fleet size of 0.02 yields a scenario with one on-demand vehicle for every 50 household vehicles. These vehicles are initialized randomly into different TAZs with probabilities proportional to the number of home and work activities in each TAZ. The type of each vehicle is also chosen probabilistically, with probabilities as specified in the vehicleTypes.csv input file.

Parking and Charging Infrastructure

Parking and charging infrastructure are considered jointly in BEAM: each charging plug is associated with a single parking stall, but each parking stall does not necessarily have a plug. Parking infrastructure in BEAM is defined at the TAZ or road-link level, where each TAZ (or link) can have any number of parking stalls with any set of characteristics.
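The procedural initialization described above can be sketched as follows; the function name and its activity-weighted sampling are a simplified illustration of the behavior described in the text, not BEAM's code:

```python
import random

def initialize_fleet(relative_size, num_household_vehicles,
                     taz_activity_counts, seed=0):
    """Size a procedural on-demand fleet as a ratio of household vehicles,
    then place each vehicle in a TAZ with probability proportional to the
    number of home/work activities in that TAZ."""
    fleet_size = round(relative_size * num_household_vehicles)
    tazs = list(taz_activity_counts)
    weights = [taz_activity_counts[t] for t in tazs]
    rng = random.Random(seed)  # seeded for reproducibility
    return rng.choices(tazs, weights=weights, k=fleet_size)
```

With a relative size of 0.02 and 5,000 household vehicles, this yields the 1-per-50 fleet of 100 vehicles mentioned above, concentrated in the busier TAZs.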
A parking stall's characteristics include its type (residential, workplace, or public); its pricing scheme (hourly or fixed); its cost (per hour or fixed); its charging plug type and power (if any); coordinates (optional) for link-level resolution; the parking manager in charge of allocating its capacity, in the format MANAGER_TYPE(MANAGER_ID); and any time restrictions on its use, in the format VEHICLE_CATEGORY|HH:MM-HH:MM. When parking infrastructure is defined at the TAZ level, any number of parking stalls can be created in a TAZ with any set of characteristics. When a vehicle requests a stall within a TAZ, the parking manager creates a stall and assigns it to the user, with the distance between the requested destination and the parking stall's location inversely proportional to stall availability. As a result, parking requests in TAZs with substantial parking availability will lead to very short walking legs between the stall and the destination, but in heavily subscribed TAZs users may face long walks. When a vehicle leaves a stall, the stall is deleted and the unused parking availability in its TAZ is increased by one. When parking stalls are defined at the link level rather than the TAZ level, parking stalls are created upon the initialization of the simulation (rather than upon request) and remain persistent throughout the simulation. These persistent stalls are allocated across road links within their TAZ, where they can be chosen, reserved, and released by agents over the course of the simulation.
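The inverse relationship between stall availability and walk distance can be illustrated with a toy rule; the linear form and the 800 m cap below are assumptions for illustration, as BEAM's actual sampling is configurable:

```python
def stall_walk_distance(available, total, max_walk_meters=800.0):
    """Toy TAZ-level stall assignment: the more occupied the TAZ,
    the longer the walk between the assigned stall and the destination.
    An empty TAZ yields a zero-length walk leg; a full (or undefined)
    TAZ yields the maximum walk."""
    if total <= 0 or available <= 0:
        return max_walk_meters
    occupancy = 1.0 - available / total
    return occupancy * max_walk_meters
```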
Activities, Trips and Mode Choice

The fundamental purpose of AgentSim is to model the outcomes when agents attempt to execute their stored daily travel plans, taking into consideration agents' origins and desired destinations; time constraints; access to different travel modes; the cost and duration, including access/egress and wait time, of different travel modes; and the constraints imposed on the transportation system by all the other agents in the simulation. This process is quite complex, as it contains much of the modeling work that is typically done in MATSim's replanning module, as well as many additional new features that have been developed from the ground up rather than extended from MATSim, such as the parking choice and vehicle energy calculations. Two of the primary actors that mediate the actions of the different traveler agents in the simulation are the Scheduler and the Router, described in more detail below.

Scheduler

The Scheduler's purpose is to determine when each agent is scheduled to end an activity and then, when the simulation reaches that time, send a trigger to that agent telling it to begin its next trip. The Scheduler also handles timing and triggers for other agents, such as transit driver agents and ride-hail driver agents.

Router

The Router calculates routes between locations on the travel network, for two purposes. First, it generates travel times and costs using a skim lookup table, which is a matrix of the duration, distance, and cost of travel between each pair of origin and destination TAZs (see Section 2.6.3). Agents use the skim table to estimate the utility of different modal options when choosing their preferred travel mode; the ride hail manager uses the skim table to estimate the time for available ridehail vehicles to reach travelers' ride requests.
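Conceptually, a skim table is just a lookup keyed by origin TAZ, destination TAZ, and mode; a minimal sketch with invented values (real BEAM skims carry more fields, such as distance):

```python
# A toy skim table keyed by (origin TAZ, destination TAZ, mode).
# All numbers are invented for illustration.
SKIMS = {
    ("taz_1", "taz_2", "car"): {"duration_s": 900, "cost_usd": 4.50},
    ("taz_1", "taz_2", "walk_transit"): {"duration_s": 1800, "cost_usd": 2.50},
}

def lookup_skim(origin, destination, mode, default=None):
    """Look up expected duration/cost for an OD pair and mode."""
    return SKIMS.get((origin, destination, mode), default)
```

An agent weighing car against transit for taz_1 to taz_2 would compare these duration/cost entries; a missing key (an unobserved OD/mode combination) falls back to the default.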
Second, the Router generates detailed routes for vehicles, which can then be fed into PhysSim to estimate traffic congestion levels on individual links and the overall road network given those trips. In BEAM, person agents always travel in vehicles, and the trip between two activities can consist of legs associated with one or several unique vehicles. For the purposes of routing, a "body" is the vehicle type used for walk travel legs. For example, a personal vehicle trip can consist of an access walk leg from an activity location to the vehicle's parking location, a vehicle leg from the initial parking location to a parking location near the destination, and an egress walk leg from the parking location to the destination. Similarly, a transit trip includes a walk, personal vehicle/bike, ridehail, or shared bike leg to access and egress a transit station/stop (if a household's personal vehicle is used to access a transit station, it is not available to egress the station at the end of the transit trip). To build complete paths that consider multiple possible modes and a discrete set of potential access, trip, and egress vehicles, BEAM relies on the Conveyal R5 Routing Engine 3 , an open-source tool implemented in Java. For every available mode, R5 searches over all available vehicles and routes to determine one or several optimal or nearly optimal sequences of vehicle legs, as well as specific link-by-link routes for every leg, given a tradeoff between cost and travel time.

Walk

Walking is a default mode available to all agents on all trips. Walkers are assumed to follow a configurable fixed speed and are routed along a network of walkable links. Elevation can optionally be considered when routing any type of vehicle, including walking, by mapping the average gradient percent to each (or some) of the links in a CSV file indicated by the parameter "linkToGradePercentFilePath" in the BEAM configuration file.
Personal Vehicle

When constructing a route alternative, R5 takes as input the set of personal vehicles accessible to the traveler, including any unused personal vehicles belonging to the traveler's household. R5 then calculates two routes for the nearest available car: a walking access leg from the agent's current location to the car, and a driving leg from the car's location to the trip's destination. In addition, the traveler sends a request to the parking manager to estimate the cost and walk-access distance associated with parking at the trip's destination, given current parking availability. These components (the walk access distance, the driving time and toll cost, and the parking cost and egress distance) are all included in the agent's mode choice process. Because BEAM allocates personal vehicles to parking stalls at the beginning of the day, and most households are assumed to have dedicated parking stalls, BEAM assumes that most, but not all, walk legs from home to access a personal vehicle (either a light-duty vehicle or a personal bike) have a distance and duration of zero, as these vehicles are parked close enough to the home activity location. If an agent takes a personal vehicle to a location, the return trip must be made in that personal vehicle and not by another mode. If the car mode is chosen, the vehicle is reserved for the agent and becomes unavailable for other travelers. The agent then begins a walk leg to the car's location, boards the car, and begins a car trip to the destination. Once the agent comes within a configurable distance of the trip destination, the agent enters the parking choice process, sending another request to the parking manager, which returns a set of available parking stalls to the driver.
BEAM includes several parking manager types, including one that assumes ubiquitous parking, another operating at the link level where a specific parking stall is assigned, and a third that generates parking stall locations whose distance from the driver's destination is sampled based on the overall supply and demand of parking spaces at the TAZ level. This choice set of parking stalls can include stalls with different distances from the destination, different parking types (e.g., residential, workplace, or public), different prices, and different types of electric vehicle charging infrastructure. The agent chooses a parking stall through a configurable logit model that weighs monetary cost, walking access time, preference for home charging, and charging requirements.

Ride Hail

BEAM can simulate one or several ridehail fleets of human-driven vehicles, similar to Lyft or Uber, or one or several centrally managed fleets of autonomous vehicles, similar to Cruise or Waymo. A ride hail fleet gives travelers the option to request an on-demand vehicle to complete a trip. The user can request a solo trip, for which an empty vehicle will drive from its current location to the user's location, pick up the user, and take the user directly to a requested destination; or a user can request a pooled trip, where the matched vehicle could already have other riders, and any pooled user's trip can be diverted to pick up additional passengers. These solo and pooled options can be assigned different pricing models, and travelers can have configurable preferences and attitudes about them. To properly account for wait times, operational efficiency, energy consumption, and refueling/charging patterns, BEAM models the operation of these fleets in great detail.
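The configurable logit stall choice can be sketched as follows. The utility specification (parking cost plus monetized walk time only) and the parameter values are simplifications of BEAM's richer model, for illustration:

```python
import math

def stall_choice_probabilities(stalls, value_of_time_usd_per_hr,
                               walk_speed_m_s=1.4, scale=1.0):
    """Multinomial-logit probabilities over candidate parking stalls.

    Each stall is a dict with "cost_usd" and "walk_m" (walk-access
    distance in meters); utility = -(cost + monetized walk time)."""
    utilities = [
        -(s["cost_usd"]
          + value_of_time_usd_per_hr * s["walk_m"] / walk_speed_m_s / 3600.0)
        for s in stalls
    ]
    weights = [math.exp(u / scale) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]
```

A free stall close to the destination dominates an expensive distant one, but the logit form still leaves the inferior option a nonzero probability.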
During AgentSim, BEAM models the shift timing and location of each vehicle, the matching between customers and vehicles, the queuing at charging stations for autonomous ride-hail fleets, and the energy consumption and refueling behavior of on-demand vehicles. If the traveler chooses the pooled shared-ride option, a faster, sub-optimal version of the Alonso-Mora algorithm (Alonso-Mora, Samaranayake, Wallar, Frazzoli, & Rus, 2017) is used to pair that trip with other requests for pooled trips if possible (see Section 2.6.3 for more details). When they finish a trip in an area with low expected demand, ridehail vehicles follow one of various integrated repositioning heuristics to move to areas of higher anticipated demand. Assumptions regarding the timing and duration of ridehail driver shifts, and individual driver repositioning behavior between ride requests, can be refined to more accurately simulate the ridehail driver behavior of a single, or even multiple, ridehail providers.

Transit

When a traveler requests a transit route for a trip, BEAM's implementation of R5 uses a modified version (Pereira, Saraiva, Herszenhut, Braga, & Conway, 2021) of the McRAPTOR algorithm (Delling, Pajor, & Werneck, 2015) to quickly construct a set of possible Pareto-optimal and slightly suboptimal routes on transit vehicles from origin to destination, given a trip's departure time. The possible solutions are evaluated considering arrival time, total fare, and number of transfers. Each feasible transit route respects the transit schedules defined by agency GTFS directories, ensuring that the returned itineraries are assigned to specific transit vehicle trips with feasible transfers. The cost of each possible transit itinerary is calculated using a default set of fare rules that can be customized for an individual transit provider's unique system. In addition, BEAM can allow non-transit vehicles to be used as access and/or egress legs for transit trips.
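For intuition, solo matching can be approximated by a greedy nearest-vehicle rule. This is a deliberately simplified stand-in, not the Alonso-Mora-based matcher BEAM uses for pooled trips:

```python
def match_requests(requests, vehicles, eta):
    """Greedy matching: each request, in order, is assigned the available
    vehicle with the lowest estimated pickup time (via the caller-supplied
    eta function); matched vehicles become unavailable."""
    available = set(vehicles)
    matches = {}
    for req in requests:
        if not available:
            break  # no vehicles left; remaining requests go unserved
        best = min(available, key=lambda v: eta(v, req))
        matches[req] = best
        available.remove(best)
    return matches
```

With a skim-style ETA estimate plugged in for `eta`, each request is served by its (currently) closest idle vehicle; the real matcher instead optimizes assignments jointly.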
By default, travelers consider any available personal vehicles or bicycles for access or egress, as well as ride-hail and (if enabled) shared vehicles, bikes, or scooters. Possible transit itineraries are grouped into mode classifications by the access and egress modes used: WALK_TRANSIT, BIKE_TRANSIT, DRIVE_TRANSIT and RIDE_HAIL_TRANSIT. Within these sets of itineraries, a multinomial logit model is used to choose the best itinerary for each mode classification, in approximation of a nested logit process. This transit route choice process allows modelers to consider additional factors, such as a preference for rail transit over bus, in transit routing even if they are not directly considered by the Router. The best itinerary for each mode classification is then returned to the requesting traveler for use in the mode choice process. Individual transit vehicles and drivers are treated as BeamAgents, so that, at any given point on a vehicle's GTFS schedule between stops, the number of passenger agents on that vehicle can be calculated. If an agent tries to board a transit vehicle that is full of passengers, they will be denied access to that vehicle and must replan their trip, either by waiting for the next transit vehicle or by choosing a different mode. Currently BEAM uses a default maximum capacity for transit vehicles, by vehicle type (e.g., commuter rail, light rail, bus, etc.); however, BEAM can be configured to assign a different capacity by transit agency, vehicle type (e.g., number of subway, light rail, or commuter rail cars per train, or 40-foot vs. 60-foot bus), route, or even run (e.g., commute vs. non-commute hours), based on detailed information provided by local transit agencies. In BEAM, transit vehicles move between stations/stops based on the GTFS schedule.
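The capacity check at boarding reduces to a simple rule; the helper below is a hypothetical sketch, not BEAM's code:

```python
def try_board(vehicle_onboard, capacity, passenger):
    """Board a passenger if the transit vehicle has room.

    Returns False when the vehicle is full, in which case the traveler
    is denied boarding and must replan (wait for the next vehicle or
    re-enter mode choice)."""
    if len(vehicle_onboard) >= capacity:
        return False  # boarding denied
    vehicle_onboard.append(passenger)
    return True
```

Because each transit vehicle is an agent with a known passenger list at every point of its schedule, this per-stop check is all that is needed to model crowding-induced denials.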
Bus schedules are not directly affected by traffic congestion on the road network, although the GTFS schedule could be revised to reflect expected bus arrival and departure times based on heavy congestion.

CAV

Households in BEAM are modeled to maximize the utilization of any privately owned, as opposed to fleet operated, CAVs to meet their daily travel requirements. Each household follows a greedy heuristic to schedule the activity of any CAVs it owns in order to maximize the number of household members' trips that each CAV can cover. These household CAV schedules can involve empty trips between household members' activity locations, waiting for the next household member to depart from its activity to pick them up; when trips cannot be matched or congestion prevents a CAV from serving its scheduled trip, agents fall back on other modes to meet their travel needs.

Trip Mode Choice Model

BEAM gives users a rudimentary mode choice model that can be used by travelers in real time to choose the mode for upcoming trips based on the vehicles available to them and the real-time trip attributes returned to them by the router. Before an agent departs for a trip, the utility of each available itinerary i is calculated based on its cost c_i, duration t_i, and number of transfers n_i:

V_i = ASC_m − c_i − VOT_{a,t} · t_i + β_transfer · n_i

where ASC_m is the alternative-specific constant for mode m, VOT_{a,t} is the agent's value of time for that trip, and β_transfer is the utility parameter of a single transfer. A value of time can be defined for each agent in the scenario input; by default, BEAM infers the value of time for each mode based on the agent's household income. This agent-specific value of time can be configured to vary based on the mode of travel, and it can be tuned based on scenario requirements (for instance, it can vary by the level of crowding on a transit vehicle or the level of automation of a personal vehicle).
Rather than following the standard practice of fixing the standard deviation of the Gumbel-distributed error term at one, we instead fix the cost coefficient at 1, setting the value of one unit of utility equal to one dollar and requiring that the magnitude of the error term, ε, be specified as an input parameter. Thus, the probability of a user choosing alternative i is:

P_i = exp(V_i / ε) / Σ_j exp(V_j / ε)

Tour Mode Choice Model

Optionally, BEAM also allows for travelers to participate in a tour mode choice process that considers the characteristics of all trips in an upcoming sequence of trips when determining the mode of the first trip in the sequence. In particular, this process lets the traveler choose between a BIKE_BASED or CAR_BASED tour (when a private vehicle is taken for the first trip and must be returned back to the tour's origin location at the end of the tour) or a WALK_BASED tour (where no private household vehicle is used but all other modal options are available for each trip). Tours by default are defined as sequences of trips that start and end when a traveler is at home, but trips in the input plans file can optionally be labeled with a "tour_id" identifier, allowing for more complicated (potentially nested) tour structures. The tour mode choice algorithm itself operates as a nested multinomial logit, where agents choose a tour mode T based on the expected utility associated with choosing the best available trip mode for each trip, given the constraints set by the choice of tour mode:

V_{a,T} = Σ_{τ∈T} ε · ln Σ_{m∈M_T} exp(V_{τ,m} / ε)

The expected maximum utility for agent a choosing tour mode T is defined as the sum of that tour mode's expected maximum utility for each trip τ on that tour. For a given trip, the expected maximum utility is the logsum of the utility of that trip via all trip modes M_T that will be available given tour mode T. For instance, a CAR_BASED tour would only have the CAR mode available, but a WALK_BASED tour would typically have WALK, RIDE_HAIL, and WALK_TRANSIT available.
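As a minimal sketch of this formulation (with the cost coefficient fixed at 1 so utility is denominated in dollars, and the error scale ε supplied as an input), the choice probability and logsum calculations can be written as follows; the example utilities are invented for illustration.

```python
import math

def mnl_probabilities(utilities, epsilon):
    """Choice probabilities when utility is in dollars and the Gumbel
    error scale epsilon is a model input (cost coefficient fixed at 1)."""
    exps = {alt: math.exp(v / epsilon) for alt, v in utilities.items()}
    total = sum(exps.values())
    return {alt: e / total for alt, e in exps.items()}

def logsum(utilities, epsilon):
    """Expected maximum utility over the available alternatives."""
    return epsilon * math.log(sum(math.exp(v / epsilon) for v in utilities.values()))

# Illustrative trip utilities in dollars (negative: cost plus time disutility)
trip_utils = {"CAR": -8.0, "WALK_TRANSIT": -9.5, "RIDE_HAIL": -14.0}
probs = mnl_probabilities(trip_utils, epsilon=2.0)
ev = logsum(trip_utils, epsilon=2.0)
```

Summing such per-trip logsums over all trips on a tour gives the tour-mode expected maximum utility described above.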
Note that if the traveler has access to a personal car at the beginning of the tour, the DRIVE_TRANSIT mode will be available for only the first and last trips in a WALK_BASED tour (similarly for bikes and BIKE_TRANSIT), and if shared vehicles are available, BIKE or CAR modes can be available for WALK_BASED tours as well. The tour mode is chosen based on a multinomial logit with respect to the tour mode utilities, and the choice of tour mode becomes a constraint on the available trip modes for each trip on the tour. To avoid overwhelming the router with routing requests, the utilities of each trip during tour mode choice are filled in based on the TAZ skims (see Section 2.6.3 below). To preserve backwards compatibility, the tour mode choice capability is currently maintained in a separate branch of BEAM (with both branches receiving regular updates). In future releases of BEAM these two branches will be merged into a single branch with the ability to turn tour mode choice on or off.

Parking/Charging Choice Model

BeamAgents that are in a state requiring them to park a personal vehicle can request a parking stall from the Actor managing the parking infrastructure network. BeamAgents driving a plug-in battery (BEV) or hybrid (PHEV) electric vehicle send an inquiry to the Actor responsible for the parking and charging infrastructure. The received parking inquiry goes through the multinomial logit parking/charging choice model, which samples different types of parking infrastructure and weighs each of them based on the BeamAgent's predefined sensitivities towards parking cost, distance from destination, range anxiety (for electric vehicles), preference for residential parking, and detour distance for en-route charging. These sensitivities are input parameters to BEAM and can be tweaked to calibrate the electric load by charger type and the charging behavior against observed data.
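A minimal sketch of such a sampled multinomial logit parking/charging choice follows. The stall attributes, sensitivity values, and function names are invented for illustration; BEAM's actual sensitivities are configurable inputs.

```python
import math
import random

random.seed(42)

# Hypothetical sampled parking alternatives: (label, price $, walk meters, has_charger)
stalls = [
    ("public_lot", 6.0, 250, False),
    ("on_street", 2.0, 450, False),
    ("charging_lot", 8.0, 300, True),
]

def stall_utility(price, walk_m, has_charger, needs_charge):
    # Illustrative sensitivities to cost and walk distance from the destination.
    u = -1.0 * price - 0.004 * walk_m
    if needs_charge and has_charger:
        u += 10.0  # range-anxiety relief for an EV that needs to charge
    return u

def choose_stall(needs_charge):
    # Multinomial logit draw over the sampled alternatives
    weights = [math.exp(stall_utility(p, w, c, needs_charge)) for _, p, w, c in stalls]
    return random.choices(stalls, weights=weights)[0][0]

print(choose_stall(needs_charge=True))
```

With these invented weights, an EV that needs to charge almost always selects the stall with a charger despite its higher price and walk distance.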
Ride-hail Fleet Management

The central manager for the ridehail module in BEAM is the Actor RideHailManager, which is responsible for storing data and skims (more details about skims in Section 2.6.3) and keeping track of the state of the entire ridehail fleet; it creates the ridehail driver agents and their vehicles, and initializes ridehail vehicles' remaining range (state of charge for EVs), initial locations, and driving shift timing and durations. It manages the routing requests and responses, charging decisions using the module DefaultRideHailDepotParkingManager, repositioning strategies with the module RideHailRepositioning, and the different customer matching algorithms with RideHailMatching. The RideHailManager also calculates fares, handles customer inquiries and reservations, and tracks passenger vs. empty (referred to as deadhead) VMT, idle vehicles, and stuck agents. Inquiries and reservations are two different types of customer requests for ridehail. An agent uses an inquiry to evaluate cost and wait/travel time for ridehail versus other modes, as a basis for their initial mode choice. After an agent has decided to use ridehail, they send a reservation request to be picked up by a ridehail vehicle. If a ride-hail passenger is unable to proceed to their next planned activity, they are considered "stuck" at the current activity and removed from the simulation to avoid stalling the other agents. A large number of stuck agents is usually a sign of a bug in the computer code, a workflow problem, or unexpected inputs. RideHailAgent, which is a BeamAgent representing a ridehail vehicle, can be of two different types: with a human driver, or an autonomous vehicle in the case of centrally-managed autonomous fleets. There are two main differences in the workflow between these two types of agents.
Human drivers behave exactly as any Person Agent in the BEAM simulation; they are initially placed according to expected (or observed) demand; after completing a requested ride they either park or relocate to an area with expected future demand to await their next ride request; and they decide when and where to refuel or charge their vehicles when necessary. At the end of their shift, they are removed from the simulation (that is, their commute to start or end their driving shift is not included). In contrast, autonomous vehicles can be dispatched, removed, and relocated between ride requests, based on expected or observed demand, and their charging decisions are optimized by the RideHailManager, which dispatches them to specific depots for refueling/charging. BEAM can simulate several independent ridehail service providers simultaneously, with human drivers vs. autonomous vehicles, different pricing structures, etc. The input parameters for customizing each of the ridehail services (such as defaultCostPerMile, pooledCostPerMile and rideHailManager.radiusInMeters) can be found in the BEAM configuration file under beam.agentsim.agents.rideHail. Currently the RideHailMatching treats solo trips and pooled trips as two separate processes. To pool customers, BEAM encompasses different matching algorithms that are based on Alonso-Mora but have different tradeoffs in terms of runtime and level of optimality of the results. These matching algorithms can be calibrated using three main input parameters: "maxWaitingTimeInSec", "maxExcessRideTime" and "maxRequestsPerVehicle". The first two parameters put constraints on the maximum waiting time and the maximum acceptable detour time from a direct trip with a solo passenger. The last parameter speeds up the execution of the algorithm by reducing the solution space to a smaller chosen set of requests to match with a ridehail vehicle.
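The rudimentary "closest available vehicle" assignment used for solo requests (described further below) can be sketched as follows. MAX_WAIT_SEC plays the role of a maxWaitingTimeInSec-style constraint; the planar geometry, speed assumption, and names are simplifications for this sketch.

```python
import math

MAX_WAIT_SEC = 600          # analogous to a maxWaitingTimeInSec constraint
AVG_SPEED_M_PER_S = 8.0     # assumed pickup travel speed

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def match_solo(requests, idle_vehicles):
    """Greedy nearest-vehicle assignment for solo requests (run after pooling)."""
    assignments = {}
    available = dict(idle_vehicles)  # vehicle_id -> (x, y); copy so inputs are untouched
    for req_id, origin in requests.items():
        best, best_d = None, float("inf")
        for veh_id, loc in available.items():
            d = dist(origin, loc)
            # Skip vehicles whose estimated pickup time exceeds the wait limit
            if d < best_d and d / AVG_SPEED_M_PER_S <= MAX_WAIT_SEC:
                best, best_d = veh_id, d
        if best is not None:
            assignments[req_id] = best
            del available[best]   # each vehicle serves at most one solo request
    return assignments

requests = {"r1": (0, 0), "r2": (1000, 1000)}
vehicles = {"v1": (100, 0), "v2": (900, 1100), "v3": (9000, 9000)}
print(match_solo(requests, vehicles))
```

Requests that cannot be served within the wait limit simply remain unassigned, which is one way a customer can end up replanning their trip.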
Currently three ridehail matching algorithms can be utilized in BEAM:
• "MIP_ASSIGNMENT", which offers optimal vehicle-customer matching, but its runtime grows exponentially, rendering it impractical for very large simulations.
• "ASYNC_GREEDY_ASSIGNMENT", which is a greedy version of the Alonso-Mora algorithm with parallelization of certain steps, and is the closest to the original algorithm (i.e., "MIP_ASSIGNMENT").
• and "VEHICLE_CENTRIC_MATCHING", which is significantly different from the Alonso-Mora algorithm; while it maintains similarities in terms of how the graph of matches is created, it is the fastest pooling algorithm in BEAM, as it parallelizes the matching process of every vehicle independently of the others.
The assignment of ridehail vehicles to solo ridehail trip requests happens after all customers requesting a pooled ridehail trip have been matched; a rudimentary approach is used to match the closest available vehicle to the customer requesting a solo ridehail trip. This approach favors pooling over solo requests by first matching vehicles for pooling requests before matching solo requests. In addition to the matching, BEAM also comes with different repositioning algorithms under RideHailRepositioning, which contains several repositioning strategies; the user can select among them and adjust their sensitivities using input parameters under "rideHail.repositioningManager", such as:
• "DEFAULT_REPOSITIONING_MANAGER", which does not reposition the vehicles (that is, ridehail vehicles park after their last completed ride and await a subsequent ride request).
• "DEMAND_FOLLOWING_REPOSITIONING_MANAGER", which identifies the imbalance between simulated demand and supply, and redistributes idle vehicles to locations where ridehail demand is higher than supply.
• "REPOSITIONING_LOW_WAITING_TIMES", which relocates vehicles to locations where simulated waiting times are longer.
• and "INVERSE_SQUARE_DISTANCE_REPOSITIONING_FACTOR", which serves the same purpose as the demand-following algorithm, but instead of relying on clustering, this strategy is sensitive to the inverse square law of the distance between simulated supply and demand in deciding where to relocate the vehicles. The advantages of this algorithm are its faster runtime and, in particular, a significant reduction in the number of parameters needed to calibrate repositioning behavior.

Vehicle Sharing

The vehicle sharing module in BEAM is free-floating, meaning that there are no fixed vehicle hubs; this can be used to simulate a one-way shared vehicle service, such as GIG Carshare in the Bay Area, which allows vehicles to be parked in any legal parking space, or dockless bike/escooter sharing. Any limitations on where vehicles can be picked up or dropped off are handled through parking limitations; for instance, for a docked shared bike system, the only parking spots available to the bikes are at docking stations, or for a round-trip vehicle sharing service (such as GetAround or ZipCar), vehicles must be returned to the location where they were picked up. The module is generic enough that it can handle all kinds of shared vehicles, including round-trip and one-way carsharing, and docked and dockless shared bikes and scooters, and can simulate multiple independent vehicle sharing services simultaneously. At initialization the vehicles are parked at stalls or at charging points depending on their powertrains and availability of stalls. The module offers three types of vehicle sharing strategies that the user can select:
• fixed-non-reserving-fleet-by-TAZ: The initial locations of vehicles are either provided as input to BEAM or distributed over the zoning level adopted by the simulation, such as TAZs or CBGs.
• fixed-non-reserving: The initial locations of vehicles are distributed based on the distribution of home locations in the population.
• inexhaustible-reserving: offers an unlimited supply of vehicles, which can be used to estimate the maximum potential demand for vehicle sharing, as well as to debug the vehicle sharing module in BEAM.
For docked bikeshare programs, bikes are initially located in docks at actual docking station locations, and can only be returned to a docking station that has an unoccupied dock. For dockless micromobility programs, bikes and scooters can be left as close to the agent's destination as possible.

JDEQSim

BEAM utilizes the traffic network model from MATSim (called JDEQSim), an event-driven queue-based microsimulation of traffic flow where, instead of advancing in time steps, links are treated as finite-capacity queues. Vehicles are transmitted from one link to a subsequent link, and the transmission is treated as a discrete event, with the time between successive transmission events for a vehicle determined by the number of vehicles and capacity of each link. This framework allows for the efficient simulation of traffic flow on a large network while still capturing effects such as spillover and gridlock that are difficult to capture with simpler link-based methods (Waraich, Charypar, Balmer, & Axhausen, 2009).

Modifications to JDEQSim

Under different simulation scenarios, the roadway capacity of specific links can be increased, to simulate different portions of vehicles with cooperative adaptive cruise control (CACC, level 3 or above) on the road, or reduced, such as by removing a general travel lane and rededicating it to only transit vehicles. Since the capacity adjustment in BEAM is based on a polynomial regression a la (Liu, Kan, Shladover, Lu, & Ferlis, 2018), interactions between human-driven and autonomous CACC-enabled vehicles can be simulated. Figure 2-2 shows a comparison between the original CACC model and the simulation of CACC in BEAM. BEAM also allows developers to add other features that can impact the travel time on a link.
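The queue-based idea can be illustrated with a toy two-link corridor, where each link is a finite-capacity FIFO queue and link-to-link transmissions are discrete events. This is a conceptual sketch only, not JDEQSim's implementation; link parameters and the retry interval are invented, and FIFO ordering is assumed because traversal times are identical.

```python
import heapq
from collections import deque

# Toy two-link corridor: link id -> (free-flow traversal sec, storage capacity in vehicles)
links = {"A": (10.0, 2), "B": (10.0, 2)}
next_link = {"A": "B", "B": None}

queues = {lid: deque() for lid in links}   # vehicles currently on each link
events = []                                # (time, vehicle, link) transmission events
exit_times = []                            # (vehicle, time) corridor exits
blocked = []                               # vehicles denied entry at the corridor start

def try_enter(t, veh, lid):
    """Enter a link if it has storage space; schedule the transmission event."""
    tt, cap = links[lid]
    if len(queues[lid]) < cap:
        queues[lid].append(veh)
        heapq.heappush(events, (t + tt, veh, lid))
        return True
    return False

# Three vehicles want to enter link A at t = 0, 1, 2; capacity delays the third.
for t, veh in [(0.0, "car1"), (1.0, "car2"), (2.0, "car3")]:
    if not try_enter(t, veh, "A"):
        blocked.append(veh)

while events:
    t, veh, lid = heapq.heappop(events)
    nxt = next_link[lid]
    if nxt is None:
        queues[lid].popleft()              # vehicle leaves the corridor
        exit_times.append((veh, t))
    elif try_enter(t, veh, nxt):
        queues[lid].popleft()              # transmitted to the downstream link
    else:
        heapq.heappush(events, (t + 1.0, veh, lid))  # downstream full: spillback, retry
    # Space may have opened at the corridor entrance for blocked vehicles
    while blocked and try_enter(t, blocked[0], "A"):
        blocked.pop(0)

print(exit_times)
```

Here car3 cannot enter link A until car1 clears it at t = 10, so its exit is delayed, illustrating how queue capacity produces congestion and spillover without any fixed time step.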
For example, BEAM has a (currently unused) PickUpDropOffHolder class that counts the number of TNC vehicle boarding or alighting events on each link at different time periods and gives JDEQSim access to this value. Thus, with a user-defined empirical relationship, BEAM is capable of capturing the impact of TNC pickup/dropoff behavior on traffic congestion. This capability could easily be extended to account for the impacts of parking (or double parking), freight loading, or transit operations on vehicle flow as well.

Clearing Modes and Routes

As is typical for MATSim simulations, after an iteration a portion of agents are randomly selected to modify their plans before the next iteration of AgentSim. In cases where agent mode choice is treated as fixed (for instance, when activity plans and mode choices are imported as exogenous from a travel demand model), this replanning step means that some portion of agents will need to re-do route choice, with the rest of the agents choosing a favorably performing plan from their library of past plans and taking the exact routes defined there. In cases where travel behavior is not treated as fixed, some agents are allowed to clear their chosen modes and/or discretionary activities (see Section 2.5.2) from their plans as well, re-making those choices in response to updated travel times and skims. Letting agents replan their route, mode, and/or activity choices allows the system to move towards user equilibrium, but limiting the number of agents who can replan in a given iteration reduces the risk of the system oscillating between non-optimal solutions.
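The replanning selection step can be sketched as follows; the replanning fraction and agent identifiers here are illustrative, not BEAM defaults.

```python
import random

random.seed(7)

REPLAN_FRACTION = 0.2   # share of agents allowed to replan each iteration (illustrative)

agents = [f"agent{i}" for i in range(10)]

def select_replanners(agents, fraction):
    """Pick a random subset to clear routes/modes before the next iteration;
    everyone else re-executes a well-performing plan from their plan library."""
    k = round(len(agents) * fraction)
    return set(random.sample(agents, k))

replanners = select_replanners(agents, REPLAN_FRACTION)
```

Keeping the fraction well below 1 is what damps the oscillation between non-optimal solutions described above.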
When an agent's route is cleared, in the next iteration they make the same mode choice for each trip but, for any personal vehicle trips in their plans, they choose an updated least-cost (combining travel time and cost) route calculated given the current iteration's link speed table (which itself has been updated by the traffic network simulation based on the path traversals from the previous iterations). This updating of routes, along with the ability to add random noise to the link speeds used for routing, is intended to spread travelers across many possible routes from an origin to a destination with similar travel times, rather than assigning all travelers between two points to the same optimal route. After agents in personal vehicles reroute their trips, all agents with cleared modes are allowed to make a new mode choice for every trip, based on updated traffic network speeds for auto-based modes, as well as any additional information (such as parking availability, transit vehicle crowding, and ridehail wait time) that may have changed from the previous iteration. This updating of modes is also intended to avoid having the system oscillate between non-optimal solutions; for example, one where drive mode share is very low (and speeds are reported as fast for the next iteration) and one where drive mode share is high (and speeds are reported as slow for the next iteration).

Discretionary Activity Choice

BEAM also includes a rudimentary destination choice model that allows for agent activity plans, including the location, type, and timing of some activities, to change in response to system conditions. For the purpose of these simulations, BEAM divides activities into mandatory (work and school) activities, with fixed exogenous locations and start/end times, and discretionary (all other purposes, including leisure and shopping) activities, which have the option of being updated endogenously.
When the discretionary activity choice model is turned on, some agents are selected each iteration to clear all discretionary activities from their plans, keeping only mandatory activities. When the next iteration of AgentSim starts, these agents will be given the option of adding new discretionary activities to their chosen plan. For every mandatory activity, a single blank subtour (an outbound trip, an activity, and a return trip, with trip destination and timing left blank) is added to the plans. For agents without any mandatory trips, three blank subtours are added (one in the morning, one mid-day, and one in the afternoon, returning home in between each), ensuring that every agent can make at least three discretionary subtours. The presence of these blank subtours in an agent's chosen plan signals that agent to begin their discretionary activity choice process during the next iteration. When the person agents are loaded into AgentSim for the next iteration, details on any blank subtours are filled in. This discretionary activity choice process happens probabilistically based on the multimodal accessibility of potential trip destinations. The various choices involved in determining a discretionary subtour are modeled in a nesting structure, with the accessibility of each mode informing the favorability of each destination, and the favorability of each destination informing the decision of whether to take the trip at all. Allowing agents to replan their discretionary activities several times allows them to try different activity types and durations and be more likely to select successful plans. A schematic for this nested decision structure is shown in Figure 2-3. Filling in the details of subtours involves defining an activity type, activity timing and duration, and destination choice set for each blank discretionary subtour.
The start time of the discretionary activity for each blank subtour is chosen from between the start and end time of the surrounding mandatory activity (padded by 30 minutes on either end), with the relative probability of each activity type/start hour combination defined by the input "activityIntercepts" file. The process of generating a subtour activity type and start time is defined in pseudocode in Figure 2-4. Given the chosen time bin, the precise starting time (in seconds from midnight) is sampled uniformly from the chosen time bin. Given the chosen activity type, the activity duration is sampled from an exponential distribution, with each activity's mean duration defined as an input in the activityParams input file. The destination choice set for each discretionary tour is determined by sampling uniformly from all TAZs within a given distance of the tour origin, with both the number of potential destination TAZs and the maximum sampling distance defined in the scenario configuration file. This sampling is done with a fixed seed so that the discretionary activity location choice set for each potential discretionary tour is the same for a given agent from one iteration to the next. At this point, the agent has chosen the activity type and activity start time for every blank subtour, and BEAM has generated a subsample of TAZs offering possible locations for this activity. The destination is chosen based on the expected maximum utility of traveling to and from that destination via each available mode. These utilities are estimated by looking up travel times and costs from BEAM's origin/destination skims (see Section 2.6.3) and then converting those values to a utility value based on the agent's utility functions; a penalty term is added if the return trip cannot be completed before the agent's next trip is scheduled to begin.
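The timing sampling described above can be sketched as follows, with invented intercept weights and mean duration; the real values come from the activityIntercepts and activityParams inputs.

```python
import random

random.seed(1)

# Hypothetical activityIntercepts-style weights: relative probability of
# starting a "shopping" subtour in each hour bin of the available window.
start_hour_weights = {10: 1.0, 11: 2.0, 12: 4.0, 13: 2.0}
MEAN_DURATION_HR = 1.5   # illustrative activityParams-style mean duration

def sample_subtour_timing():
    hours = list(start_hour_weights)
    weights = [start_hour_weights[h] for h in hours]
    hour = random.choices(hours, weights=weights)[0]     # weighted bin choice
    start_sec = hour * 3600 + random.uniform(0, 3600)    # uniform within the bin
    duration_sec = random.expovariate(1.0 / (MEAN_DURATION_HR * 3600))
    return start_sec, duration_sec

start, dur = sample_subtour_timing()
```

The destination choice set would then be drawn (with a fixed seed) from TAZs near the tour origin, independently of this timing draw.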
The expected maximum utility for each potential tour is calculated slightly differently depending on whether tour mode choice is enabled in BEAM. If tour mode choice is enabled, the expected maximum utility for a given tour destination is calculated as the logsum of the tour mode utilities as defined in Section 2.3.4. If tour mode choice is not enabled, the expected maximum utility of each destination is calculated as the logsum of the utility associated with using each mode available to the household. In either case, the inclusive value of the logsum of tour mode utilities (concretely, the benefit of having multiple good modal alternatives to a given destination) is scaled by a configurable parameter. The expected maximum utility of making a subtour at all is calculated as the logsum of these destination utilities, with a separate configurable parameter, plus the estimated utility of participating in the activity. This participation utility is defined as a constant plus an activity-specific parameter multiplied by the activity duration. This constant is defined in the scenario configuration file, and the activity-specific value of time parameters are defined in the activityParams input file. The actual nested choices are then calculated for each subtour. The decision whether to take a trip at all is a binary logit comparing the expected maximum utility of taking a tour to the utility of not taking a tour. The utility of taking a tour is determined by adding a constant β_{0,act} to the expected utility of participating in the activity (the activity's time utility coefficient β_{t,act} multiplied by its duration τ_act) plus the expected utility associated with traveling to and from the activity, V_trav. By convention, the utility associated with not taking a trip is assumed to be zero.
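The participation logit and destination logsum described above can be sketched as follows. The parameter values and utilities are invented for illustration, and the names (BETA_0, BETA_T, LAMBDA_DEST) are ad hoc rather than BEAM's.

```python
import math

# Illustrative parameters (in BEAM these come from the scenario configuration
# and activityParams inputs)
BETA_0 = 1.0            # activity participation constant
BETA_T = 0.8            # utility per hour of activity duration
LAMBDA_DEST = 1.0       # destination-nest scale

def destination_logsum(mode_logsums_by_dest, lam):
    """Expected max utility over sampled destinations, each entry itself
    already being a mode logsum for travel to and from that destination."""
    return lam * math.log(sum(math.exp(v / lam) for v in mode_logsums_by_dest))

def p_take_tour(duration_hr, travel_logsum):
    v_tour = BETA_0 + BETA_T * duration_hr + travel_logsum
    # Binary logit against the no-tour alternative (utility fixed at zero)
    return math.exp(v_tour) / (math.exp(v_tour) + 1.0)

dest_utils = [-3.0, -2.5, -4.0]              # per-destination mode logsums (dollars)
v_travel = destination_logsum(dest_utils, LAMBDA_DEST)
p = p_take_tour(duration_hr=2.0, travel_logsum=v_travel)
```

If the tour is taken, the destination itself is then chosen with a logit over the same per-destination utilities.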
The expected utility associated with travel, V_trav, is taken for all modes over a sample of possible destinations and is calculated using a logsum:

V_trav = λ_nest · ln Σ_{d∈dests} exp(V_d / λ_nest)

where λ_nest is the nest correlation and V_d is the expected maximum utility of travel to and from destination d across all modes. This destination-specific utility is calculated as the logsum of the utilities of travel to and from a given destination d for each mode m, V_{d,m}:

V_d = λ_mode · ln Σ_{m∈modes} exp(V_{d,m} / λ_mode)

These mode- and destination-specific utilities are calculated using the same utility structure as the mode choice model, using expected travel times, costs, and transfers taken from the skims (see Section 2.6.3). If the alternative to take a tour is selected, the destination is chosen based on the utilities associated with the different destination alternatives. These choices (discretionary activity participation, location, and timing) are added to the agent's plans, and the simulation is then started. The mode for each trip in the plans is determined on the fly by the typical ChoosesMode activity.

Model Outputs

When BEAM is run it produces disaggregate outputs sufficient to reproduce every action taken by each agent in the simulation, as well as several aggregated outputs that can be used to calculate relevant performance metrics and to serve as inputs for additional BEAM runs or for other models.

Events

The most disaggregated outputs returned by BEAM are in the form of an events file. BEAM is an event-based simulation where, rather than advancing in discrete time steps, the simulation allows agents to change state at any time, with a scheduler enforcing the sequencing of these state changes. Each of these state changes is recorded in the form of a simulation event, which can be collected and output at the end of a simulation.
BEAM allows for many different types of events associated with different behaviors, adding attributes to the core set of MATSim events (such as path traversals and events associated with agents entering and exiting vehicles) as well as defining new ones, including those associated with parking and refueling. Table 2-3 summarizes the outputs included in the events file. These events can be analyzed to give aggregate, system-wide outcome measures. For instance, the distance field of path traversal events can be summed in order to give a measurement of the total vehicle miles traveled in the simulation, and if weighted by the number of passengers present it can instead give the total passenger miles traveled throughout the system. Outputs can also be aggregated across individual vehicles as they travel through specific TAZs and census tracts/block groups, in order to evaluate the spatial and temporal concentration of criteria pollutant emissions relative to the populations exposed to those emissions. These measures can be calculated separately for different modes, vehicle types, time periods, and demographic characteristics. Mode choice events can be aggregated to give the mode split across all trips or disaggregated by location, time of day, or trip distance.

Linkstats

The routable road network in BEAM is represented by a set of connected links with attributes governing their congestion characteristics (such as free flow speed and capacity, and optionally other parameters of the Bureau of Public Roads volume delay function) and the modes allowed on them. During an AgentSim iteration, the router uses a static representation of the travel speed in order to produce shortest path routes for potential trips on the network. This static representation of the routable street network allows for speeds to vary across a predefined set of time periods, defining a travel time for each link for each time period of the day.
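This time-binned, static travel-time representation can be sketched as a simple lookup table; the link IDs, bin size, and travel times below are invented for illustration.

```python
BIN_SEC = 3600   # one-hour time bins (bin width is configurable in practice)

# Hypothetical linkstats-style table: link id -> {hour bin: congested travel time (s)}
link_travel_time = {
    101: {7: 45.0, 8: 80.0, 9: 60.0},   # link 101 slows during the 8am peak
    102: {7: 30.0, 8: 35.0, 9: 30.0},
}

def travel_time(link_id, depart_sec, default=20.0):
    """Static lookup used for routing: time varies by bin, not within a bin."""
    hour_bin = int(depart_sec // BIN_SEC)
    return link_travel_time.get(link_id, {}).get(hour_bin, default)

# Route cost for a path over links 101 then 102, departing at 8:15am
t = 8 * 3600 + 15 * 60
total = 0.0
for lid in (101, 102):
    total += travel_time(lid, t + total)
print(total)
```

The router reads from such a table during an iteration; the table itself is only refreshed by PhysSim between iterations.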
This representation is static in that the speeds within a time period remain fixed during an AgentSim iteration regardless of agents' decisions. Instead, link speeds are updated by PhysSim after an AgentSim iteration completes, and the timestep-averaged speeds produced by PhysSim for each link serve as the static input travel speeds for the next iteration of AgentSim. This static representation of time-varying link travel times is output by PhysSim as a linkstats file. This linkstats file contains the congested travel time of each link for each time period used by the router for the next AgentSim iteration, as well as other information such as the link volume (the number of vehicles exiting the link during the time window) for both heavy-duty and light-duty vehicles. It is important to note that individual and aggregate travel times can differ between the AgentSim travel times reported in the events file and the PhysSim travel times, as the AgentSim travel times reflect congestion patterns given the previous iteration's travel demand. In a simulation where a state approaching dynamic user equilibrium has been reached these differences will be small, but in a simulation that has not reached relaxation or has extreme pockets of unresolved congestion the differences between these measures can be substantial. Indeed, one useful measure of the extent to which a simulation has run for enough iterations is whether the gap between AgentSim and PhysSim speeds (or the difference between successive PhysSim speeds) has stabilized at a small value.

Skims

Some components of BEAM (such as the pre-day activity planning module) and some linked models (such as ActivitySim) require a lightweight and fast way of accessing the expected travel time and other characteristics of a potential trip but do not require the level of detail produced by the router.
These approximate trip characteristics are provided by the skims, a set of lookup tables indexed by trip mode, origin, destination, and time of day that estimate trip travel time, cost, and other relevant variables based on an average of those values for similar trips in the previous one or several iterations. Because the skimmer is structured as a lookup table, it can be accessed substantially more quickly than the time taken to calculate a new route, but because the geographic resolution is at the TAZ level, the results are less accurate. At the end of each AgentSim iteration, the travel time, cost, and other relevant characteristics of all trips sharing the same origin TAZ, destination TAZ, mode, and start time bin are averaged and saved into the skimmer for the next iteration. This aggregated lookup table can also be output at the end of an iteration as a long .csv file, allowing skims to be used as a "warm start" for a subsequent BEAM run or to inform a travel demand model such as ActivitySim. BEAM also provides the ability to output trips that are indexed by a single (origin or destination) TAZ rather than an origin/destination pair. In particular, BEAM allows for the generation of separate parking skims and ridehail skims that further aggregate across all trips of a given mode in a given time period arriving to (departing from) a given TAZ regardless of trip origin (destination). The parking skims summarize average parking cost and inbound walk distance for a given destination TAZ at a given time of day, and the ridehail skims summarize average wait time, average per-mile cost, and the portion of requests that cannot be successfully matched with a ridehail vehicle for a given origin TAZ and time of day.

Debugging plots and summary tables

To ease quick initial analysis of model performance, BEAM automatically produces a directory of figures and tables showing a rough picture of the simulation outcomes.
These plots and tables tend not to be of high enough quality for publication directly, but give users an initial view into many of the high-level metrics that need to be verified to confirm that the model is operating as expected. These products include summaries of mode split, activity participation, parking and charging activity, energy use, ridehail service quality, road network speed, and model runtime. Figure 2-5 is an example of a figure produced by BEAM showing the number of completed trips by mode for each iteration of a simulation of 200,000 agents in the New York City metropolitan area. Note that the total number of trips taken sharply drops and then gradually increases after the first iteration as agents naively attempt many discretionary tours, experience poor results, and then gradually learn more favorable activity patterns. Table 2-4 is an example table produced by BEAM for the same New York run, showing several selected performance metrics and how they change from iteration to iteration.

Population Synthesis

BEAM has been developed to read in population files generated by SynthPop (Ye, Konduri, Pendyala, Sana, & Waddell, 2009), an open-source Python implementation of PopGen that generates a synthetic population for a given region. SynthPop relies on data from the U.S. American Community Survey (ACS) and Census Public Use Microdata Sample (PUMS) that can be directly downloaded using U.S. Census APIs. SynthPop defines a synthetic population of individuals, including demographic characteristics, and assigns them to households, which can be given attributes such as number of vehicles owned and household income. The final set of sociodemographic variables used can be chosen from those available in ACS and PUMS to fit those needed by the behavioral models used in BEAM and elsewhere.

Pre-day Planning

In order to run its internal discretionary activity choice models, BEAM relies on agents already having mandatory trips defined.
This can be accomplished for a SynthPop population using a lightweight tool developed at UC Berkeley called ActivitySynth (UrbanSim, 2023), which implements a model of workplace location and departure time choice estimated on California Household Travel Survey data. This tool outputs plans for all workers in the synthetic population that give the location and start/end times of the primary work activity. Mandatory trips serve as the starting point for BEAM's internal activity modeling described in the section Discretionary Activity Choice. If a fully featured activity-based model is required, BEAM can assign agents the plans developed by the ActivitySim model. ActivitySim allows for the generation of plans using rich behavioral logic, including joint tours, mandatory trips to schools, and intermediate stops. An implementation for the San Francisco Bay Area that has been modified to directly produce output plans in a format readable by BEAM has been developed at UC Berkeley (Galli et al., 2009). When BEAM loads ActivitySim plans, it can be configured in one of three ways: to keep ActivitySim's mode choices for all simulated trips (thus only simulating downstream choices, such as route, parking, and charging choices); to keep each agent's planned activities but rely on BEAM's internal models for mode choice; or to start with ActivitySim's discretionary activity and mode choices and subsequently update both during the replanning phase. The relative benefits of each method depend on the degree of behavioral sensitivity a user intends to simulate. For studies focusing on charging behavior or the operations of small on-demand fleets, for instance, it is likely appropriate to treat activity and mode choices coming from ActivitySim as fixed.
For studies where small but systematic changes to mode choice are expected (for instance, increasing the size or changing the price of an on-demand fleet or transit service), it is likely appropriate to allow BEAM to update agent mode choices while retaining discretionary activity choices. For a situation involving substantial changes to travel behavior (such as the introduction of an entirely new travel mode or the onset of a global pandemic), or in a situation where outputs from an activity-based travel demand model are not available at all, it likely makes sense to use BEAM's discretionary activity model.

Energy Consumption

Vehicles in BEAM are initialized with a primary fuel type, energy consumption per unit distance, and fuel capacity. Vehicles can be assumed to begin an iteration fully fueled, or their starting fuel level can be drawn from a uniform distribution with configurable parameters. Optionally, a vehicle can also be given a secondary fuel type, energy consumption rate, and fuel capacity. In these cases, such as for plug-in hybrid electric powertrains, vehicles are assumed to consume their primary fuel type whenever any is available, and to switch to the secondary fuel type after the primary fuel has been depleted. In the simulation, a vehicle's current state of charge or fuel level influences parking location choice, the timing of any required refueling, and the amount of fuel put into the vehicle during refueling.
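The primary-then-secondary fuel rule can be sketched as follows. Function and parameter names (and the joule-per-meter units) are illustrative assumptions, not BEAM's API; the point is simply that the primary fuel is drawn down first and any remaining distance is charged to the secondary fuel.

```python
def consume(distance_m, primary_level_j, primary_rate_j_per_m,
            secondary_level_j=0.0, secondary_rate_j_per_m=0.0):
    """Split a leg's energy use between primary and secondary fuel.

    Sketch of the rule described above: the primary fuel (e.g., a PHEV
    battery) is depleted first; the secondary fuel (e.g., gasoline)
    covers whatever distance remains. Returns the remaining fuel levels.
    """
    # Distance the primary fuel can cover before depletion
    primary_range_m = primary_level_j / primary_rate_j_per_m if primary_rate_j_per_m else 0.0
    primary_dist = min(distance_m, primary_range_m)
    secondary_dist = distance_m - primary_dist

    primary_used = primary_dist * primary_rate_j_per_m
    secondary_used = secondary_dist * secondary_rate_j_per_m
    return primary_level_j - primary_used, secondary_level_j - secondary_used

# A PHEV with 10 MJ of battery (500 J/m) and 100 MJ of gasoline (2000 J/m)
# driving 30 km: the first 20 km deplete the battery, the last 10 km burn fuel.
battery, tank = consume(30_000, 10e6, 500, 100e6, 2000)
print(battery, tank)  # 0.0 remaining battery, 80 MJ remaining gasoline
```

The returned fuel levels are exactly the state that, per the text, feeds the downstream parking, refueling-timing, and refueling-amount decisions.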
Performance Metrics

Because it produces link-based speeds as well as parking costs and wait times and costs for on-demand modes, BEAM produces outputs that are sufficient for the calculation of the Mobility Energy Productivity (MEP) metric, a measure of the proximity of a given location to, and the energy required to reach, a collection of activities (Hou, Garikapati, Nag, Young, & Grushka, 2019), as well as the Individual Experienced Utility-based Synthesis (INEXUS), which is complementary to, but distinct from, the location-based MEP, and calculates the travel utility of each individual agent under baseline conditions and different scenarios (Garikapati, 2023).

Power Grid Co-Simulation

Charging of battery electric vehicles in BEAM is centralized in the ChargingNetworkManager actor, which implements HELICS (Panossian et al., 2023), a co-simulation framework that allows BEAM to run simultaneously with other simulators and exchange states in real time. Under this structure, BEAM is able to interact with a power grid model by communicating observed loads and expected physical bounds for load management.

Model Deployment

For any simulation model it is important to provide pipelines that increase the efficiency of model scenario setup and deployment. The BEAM code base provides a variety of tools to prepare the scenario data (network, population, households, vehicles, etc.). Typically, models are run with a baseline model and alternative scenarios in mind. The differentiation between the models is either in the value of parameters (e.g., road capacities or the supply of ride-hail vehicles) or in some new feature implemented specifically for the scenario under consideration. Due to the size and runtime of a single simulation, it is often not possible to run larger simulations locally on laptops or desktops.
Therefore, multiple tools and pipelines have been developed that allow BEAM simulations to be deployed either on a high-performance computing cluster, such as at the National Energy Research Scientific Computing Center (NERSC), or on commercial cloud computing services such as Amazon Web Services (AWS) or Google Cloud Engine (GCE). With a single command, the simulation can be deployed using the specified configuration file. Based on scenario size, memory and computational resources can be specified, along with which code to execute and what data from which repository and version to use. Once a simulation is started, the simulation output can be accessed and analyzed using tools such as JupyterLab, which allows quick adaptations to the simulation execution plan. Furthermore, various data summaries and visualizations are automatically generated from the simulation output files, facilitating the monitoring of additional simulations. After a simulation is completed, the outputs are automatically archived before the simulation machine is shut down.

Case Study: New York City

This section summarizes the steps taken to calibrate and validate the baseline BEAM model for the New York City region in January 2020 (pre-COVID pandemic). This is an example where a BEAM implementation was developed from scratch, without an existing activity-based travel demand model to provide plans and using only minimal employment data to generate agents' mandatory plans. It is a proof of concept that BEAM can simulate consistent, realistic transportation system outcomes and serve as a testbed for simulating major mobility shifts in a way that is sensitive to congestion, multimodal accessibility, and changes in activity patterns. While these results have been validated against high-level metrics, they are not intended to replace detailed travel demand models of the sort developed by Metropolitan Planning Organizations, which take years of data collection, calibration, and validation.
The application of this model to COVID recovery scenarios, which required capturing all of the endogenous behavior change described above, will be described in a forthcoming report. This report presents the baseline scenario to illustrate BEAM's capabilities in practice. First the simulated baseline is described, followed by a detailed description of the calibration and validation of the baseline. The baseline is intended to simulate travel during an average weekday in the New York City region: the five boroughs plus nine outlying counties (Essex, Union, Hudson, Middlesex, Monmouth, Bergen, Rockland, Westchester, and Nassau). The covered area (see Figure 5-1) includes a total population of about 13 million inhabitants. The travel of all agents is simulated across 3890 Traffic Assignment Zones (TAZs) defined for the modeling domain, as seen in the left map of Figure 5-2. Parking availability was estimated using publicly available datasets on street parking and off-street garages, and the road network was taken from OpenStreetMap and simplified using the OSMnx tool.

Scenario Definition

The baseline contains a total of about 36.5 million trips, with an average of about 4 trips per day per agent, a total of about 44 million vehicle miles traveled (including personal, transit, and shared vehicles) consuming about 84 terajoules of energy, and a total of about 73 million personal miles traveled. The relative intensity of the demand is depicted by the heatmap of the most visited locations in Figure 5-2. The initial population was generated using the SynthPop tool, based on Public Use Microdata Sample (PUMS) data from the U.S. Census for the New York metro area, with its default configurations modified to include information about workers' work industry in the population outputs and to include aggregate information on workers by job category in the block group marginals being targeted.
Given each worker's home location and work industry, workplace locations are determined by sampling from American Community Survey commuting flows. Work departure and arrival times are sampled from the NHTS, with the distribution taken across the study area.

Calibration

Once the population and activity plans were created, to make the process more time- and cost-efficient, the New York City scenario calibration was performed on a smaller (4%) sample of the active travelers, and then applied to a larger sample (10%), with the capacity of each transit vehicle reduced to 13% of its rated capacity (the extra 3% added to account for sampling variability). The calibration started from configuration parameters similar to those of other calibrated scenarios, such as the San Francisco Bay Area scenario, and some of these parameters were then varied to match locally observed data. In particular, for this scenario, the calibration process mainly focused on the mode choice model and aimed at matching the mode split for the New York City CBSA in the 2017 National Household Travel Survey (NHTS), as well as the ratio of 2019 average weekday subway versus bus ridership provided by the New York Metropolitan Transportation Authority (NYMTA). In addition, the process to generate discretionary activities in BEAM was tuned to reproduce the average number of trips per person for specific purposes, again provided by the NHTS. The parameters used to match the simulated data with the observed data were mainly the intercepts and values of time of the multinomial logit model used for mode choice, together with the discretionary activity generation parameters. The calibration process consisted of iterating over four main steps: 1) running the simulation; 2) calculating the values to be compared with the observed data; 3) comparing the simulated data with the observed data; and 4) based on the comparison results, adjusting the parameters to be used for the next iteration.
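The four-step loop above can be sketched numerically. Everything here is a toy assumption: the "simulation" stand-in is a softmax over mode intercepts (a real BEAM run is a full agent-based simulation), and the proportional update rule is illustrative rather than the analyst-guided adjustments actually used in the calibration.

```python
import math

def run_simulation(intercepts):
    """Stand-in for a BEAM run: returns a simulated mode split.

    Modeled as a softmax over the mode-choice intercepts (assumption).
    """
    exp = {m: math.exp(v) for m, v in intercepts.items()}
    total = sum(exp.values())
    return {m: e / total for m, e in exp.items()}

def calibrate(intercepts, observed_split, iterations=20, step=1.0):
    """Iterate the four calibration steps with a toy proportional update."""
    for _ in range(iterations):
        simulated = run_simulation(intercepts)           # 1) run the simulation
        errors = {m: observed_split[m] - simulated[m]    # 2)-3) compute values and
                  for m in intercepts}                   #        compare with observed
        for m in intercepts:                             # 4) adjust parameters
            intercepts[m] += step * errors[m]
    return run_simulation(intercepts)

observed = {"car": 0.55, "transit": 0.30, "walk": 0.15}
result = calibrate({"car": 0.0, "transit": 0.0, "walk": 0.0}, observed)
print({m: round(s, 2) for m, s in result.items()})
```

Even this toy version shows the iterative character of the real process: each adjustment moves all mode shares at once, which is why, as noted below, changing one parameter can have collateral effects on other calibration targets.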
Figure 5-3 shows how the simulated values approached the observed data with each new iteration of the calibration process; in the figure, the simulated data have been linearly scaled from the 4% sample to represent the full population. The mode split and the ratio of subway to bus ridership were fully calibrated after 11 calibration runs. Note that while the simulated mode split gradually approached the observed data, the subway versus bus ratio was close to observed even during intermediate calibrations, showing how complex a transportation system is and how correlated its output values are: changing certain parameters during the calibration process can cause collateral effects on other variables. Table 5-1 shows the results of the calibration process in terms of simulated versus observed values of the mode split and activity split for the full population, after the ten subsamples were simulated with the calibrated parameters and their results merged together, confirming the transferability of the results from the ten subsamples to the full population. Regarding the ratio of subway to bus ridership for NYMTA, the final calibrated value of 2.5 matches the observed value exactly.

Validation

Once the scenario baseline was calibrated, other observed data were used to validate it: 2019 average weekday transit ridership on bus, subway, and commuter rail routes provided by NYMTA, the Port Authority of New York and New Jersey (PATH), and New Jersey Transit (NJ Transit), as well as the overall mode split for different ranges of trip distance. The data used for validation are not correlated with the data used for calibration: even if the overall mode split is correct, it does not necessarily follow that the mode split is also correct across different trip distances; and even if the transit share and transit vehicle splits from the calibration exercise are correct, it does not necessarily follow that the absolute ridership for each agency matches observed ridership.
Figure 5-4 shows the mode split for each distance range for the travel modes with the most trips, confirming that the simulated trips correspond quite well to the observed data and thus that the mode share model is well calibrated and correctly accounts for the travel distance cost of different modes. Figure 5-5 shows that the simulated transit ridership of the main agencies, after scaling to the 100% population, changed with each calibration iteration until the observed values were reached, thus confirming the validity of the baseline against the observed data. Note that the transit ridership values used to validate the scenario are absolute ridership, thus confirming that the population sample and the created activity and travel plans are of a comparable magnitude. Finally, Table 5-2 shows the fully calibrated simulated ridership data from the full population of agents, compared with the observed ridership data, confirming the validity of the full sample.
Glacial δ13C decreases in the western South Atlantic forced by millennial changes in Southern Ocean

Abstract. Abrupt millennial-scale climate change events of the last deglaciation (i.e., Heinrich Stadial 1 and the Younger Dryas) were accompanied by marked increases in atmospheric CO2, presumably originated by outgassing from the Southern Ocean. However, information on the preceding Heinrich Stadials during the last glacial period is scarce. Here we present stable carbon isotopic data (δ13C) from two species of planktonic foraminifera from the western South Atlantic that reveal major decreases (up to 1‰) during Heinrich Stadials 3 and 2. These δ13C decreases are most likely related to millennial-scale periods of intensification in Southern Ocean deep water ventilation, presumably associated with a weak Atlantic meridional overturning circulation. After reaching the upper water column of the Southern Ocean, the δ13C depletion would be transferred equatorward via central and thermocline waters. Together with other lines of evidence, our data are consistent with the hypothesis that the CO2 added to the atmosphere during abrupt millennial-scale climate change events during the last glacial period also originated in the ocean and reached the atmosphere by outgassing from the Southern Ocean. The temporal evolution of δ13C during Heinrich Stadials in our records is characterized by two relative minima separated by a relative maximum. This "w-structure" is also found in North Atlantic and South American records, giving us confidence that such a structure is a pervasive feature of Heinrich Stadial 2 and,

the Brazil Basin (Stramma et al., 1990; Peterson and Stramma, 1991). Around 38° S the BC encounters the northward flowing Malvinas Current (MC) (i.e., the Brazil/Malvinas Confluence), where the opposing flows turn southeast and flow offshore. The offshore region is characterized by intense mesoscale variability.
After collision and considerable mixing, the warm and salty BC waters flow eastward as the South Atlantic Current (Olson et al., 1988; Peterson and Stramma, 1991), while the majority of the cold and fresh MC waters veer southeastward to rejoin the Antarctic Circumpolar Current. The BC transports Tropical Water (TW) and South Atlantic Central Water (SACW). TW occupies the mixed layer, i.e., the upper ca. 100 m of the water column, with a mean temperature of 20 °C and mean salinity of 36 psu (Tsuchiya et al., 1994). TW originates in the tropics-subtropics transition region by subduction, creating a subsurface salinity maximum capping the central waters (Memery et al., 2000; Tomczak and Godfrey, 2003) (Fig. 1). SACW occupies the permanent thermocline from ca. 100 to 500 m water depth. Its temperature ranges from 6 to 20 °C and its salinity spans from 34.6 to 36 psu (Memery et al., 2000). Two types of SACW have been identified (Stramma et al., 2003). The low-density type of SACW, which is mainly found in the South Atlantic subtropical gyre, is formed by subduction of a low-density type of Subantarctic Mode Water (SAMW) along the southern edge of the gyre (Gordon, 1981; Stramma and England, 1999). A denser variety of SACW originates in the South Indian Ocean and is brought into the South Atlantic by the Agulhas Current (Sprintall and Tomczak, 1993) (Fig. 1). Just below the permanent thermocline, Antarctic Intermediate Water (AAIW) occupies the water column from ca. 500 to 1200 m water depth (Stramma and England, 1999). AAIW is characterized as a cold and low-salinity water mass (Piola and Georgi, 1982; Tomczak and Godfrey, 2003). Around the southern tip of South America, AAIW originates by subduction of cold and fresh Antarctic Surface Water across the Antarctic Polar Front, and by contribution of a dense type of SAMW that originates from deep winter convection in the Subantarctic Zone (Molinelli, 1981; Naveira Garabato et al., 2009).
AAIW is advected eastward through the Drake Passage by the Antarctic Circumpolar Current and turns northward with the MC into the South Atlantic (Piola and Gordon, 1989). Since AAIW circulation follows the anticyclonic flow of the subtropical gyre, the majority of the northward flow occurs in the eastern basin (McCartney, 1977; Stramma and England, 1999; Tomczak and Godfrey, 2003). However, intense mixing in the Brazil/Malvinas Confluence also leads to a direct northward influence in the western South Atlantic that can, to some extent, influence the formation region of SACW (e.g., Piola and Georgi, 1982) (Figs. 1 and 2). In the modern South Atlantic, the distribution of the δ13C of dissolved inorganic carbon (δ13CDIC) allows the identification of its major water masses. TW and SACW show high δ13CDIC values of ca. 2‰. AAIW presents δ13CDIC values of ca. 0.7‰. NADW derives from the North Atlantic and shows δ13CDIC values of ca. 1‰. The NADW layer is surrounded by Upper and Lower CDW, which present δ13CDIC values of ca. 0.4‰ (Kroopnick, 1985). Since planktonic foraminiferal δ13C reflects the δ13CDIC of the ambient seawater, we use it as a proxy for the past oceanic carbon system (Spero, 1992). Changes in upper ocean properties and circulation patterns are also closely associated with changes in the atmospheric circulation. Positive sea surface temperature (SST) anomalies in the western South Atlantic, likely associated with changes in the strength of the AMOC (Knight et al., 2005), have been correlated with positive anomalies in the strength of the SAMS and, consequently, with increased precipitation over SESA (Chaves and Nobre, 2004).

(Clim. Past Discuss., doi:10.5194/cp-2016-59, 2016; manuscript under review for journal Clim. Past; published 20 June 2016; © Author(s) 2016, CC-BY 3.0 License.)
The SAMS and its main components, the ITCZ, the South Atlantic Convergence Zone (SACZ), and the South American Low Level Jet (SALLJ), are the main atmospheric drivers of the hydroclimate of tropical and subtropical SESA to the east of the Andes (Garreaud et al., 2009). The ITCZ is a global convective belt in the equatorial region, and the SACZ is an elongated NW-SE convective belt that originates in the Amazon Basin and extends southeastward above the northern portion of SESA and the adjacent subtropical South Atlantic. The SALLJ is a NW-SE humidity flux from the western Amazon Basin to the subtropical region of SESA (Zhou and Lau, 1998; Carvalho et al., 2004; Schneider et al., 2014). This southward water vapour flux is a crucial source of precipitation to the Plata River drainage basin (Berbery and Barros, 2002), which is a source of continent-borne sediments to our core site.

Marine sediment core

We investigated sediment core GeoB6212-1 (32.41° S, 50.06° W, 1010 m water depth, 790 cm core length) (Schulz et al., 2001) collected from the continental slope off SESA, where the upper water column is under the influence of the BC, and thus of the TW and SACW (Fig. 1). This gravity core was raised at the Rio Grande Cone, a major sedimentary feature in the western Argentine Basin. As our focus here is on HS3 and HS2, we analysed a section from the bottom of the core (768 cm core depth; ca. 33 cal ka BP) up to 290 cm core depth (ca. 20 cal ka BP). Visual core inspection provided evidence for the presence of sand lenses at 330 and 368 cm core depth (Schulz et al., 2001; Wefer et al., 2001); therefore we did not sample these depths. The section of interest of GeoB6212-1 was sampled every 2.5 cm with 10 cm³ syringes. All samples were wet sieved, oven-dried at 50 °C, and the fraction larger than 150 µm was stored in glass vials for subsequent analyses.

Age model

The age model of core GeoB6212-1 is based on 14 AMS radiocarbon ages from planktonic foraminifera (Table 1, Fig.
3). For each sample, we hand-picked under a binocular microscope around 10 mg of planktonic foraminifera shells from the sediment fraction larger than 150 µm. Samples were analysed at the Poznan Radiocarbon Laboratory, Poland, and at the Beta Analytic Radiocarbon Dating Laboratory, USA (Table 1). All radiocarbon ages were calibrated with the calibration curve IntCal13 (Reimer et al., 2013) using the software Bacon 2.2 (Blaauw and Christen, 2011). A marine reservoir correction of 400 years was applied (Bard, 1988). All ages are reported as calibrated years before present (cal a BP; present is 1950 AD). To construct the age model we used Bayesian statistics in the software Bacon 2.2 (Blaauw and Christen, 2011). Default parameter settings were used, except for mem.mean (set to 0.4) and acc.shape (set to 0.5). Ages are modelled as drawn from a t-distribution with 9 degrees of freedom (t.a=9, t.b=10). 1,000 age-depth realizations were used to estimate the mean age and 95% confidence intervals at 0.5 cm resolution (Fig. 3).

Stable carbon isotope analyses

Around 10 tests of G. ruber white sensu stricto (Wang, 2000) within the size range 250-350 µm and 8 tests of non-encrusted G. inflata with 3 chambers in the final whorl (Groeneveld and Chiessi, 2011) within the size range 315-400 µm were hand-picked under a binocular microscope every 2.5 cm from 290 to 768 cm core depth. While the first species records the conditions at the top of the mixed layer (down to ca. 30 m) (Chiessi et al., 2007; Wang, 2000), the second species records the conditions at the permanent thermocline (ca. 350-400 m) (Groeneveld and Chiessi, 2011), allowing the reconstruction of the δ13C signal of the TW and the SACW, respectively.
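For reference, the delta notation used for these isotopic measurements expresses a sample's 13C/12C ratio relative to the VPDB standard in per mil. A minimal numeric sketch (the VPDB ratio below is the commonly cited reference value, not a number taken from this manuscript):

```python
R_VPDB = 0.0112372  # commonly cited 13C/12C ratio of the VPDB standard (assumed here)

def delta13c(r_sample, r_standard=R_VPDB):
    """delta-13C in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample whose 13C/12C ratio is 1% lower than VPDB has a delta-13C of -10 per mil,
# so the ~1 per mil excursions discussed in this paper correspond to ~0.1% ratio changes.
print(round(delta13c(R_VPDB * 0.99), 3))  # -10.0
```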
The δ13C analyses were performed on a Finnigan MAT 252 mass spectrometer equipped with an automatic carbonate preparation device at MARUM - Centre for Marine Environmental Sciences, University of Bremen, Germany. Isotopic results are reported in the usual delta notation relative to the Vienna Pee Dee Belemnite (VPDB). Data were calibrated against the house standard (Solnhofen limestone), itself calibrated against the NBS19 standard. The standard deviation of the laboratory standard was lower than 0.05‰ for the measuring period.

Age model

Our age model covers the period between 32.6 and 5.7 cal ka BP (Table 1, Fig. 3). Sedimentation rates change markedly during this time interval, with values ranging from 3.8 to 111 cm ka⁻¹. Three main peaks in sedimentation rate were identified at ca. 26, 23, and 15 cal ka BP, and one minor peak at 11 cal ka BP. The two oldest sedimentation peaks occur within our period of interest (i.e., from ca. 33 until 20 cal ka BP) and received special attention because the higher sedimentation rates provide increased temporal resolution (Fig. 3). The mean temporal resolution of our δ13C records is ca. 90 yr, with values ranging from 28 to 195 yr.

Stable carbon isotope analyses

The G. ruber δ13C record shows two long-term decreases, one from ca. 32.6 to 28.5 cal ka BP with an amplitude of ca. 1‰ and one from ca. 26.5 to 24.8 cal ka BP, also with an amplitude of ca. 1‰ (Fig. 4a). These two negative long-term trends are separated from each other by an abrupt increase of ca. 1.3‰ ending at ca. 27 cal ka BP. Both long-term decreasing slopes were interrupted by brief positive excursions, one from 30.6 to 30.4 cal ka BP with an amplitude of ca. 0.7‰ and the other from ca. 26.2 to 25.8 cal ka BP with an amplitude of ca. 1‰. After the second long-term decrease, the δ13C values of G. ruber varied around 0.7‰. Together, both long-term negative excursions define a pattern we refer to as the "w-structure". The G.
inflata δ13C record shows four negative excursions departing from a baseline of ca. 0.8‰ (Fig. 4b). The first occurs from ca. 32.5 to 30.6 cal ka BP with an amplitude of ca. 0.5‰, the second from ca. 29.8 to 28.3 cal ka BP with the same amplitude, the third from ca. 26.5 to 26.4 cal ka BP with an amplitude of ca. 0.8‰, and the fourth from ca. 25.8 to 24.4 cal ka BP with an amplitude of ca. 0.9‰. In the δ13C record from G. inflata two w-structures are also present, defined by the previously described negative excursions. The w-structures for both species, as well as the δ13C minima, are synchronous (Fig. 4).

Discussion

The synchronous w-structures present in the δ13C records of both planktonic foraminiferal species analysed here occur in consonance with the millennial-scale events HS3 and HS2 (Sarnthein et al., 2001; Goni and Harrison, 2010) (Fig. 4). Based on modern conditions, we expect our core site not to be influenced by significant changes in the nutrient content of the upper water column, since the region is dominated by the oligotrophic BC, characteristic of western boundary currents, and is far from upwelling cells (Brandini et al., 2000). Thus, it is unlikely that changes in our δ13C records are associated with local productivity events driven by nutrient-cycle processes. A pervasive feature of planktonic foraminiferal δ13C records in the Indo-Pacific Ocean (Spero and Lea, 2002), Southern Ocean (Ninnemann and Charles, 1997), and South Atlantic Ocean (Oppo and Fairbanks, 1989) is a negative excursion during HS1. Ninnemann and Charles (1997) suggested that the source of this signal is in the Southern Ocean. They further proposed that the anomaly is related to the transfer of a preformed δ13C signal from the Southern Ocean via SAMW and/or AAIW. A low-density type of SAMW actually contributes to SACW that spreads into the South Atlantic (Stramma and England, 1999).
Additionally, AAIW also influences SACW through vigorous eddy mixing at the Brazil/Malvinas Confluence (Piola and Georgi, 1982). Thus, SACW represents a potential conduit for the δ13C signal from the subantarctic region to the subtropical South Atlantic (Figs. 1 and 2). Therefore, we propose that the negative excursions in our δ13C records are related to the transfer of a preformed δ13C signal from the subantarctic zone to the western South Atlantic via central and thermocline waters. A reduced AMOC would decrease the subtropical heat transport towards the north, leading to rising temperatures in the circum-Antarctic region (EDML, 75° S, 0° E, EPICA) (EPICA Community Members, 2006) (Fig. 5i). Furthermore, during phases of weak AMOC the Southern Hemisphere westerlies are stronger and shift southward, strengthening CDW upwelling (Anderson et al., 2009; Denton et al., 2010). Increased upwelling would supply the surface of the Southern Ocean to the south of the Antarctic Polar Front with more low-δ13C waters and with a higher concentration of Si(OH)4 (Anderson et al., 2009; Hendry et al., 2012). Since upwelled CDW is hypothesized to be the dominant source of the upper and intermediate waters that leave the Southern Ocean (i.e., SAMW and AAIW) (Fig. 2), increased upwelling would transfer the low-δ13C signal as well as the positive Si(OH)4 anomaly northward into the adjacent subtropical gyres (Oppo and Fairbanks, 1989; Ninnemann and Charles, 1997; Spero and Lea, 2002; Anderson et al., 2009; Hendry et al., 2012). These signals would then propagate through the thermocline (i.e., SACW) of the South Atlantic and be transferred to the mixed layer (i.e., TW) by vertical exchange processes (Tomczak and Godfrey, 2003).
However, we cannot exclude the possibility that the upwelled low-δ13C respired CO2 could have first been outgassed from the Southern Ocean and then re-dissolved into the ocean via air-sea exchange at the formation regions of SACW and TW, eventually reaching the upper water column at our core site. Higher concentrations of Si(OH)4 were described in benthic organisms at intermediate water depths (i.e., 1048 m water depth) of the western South Atlantic (ca. 27° S), close to our core site, during abrupt millennial-scale climate change events (Hendry et al., 2012), suggesting that the preformed signal from the Southern Ocean indeed reached subtropical latitudes in the South Atlantic. Millennial-scale changes in Southern Ocean temperature and deep water ventilation also led to increases in CO2atm (Spero and Lea, 2002; Ahn and Brook, 2008; Ahn and Brook, 2014; Gottschalk et al., 2015). During HS3 and HS2, positive excursions in CO2atm occurred (Fig. 5j). However, the CO2atm peaks occur ca. 1 ka later than the initiation of the δ13C decrease in our records. Spero and Lea (2002) also observed a similar offset between the increase in CO2atm and the decrease in Pacific Ocean planktonic foraminiferal δ13C during HS1, and attributed this apparent offset to uncertainties in the age models of their records. An alternative explanation relates to a possible time lag between the weakening of the AMOC and the increase in CO2atm (Ahn and Brook, 2014). Thus, our records are consistent with the hypothesis that the increase in CO2atm during abrupt millennial-scale climate change events of the last glacial period originated from ocean processes (Smith et al., 1999; Ahn and Brook, 2008; Bereiter; Ahn and Brook, 2014) and is most likely related to a weak AMOC and associated strengthened Southern Ocean upwelling.
Continental responses

Paleoclimate records from South America have shown marked hydrological changes during abrupt millennial-scale climate events (Arz et al., 1998; Peterson et al., 2000; Baker et al., 2001; Cruz et al., 2006; Stríkis et al., 2015). Reconstructions of SAMS activity suggest its strengthening during HSs (Cruz et al., 2006; Kanner et al., 2012). Changes in speleothem oxygen isotopic composition from the western Amazon Basin (NAR-C, Cueva del Diamante cave, northern Peru, 5.4° S, 77.3° W) as well as changes in gamma radiation records from the Bolivian Altiplano (Salar de Uyuni, 20.3° S, 67.5° W) (Baker et al., 2001) (Fig. 5f, g) suggest increased precipitation during HS3 and HS2. To the north of the equator, a reflectance record from the Cariaco Basin (off northern Venezuela, MD03-2621, 10.7° N, 65° W) (Deplazes et al., 2013) suggests decreased precipitation during the same millennial-scale events (Fig. 5e). The opposite behaviour of these sites reflects the interhemispheric anti-phase response of tropical precipitation during HSs (Wang et al., 2007). Importantly, during HS3 and particularly HS2, the three above-mentioned records (Fig. 5e, f, g) show a w-structure similar to the one observed in our δ13C records. Stríkis et al. (2015) reported a similar w-structure during HS1, related to two distinct hydrologic phases within HS1. Periods of intensified SAMS would have strengthened the discharge from the Plata River drainage basin (Chiessi et al., 2009), increasing the delivery of terrigenous sediments to the Rio Grande Cone (Lantzsch et al., 2014), our coring site. We show for the first time increased sedimentation rates during a HS off SESA. Thus, the increased sedimentation rates during HS2 in our records corroborate the suggestion of Chiessi et al. (2009). Furthermore, the GeoB6212-1 sedimentation rates also show a w-structure during HS2 (Fig.
3), hinting for a sensitive response of the Plata River drainage basin to the increase in 20 activity of the SAMS. The increased continental runoff that led to increased delivery of terrigenous sediments to our core site could have also enhanced the nutrient availability and the local primary productivity, affecting our planktonic foraminiferal  13 C records. However, we discard this possibility because the expected signal of stronger local primary productivity on planktonic foraminiferal  13 C would be opposite to the one observed at GeoB6212-1 . 25 The occurrence of a similar w-structure in North Atlantic records, in South American records and in our  13 C and sedimentation rate records gives us confidence that such w-structure is indeed a feature of HS2, and possibly also HS3. Conclusions Our mixed layer and permanent thermocline  13 C records from the western South Atlantic show in-phase millennial-scale decreases of up to 1‰ during the HS3 and HS2. We hypothesize that the source of the low  13 C signal can be explained by 30 millennial-scale changes in the Southern Ocean deep water ventilation. A weak AMOC during HS3 and HS2 would produce stronger Southern Ocean upwelling that in turn, would supply the surface of the Southern Ocean with more low- 13 C waters as well as promote increased outgassing of this old low- 13 C respired CO 2 . The low- 13 C waters at the surface of the Southern Ocean would be subducted into the central and thermocline waters and transferred equatorward via the South Atlantic subtropical gyre circulation towards our core site. Together with other lines of evidence, our data are consistent with the hypothesis that the CO 2 added to the atmosphere during abrupt millennial-scale climate change events of the last glacial 5 period originated in the ocean and reached the atmosphere by outgassing of the Southern Ocean. 
Moreover, the occurrence of a similar w-structure during HS2 (and possibly HS3) in North Atlantic and South American records, as well as in our planktonic foraminiferal δ13C and sedimentation rate records, gives us confidence that such a w-structure is a pervasive feature that characterizes HS2 (and possibly HS3).

Data availability

The data reported here will be archived in Pangaea (www.pangaea.de).
RANK Signaling in the Differentiation and Regeneration of Thymic Epithelial Cells

Thymic epithelial cells (TECs) provide essential cues for the proliferation, survival, migration, and differentiation of thymocytes. Recent advances in mouse and human have revealed that TECs constitute a highly heterogeneous cell population with distinct functional properties. Importantly, TECs are sensitive to thymic damage engendered by the myeloablative conditioning regimens used for bone marrow transplantation. These detrimental effects on TECs delay de novo T-cell production, which can increase the risk of morbidity and mortality in many patients. Just as TECs guide the development of thymocytes, thymocytes reciprocally control the differentiation and organization of TECs. These bidirectional interactions are referred to as thymic crosstalk. The tumor necrosis factor receptor superfamily (TNFRSF) member receptor activator of nuclear factor kappa-B (RANK) and its cognate ligand RANKL have emerged as key players in the crosstalk between TECs and thymocytes. RANKL, mainly provided by positively selected CD4+ thymocytes and a subset of group 3 innate lymphoid cells, controls mTEC proliferation/differentiation and TEC regeneration. In this review, I discuss recent advances that have unraveled the high heterogeneity of TECs and the implication of the RANK-RANKL signaling axis in TEC differentiation and regeneration. Targeting this cell-signaling pathway opens novel therapeutic perspectives to recover TEC function and T-cell production.

INTRODUCTION

The thymus supports the generation of distinct T-cell subsets such as conventional CD4+ and CD8+ T cells, Foxp3+ regulatory T cells, γδ T cells, and invariant natural killer T cells (iNKT). The development of these different T-cell subsets depends on stromal niches composed of thymic epithelial cells (TECs). TECs control T-cell development from the entry of T-cell progenitors to the egress of mature T cells.
According to their anatomical localization and functional properties, TECs are subdivided into two main populations: cortical TECs (cTECs) and medullary TECs (mTECs). cTECs support the initial stages of T-cell development, including T-cell progenitor homing, T-cell lineage commitment, the expansion of immature thymocytes, death by neglect of thymocytes that do not recognize peptide-MHC complexes, and positive selection of thymocytes into CD4+ and CD8+ T cells. By contrast, mTECs control late stages of T-cell development, mainly the induction of self-tolerance characterized by the clonal deletion of autoreactive thymocytes and CD4+ thymocyte diversion into the Foxp3+ regulatory T-cell lineage. Conversely, thymocytes control TEC expansion and differentiation. These bidirectional interactions between thymocytes and TECs are termed thymic crosstalk (1-3).

TEC HETEROGENEITY IN MOUSE AND HUMAN

Historically, cTECs and mTECs were identified by histology using distinct markers such as cytokeratin 8 for cTECs and cytokeratin-5 and -14 for mTECs (4). TEC identification by flow cytometry on enzymatically disaggregated thymus has greatly aided the study of TEC heterogeneity and functionality. TECs are non-hematopoietic cells that express the Epithelial Cell Adhesion Molecule (EpCAM) and are generally identified as CD45- EpCAM+. TECs can be further segregated into cTECs and mTECs based on the detection of Ly51 and reactivity to the lectin Ulex europaeus agglutinin 1 (UEA-1), respectively. cTECs and mTECs have distinct phenotypic and functional properties. Recent advances based on single-cell transcriptomic analyses have highlighted that TECs constitute a more diverse and dynamic population than previously thought.

FEATURES OF CORTICAL TECs

cTECs express several molecules that govern the initial stages of T-cell development. They express the CXCL12 and CCL25 chemokines that guide the homing of T-cell progenitors into the thymus (5,6).
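The marker-based identification described above (CD45- EpCAM+ cells split into Ly51+ cTECs and UEA-1-reactive mTECs) can be sketched as a simple classifier. This is only an illustration of the gating logic: the threshold and the example intensity values are invented, not real cytometry data.

```python
# Toy classifier mirroring the gating scheme in the text:
# TECs are CD45- EpCAM+; Ly51 marks cTECs, UEA-1 reactivity marks mTECs.
# The cutoff and all event intensities below are hypothetical.

def classify_tec(cd45, epcam, ly51, uea1, cutoff=1.0):
    """Return a label for one event given marker intensities (arbitrary units)."""
    if cd45 >= cutoff or epcam < cutoff:
        return "non-TEC"          # hematopoietic or EpCAM-negative cell
    if ly51 >= cutoff and uea1 < cutoff:
        return "cTEC"             # Ly51+ UEA-1- cortical TEC
    if uea1 >= cutoff and ly51 < cutoff:
        return "mTEC"             # Ly51- UEA-1+ medullary TEC
    return "TEC (unresolved)"     # double-positive/negative events

events = [
    {"cd45": 3.2, "epcam": 0.1, "ly51": 0.0, "uea1": 0.0},  # e.g. a thymocyte
    {"cd45": 0.2, "epcam": 2.5, "ly51": 2.1, "uea1": 0.3},  # cortical TEC
    {"cd45": 0.1, "epcam": 2.8, "ly51": 0.2, "uea1": 1.9},  # medullary TEC
]
labels = [classify_tec(**e) for e in events]
print(labels)  # -> ['non-TEC', 'cTEC', 'mTEC']
```

In practice such gates are drawn on log-transformed, compensated fluorescence intensities rather than fixed cutoffs, but the decision tree is the same.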
cTECs also express the NOTCH ligand Delta-like 4 (DLL4), which induces the engagement of progenitors into the T-cell lineage (7,8). Moreover, they express the IL-7 and stem cell factor (SCF) cytokines that promote the survival and proliferation of immature thymocytes (9). They are equipped with protein degradation machineries important for the positive selection of CD4+ thymocytes, such as the lysosomal endopeptidase cathepsin L (encoded by Ctsl) and the thymus-specific serine protease TSSP (encoded by Prss16), which contribute to the generation of MHC class II-associated self-peptides (10). They also express the thymoproteasome subunit β5t (encoded by Psmb11), which produces the MHC class I-associated self-peptides required for the positive selection of CD8+ thymocytes (11). cTECs are heterogeneous based on the expression levels of MHCII, CD40, DLL4, and IL-7. Intriguingly, a cTEC subset specific to the perinatal thymus, termed perinatal cTECs, has been identified by single-cell transcriptomics (12). These cells, representing one-third of all TECs at 1 week of age, are highly proliferative and express synaptogyrin 1 (Syngr1) and G protein-coupled estrogen receptor 1 (Gper1) in addition to classical cTEC markers. Furthermore, by enveloping many viable double-positive (DP) thymocytes, a fraction of cTECs can form multi-cellular complexes called thymic nurse cells (TNCs) (13). TNCs likely provide a microenvironment favorable to secondary TCRα rearrangements in long-lived DP thymocytes, thereby optimizing TCR repertoire selection (14). Although TNCs remain poorly characterized, they exhibit a distinct gene expression profile characterized by high expression of CXCL12 and TSSP. TNCs thus constitute a cTEC subpopulation with distinct morphological and functional properties.
Given that cTECs ensure multiple functions such as i) lymphoid progenitor homing, ii) T-cell lineage commitment, iii) immature thymocyte expansion, and iv) positive selection of thymocytes, it is likely that cTECs contain discrete functional subsets. Further investigations are required to clarify cTEC heterogeneity. Their development is regulated by signals provided by developing thymocytes. Human CD3ε transgenic mice (tgε26 mice), in which T-cell development is blocked at the early DN1 stage, have a disorganized cortex with cTECs arrested at the CD40- MHCIIlo stage (15,16). However, the transplantation of tgε26 recipients with bone marrow cells from Rag2-/- mice, which exhibit a subsequent block at the DN3 stage, restores the cortical organization (4,17). Furthermore, cTECs with a CD40+ MHCIIhi phenotype develop in the thymus of Rag1-/- mice (16). Thus, cTEC development requires signals from thymocytes beyond the DN1 stage. Nevertheless, the cell-signaling pathways responsible for their development remain to be determined.

FEATURES OF MEDULLARY TECs

Compared to cTECs, mTECs are better characterized, likely because they are more abundant. mTECs have the unique ability to express up to 85%-90% of the genome and virtually all protein-coding genes (18). This promiscuous gene expression program is induced by the autoimmune regulator (Aire) and the transcription factor Fez family zinc finger 2 (Fezf2) (18,19). mTECs contain two main subsets identified by MHCII and CD80 cell surface expression levels: MHCIIlo CD80lo (mTEClo) and MHCIIhi CD80hi (mTEChi) (20). These two subsets are heterogeneous based on distinct markers and functional properties. mTEClo contain mTEChi precursors expressing alpha-6 integrin (Itga6) and Sca1 (Ly6a) (21-23). They also comprise CCL21+ mTECs implicated in the migration of positively selected thymocytes into the medulla (24).
Cell fate mapping studies have shown that mTEClo contain post-Aire cells characterized by the loss of Aire protein and low surface levels of MHCII and CD80 molecules (25-27). Another subset of terminally differentiated mTECs, closely resembling gut chemosensory epithelial tuft cells, is also present in mTEClo (28,29). These cells express the doublecortin-like kinase 1 (Dclk1) marker and the transcription factor Pou2f3. Thus, the mTEClo compartment is particularly heterogeneous, containing not only mTEChi precursors but also CCL21+, post-Aire, and tuft-like mTECs. The mTEChi compartment is also diverse, containing Aire- Fezf2+ and Aire+ Fezf2+ subsets. Single-cell transcriptomic analyses have identified dozens of TEC subsets, including perinatal cTECs, mature cTECs, mTEC progenitors, and Aire+, post-Aire, and tuft-like mTECs (12,28,30). Among them, two other minor subsets, termed neuronal and structural TECs, have been identified based on expression signatures associated with neurotransmitters and extracellular matrix components such as collagens and proteoglycans (12). Further investigations are required to define their anatomical localization and function. Interestingly, a subset of proliferating mTECs expresses substantial levels of Aire, suggesting that it corresponds to a maturational stage just before Aire+ mature mTECs (12,30). In humans, cTECs and mTECs are defined as EpCAMint CDR2hi and EpCAMhi CDR2-, respectively (31). AIRE and FEZF2 are also expressed in human mTECs, indicating a conserved mechanism for the regulation of tissue-restricted self-antigens (19,32,33). Recent single-cell transcriptomic analyses across the lifespan showed a largely conserved TEC heterogeneity in humans (34). cTECs are more abundant during early fetal development, then a population with both cTEC and mTEC properties appears in the late fetal and pediatric human thymus, and lastly mTECs become dominant.
Interestingly, two rare TEC subsets expressing the MYOD1 and NEUROD1 genes, resembling myoid and neuroendocrine cells, respectively, were also identified. Although these subsets are preferentially located in the medulla, their respective functions remain to be studied.

RANK-RANKL AXIS IN MTEC EXPANSION AND DIFFERENTIATION

The tumor necrosis factor receptor superfamily (TNFRSF) member receptor activator of nuclear factor kappa-B (RANK; encoded by Tnfrsf11a) and its cognate ligand RANKL (encoded by Tnfsf11) play a privileged role in mTEC expansion and differentiation. During embryonic development, RANK expression gradually increases and marks Aire+ mTEC precursors (35). In the adult, RANK is expressed by subsets residing within both mTEClo and mTEChi, including CCL21+ and Aire+ cells (36). Importantly, the RANK-RANKL axis activates the classical and non-classical NF-κB signaling pathways that control the development of Aire+ mTECs (37). In the embryonic thymus of RANK- or RANKL-deficient mice, Aire+ mTECs are absent, indicating that this axis governs the emergence of Aire+ mTECs (37,38). At this stage, RANKL is provided by CD4+CD3- lymphoid tissue inducer (LTi) cells and invariant Vγ5+ dendritic epidermal T cells (DETC) (39). Nevertheless, other hematopoietic cells might be implicated, since a few Aire+ mTECs are still detected in the embryonic thymus of mice lacking both LTi cells and DETC. In the postnatal thymus, the absence of RANK or RANKL leads to a partial reduction in Aire+ mTECs, showing that other signal(s) are involved in mTEC differentiation after birth (37,40). Although Cd40-/- and Cd40lg-/- mice show subtle defects in Aire+ mTECs, these cells are further decreased in Tnfrsf11a-/- × Cd40-/- double-deficient mice compared to Tnfrsf11a-/- mice, showing that RANK and CD40 cooperate to induce mTEC differentiation after birth (37).
In the postnatal thymus, whereas CD40L is exclusively provided by CD4+ thymocytes, RANKL is higher in CD4+ than in CD8+ thymocytes and is also detected in iNKT cells (40-42) (Figure 1A). The contribution of LTi, DETC, and iNKT cells in the adult might be limited due to their paucity compared to the large numbers of CD4+ thymocytes. This assumption is corroborated by the fact that mice deficient in CD4+ thymocytes have a dramatic reduction in Aire+ mTECs and an underdeveloped medulla (41,43). RANKL is primarily synthesized as a membrane-bound trimeric complex that can be cleaved into its soluble form by proteases (44). A recent study showed that mice lacking soluble RANKL have normal numbers of Aire+ mTECs, indicating that membrane-bound rather than soluble RANKL induces their differentiation (45). Accordingly, RANKL and CD40L signals are delivered by CD4+ thymocytes in the context of antigen-specific TCR/MHCII-mediated interactions with mTECs (41,43,46). This is well illustrated in Rip-mOVA × OTII-Rag2-/- mice, in which the Rip-mOVA transgene drives the expression of membrane-bound OVA in mTECs, allowing high-affinity interactions with OVA-specific OTII CD4+ thymocytes. Aire+ mTECs develop in these mice, in contrast to OTII-Rag2-/- mice. RANKL in CD4+ thymocytes is likely regulated by TGFβRII signaling (47). Mice lacking TGFβRII in αβ thymocytes from the early DP stage (Cd4-cre × Tgfbr2fl/fl mice) have reduced RANKL levels in Helios+ autoreactive CD4+ thymocytes. Conversely, stimulation of purified autoreactive CD4+ thymocytes with TGF-β increases RANKL expression. This upregulation is prevented by MAPK pathway inhibitors, indicating that TGFβRII signaling induces RANKL through its SMAD4/TRIM33-independent pathway. Similarly, TGF-β stimulation was shown to increase RANKL in a TCR-activated T-cell hybridoma (48).
RANK signaling is regulated by the soluble decoy receptor for RANKL, osteoprotegerin (OPG; encoded by Tnfrsf11b), which inhibits the interaction of RANKL with its receptor RANK. OPG deficiency leads to increased mTEC cellularity, resulting in an enlarged medulla with an enrichment in Aire+ mTECs (49). Mice harboring a Tnfrsf11b deletion in mTECs have increased numbers of total and Aire+ mTECs, similarly to Tnfrsf11b-/- mice (50). Thus, OPG produced locally by mTECs, rather than serum OPG, regulates mTEC cellularity and differentiation. RANK activates Aire expression through NF-κB signaling because Aire contains, upstream of its coding region, a highly conserved noncoding sequence 1 (CNS1) with two NF-κB binding sites (51,52). CNS1-deficient mice consequently lack Aire expression in mTECs and show many characteristics of Aire-/- mice, including reduced Aire-dependent tissue-restricted self-antigens. Of note, the RANK-RANKL axis does not only induce Aire but also controls mTEC cellularity and differentiation. In addition to Aire+ mTECs, Tnfsf11-/- mice show reduced numbers of mTEClo and mTEChi (37). Conversely, Tnfrsf11b-/- mice have increased numbers of CCL21- and CCL21+ mTEClo and Aire- and Aire+ mTEChi (36,49). Accordingly, the stimulation of 2-deoxyguanosine-treated thymic lobes with RANKL increases mTEC cellularity, including Aire+ mTEChi, which is further augmented by the addition of CD40L protein (43,53). Furthermore, in vivo anti-RANKL blockade results in a severe depletion of around 80% of mTECs, with a substantial loss of mTEClo and Aire+ mTEChi (49). In addition to controlling Aire+ mTECs, RANK signaling therefore regulates overall mTEC cellularity. In humans, scRNA-seq data indicate that RANK is expressed by Aire+ mTECs (34). Interestingly, the stimulation of primary human mTECs with RANKL leads to the upregulation of AIRE mRNA, suggesting a conserved role for RANK signaling (54).
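The decoy-receptor logic described above, with OPG intercepting RANKL before it can engage RANK, can be caricatured with a toy mass-action calculation. All constants and concentrations below are invented for illustration; this is a qualitative sketch of why more OPG means less RANK signaling, not a quantitative model of the thymus.

```python
# Toy decoy-receptor sketch (hypothetical units and affinities).
# OPG sequesters a fraction of RANKL; the remainder can occupy RANK.
# The simple sequestration term assumes OPG is not depleted by binding.

def rank_occupancy(rankl_total, opg, kd_rank=1.0, kd_opg=0.1):
    """Fractional RANK occupancy after OPG sequesters part of the RANKL pool."""
    rankl_free = rankl_total * kd_opg / (kd_opg + opg)   # RANKL escaping OPG
    return rankl_free / (kd_rank + rankl_free)           # simple binding isotherm

for opg in (0.0, 0.5, 2.0):
    print(f"OPG={opg}: RANK occupancy {rank_occupancy(1.0, opg):.2f}")
```

The monotonic decrease of occupancy with OPG concentration mirrors the phenotypes in the text: OPG loss (occupancy up) enlarges the medulla, while anti-RANKL blockade (occupancy down) depletes mTECs.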
Given the implication of the RANK-RANKL axis in bone resorption, a monoclonal antibody specific for human soluble and membrane-bound RANKL, Denosumab, has been developed to inhibit osteoclast development and activity. Denosumab is now used in therapy to treat osteoporosis, primary bone tumors, and bone metastases (55,56). Nevertheless, considering the importance of the RANK-RANKL axis in Aire+ mTEC differentiation, it remains to be defined whether this treatment could affect central tolerance and increase the risk of autoimmunity.

SENSITIVITY OF TECs TO MYELOABLATIVE CONDITIONING REGIMEN

Myeloablative treatments such as radiation and chemotherapy deplete hematopoietic cells, in particular DP thymocytes, which are extremely sensitive. These treatments also impair the recruitment of circulating T-cell progenitors and induce damage to TECs (Figure 2). Consequently, the generation of newly produced naïve T cells is reduced. Since TECs dictate the size of stromal niches, TEC injury contributes to delayed T-cell reconstitution upon bone marrow transplantation (BMT) or hematopoietic stem cell transplantation (HSCT). In humans, allogeneic HSCT survivors are immunodeficient in T cells for at least 1 year, a period of high susceptibility to opportunistic infections, autoimmunity, or tumor relapse, increasing the risk of morbidity and mortality (57,58). Although innate cells and antibodies may limit viral infections, cytotoxic CD8+ T cells and helper CD4+ T cells are essential for viral clearance and the prevention of recurrent infections. T-cell recovery thus protects from lethality after BMT or HSCT. Importantly, T-cell immunity relies on the regeneration of the thymus and its capacity to produce naïve T cells. Total body irradiation (TBI) rapidly leads to a profound reduction of the cortex, due to the loss of DP thymocytes, and a substantial decrease of the medulla (59). Both cTECs and mTECs are radiosensitive (60,61).
Among mTECs, Aire+ mature mTECs are lost upon TBI and upon treatment with the chemotherapy agent cyclophosphamide or the immunosuppressant cyclosporine A, used to prevent allograft rejection (61,62). However, the effects of such treatments on the dozens of recently identified TEC subsets remain to be investigated. Remarkably, the injured thymic tissue retains potent regenerative capacity. Targeting the pathways implicated in endogenous TEC regeneration is expected to improve thymic-dependent T-cell recovery. Potential strategies based on keratinocyte growth factor (KGF), IL-22, or Bone Morphogenetic Protein 4 (BMP4) have been reviewed in (58,63,64). Strategies based on FOXN1 protein or cDNA administration also improve TEC regeneration both in the context of HSCT and in aging (65,66). A novel role for the RANK-RANKL axis in TEC regeneration and T-cell recovery is highlighted below.

RANK-RANKL AXIS IN TEC REGENERATION

RANKL is upregulated in radio-resistant LTi cells and CD4+ thymocytes during the early phase of thymic regeneration after total body irradiation (61,67,68). Although LTi cells are rare in the thymus, they express a higher level of RANKL than CD4+ thymocytes after TBI (61). Interestingly, the administration of a neutralizing anti-RANKL antibody impairs TEC regeneration, emphasizing an important role for RANKL in endogenous TEC recovery. Conversely, RANKL protein administration increases TEC numbers to a level close to that of unirradiated mice. RANKL enhances cTEC and mTEC numbers, including Aire+ mTEChi and TEPC-enriched cells, likely by stimulating their proliferation and survival. These observations are in agreement with a previous study indicating that RANKL increases the in vitro proliferation of cortical and medullary TEC cell lines (69). Of clinical relevance, RANKL administration upon BMT not only boosts the regeneration of several TEC subsets but also increases T-cell progenitor homing (Figure 1B) (61).
This latter effect could be explained by an enhanced cellularity of endothelial cells upon RANKL administration, although further investigations are required. Consequently, this treatment ameliorates de novo thymopoiesis and peripheral T-cell reconstitution. Of note, a single course of RANKL after BMT boosts thymic regeneration for at least 2 months, indicative of a lasting effect. This therapeutic strategy is also efficient in aged individuals, in whom T-cell recovery upon BMT is less efficient and delayed (70). Age-related thymic involution results in a disrupted thymic architecture with a reduced TEC cellularity, which alters T-cell production (71). RANKL treatment could thus be of special interest to the elderly, although further studies are required. Mechanistically, RANKL upregulates another TNF family ligand, lymphotoxin α (LTα; encoded by Lta), expressed as a membrane-anchored LTα1β2 heterocomplex, in LTi cells of recipient origin (Figure 1B) (61). Conversely, the RANK-Fc antagonist fully blocks LTα1β2 upregulation. Notably, RANKL also induces LTα1β2 expression in LTi cells during lymph node formation (72). Like RANKL, LTα is upregulated during the early phase of thymic regeneration. Since CD4+ thymocytes upregulate RANKL and since LTi cells express both RANK and its ligand, RANK signaling may be triggered in LTi cells in an autocrine and paracrine manner. Given that LTi cells upregulate RANKL, LTα1β2, IL-22, IL-23R, and RORγt after thymic injury (61,68), these cells are likely in a quiescent stage at steady state and become activated after irradiation to repair the injured thymic tissue. Accordingly, the depletion of ILC3, comprising LTi cells, in an experimental model of graft-versus-host disease (GVHD) results in impaired thymic regeneration (73). Interestingly, LTβR is also upregulated in cTECs, mTECs, and TEPC-enriched cells after TBI, suggesting that the LTα1β2-LTβR axis is implicated in TEC regeneration (61).
At steady state, Lta-/- mice show normal numbers of TEC subsets. In contrast, cTECs and mTECs, including Aire+ mTEChi and TEPC-enriched cells, are substantially reduced in these mice upon BMT. These observations indicate that the mechanisms implicated in TEC regeneration are distinct from those used at steady state. Furthermore, these mice show reduced numbers of early T-cell progenitors (ETPs) because LTα controls the homing capacity of circulating T-cell progenitors by regulating the expression of CCL19 and CCL21 in TECs and of ICAM-1, VCAM-1, and P-selectin in endothelial cells, all implicated in T-cell progenitor homing (61,74). Similarly, Ltbr-/- mice have an altered recruitment of T-cell progenitors after sublethal TBI (75). In agreement with defective TEC regeneration and T-cell progenitor homing, BM-transplanted Lta-/- mice have impaired thymic and peripheral T-cell reconstitution. The beneficial effects induced by RANKL depend on LTα, since they are essentially lost when RANKL is administered in Lta-/- recipients. RANKL administration thus constitutes a novel therapeutic strategy to improve the recovery of T-cell function after thymic injury. Interestingly, RANK and LTβR expression is conserved in the human thymus, opening potential therapeutic perspectives (34). Besides applications linked to myeloablative conditioning regimens, these in vivo findings open new avenues to treat patients whose thymus has been severely damaged by aging, viral infections, or malnutrition.

FIGURE 2 | Pre-transplantation conditioning regimen alters thymic-dependent T-cell production. In contrast to the physiological condition, the homing ability of circulating T-cell progenitors is reduced after pre-HSCT or BMT conditioning regimen. Furthermore, T-cell production is also reduced, notably due to TEC damage induced by myeloablative regimens. Consequently, the output of newly generated naïve T cells is diminished.
AUTHOR CONTRIBUTIONS

The author confirms being the sole contributor of this work and has approved it for publication.

FUNDING

This work received funding from the Agence Nationale de la Recherche (grant ANR-19-CE18-0021-01, RANKLthym to MI) and was supported by institutional grants from the Institut National de la Santé et de la Recherche Médicale, the Centre National de la Recherche Scientifique, and Aix-Marseille Université.
The effect of first step holding time of low-high austempering heat treatment on the mechanical properties of Austempered Ductile Iron (ADI)

A low-high austempering process can be regarded as the application of an undercooling condition in the austempering process of Austempered Ductile Iron (ADI). The undercooling condition promotes more grain nucleation, producing finer grains with good mechanical properties. This study aims to investigate the influence of the holding time of the first, lower-temperature austempering stage on the mechanical properties, and to find the holding time that produces the highest toughness in ADI materials. The experiment began by heating the nodular cast iron sample to 927 °C and holding it for 120 minutes. It was then quenched into a liquid salt bath at 260 °C, the lower temperature of the first austempering stage, with the holding time varied at 30, 60, and 90 minutes. It was then transferred to a second salt bath medium at 400 °C, the second austempering stage, held for 120 minutes, and finally air cooled. The samples were subjected to hardness, tensile, and impact testing as well as metallographic examination. The results show that the best toughness in this research was obtained at a 30-minute holding time in the first austempering stage, with a tensile strength of 1273 MPa, a yield strength of 1162 MPa, an elongation of 4.87%, and an impact value of 54.15 J. The modulus of toughness is 5.93×10^7 J/m³.

Introduction

Austempered Ductile Iron (ADI) has long been used as an engineering material. If SG Iron is given an austenitization process followed by an austempering process, it develops a mixed microstructure consisting of ausferrite, ferrite, and austenite with graphite nodules dispersed in it. This unique microstructure gives ADI mechanical properties equivalent to those of forged steel [1].
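As a quick cross-check of the abstract's numbers, the modulus of toughness can be approximated from the tensile data using the common trapezoidal estimate U_T ≈ (σ_y + σ_u)/2 × ε_f. The choice of this approximation is our assumption about how the reported value was obtained; only the input values come from the text.

```python
# Approximate modulus of toughness from the tensile data in the abstract.
# Trapezoidal estimate U_T ~ (sigma_y + sigma_u)/2 * strain_at_fracture
# (assumed formula; input values are from the paper's abstract).

sigma_y = 1162e6   # yield strength, Pa (1162 MPa)
sigma_u = 1273e6   # tensile strength, Pa (1273 MPa)
eps_f = 0.0487     # elongation at fracture (4.87 %)

u_t = (sigma_y + sigma_u) / 2 * eps_f  # energy per unit volume, J/m^3
print(f"{u_t:.3g} J/m^3")  # -> 5.93e+07 J/m^3, matching the reported value
```

The agreement with the abstract's 5.93×10^7 J/m³ suggests the reported modulus of toughness was indeed computed this way from the stress-strain data.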
ADI also has excellent fatigue strength [2], high fracture toughness [3], and excellent wear resistance [4]. The attractive properties of ADI are related to its unique microstructure consisting of ferrite (α) and high-carbon austenite (γHC). The ADI microstructure differs from that of forged steel, where the bainite microstructure consists of ferrite and carbide. Because of this difference, the austempering product in SG iron is often referred to as ausferrite rather than bainite. Ausferrite is free from carbide, which gives it better toughness than a bainitic structure. The substantial amount of silicon present in SG iron suppresses carbide formation during the austempering reaction and retains large amounts of stable high-carbon austenite (γHC) [5-7].

Conventional Austempering Process

ADI's conventional austempering process is implemented by first austenitizing the SG Iron at a temperature of 871-927 °C and holding for 2 hours; the SG Iron is then quenched to a temperature between 260 and 400 °C. The SG Iron is then isothermally held at this temperature for about 2 hours and finally air cooled. During the austempering process, a phase transformation reaction occurs in two stages [8,9]. In the first stage, austenite (γ) decomposes into ferrite (α) and high-carbon austenite (γHC):

γ → α + γHC

If the casting is held at the austempering temperature for too long, a second reaction occurs, in which the high-carbon austenite (γHC) further decomposes into ferrite (α) and carbide:

γHC → α + carbide

This second reaction is undesirable because the presence of carbides embrittles the ADI, degrading its ductility and toughness. Therefore, for the successful production of ADI, the SG Iron must be austempered within the bainite transformation region before the end of the bainite transformation.
Another problem is that some alloying elements, such as Mn and Mo, slow the austempering reaction; as a result, some unreacted austenite remains, which may transform into martensite during the final cooling. This unreacted austenite leaves a continuous film of martensite in the intercellular region, which can consequently trigger crack propagation through this region and result in poor ductility and lower fracture toughness in ADI [10].

Two Step Austempering

Due to the limitations of the austempering results above, and to obtain a better combination of tensile strength and toughness, a two-step austempering process has been proposed by several researchers [11-15]. The process used in this research is the step-up austempering method. In this process (often called low-high austempering), the SG iron, after austenitizing, is quenched to a lower austempering temperature and held there for a predetermined period. The second austempering step is carried out at a higher temperature, either by raising the temperature of the first salt bath or by transferring the casting to another salt bath maintained at a higher temperature than the first. The sample is austempered at this second temperature for the period needed and finally cooled in air. The step-up austempering process scheme is shown in figure 1.

Figure 1. Step-up austempering process [10].

Many researchers have applied step-up austempering methods for ADI material development. It was found that the step-up austempering process significantly improves the properties of ADI due to improved ferrite properties and a higher austenitic carbon content, avoiding the formation of iron carbide structures [16,17]. The step-up austempering process produces a combination of lower and upper ausferrite.
The first austempering step is carried out between the martensite start temperature (Ms) and 330 °C and produces lower ausferrite, which gives the SG iron high strength and hardness but low toughness; the second austempering step, held at 330-425 °C, produces upper ausferrite with the opposite behavior [18,19]. Low-high austempering can also be viewed as applying an undercooling condition to the austempering of austempered ductile iron (ADI) [20]. With greater undercooling, more grain nucleation is promoted, producing finer grains and hence better mechanical properties.

This study investigates the influence of the holding time at the first (lower) temperature of the low-high austempering process on the mechanical properties of ADI: how long the first austempering step should be to give the best combination of lower and upper ausferrite and the finest structure. The aim was to identify, among the variables used, the condition giving the best toughness of the ADI material.

Material

The samples for the experiment were made from SG iron cast in the form of a keel block to the ASTM A897 standard, with a thickness of 13-38 mm. The keel block castings were cut into 22 × 15 × 15 mm blocks for hardness testing and microstructure examination, 13 × 13 × 58 mm bars for impact testing, and 10 mm diameter × 50 mm cylinders for tensile testing. The number of samples of each type was matched to the variations required. The chemical composition of the as-cast SG iron was verified by optical emission spectroscopy, with the results as shown in the following table.

Heat Treatment Process

The heat treatment was carried out by heating the material to the austenitizing temperature of 927 °C with a holding time of 120 minutes, then quenching into the first salt bath at 260 °C with a holding time of 30, 60, or 90 minutes.
Each sample was then rapidly transferred to a second salt bath at the higher temperature of 400 °C and held for 120 minutes; once the holding time was reached, the sample was removed and cooled in air. Figure 2 shows the heat treatment cycles applied to the samples in this study.

Figure 2. Heat treatment cycle.

Salt Bath

A salt bath is used for the austempering process, with the liquid salt temperature set as required. Two salt baths were used. The first, for the first austempering step, contained a salt mixture of 50% NaNO2 + 50% NaNO3 and was maintained at 260 °C. The second, used for the second austempering step, contained 95% NaNO3 + 5% NaCl and was set to 400 °C.

Mechanical Test and Metallographic Examination

All sample variations were tensile tested on a universal tensile testing machine to the ASTM E8M standard and hardness tested on the Rockwell C scale to ASTM E18. Impact testing was carried out on a Charpy impact machine to ASTM E23 using unnotched Charpy specimens. Metallographic examination was also carried out to analyze the microstructure resulting from each process; metallographic samples were etched with 3% Nital and observed by optical microscopy. To ensure repeatability of the process, each variation was run on three samples given the same heat treatment, and the average of the three test results was taken.

Microstructures

The as-cast SG iron has a microstructure consisting mostly of pearlite (90%) with a small amount of ferrite and spheroidal graphite. Figure 3 shows the microstructure of the as-cast SG iron. The metallographic results of the low-high austempering are shown in figures 4(a), (b), and (c).
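As a compact summary of the experimental procedure, the step-up heat-treatment cycles described above can be encoded as plain data. This is an illustrative sketch only; the stage names, function name, and final air-cool temperature are our own choices, not the paper's.

```python
# Hypothetical encoding of the step-up (low-high) austempering schedule used
# in this study. Times are in minutes, temperatures in degrees Celsius.

AUSTENITIZE_C = 927   # austenitizing temperature, held 120 min
FIRST_STEP_C = 260    # first salt bath: 50% NaNO2 + 50% NaNO3
SECOND_STEP_C = 400   # second salt bath: 95% NaNO3 + 5% NaCl

def step_up_schedule(first_hold_min):
    """Return one heat-treatment cycle as (stage, temperature_C, hold_min)
    tuples; hold_min is None for the final uncontrolled air cool."""
    return [
        ("austenitize", AUSTENITIZE_C, 120),
        ("first austemper", FIRST_STEP_C, first_hold_min),  # 30, 60 or 90 min
        ("second austemper", SECOND_STEP_C, 120),
        ("air cool", 25, None),  # ambient temperature assumed
    ]

# The three sample variations studied differ only in the first-step hold:
variations = {t: step_up_schedule(t) for t in (30, 60, 90)}
```

Encoding the cycle as data like this makes it easy to tabulate the total controlled holding time for each variation (270, 300, and 330 minutes, respectively).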
The matrix of the microstructure consists of lower ausferrite, upper ausferrite, ferrite needles, and light-colored austenite, with dispersed globular graphite. The first austempering step produces lower ausferrite; this ausferrite consists of ferrite and austenite. The ferrite needles become more numerous and finer (more closely spaced and sharper) with longer holding times in the first austempering step, showing that a longer hold at 260 °C produces a greater amount of lower ausferrite. The two-step austempering process generally produces finer ferrite [20], which increases the strength and hardness of the material: the low-high process provides greater supercooling for ferrite nucleation, so that nucleation increases and finer ferrite results.

The results showed that the longer the holding time in the first austempering step, the lower the modulus of toughness. The highest modulus of toughness, 5.93 × 10^7 J/m³, was shown by the first sample. This value correlates with the impact test results. Figure 6 shows the results of impact testing: the highest impact energy, 54 J, was produced by the first sample, and impact energy decreased with increasing holding time in the first austempering step. Hardness testing was carried out on the samples, and the results can be seen in figure 7: hardness increased from 38 HRC at a 30-minute hold to 40 HRC at a 90-minute hold.

Figure 7. Hardness Rockwell C test of all specimens.

The increase in hardness, accompanied by a decrease in elongation, is related to an excessively long holding time. If the sample is held at the austempering temperature for too long, a second reaction can take place, in which the high-carbon austenite (γHC) further decomposes into ferrite (α) and carbide (ε) [10].
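The modulus of toughness discussed above combines tensile strength and elongation (it is the area under the stress-strain curve). The paper does not state the exact formula it used, so the sketch below uses a common trapezoidal approximation for ductile metals, with illustrative input values rather than the paper's measured data.

```python
# Sketch of a toughness-modulus estimate; the approximation and the example
# numbers are ours, not taken from the paper.

def modulus_of_toughness(yield_mpa, ultimate_mpa, elongation_pct):
    """Approximate area under the stress-strain curve, in J/m^3.

    Uses U_T ~ strain_at_fracture * (yield + ultimate) / 2, a common
    trapezoidal approximation for ductile metals; 1 MPa = 1e6 J/m^3.
    """
    strain = elongation_pct / 100.0
    return strain * (yield_mpa + ultimate_mpa) / 2.0 * 1e6

# Assumed values in the range typical of upper-ausferrite ADI (illustrative):
u_t = modulus_of_toughness(yield_mpa=750, ultimate_mpa=1000, elongation_pct=7)
print(f"{u_t:.2e} J/m^3")
```

With these assumed inputs the estimate comes out on the order of 10^7 J/m³, the same order as the 5.93 × 10^7 J/m³ reported for the 30-minute sample.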
Thus, if a material with good ductility is desired, the ADI must be austempered within the process window, defined as the time interval between the completion of the first reaction and the onset of the second (figure 8). In this way the formation of carbides can be avoided.

Conclusion

From the experimental results of this study it can be concluded that a longer holding time in the first austempering step increases the tensile strength, yield strength, and hardness of the ADI material, while the elongation and impact values decrease. Low-high austempering can produce finer ferrite needle structures, which contribute to the increase in tensile and yield strength; this is also related to the formation of a combination of lower and upper ausferrite structures. The longer the holding time at the lower temperature, the more lower ausferrite is formed, which can increase the brittleness of the material. The analysis also indicates that carbide (ε) can form when the holding time is long enough to exceed the process window of the ADI material (figure 8). By combining the tensile strength and elongation, the toughness can be calculated to find the best value among the experimental variables: the highest modulus of toughness, 5.93 × 10^7 J/m³, was obtained with a holding time of 30 minutes. To obtain a tougher material, the total holding time at
Graft microvascular disease in solid organ transplantation

Alloimmune inflammation damages the microvasculature of solid organ transplants during acute rejection. Although immunosuppressive drugs diminish the inflammatory response, they do not directly promote vascular repair. Repetitive microvascular injury with insufficient regeneration results in prolonged tissue hypoxia and fibrotic remodeling. While clinical studies show that a loss of the microvascular circulation precedes and may act as an initiating factor for the development of chronic rejection, preclinical studies demonstrate that improved microvascular perfusion during acute rejection delays and attenuates tissue fibrosis. Therefore, preservation of a functional microvasculature may represent an effective therapeutic strategy for preventing chronic rejection. Here, we review recent advances in our understanding of the role of the microvasculature in the long-term survival of transplanted solid organs. We also highlight microvessel-centered therapeutic strategies for prolonging the survival of solid organ transplants.

Introduction

The microvascular circulation comprises vessels that are <150 μm in diameter and includes arterioles, capillaries, and venules [1]. Arterioles are small arteries proximal to the capillaries and, in conjunction with the terminal arteries, contribute the majority of the resistance to blood flow. The wall of the arteriole is made up of three layers: the intima, formed by the endothelial cells (ECs) and the basement membrane; the media, made up of the internal elastic lamina apposed by one or two layers of vascular smooth muscle cells (VSMCs); and the adventitia, comprising fibroblasts, collagen bundles, and nerve endings [2]. Compared with arterioles, the walls of capillaries and venules are much thinner and contain only two types of cells: ECs and pericytes.
Pericytes are embedded within the endothelial basement membrane and contact ECs directly in areas where the basement membrane is absent [3]. The microcirculation provides nutrition and oxygen to tissues and maintains tissue hydrostatic pressure; it is essential for normal tissue function [2]. Indeed, microvascular dysfunction has been shown to be involved in a number of diseases including insulin resistance, kidney fibrosis, and systemic sclerosis [4-7]. More recently, there is an increasing appreciation that coronary microvascular dysfunction may be a cause of chest pain, indicating that the microvascular system may be a promising therapeutic target for ischemic heart diseases [8]. In solid organ transplantation, chronic allograft vasculopathy in larger vessels has long been recognized as a major limitation for the long-term survival of transplant patients [9]. However, how microvascular injury and the accompanying pathologic remodeling affect the progression of chronic rejection and graft survival is not well known. Several recent animal studies highlight the importance of the microvasculature in solid organ transplantation. In a mouse orthotopic trachea transplantation (OTT) model, our group showed that the loss of a functional microvasculature is a prominent pathology that identifies the airways destined to develop fibrosis [10]; in this context, 'functional' means that the vessels are demonstrated to be effectively transporting blood, as opposed to merely being identified histologically. We subsequently demonstrated that enhanced airway microvascular repair during acute rejection delays and attenuates chronic rejection [11]. Protection of the microvascular system from ischemia-reperfusion injury (IRI) has also been demonstrated to prevent the development of chronic rejection in a rat cardiac allograft model [12].
Moreover, a number of clinical studies have shown that loss of the microvascular circulation precedes and may predispose allografts to chronic rejection or failure [13-17]. These studies suggest that a functional microvascular system is essential for the health of a solid organ transplant, and that preservation of an intact microcirculation may represent a novel therapeutic strategy to prevent or attenuate chronic rejection. The goal of this review is to provide a better understanding of the biology of the microvasculature in solid organ transplantation. We will first review the molecular and cellular mechanisms of vessel formation during development, because many of these events are recapitulated in vascular repair and regeneration in adults [18]. Next, the cycle of injury and repair seen in the transplant microvasculature will be discussed, followed by a review of the mechanisms by which these microvessels can be damaged and thrombosed. The perspective will conclude with an exposition on the mechanisms employed by ECs to protect themselves from injury, the processes involved in repair of the microvasculature, and the pathways involved in pathologic remodeling and fibrosis. Based on these clinical and preclinical studies, we propose a neologism, 'graft microvascular disease' (GMVD), to describe microvascular abnormalities that can be observed during rejection. GMVD includes microvascular pathologies that are clearly distinct from classical chronic graft vasculopathy, which is a diffuse concentric vascular wall narrowing that mainly affects arteries but not the microvasculature [9,19,20].

Overview of developmental vessel formation and remodeling

Vasculogenesis, arteriogenesis, and angiogenesis are the major processes by which blood vessels are formed and remodeled [21]. Vasculogenesis describes the de novo emergence of primordial ECs and the vascular plexus during embryogenesis [21,22].
It has been recognized that fibroblast growth factor 2 (FGF-2) and bone morphogenetic protein 4 (BMP4) are two essential molecules required for the specification of mesoderm and its subsequent differentiation into cells of endothelial lineage [22-26]. Vascular endothelial growth factor (VEGF) is another key regulator of embryonic vasculogenesis and acts mainly by promoting EC survival and proliferation [22]. Following its initial formation, the primitive vascular plexus is remodeled into a functional vasculature by the coordinated activation of signaling pathways induced by factors such as VEGF, retinoic acid, and transforming growth factor-beta (TGF-β) [18,22]. Vasculogenesis was previously thought to occur only during embryogenesis. However, because of the discovery of circulating endothelial progenitor cells (EPCs) [27], which have recently been shown to promote vascular repair and improve tissue perfusion [27-29], postnatal vasculogenic activity is now considered possible. Arteriogenesis refers to either the remodeling of an existing collateral artery/arteriole to increase its luminal diameter in response to increased blood flow or, alternatively, to a de novo process that occurs by expansion and arterialization of the capillary bed [21,30,31]. Smooth muscle migration, growth, and differentiation play essential roles in arteriogenesis [30]. One recent study demonstrated that macrophage prolyl hydroxylase domain (PHD) 2 haplodeficiency promoted arteriogenesis both during development and in adult mice, and that these mice had better perfusion following femoral artery ligation. Further mechanistic studies revealed that PHD2 haplodeficiency polarized macrophages to an M2-subtype, which produced higher levels of stromal cell-derived factor-1 (SDF-1) and platelet-derived growth factor-beta polypeptide (PDGFB). This process, in turn, enhanced vascular smooth muscle cell migration and proliferation and thereby arteriogenesis [32].
Another study demonstrated that developmental and adult arteriogenesis is regulated by synectin, a widely expressed PDZ domain protein involved in intracellular signaling; this regulation occurred in an EC-autonomous manner and suggests that ECs are central to both developmental and adult arteriogenesis [33]. Angiogenesis is a process of vessel sprouting from preexisting vessels [34]. Recent studies have provided tremendous insights into the fundamental aspects of vascular sprouting during development as well as in tumor angiogenesis [34-37]. In a simplified model of vascular branching, hypoxia induces the production of VEGF. VEGF then stimulates ECs to produce dynamic filopodia, which the ECs use to probe environmental cues and guide their migration; these leading cells are termed 'tip cells' [34]. Cells that follow the tip cells are known as 'stalk cells'; these cells produce fewer filopodia and instead proliferate and establish cell junctions to stabilize the new vessel sprout [35]. VEGF- and Notch-induced signaling pathways are the fundamental drivers of vascular patterning and cooperate in an integrated intercellular feedback loop between the tip and stalk cells. In this signaling feedback loop, VEGF, acting through VEGFR2, induces delta-like ligand 4 (DLL4) expression in tip cells; tip cell-expressed DLL4 then activates Notch signaling in the neighboring ECs, which downregulates VEGFR2 and neuropilin 1 and upregulates VEGFR1. In this manner, Notch signaling is important for promoting a stalk cell phenotype [34,35]. The canonical Wnt/β-catenin pathway also regulates angiogenesis. This pathway promotes vascular quiescence and stability by upregulating stalk cell expression of DLL4, which subsequently activates Notch signaling in the tip cells and promotes their phenotypic switch to stalk cells [38].
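The DLL4-Notch feedback between tip and stalk cells described above behaves as classical lateral inhibition, and its pattern-forming character can be illustrated with a toy two-cell simulation. This sketch, including its equations and parameters, is our own Collier-style lateral-inhibition model and is not taken from the cited studies.

```python
# Toy model: each EC's Notch activity is driven by its neighbour's Delta
# (DLL4), and high Notch represses the cell's own Delta production.
# A small initial asymmetry is amplified into a high-Delta "tip" cell and
# a low-Delta "stalk" cell. All parameters are arbitrary illustrations.

def simulate(d0, d1, steps=5000, dt=0.01):
    """Integrate Delta (d) and Notch (n) for two coupled cells by Euler steps."""
    n0 = n1 = 0.0
    for _ in range(steps):
        # Notch activation by the neighbour's Delta (Hill-type response)
        dn0 = d1**2 / (0.1 + d1**2) - n0
        dn1 = d0**2 / (0.1 + d0**2) - n1
        # Delta production repressed by the cell's own Notch activity
        dd0 = 1.0 / (1.0 + 10.0 * n0**2) - d0
        dd1 = 1.0 / (1.0 + 10.0 * n1**2) - d1
        n0, n1 = n0 + dt * dn0, n1 + dt * dn1
        d0, d1 = d0 + dt * dd0, d1 + dt * dd1
    return d0, d1

# Slight initial bias toward cell 0; the cells diverge into tip/stalk states.
tip, stalk = simulate(d0=0.52, d1=0.48)
```

The qualitative point is that the mutual VEGF-DLL4-Notch feedback makes the symmetric (all cells identical) state unstable, so neighbouring ECs resolve into alternating tip and stalk phenotypes rather than all becoming tip cells.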
In addition to the classical VEGF-Notch-driven branch patterning, it was recently demonstrated that 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase 3 (PFKFB3)-regulated glycolysis in ECs also plays a role in vascular sprouting by regulating the behaviors of both the tip and stalk cells [37,39]. Notably, the principle of tip-stalk specification by Notch signaling also controls the branching frequency of tumor vessels [40,41].

Microvascular EC injury in transplantation

As ECs are the primary targets for alloimmune attack following transplantation [42-45], we will focus our discussion on injury to ECs of the microvasculature. We will discuss in detail the mechanisms by which immune cells, antibodies, complement factors, oxidative stress, and immunosuppressive drugs induce EC injury.

Immune cell-mediated EC injury

In immunosuppressed patients, cytotoxic T lymphocyte (CTL)-induced EC apoptosis is the major mechanism of acute cell-mediated rejection [42,46]. In general, CTLs induce target cell apoptosis primarily through cell-cell contact-dependent granule exocytosis of effector molecules, mainly granzyme B (GrB), perforin, and GrA, and through the death receptor (FAS/FASL) pathway [47-49]. GrB can induce target cell death through generation of an active form of BH3-interacting domain death agonist (Bid), which causes increased mitochondrial permeability and subsequent release of cytochrome C and second mitochondria-derived activator of apoptosis (SMAC/Diablo). GrB can also induce cell death through release of reactive oxygen species (ROS) from mitochondria and through direct cleavage of caspase-3 and nuclear lamin [46]. GrA, also found in CTLs, has been shown not only to directly induce target cell apoptosis [50] but also to promote monocyte production of proinflammatory cytokines such as IL-1β, TNF-α, and IL-6 [51]. These findings suggest that CTLs indirectly induce EC dysfunction or injury by increasing the production of inflammatory mediators.
Finally, while FASL induces cell apoptosis through the FAS-associated death domain protein (FADD)/caspase-8/10-mediated extrinsic pathway, it plays an uncertain role in EC death during rejection [43]. Notably, in EC death attributed to alloimmunity, CTLs act predominantly through the GrB/perforin pathway, and the contribution of FAS/FASL death signaling is minimal [52]; this result might be explained by the finding that the expression level of c-FLIP, an inhibitory protein in the death pathway, is high in ECs [53]. However, ECs can be sensitized to the FAS/FASL pathway when FAS and pro-caspase 8 are induced by . Natural killer (NK) cells use mechanisms similar to those utilized by CTLs, namely the granule and death receptor pathways, to kill target cells [55]. In addition, NK cells also kill target cells through antibody-dependent cell-mediated cytotoxicity (ADCC), which may be the primary mechanism for EC death during acute antibody-mediated rejection (AMR) [42]. Macrophages have long been known to be key cells that mediate inflammatory injury in allografts [56,57]. Macrophages have also been shown to induce EC death in several preclinical model systems. Macrophages can induce EC apoptosis through activation of the Wnt pathway in patterning the eye vasculature during development [58]. Macrophages also induce EC apoptosis through the TRAIL signaling pathway during oxygen-induced retinopathy [59]. In addition, macrophages can induce EC death through the production of hypochlorous acid, inducible nitric oxide synthase (iNOS)-derived NO, and proinflammatory cytokines such as TNF-α [42,60,61]. We recently demonstrated that the lipid mediator leukotriene B4 (LTB4), produced by infiltrating macrophages in pulmonary hypertension lungs, induced EC apoptosis via suppression of endothelial nitric oxide synthase (eNOS); LTB4 was found to induce significant EC apoptotic death in a dose-dependent manner within 24 h of culture [62].
By extension, macrophage-produced LTB4 may also induce allograft EC apoptosis during acute rejection. On the other hand, monocytes/macrophages have also been shown to promote angiogenesis and vascular regeneration in both transplantation and nontransplantation models [11,63], indicating a notable plasticity in this phylogenetically ancient cell type. Neutrophils are also found in large numbers in allografts undergoing acute rejection and are associated with graft inflammation [64,65]. Neutrophils have been shown to contribute to allograft rejection in various preclinical models [66-68]. In the setting of organ transplantation, neutrophils are thought to injure or kill ECs through the production of ROS or of the degradative enzymes used to kill invading pathogens [42]. However, research from nontransplant models suggests that neutrophil extracellular traps (NETs), networks of extracellular fibers composed primarily of neutrophil DNA, might be a major mechanism by which neutrophils damage the microvasculature [69]. It has been shown that following neutrophil activation by platelets or anti-neutrophil cytoplasmic antibodies (ANCAs), NET formation damages capillary ECs [70,71]. Consistent with the finding that histones are the major mediators of tissue injury in sepsis [72], it was recently shown that NETs directly induce EC death, mainly through the activity of NET components such as histones and myeloperoxidase but not elastase [73]. Although no studies have examined the role of NETs in solid organ transplantation, these mechanisms may be involved in episodes of acute rejection.

Antibody and complement-mediated EC death and proinflammatory responses

Antibody-mediated acute or chronic rejection is a pressing problem in clinical transplantation [74-79]. Both donor-specific antibodies (DSA) and nondonor-specific antibodies (NDSA) have been described in rejection [80,81].
DSAs include anti-donor human leukocyte antigen (HLA) and non-HLA antibodies [82,83] and have long been known to cause profound changes in the ECs of the allograft microvasculature [84]. Anti-donor antibodies recognize HLA class I and II antigens, as well as non-HLA antigens such as the angiotensin II type I receptor, vimentin, myosin, perlecan, type IV, V, and VI collagen, MICA, MICB, and ICAM-1 [82,85-89]. The mechanism by which NDSAs contribute to antibody-mediated rejection is thought to be through their cross-reactivity with the major HLA proteins, such as HLA-A/B/C or HLA-DR/DQ/DP, mismatches at the allele level, and polymorphic epitopes with multiple targets [76]. Alloantibodies may induce EC death by complement-dependent mechanisms [82,90]. Full activation of the complement system and the formation of the membrane attack complex (MAC), C5b-9, directly induce cell lysis [91]. In a rat cardiac transplant model, electron microscopy revealed that MAC-induced EC lysis was characterized by EC swelling, fragmentation, and dissolution, which led to loss or narrowing of the microvascular lumen [92]. In addition to cell lysis, MAC also induces EC apoptosis [93] through a caspase-dependent process [94]. Similarly, MAC was also shown to contribute to the destruction of microvascular integrity in lung allografts undergoing acute rejection [95]. Our group has also demonstrated that microvascular perfusion of airway allografts was preserved when grafts were transplanted into C3-deficient recipients. Further, we showed that C3-induced microvascular injury depended on anti-donor antibodies [96]. However, while C3 deficiency generally favored the preservation of the airway microvascular circulation, it also paradoxically enhanced capillary deposition of thrombin, which led to excessive generation of C5a that caused increased vascular leakage [97].
This study illustrates how using transplant microvascular perfusion as a separate metric of therapeutic success can reveal surprising results that would be missed if only histology were examined. We subsequently demonstrated that inhibition of both C3 and C5 resulted in near-normal microvascular perfusion during acute rejection even in the absence of T cell suppression [97]. This study is consistent with an earlier finding that thrombin may act as a C3-dependent C5 convertase [98]. Other studies have demonstrated that C5a directly induces apoptosis of target cells, such as ECs and adrenomedullary cells [99,100]. Thus, it is possible that, in synergy with C3 deficiency, inhibition of C5a-induced EC injury will result in enhanced microvascular protection in different forms of solid organ transplantation. While there is tremendous evidence demonstrating that antibody-induced EC injury occurs through complement-dependent mechanisms, noncomplement-fixing anti-EC antibodies have also been identified in transplant tissue, suggesting that there are alternative mechanisms for antibody-mediated EC injury [87]. Indeed, alloantibodies can induce target cell apoptosis through the low-affinity Fc receptor for IgG, FcγRIII (CD16), on the surface of NK cells and macrophages [101]. In the last few decades, complement-independent antibody-mediated EC injury has been increasingly recognized as a relevant mechanism in allograft rejection, and this complement-independent EC injury is likely the most prominent mechanism in chronic antibody-mediated rejection [101,102]. EC exposure to high levels of donor-reactive antibodies usually results in lysis or apoptosis. On the other hand, low levels of donor-reactive antibodies still lead to activation of complement, but form only sublytic levels of MAC. In this situation, the MAC, rather than directly killing ECs, leads to a proinflammatory EC phenotypic change, a process known as EC activation [43,84] (Fig. 1).
Sublytic concentrations of MAC have been shown to stimulate EC expression of the adhesion molecules ICAM-1, VCAM-1, and ELAM-1 [103]. Complement also induces EC production of proinflammatory mediators such as IL-8, MCP-1, and IL-1α through the activation of NF-κB [104,105], as well as RANTES in an IL-1α-dependent manner [106]. In a recent landmark study by Jordan Pober's group, a fascinating finding emerged: while alloantibody induced MAC deposition on treated ECs, the MAC itself did not directly cause EC apoptosis but rather enhanced the recruitment of vasculopathic CD4+ T cells via noncanonical NF-κB signaling in ECs [107]. MAC also induces IL-6 production by vascular smooth muscle cells [108], suggesting that activated complement may also promote an inflammatory response by stimulating other cell layers in the microvasculature. Anti-HLA class I antibodies can also directly activate ECs in the absence of complement by promoting Weibel-Palade body exocytosis, characterized by the release of Von Willebrand Factor (vWF) and externalization of P-selectin, a molecule that facilitates leukocyte rolling and trafficking into the tissue parenchyma [109]. Consistent with this finding, anti-HLA class I antibodies were shown to promote macrophage recruitment into cardiac allografts in a manner dependent on the expression of P-selectin on the EC surface [110]. On the other hand, it was recently demonstrated that complement-fixing antibodies enhance the recruitment of monocytes compared with noncomplement-fixing antibodies, through dual activating effects on both ECs and monocytes [111]. Collectively, these studies suggest that donor-reactive antibodies can induce EC death through complement-dependent or complement-independent mechanisms or by promoting cell-mediated immune responses.
Oxidative stress-induced EC damage

Oxidative stress can result from an imbalance between the generation and elimination of ROS and can lead to EC dysfunction or death [112]. Accumulation of excessive oxidants is commonly seen in solid organ transplants and is attributable to a range of factors including ischemia-reperfusion injury, posttransplant graft dysfunction, use of immunosuppressive drugs, and primary disease of the transplanted organ [113-117]. In ischemia-reperfusion injury, ROS is likely produced initially by donor vascular ECs, followed by a second, much larger burst of production by phagocytic cells such as neutrophils and macrophages [43,118]. In lung transplants with chronic rejection, neutrophils were shown to be a major source of ROS generation [115]. The immunosuppressant cyclosporine A induces ROS production in hepatocytes and renal mesangial cells [119,120]. Sirolimus also promotes ROS production by vascular cells and causes vessel dysfunction [121]. Recent studies have elucidated the mechanisms by which ROS cause EC dysfunction or death. Low concentrations of H2O2 increase EC surface expression of ICAM-1 and MHC class I molecules [122]; this finding suggests that low levels of oxidative stress do not cause irreversible injury but instead activate ECs and promote inflammation. Oxidized phospholipids also modulate the inflammatory response of ECs by inducing the unfolded protein response (UPR) [123]. Lastly, in the mouse OTT model, we have shown that ROS production is associated with apoptosis of airway microvascular ECs [124]. ROS induction of EC apoptosis may act through activation of apoptosis signal-regulating kinase 1 (ASK1) [125].
ROS may activate ASK1 by lowering intracellular levels of glutathione and reduced thioredoxin [126,127], releasing ASK1 from its inhibitor, the protein 14-3-3 [128], and activating protein kinase D (PKD), which facilitates the oligomerization and phosphorylation required for ASK1 activation [129]. Activated ASK1 then induces EC apoptosis in a JNK-dependent or JNK-independent manner [125,130]. Oxidative stress also induces EC apoptosis through NF-κB activation [131]. These studies indicate that ECs of the transplanted organ may be subject to ROS-induced apoptosis through discrete mechanisms.

EC damage by immunosuppressive drugs

It is now well accepted that many of the immunosuppressive drugs used to prevent rejection can cause EC damage and dysfunction [132]. Studies have shown that different types of immunosuppressive drugs damage ECs through distinct mechanisms. One study showed that at therapeutic concentrations, cyclosporine A, rapamycin, and mycophenolic acid all strongly induce oxidative stress in cultured human microvascular ECs and that this stimulation correlated with enhanced EC apoptosis. On the other hand, tacrolimus only slightly induced oxidative stress but led to profound increases in endothelin-1 (ET-1) production. Methylprednisolone caused the least EC dysfunction [133]. Interestingly, another study showed that endothelial wound repair was significantly impaired by methylprednisolone but not by cyclosporine A or azathioprine [134]. Consistent with the in vitro findings, kidney transplant patients treated with cyclosporine A had impaired NO production under both basal and stimulated conditions compared with patients treated with azathioprine and with healthy controls [135]. Tacrolimus also causes glomerular injury through induction of EC dysfunction, directly upregulating nicotinamide adenine dinucleotide phosphate (NADPH) oxidase activity and promoting ROS production [136]. Additionally, cyclosporine A led to microvascular endothelial dysfunction in heart transplant patients [137].
Sirolimus (rapamycin) also causes coronary vascular dysfunction in cardiac allografts by upregulating mitochondrial superoxide release and by enhancing NADPH oxidase-driven superoxide production [121]. These preclinical and clinical studies collectively demonstrate that commonly used immunosuppressive drugs induce EC dysfunction, with excessively produced ROS as a prominent downstream effector.

Microvascular thrombosis

The endothelium is the master regulator of microvascular thrombosis. EC expression of a number of factors is known to be prothrombotic; these factors include procoagulants, such as vWF, tissue factor (TF), thrombin receptor, and PAI-1; adhesion molecules, such as ICAM-1, VCAM-1, E-selectin, and P-selectin; vasoconstrictors, such as ET-1 and platelet activating factor (PAF); and proapoptotic molecules, such as Bax, Bad, and CPP32 [138]. Therefore, both the alloimmune response and nonimmune factor-induced EC activation or death predispose the transplant microvasculature to thrombosis [42,43]. In addition, immunosuppressive drugs such as cyclosporine A, tacrolimus, rapamycin, and antithymocyte globulin have all been shown to enhance thrombus formation [139]. In a clinical study, fibrin was found in the microcirculation of about 50 % of human cardiac transplants 1 month following transplantation, and fibrin deposition was associated with the development of coronary artery disease and graft failure [140]. Moreover, prothrombogenic characteristics of the microvasculature observed in the early posttransplant period in heart transplant patients persisted in long-term follow-up [140,141]. Correspondingly, a rat model of heart transplantation showed that a hypercoagulable microvasculature is associated with the development of coronary artery disease [142]. High-dose treatment with antithrombin III has been demonstrated to induce long-term survival of mouse cardiac allografts [143].
Similarly, platelet inhibition attenuated the development of fibrosis in airway allografts [144]. Thus, in addition to EC apoptosis induced by alloimmunity, microvascular thrombosis can also contribute to compromised transplant perfusion leading to chronic rejection.

EC resistance to injury

ECs can acquire resistance to injury by upregulating a number of cytoprotective molecules. As stated above, cell-mediated EC injury depends primarily on the GrB/perforin pathway and, to a lesser degree, the FAS/FASL pathway. Studies from cancer biology have demonstrated that induced overexpression of proteinase inhibitor 9 (PI9), a potent endogenous inhibitor of GrB, protected cancer cells from T cell- and NK cell-mediated apoptosis [145,146]. It has also been shown that high PI9 expression in ECs protected these cells against cytolytic cell-mediated killing [147]. PI9 expression has been shown to be inducible in ECs by an NF-κB activator, the phorbol ester PMA [148]. These studies suggest that EC expression of PI9 may confer resistance to cytotoxic cell-induced apoptosis. ECs may also become resistant to antibody-mediated cell injury, a phenomenon known as accommodation [101] (Fig. 1). Expression of anti-apoptotic genes such as Bcl-2, A20, Bcl-XL, and HO-1 has been shown to be increased in ECs of accommodated xenografts [149,150]. Bcl-2, Bcl-XL, and HO-1 expression are also significantly increased in accommodated mouse cardiac transplants, and silencing of Bcl-2 abolished the accommodation [151]. Increased expression of Bcl-XL was found in ECs of accommodated human renal transplants with circulating anti-donor antibody [152]. This study also showed that Bcl-XL expression in human ECs can be induced by exposure to low concentrations of anti-HLA antibody.
Further studies demonstrated that subsaturating concentrations of anti-HLA class I antibody not only induced high expression levels of Bcl-2, Bcl-X L , and HO-1 but also activated the PI3K/Akt pathway, which facilitated phosphorylation and consequent inactivation of the proapoptotic molecule, Bad [153]. Complement regulation may also be involved in graft accommodation via human complement regulatory factors including CR1, decay accelerating factor (DAF, CD55), membrane cofactor protein (MCP, CD46), and CD59. Mice express complement receptor-related protein (CRRY) but not MCP. CD59 inhibits the MAC and the other factors inhibit the activation of both the classical and alternative pathways at the level of C3 convertase and C5 convertase [101]. A number of studies suggest that upregulation of complement regulatory factors plays a protective role in transplanted organs. EC expression of CD55 and CD59 has been shown to be associated with improved graft function in patients with complement deposition [154,155]. Expression of both CD46 and CD55 is low in human lung transplants with chronic rejection [156]. Donor EC expression of CD46 in pig-to-baboon xenotransplantation is required to limit hyperacute rejection [157]. In vitro, CD55 expression can be induced by proangiogenic factors such as VEGF and FGF-2 [158]. Interestingly, VEGF-induced CD55 expression can be inhibited by cyclosporine A [159]. These studies suggest that proangiogenic factors may promote vascular repair by protecting ECs from complement-mediated injury and that immunosuppressive drugs may also cause EC injury by negatively regulating the complement regulatory factors. IFN-γ, TNF-α, and C5b-9 complex all induce EC expression of CD55, and IFN-γ with TNF-α stimulation reduces complement C3 deposition [160], suggesting a possible physiological feedback mechanism for maintaining the integrity of the microvasculature in the proinflammatory milieu of organ transplants. 
Nonimmune shear stress was also shown to induce CD59 expression in ECs [161] and is another mechanism by which a complement regulatory factor counteracts vaso-injurious stimuli.

Microvascular repair

Using a functional mouse orthotopic tracheal transplant model, our group described the microvascular phenotypic change in airway transplants undergoing unmitigated alloimmune attack and the physiologic consequences of this microvascular destruction. Of note, the chronic rejection that develops in this model manifests mainly as subepithelial rather than luminal fibrosis and so does not replicate the obliterative bronchiolitis (OB) lesion found in human lung transplants, but is quite similar to the large airway precursor of BOS, lymphocytic bronchitis. The mechanisms associated with airway fibrosis in this model have generally been used to cautiously infer causes of the fibroproliferation developing in OB lesions [10,11]. It is possible, and perhaps likely, that more complex solid organ transplants are not revascularized in the same manner as more architecturally simple tracheas; however, use of this airway model has made it possible to divine simple 'rules' of vascular reorganization following rejection, rescue, and remodeling. Following transplantation, the graft microvasculature in airway transplants displays two general phenotypes, during acute and chronic rejection respectively. In acute rejection, allografts maintain a donor-derived circulation which is undergoing both injury and concomitant repair prior to destruction. This first vascular phenotype is characterized by vessels that are relatively permeable to microspheres, with evidence of repair by donor-derived Tie2+ angiogenic cells. Transplants perfused by vessels of this phenotype can be restored to normal with immunosuppression; these allografts are never ischemic and display pseudostratified columnar epithelium without fibrosis.
The second vascular phenotype, which occurs as a result of chronic rejection, consists of a regrown chimeric microvasculature, largely of recipient origin, following destruction of the donor circulatory system. It is likely that, in organs with larger mass than airway allografts, the degree of chimerism is substantially less than observed in the tracheal model. In the latter model, this vascular phenotype is characterized by new vessels that are structurally and functionally abnormal and perfuse airways now lined by flattened, cuboidal, and nonciliated epithelial cells overlying subepithelial fibrosis [11]. We think these are prototypes of GMVD. In other words, GMVD includes distinct microvascular pathologies that may appear in different rejection phases. Once the airway transplant loses its functional microvasculature, it cannot be rescued by immunosuppressive therapies and progression to chronic rejection is unrelenting [10]. Principles that emerged from this work were that, just as microvessel loss following acute rejection predicted a lack of response to immunotherapy, so preventing microvessel loss could prevent chronic rejection. The repair of donor vessels through the augmentation of endogenous cellular repair processes in both the donor and recipient may be key for maintaining a normal transplant. It is now generally accepted that the ECs which contribute to this repair process are derived both from the local vascular bed as well as from the systemic circulation [28,162]. Because of its importance in regulating the control of angiogenesis in hypoxic tissue, we investigated the role of hypoxia-inducible factor-1alpha (HIF-1α) in transplant vascular repair. We showed that HIF-1α deficiency in airway transplant donors accelerated microvascular loss, consistent with HIF-1α being an important signaling molecule in microvessel repair.
We found that recipient-derived Tie2-expressing cells (i.e., cells with EC, monocyte, and pericyte lineages) are present in the donor during acute rejection and that the recruitment and retention of these proangiogenic cells are regulated by donor-expressed HIF-1α and its downstream gene, SDF-1. Overexpression of HIF-1α in the donor promoted enhanced migration of recipient-derived proangiogenic cells and prolonged tissue perfusion, which in turn attenuated the development of tissue fibrosis [11]. We further demonstrated that knockdown of the VHL gene, a negative regulator of HIF, in Tie2 lineage cells of the recipient promoted microvascular repair in the transplant [163]. This confirms that recipient-derived proangiogenic cells contribute to the repair of the donor microvasculature and provides evidence that overexpression of HIF in proangiogenic cells enhances their reparative capacity. Together, these studies suggest that overexpression of HIF-1α in both the donor and recipient promotes allograft microvascular repair and that this enhanced repair may result from an increased expression of proangiogenic factors such as placental growth factor (PLGF), SDF-1, and, to a lesser degree, VEGF [11,124,163]. Interestingly, while EC VEGF autocrine signaling has been shown to be required for vascular homeostasis [164], excessive VEGF acting on ECs in a paracrine fashion often results in immature vasculature [165]. It is therefore possible that locally overexpressed HIF-1α (especially in EC lineage cells) may promote transplant vascular homeostasis in part by inducing EC expression of VEGF, which in turn promotes EC survival. Such excessive VEGF signaling may occur secondary to 'leukocyte-induced angiogenesis,' first described in the 1970s [166,167]. As reviewed by Contreras and Briscoe [168], inflammation itself promotes a form of angiogenesis that is ultimately deleterious to the transplant.
Early physiologic homeostatic repair of the graft microvasculature in the absence of inflammation appears to be an important factor in limiting tissue fibrosis and chronic rejection. By contrast, if VEGF is delivered to the tissue via exogenous production or by VEGF-producing leukocytes, its effects may be nonphysiological and cause abnormal neoangiogenesis and disease. In the case of allograft rejection, delivery of VEGF in this manner results in a maladaptive type of angiogenesis that causes local hypoxia reminiscent of tumor neovascularization (reviewed in [169]). While HIF-1α signaling can promote microvessel integrity, other proinflammatory pathways can foster repair, which, as alluded to above, may yield vessels less functional than those repaired in the absence of inflammation. The C5b-9 complex has also been shown to induce EC proliferation and migration in an Akt-dependent manner [170], suggesting a potential feedback mechanism for enhancing microvascular repair following alloimmune-induced inflammation. Other proinflammatory mediators produced by leukocytes may also promote EC activation, proliferation, and angiogenesis [169]. However, these newly produced vessels are abnormal and are not optimized for the delivery of oxygen and nutrients. Therefore, the ideal therapeutic strategy to promote microvascular repair should not only mitigate inflammation but also promote more physiological angiogenesis (such as the vascular repair promoted by HIF-1α).

Microvascular remodeling and fibrosis

Fibrosis is characterized by the excessive production of extracellular matrix constituents and is often a result of chronic inflammation caused by inadequate tissue repair [171,172]. Pathological angiogenesis, also called vascular remodeling, is associated with all fibroproliferative disorders [173].
In a heterotopic mouse trachea transplantation model, CXCR2 ligand/CXCR2 signaling was associated with pathological angiogenesis, and disruption of this signaling pathway attenuated late abnormal vascular remodeling [174]. Other proinflammatory mediators such as IL-1α, IL-1β, and TNF-α also promote vascular remodeling [175], suggesting that pathological angiogenesis is likely promoted by the proinflammatory microenvironment of the transplanted organs. There is an increasing appreciation that the microvasculature plays an important role in the development of fibrosis, and recent studies are beginning to elucidate the mechanisms by which microvascular remodeling promotes tissue fibroproliferation [176] (Fig. 2). Hypoxia has consistently been shown to be involved in the development of lung, cardiac, liver, and kidney fibrosis [177-180]. In the mouse orthotopic tracheal transplant model, we found that microvascular remodeling starts after the loss of airway vessels. The remodeled vessels are tortuous, smaller in caliber, and leaky, have sluggish blood flow, and have lower pO2 in the surrounding tissue, suggesting that these vessels are both structurally and functionally abnormal. Promotion of vascular repair of the airway allograft by overexpressing HIF-1α early after transplantation diminished late tissue remodeling, augmented tissue pO2, and was associated with a lesser degree of fibroproliferation [11,163]. Conversely, insufficient vascular repair followed by remodeling causes prolonged tissue hypoxia, which may subsequently act as a promoter of tissue fibrosis. These findings suggest that tissue hypoxia due to lack of perfusion may be a leading cause of fibrotic remodeling. Recent work has also provided ample evidence that both ECs and pericytes may differentiate into myofibroblasts and contribute to the production of extracellular matrix proteins [181,182]. Therefore, microvascular remodeling may promote tissue fibroproliferation by multiple discrete mechanisms.

[Fig. 2 legend: Antibody, complement, oxidative stress, and immunosuppressive drugs also induce vascular injury. Damaged microvasculature can be repaired and reversed to normal through local production of angiogenic factors, proliferation of resident vascular progenitor cells, as well as recruitment of recipient-derived proangiogenic cells. Insufficient microvascular repair leads to its remodeling. Both injured and remodeled microvasculature are functionally abnormal and result in tissue hypoxia followed by tissue fibroproliferation. In addition, vascular remodeling enhances both the endothelial cell-to-mesenchymal and pericyte-to-mesenchymal transitions, both of which promote fibrosis. Abbreviations: EC, endothelial cell; PC, pericyte; CTL, cytotoxic T lymphocyte; NK, natural killer.]

Concluding remarks

Research over the last few decades has established that ECs are a primary target for alloimmune responses. There is also an increasing recognition that a functional microvasculature is an important determinant of the long-term health of transplanted solid organs. Given that extensive microvascular injury with insufficient repair leads to pathogenic angiogenesis and subsequent fibrosis, preservation of a healthy microvasculature by inhibiting pathways that lead to microvessel injury, increasing EC resistance to injury, or promoting vascular repair during acute rejection may represent an effective and novel therapeutic strategy for attenuating or even preventing chronic rejection. Inhibition of complement activation, oxidative stress, and thrombosis pathways may represent potential therapeutic targets for promoting microvascular health. Also, careful selection of immunosuppressive drugs is required and will be helpful in preventing unwanted EC injury.
Another strategy for maintaining a healthy microvasculature is to induce EC-specific overexpression of cytoprotective molecules such as Bcl-2, Bcl-XL, HO-1, PI9, and complement regulatory proteins such as CD55, CD46, and CD59, all of which have been shown to promote resistance to cell- and/or antibody-mediated injury. Additionally, promotion of physiological microvascular repair, such as by enhancing HIF-1α expression (especially in cells of EC lineage) during acute rejection, may also be effective in preventing the development of chronic rejection; the effectiveness of this approach will likely be enhanced by limiting leukocyte-driven angiogenesis (i.e., giving increased immunosuppression). Lastly, once pathological angiogenesis and the accompanying fibroproliferation have started, blockade of this nonproductive vascular remodeling may also be of therapeutic efficacy. Toward this end, a better understanding of angiogenesis gained from developmental models may help to discover other effective targets for intervention. GMVD may display distinct forms during the acute and chronic rejection phases. During acute rejection, GMVD can be reversed to normal by appropriate immunosuppression, with potential benefit from adjuvant therapies which promote physiological vascular repair. During chronic rejection, an emerging therapeutic goal appears to be attenuating pathological microvascular remodeling. Of note, both forms of GMVD may coexist in a transplant when different parts of the organ are in different rejection phases. Identification of the forms of GMVD within a transplant is therefore essential for optimizing new effective therapeutic interventions.
Reduction of Motion Artifacts and Improvement of R Peak Detecting Accuracy Using Adjacent Non-Intrusive ECG Sensors

Non-intrusive electrocardiogram (ECG) monitoring has many advantages: it is easy to measure and to apply in daily life. However, motion noise in the measured signal is the major problem of non-intrusive measurement. This paper proposes a method to reduce the noise and to detect the R peaks of the ECG in a stable manner in a sitting arrangement using non-intrusive sensors. The method utilizes two capacitive ECG sensors (cECGs) to measure the ECG, and another two cECGs located adjacent to the ECG sensors are added to obtain information on motion. Then, an active noise cancellation technique and the motion information are used to reduce motion noise. To verify the proposed method, the ECG was measured indoors and during driving, and the accuracy of the detected R peaks was compared. After applying the method, the sum of sensitivity and positive predictivity increased by 8.39% on average and by 26.26% maximally in the data. Based on the results, it was confirmed that the motion noise was reduced and that more reliable R peak positions could be obtained by the proposed method. The robustness of the new ECG measurement method will bring benefits to various health care systems that require noninvasive heart rate or heart rate variability measurements.

Introduction

The electrocardiogram (ECG) is a set of voltage signals that is generated by heart activity [1]. The R wave of the ECG can be used to calculate the heart rate (HR) and its variability (HRV), which contain important information on the body's physiological condition. These data can be used to diagnose diseases like multiple sclerosis, stroke, ischemic heart disease, and myocardial infarction, or can also be used to provide information on autonomic nervous function [2,3]. Therefore, measuring HR or HRV in daily life can be beneficial to humans, because it can serve as a regular check of cardiac electrical activity.
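As a concrete illustration of how HR and HRV are derived from detected R waves, the sketch below computes mean heart rate and the SDNN statistic from a list of R-peak times. The function name and the choice of SDNN as the HRV measure are illustrative; the paper itself does not fix a particular HRV statistic.

```python
import numpy as np

def hr_and_sdnn(r_peak_times_s):
    """Mean heart rate (bpm) and SDNN (ms) from R-peak times in seconds.

    SDNN (standard deviation of RR intervals) is one common HRV measure;
    this choice is an assumption made for illustration.
    """
    rr = np.diff(np.asarray(r_peak_times_s, dtype=float))  # RR intervals (s)
    hr_bpm = 60.0 / rr.mean()          # mean heart rate in beats per minute
    sdnn_ms = 1000.0 * rr.std(ddof=1)  # RR variability in milliseconds
    return hr_bpm, sdnn_ms
```

For example, R peaks exactly 1 s apart give a heart rate of 60 bpm with zero SDNN.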
In addition, if an ECG system can measure the HR or HRV of a human in a vehicle during a driving route, it can detect abrupt cardiac abnormalities or physical changes in the driver, such as fatigue, drowsiness, and stress [4,5]. This can make driving safer and provide comfort to drivers by making it possible to take proper action in advance or by providing the necessary information when an abnormal condition is detected. However, the conventional ECG measurement method is not practical for use in daily life. It requires Ag-AgCl electrodes that must be attached directly to the body of the tested subjects. This method is rather uncomfortable and disturbs daily activities or driving. Experiments confirmed that the R peaks can be detected in a more stable manner using the proposed method. Furthermore, because its computational complexity is not high, near-real-time monitoring of HR or HRV from the detected R peaks is possible using this method.

Measurement System

In conventional works, two or more cECGs were used as electrodes, and they were attached to a chair or a driver's seat to measure the ECG during daily life activities or driving [25,26]. Furthermore, conductive fabric or an additional cECG was placed on the bottom of the seat for the reduction of the common noise component of the measured signals using a driven-right-leg circuit (DRL) [27]. Our measurement system is designed based on these prior designs. It has two cECGs attached to a seat without any covers, and conductive fabric is placed on the bottom of the seat for the DRL. However, in our system, an additional cECG is added near each cECG used to sense the ECG, as shown in Figure 1. This is to measure the motion information around the cECG used for ECG measurement and to obtain a reference signal for ANC. In the figure, R and L are the right and left sensors for ECG measurement, and aR and aL are additional right and left sensors to get the motion information.
The signals measured by the four cECGs are filtered using a low-pass filter with a 40 Hz cutoff frequency to remove high-frequency noise. They are then sampled to digital signals at 360 Hz through an NI 9205 from National Instruments, using 16 bits for analog-digital conversion. Let the signal measured by cECG i be sig_i. The measured ECG (ECG_m) is obtained using the limb lead 1 method as

ECG_m = BPF(sig_L - sig_R),

where BPF means the use of a BPF with a 0.05-35 Hz cutoff frequency [28]. In addition to these two signals, r_L and r_R are calculated as the differences between the adjacent sensor pairs,

r_L = sig_aL - sig_L, r_R = sig_aR - sig_R.

They are used to construct the reference signal for ANC; a detailed description will be presented in Section 2.3, which discusses the ANC using the reference signal.

Data Acquisition

Both indoor and outdoor experiments were conducted for data acquisition. In the indoor experiment, our measurement system was installed in a chair where subjects sat to measure the ECG. The subjects were one 29-year-old, two 25-year-old, and one 28-year-old healthy males with no heart-related medical history. All subjects wore short-sleeved t-shirts made of 100% cotton during the experiments. Five-minute ECG recordings were acquired, three times per subject. The subjects randomly moved their bodies to make a significant amount of artificial motion noise during the experiment. The reason for this motion noise is to verify the performance of our system in a severe environment. In addition to our measurement system, conventional contact electrodes were also used to obtain the true R peak positions. For the outdoor experiment, our system was integrated into the driving seat of a vehicle, and the ECG was measured for six people, for a period of 15 min each, during driving. The subjects were healthy males aged 27-32. Among them, subject 1, a male aged 29, and subject 6, a male aged 28, also participated in the indoor experiment as subjects 1 and 4. The subjects wore the same clothes used in the indoor experiment.
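The signal construction described above can be sketched as follows. The exact sign conventions of the differences are assumptions (limb lead I is taken here as left minus right), and the `bpf` argument is a stand-in for the paper's 0.05-35 Hz band-pass filter (identity by default).

```python
import numpy as np

def build_signals(sig_L, sig_R, sig_aL, sig_aR, bpf=lambda x: x):
    """Sketch of the measurement-signal construction.

    `bpf` stands in for the 0.05-35 Hz band-pass filter; the sign
    conventions of the sensor differences are assumptions.
    """
    ecg_m = bpf(sig_L - sig_R)   # limb lead I style: left minus right sensor
    r_L = sig_aL - sig_L         # left motion reference (adjacent pair)
    r_R = sig_aR - sig_R         # right motion reference (adjacent pair)
    return ecg_m, r_L, r_R
```

In a static posture r_L and r_R stay near zero, so any sustained deviation in them marks a motion episode usable as an ANC reference.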
The driving course was a road network that surrounds Pohang University of Science and Technology. Its total length is approximately 2 km, and it includes corners, an uphill grade of 311 m, downhill roads of 120 m, and eight speed bumps, in addition to straight segments. The subjects drove the course anticlockwise and repeatedly during the experiment, at speeds lower than 50 km/h. The ECG was also recorded independently using conventional contact electrodes. This was to get the positions of the true R peaks, and the detected peaks were inspected manually to correct any errors. The true R peaks were used to evaluate the accuracy of the tested R peaks. All R peaks were detected by the Pan and Tompkins (PT) algorithm because it is simple and its performance has been well studied in many publications [29-31]. The PT algorithm uses a BPF of 5-15 Hz to remove unnecessary signal parts. Then, it applies derivative, squaring, and moving-average operations to the filtered signal. The R peaks are detected by a threshold-based method applied to the filtered and moving-averaged signals.

ANC and Proposed Reference Signal

Figure 2 is the basic structure of ANC [32-34]. In this structure, ANC separates the noise signal n from d, the sum of the original signal s and noise n, using the adaptive filter and a reference signal n'. For this operation, it is assumed that n' is correlated with n but uncorrelated with s. The difference between d and the output of the adaptive filter y can be expressed as the error signal e according to

e = d - y = s + n - y.

If s and n are uncorrelated and n' satisfies the former assumption, the mean square of e is as follows:

E[e^2] = E[s^2] + E[(n - y)^2].

The adaptive filter updates its filter weights to minimize the value of E[e^2]; since E[s^2] does not depend on the weights, this amounts to minimizing E[(n - y)^2]. The filter output y approximates the noise n, and the estimated s can be obtained from the value of e. In our case, s, n, and d represent the original ECG, motion noise, and ECG_m, respectively.
For this ANC scheme, it is important to design a proper reference signal. It must have a low correlation with the original ECG and a high correlation with the motion noise. To obtain a reference signal that has a high correlation with motion noise, r_L and r_R, which are constructed from the differences of adjacent cECG signals, are used. This is because it is assumed that there is little difference between the cECG signals when a subject is in a static condition, and that the difference increases when the subject is in motion. The verification of its suitability as the reference signal for ANC will be analyzed in Section 4.2. To use r_L and r_R in our method, the two signals are combined as a reference signal n'. When the length of the weight vector or tap length for the adaptive filter is 2 · L, L elements each of r_L and r_R are used for n', as shown in Figure 3. The tap length must be large enough to model the relation between n and n', but too large a tap length increases computational complexity. In the adaptive filter, the APA is used as the weight-updating algorithm because it converges faster than normalized least-mean-squares and it is less complex than the recursive-least-squares algorithm. Its weight vector update is as follows [23]:

w_{k+1} = w_k + η U_k^T (U_k U_k^T + εI)^{-1} e_k, with U_k = [n'_k n'_{k-1} ... n'_{k-P+1}]^T,

where e_k is the vector of a priori errors of the last P samples, η is the step size, ε is the regularization parameter, and I is a P × P unit matrix. η determines the convergence speed of the adaptive filter. A large η increases the convergence speed, but it also increases misalignment. ε is set to a small constant; εI is then a diagonal matrix having ε as its diagonal entries, and it is used to prevent the case in which the inverse of U_k U_k^T does not exist in the update equation. P is the projection order of the APA, and it is the number of input vectors used to update the weight vector. P determines the convergence speed and complexity of the algorithm.
A larger P can increase the convergence speed, but at the same time the complexity of the algorithm is increased because the dimension of U_k U_k^T grows. In our method, P is set to 2. w_k is the weight vector of the adaptive filter at the k-th sample, and it is updated recursively. Then, the k-th sample of y is calculated from w_k and the reference signal n'_k as

y_k = w_k^T n'_k.

For data having M samples, the output of ANC (ECG_ANC) is obtained as the error signal e,

ECG_ANC(k) = ECG_m(k) - y_k, k = 1, ..., M.

The effect of ANC can be seen in Figure 4. The figure represents a part of the data from subject 5 of our outdoor experiment. In the figure, motion noise occurred from 2.7 to 4 s of ECG_m, and an incorrect R peak was detected at approximately 2.7 s owing to the noise. If r_L and r_R include the motion information, the noise can be removed. In this example, r_R was highly correlated with the motion noise. Accordingly, the noise could be reduced, and all R peaks could be detected well after ANC, as seen in the second graph in the figure.

Post-Processing

Even though the proposed reference signal is effective in most cases of motion noise, exceptional situations can happen under real circumstances. These phenomena mainly occur when the additional sensors pick up a noise component that does not appear in ECG_m. That is, if a measurement artifact or subtle motion noise occurs only at the additional sensors for the reference signal, the reference signal will contain a noise component that is uncorrelated with the noise components in ECG_m. This can cause an error when ANC is conducted, because the assumption of high correlation between the reference signal and the noise component in ECG_m is not satisfied. Figure 5 shows an example. In the figure, a noise that is not included in ECG_m was measured by r_L at 0.5 s. Owing to this noise component, a peak was wrongly detected after ANC, and the result became more erroneous than that of ECG_m.
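The APA-based ANC described above can be sketched as follows. The regressor construction (most recent reference samples first), the tap length, and the step-size and regularization values here are illustrative choices, not the paper's settings, and the reference is a single signal rather than the paper's interleaved r_L/r_R pair.

```python
import numpy as np

def anc_apa(d, ref, taps=8, P=2, eta=0.05, eps=1e-4):
    """ANC with an affine-projection (APA) weight update, as a sketch.

    d   : measured signal (clean signal + noise), one sample per step
    ref : reference signal correlated with the noise only
    Returns e = d - y, the cleaned signal.
    """
    w = np.zeros(taps)
    U = np.zeros((P, taps))     # last P regressor rows of the reference
    dP = np.zeros(P)            # last P desired samples
    e_out = np.zeros(len(d))
    for k in range(len(d)):
        u = np.zeros(taps)
        m = min(k + 1, taps)
        u[:m] = ref[k::-1][:m]              # newest reference samples first
        U = np.vstack([u, U[:-1]])          # shift in the newest regressor
        dP = np.concatenate(([d[k]], dP[:-1]))
        e_prior = dP - U @ w                # a priori errors for the P rows
        # APA update: w <- w + eta * U^T (U U^T + eps*I)^-1 e_prior
        w = w + eta * (U.T @ np.linalg.solve(U @ U.T + eps * np.eye(P), e_prior))
        e_out[k] = d[k] - u @ w             # cleaned output sample
    return e_out
```

With a reference that drives the noise through a short unknown filter, the residual after convergence is much closer to the clean signal than the raw measurement is.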
Therefore, for a robust and practical algorithm, this phenomenon must be prevented, and a post-processing step is needed to correct the erroneous result. In our method, ECG_ANC and ECG_m are evaluated by two rules that are related to the change of peak interval and signal power. When ECG_ANC is considered abnormal by the rules, ECG_ANC is not used and is replaced by ECG_m. This process can be considered a selection process that decides whether ECG_ANC is used or not. To test the two signals, the first rule checks the normality of the detected R peaks. For this, the R peaks of each signal are detected by the PT algorithm mentioned in Section 2.2. Then, the validity of the detected R peaks is tested by a standard that checks whether the range of acceleration or deceleration of the instantaneous HR is normal. The standard uses an inequality introduced in [35] that bounds the beat-to-beat change of the instantaneous HR, where t_i, t_{i+1}, t_{i+2} are the temporal positions of three successive R peaks. This is based on the band-limited characteristics of the variation of R peaks, and its effectiveness was verified on an open ECG database [35]. Using this standard, if the number of peak triplets that satisfy the standard in ECG_ANC is higher than that of ECG_m, then ECG_ANC is used as the output of our algorithm. This is because ECG_ANC can be considered to contain more valid information on R peaks than ECG_m. As an example, only two groups of three R peaks satisfied the standard in ECG_m, as shown in Figure 6. On the other hand, the number of groups of three R peaks that satisfied the standard was three for ECG_ANC, owing to the reduced noise. In this case, ECG_ANC is selected as the output of the algorithm by the rule. For cases that do not satisfy the first rule, the second rule is applied. This rule compares the change of signal power in the 5-15 Hz band after ANC. The range of 5-15 Hz is the frequency band of an R peak [29]. Therefore, if ANC operates effectively and motion noise is reduced, the power in this frequency band will decrease.
This is expected because ANC assumes that the measured signal contains additive noise on top of an original signal, and ANC operates to cancel out that noise. Conversely, increased power in this band can be regarded as abnormal operation of ANC. As seen in Figure 5, if the reference signal contains noise that is uncorrelated with ECG_m, ANC can introduce additional noise that causes incorrectly detected R peaks or degrades the R peak detection performance. For this reason, ECG_ANC is discarded and ECG_m is used as the output of the proposed algorithm when P_ANC > P_m, where P_ANC is the 5-15 Hz signal power of ECG_ANC and P_m is that of ECG_m. In the remaining cases, ECG_ANC is used as the output of the algorithm.

Performance Index

To quantify the accuracy of the detected R peaks, two performance indices were used: (a) the sensitivity (Se) and (b) the positive predictivity (P+). They are calculated as Se = TP/(TP + FN) and P+ = TP/(TP + FP), where TP is the number of correctly detected R peaks, FN is the number of undetected R peaks within search windows centered on the true R peaks, and FP is the number of detected peaks falling outside any search window. Conventionally, a 150 ms search window is used, which corresponds to 54 sample points in our system with its 360 Hz sampling rate [36]. However, we used a 15-sample search window to identify correctly detected R peaks, because ECG measured in the non-intrusive manner is much noisier than ECG from contact electrodes. Figure 7 shows the necessity of the reduced search window: the third peak in the figure was incorrectly detected because of noise, yet its distance from the true R peak position was only 42 samples. Under the 150 ms search window this peak would be counted as correctly detected; to avoid such false classification, the reduced search window was used.
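Returning to the second rule, its 5-15 Hz power comparison can be sketched as below; the FFT-based band-power estimate is one reasonable choice, not necessarily the estimator the authors used.

```python
import numpy as np

def band_power(x, fs=360, lo=5.0, hi=15.0):
    """Power of x in the lo-hi Hz band, estimated from the FFT."""
    x = np.asarray(x, dtype=float)
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(spec[mask].sum() / len(x))

def rule2_keep_anc(ecg_anc, ecg_m, fs=360):
    """Rule 2: discard ECG_ANC when its 5-15 Hz power exceeds that of ECG_m
    (P_ANC > P_m), since effective noise cancellation should lower it."""
    return band_power(ecg_anc, fs) <= band_power(ecg_m, fs)
```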
Comparison of Results

To verify the effect of the proposed method, we applied it to the experimental data. In this process, each 4.5 s block of recorded data was processed sequentially. We used a 4.5 s length because at least three R peaks are needed for our post-processing. A longer block could be used, but we did not enlarge the length in view of near-real-time operation; moreover, a short block length is advantageous for the post-processing, because the signal can be evaluated at a finer granularity. Within each processed block, 1.5 s of data overlapped with the previous block; this overlap was needed to stably detect an R peak occurring at the boundary between two processing windows. The step size and tap length of the adaptive filter were set experimentally to 0.01 and 360.

A comparison of the results for the indoor and outdoor experiments is listed in Tables 1 and 2. In the tables, Se and P+ are presented for ECG_m, ECG_ANC, and the signal after post-processing. To show the difficulty of applying a conventional ECG denoising method to the signal from a non-intrusive sensor, the performance indices of a wavelet-thresholding-based method are included [37]. In addition, results obtained with the EKF are presented for comparison. In the table for the indoor experiment, data 1-1, 1-2, and 1-3 are the data from subject 1; likewise, data 2-1 to 2-3, 3-1 to 3-3, and 4-1 to 4-3 were each measured from the same subject. d_acc is the difference of Se + P+ between the tested signal and ECG_m: letting Se and P+ of ECG_m be Se_m and P+_m, and those of the tested signal be Se_t and P+_t, we have d_acc = (Se_t + P+_t) − (Se_m + P+_m). This quantity compares the increase in R peak detection accuracy achieved by the applied methods, considering both Se and P+. In Table 1, the effect of ANC can be identified: for most data, Se and P+ increased after ANC.
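The evaluation quantities used in the tables can be computed as in this sketch; the greedy one-to-one matching of detections to true peaks is an assumption about bookkeeping details the text does not spell out.

```python
def se_ppv(true_peaks, det_peaks, window=15):
    """Sensitivity Se = TP/(TP+FN) and positive predictivity P+ = TP/(TP+FP),
    matching each detection to at most one true peak within `window` samples."""
    true_peaks, det_peaks = sorted(true_peaks), sorted(det_peaks)
    matched, tp = set(), 0
    for d in det_peaks:
        best = None
        for i, t in enumerate(true_peaks):
            if i not in matched and abs(d - t) <= window:
                if best is None or abs(d - t) < abs(d - true_peaks[best]):
                    best = i
        if best is not None:
            matched.add(best)
            tp += 1
    fn = len(true_peaks) - tp    # true peaks never matched
    fp = len(det_peaks) - tp     # detections outside every search window
    return tp / (tp + fn), tp / (tp + fp)

def d_acc(se_t, ppv_t, se_m, ppv_m):
    """Accuracy gain of the tested signal over ECG_m: (Se_t+P+_t)-(Se_m+P+_m)."""
    return (se_t + ppv_t) - (se_m + ppv_m)
```

With the 15-sample window a detection 42 samples from the true peak (as in Figure 7) counts as a false positive, whereas the conventional 54-sample window would accept it.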
For data 3-3, however, the R peak detection accuracy was lower than that of ECG_m. This was because the reference signal of these data contained noise uncorrelated with ECG_m, as in Figure 5; such noise can arise from electrical or measurement noise at the additional sensors used to construct the reference signal. Post-processing is added for cases like this. After post-processing, the accuracy of all data was higher than that of ECG_m: Se + P+ increased by 8.91% on average and by a maximum of 26.26% for data 2-1. For the outdoor data, the proposed method likewise detected R peaks more accurately, as seen in Table 2, increasing Se + P+ by 7.36% on average. However, for outdoor data 1, 2, and 3, ANC itself was not effective, and the increase in accuracy after post-processing was not significant. This was related to the characteristics of those data and will be discussed in the next section.

Post-processing increased the accuracy for most data. Its effect was most noticeable for indoor data 3-3 and outdoor data 1, 2, and 3, which had lower accuracy than ECG_m after ANC alone; with post-processing, their accuracy rose above that of ECG_m, like the other data. On the other hand, the R peak detection accuracy of ANC decreased for indoor data 2-1, 3-2, and 4-2 after post-processing. This was because the post-processing could not perfectly identify which signal had the better accuracy in a given processing window; choosing the better signal is difficult when the signal contains noisy or irregular components. However, the decrease was insignificant, below 0.5%. Furthermore, for all outdoor data, which have longer data lengths than the indoor data, the accuracy increased after post-processing. This result shows an advantage of the post-processing in general cases.
In the results, the wavelet-based method had a positive effect on indoor data 1-3, 2-2, 3-2, 4-2, and 4-3 and outdoor data 1, 2, and 6, but its accuracy was mostly lower than that of the original signal. This was because of the large amplitude of the motion noise: the wavelet-based method decomposes the signal into components in certain frequency bands and applies a threshold to remove noise, and it was therefore ineffective on the non-intrusive ECG, which contains motion noise of large amplitude. The EKF enhanced the R peak detection accuracy of all data except indoor data 1-1 and outdoor data 3, and for indoor data 3-3 and 4-3 its accuracy was higher than that of ANC with post-processing. However, the improvement was not significant for any of the data. This limitation comes from the model-based approach of the EKF: it uses an ECG model to estimate the original ECG, but the model can deviate substantially from the measured signal. This deviation can be caused by the low signal-to-noise ratio of the non-intrusive ECG itself and by motion noise of significant amplitude. The maximum increase in Se + P+ achieved by the EKF was 2.59%, for indoor data 1-3. In contrast, the effect of ANC was remarkable: its accuracy was higher than that of the other methods for most of the data, and the margin of improvement was significant. Furthermore, combined with post-processing, it improved the accuracy of all data; the average rate of increase was 8.39% over all experimental data. Figure 8 shows one example illustrating the advantage of ANC: the wavelet-based method and the EKF were not effective at reducing the motion noise, whereas ANC reduced it prominently, and all R peaks were correctly detected. This effectiveness of ANC results from the proposed reference signal, which contains information on the motion noise that occurred. In addition, the complexity of our method is not high.
In our experiments, the measured ECG was processed in MATLAB on a personal computer with an Intel Core i7-3930K CPU and 16 GB of RAM. The processing time for each 4.5 s block was 0.21 s on average, and the total processing time for all indoor and outdoor data was 662.98 s. Of each 4.5 s block, 1.5 s is overlapped data and 3 s is newly processed data; 0.21 s is very short compared with the 3 s of new data, so our algorithm could be implemented in a near-real-time system.

Analysis on Outdoor Data

In Table 2, ANC was less effective for outdoor data 1, 2, and 3 than for the indoor data. To investigate the reason, the outdoor data are analyzed in Table 3, where Se + P+ denotes the sum of Se and P+ for each ECG_m of the outdoor data. Se + P+ was high for outdoor data 1, 2, and 3 because these data contained little noise. In that case, applying ANC cannot make much difference in R peak detection accuracy, because the signal is already clean; this is one reason the effect of ANC was not remarkable. To characterize the noise in the data, we divided the processing windows of each outdoor data set into two groups. The first group consists of windows with 100% Se and P+; these can be considered signal parts with little noise, and the averaged signal power over these windows (P_clean) was calculated for comparison with the noise power (P_noise) of each data set. Windows in the other group contain R peak detection errors and can be considered noisy signal parts. P_noise was obtained as the averaged signal power over these windows, and the relative amplitude of P_noise with respect to the clean signal parts was calculated as r_P_noise = P_noise / P_clean (19). As seen in Table 3, r_P_noise was extremely high for outdoor data 2 and 3.
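Eq. (19) can be computed directly from per-window powers and the clean/noisy labeling described above; a minimal sketch:

```python
import numpy as np

def relative_noise_power(window_powers, window_is_clean):
    """r_P_noise = P_noise / P_clean (Eq. 19): mean power of noisy windows
    (those containing an R peak detection error) relative to the mean power
    of clean windows (those with 100% Se and P+)."""
    powers = np.asarray(window_powers, dtype=float)
    clean = np.asarray(window_is_clean, dtype=bool)
    p_clean = powers[clean].mean()
    p_noise = powers[~clean].mean()
    return float(p_noise / p_clean)
```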
The severe noise in outdoor data 2 and 3 occurred when the subject's body was completely detached and far from the cECGs, owing to manipulation of the steering wheel in a corner or to the sway of the vehicle over speed bumps. For such excessively severe noise, ANC was not effective, because the additional cECGs could not measure valid motion information related to the noise in ECG_m; moreover, the characteristics of the noise measured by the additional cECGs could differ greatly from the noise in ECG_m. Outdoor data 2 and 3 included a small number of noisy windows, and the noise in those windows was severe, which is why ANC was not effective on these data. When the data were measured, the subjects' movement was restricted by their cautious behavior during the experiment and by the use of contact ECG electrodes. As a result, the data contained only a few signal parts with the kind of noise that ANC can effectively reduce. In real driving situations, where a subject's movements are more varied, the effect of ANC should be more prominent, as in the rest of the outdoor data. In addition, the same problem as with indoor data 3-3 occurred in some outdoor data, as seen in Figure 9. This problem was caused by motion information that is included in the reference signal but uncorrelated with the motion noise in ECG_m; measurement or electrical noise can be a cause of this phenomenon. However, these errors were corrected by post-processing. After ANC with post-processing, the R peak detection accuracy increased and outperformed the other methods on all outdoor data.

Appropriateness of Proposed Reference Signal

This section analyzes the suitability of the proposed reference signal for use in ANC.
First, to confirm the assumption that the reference signal and the R wave of the ECG are uncorrelated, a 3-min ECG (ECG_clean) was measured in a stable condition so that it contained the ECG itself with as little motion noise as possible. ECG_clean was then passed through the 5-15 Hz BPF to extract the R waves, and the correlation coefficients (CC) with r_L and r_R, which construct the reference signal, were calculated. For two signals X and Y with M samples each, their correlation is expressed by the CC, CC = Σ_i (X_i − X̄)(Y_i − Ȳ) / sqrt(Σ_i (X_i − X̄)² Σ_i (Y_i − Ȳ)²), where X̄ and Ȳ are the means of X and Y [38]. If the two signals have a low correlation, the CC will be close to 0; for highly correlated signals, its absolute value will be close to 1. The CCs calculated for the 5-15 Hz components of ECG_clean are shown in Figure 10, and the same analysis was conducted for indoor data 2-1 in Figure 11 for comparison. The lower parts of the two figures show the 5-15 Hz signal power of each signal; the power was much higher for indoor data 2-1 because it contained much motion noise. Looking at the CC results, the absolute value of the CC was lower than 0.15 when there was little motion noise and the signal approximated the R wave. Comparing this with the CCs of the noisy signal in Figure 11, it is clear that the correlations between the R wave and r_L, r_R are low. Therefore, the correlation between the proposed reference signal and the R wave is low, because the proposed reference signal consists of r_L and r_R. From these results, the correlation between the motion noise and r_L, r_R can also be inferred, because the absolute value of the CC increased when motion noise occurred. To obtain a more general result, the ECG_m of the indoor data and ECG_clean were filtered by an HPF at 1 Hz to reduce the effect of respiration.
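The CC above is the standard Pearson correlation coefficient; a minimal implementation:

```python
import numpy as np

def cc(x, y):
    """Pearson correlation coefficient CC between two equal-length signals."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))
```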
Let the filtered ECG be ECG_H. The sum of the absolute values of the CCs of ECG_H with r_L and with r_R, CC_{L+R,i} = abs(CC_{L,i}) + abs(CC_{R,i}), was calculated for every processing window, where CC_{L,i} and CC_{R,i} are the CCs between the tested ECG_H and r_L or r_R in the i-th processing window. These values were then averaged over each indoor data set and over ECG_clean; that is, the averaged CC_{L+R} is a_CC_{L+R} = (1/N) Σ_{i=1}^{N} CC_{L+R,i}, where N is the number of processing windows in each data set. The calculated a_CC_{L+R} is listed in Table 4. In all cases, a_CC_{L+R} of the indoor data containing motion noise was higher than that of ECG_clean. This reveals that the reference signal and the motion noise are correlated, since a_CC_{L+R} increased when the ECG contained motion noise. To validate the relation in more detail, all of ECG_clean and the indoor and outdoor data were filtered by the 1 Hz HPF, and the averaged signal power was calculated as a function of CC_{L+R}: CC_{L+R} was divided into intervals of 0.05, and the signal power in each interval was averaged in Figure 12. For this analysis, a processing window with signal power above a threshold was treated as an outlier and excluded; the threshold was set to 100 times P_clean for each data set, where P_clean is the averaged signal power of the processing windows with 100% Se and P+, as defined in the previous section. In Figure 12, the degree of correlation between the reference signal and the ECG mostly increased with increasing signal power, i.e., with the occurrence of motion noise. This tendency can be considered evidence of a high correlation between the reference signal and the motion noise, because the correlation increases as the proportion of motion noise in the ECG increases.

Table 4. Averaged sum of the absolute values of the CCs of ECG_H with r_L and with r_R, for the indoor data and ECG_clean.
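The window-averaged quantity a_CC_{L+R} can be sketched as follows; segmentation into processing windows is assumed to be done beforehand.

```python
import numpy as np

def a_cc_lr(ecg_windows, rl_windows, rr_windows):
    """a_CC_{L+R}: average over processing windows of
    |CC(ECG_H, r_L)| + |CC(ECG_H, r_R)|."""
    def cc(x, y):
        xm, ym = x - x.mean(), y - y.mean()
        return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))
    vals = [abs(cc(e, l)) + abs(cc(e, r))
            for e, l, r in zip(ecg_windows, rl_windows, rr_windows)]
    return float(np.mean(vals))
```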
Conclusions

This study proposed a method to reduce motion noise and accurately detect R peaks using non-intrusive sensors. It uses additional cECGs placed adjacent to the cECGs used for ECG measurement. A reference signal is constructed from these sensor signals and used in ANC, based on an adaptive filter with the APA, to reduce motion noise and enhance the R peaks. Post-processing is added to prevent incorrect results in exceptional situations and to make the method more practical. In the experiments, the system was implemented in a chair and in a driving seat, and the results verified an increase in R peak detection accuracy when the proposed method was used. An analysis of the proposed reference signal was then conducted to show that it satisfies the theoretical assumptions. The proposal of the new reference signal for denoising non-intrusive ECG is our original contribution, and the effect of our method using this reference signal was demonstrated numerically by the experimental results. Owing to these advantages, more accurate and stable HR or HRV values can be obtained from the detected R peaks in non-intrusive measurements. Moreover, the proposed method does not require high computational complexity and can run in near-real-time. It can therefore be used in applications that require real-time HR or HRV information and that employ a device shaped like a chair; one example is a driver-condition monitoring system. To improve our system, adaptive filter techniques such as variable-step-size algorithms or combinations of two adaptive filters could be used to deal with motion noise. Furthermore, the post-processing could be modified to obtain better results without degrading the performance of ANC, as pointed out in the results section.
In addition, only one additional cECG was used on each of the left and right sides to obtain motion information, because this research was conducted to investigate the feasibility of the approach. Research using more sensors could therefore be performed in the future to improve R peak detection accuracy. Moreover, the system was integrated with a chair and a driving seat in our experiments; research on applying the system to a wearable device could also be conducted, which would expand the applicability of our method.
Uncertainty quantification and complex analyticity of the nonlinear Poisson-Boltzmann equation for the interface problem with random domains

The nonlinear Poisson-Boltzmann equation (NPBE) is an elliptic partial differential equation used in applications such as protein interactions and biophysical chemistry (among many others). It describes the nonlinear electrostatic potential of charged bodies submerged in an ionic solution. The kinetic presence of the solvent molecules introduces randomness into the shape of a protein, and thus a more accurate model that incorporates these random perturbations of the domain is analyzed to compute the statistics of quantities of interest of the solution. When the parameterization of the random perturbations is high-dimensional, this calculation is intractable, as it is subject to the curse of dimensionality. However, if the solution of the NPBE varies analytically with respect to the random parameters, the problem becomes amenable to techniques such as sparse grids and deep neural networks. In this paper, we show analyticity of the solution of the NPBE with respect to analytic perturbations of the domain by using the analytic implicit function theorem and the domain mapping method. Previous works have shown analyticity of solutions to linear elliptic equations, but not for nonlinear problems. We further show how to derive a priori bounds on the size of the region of analyticity. This method is applied to the trypsin molecule to demonstrate that the convergence rates of the quantity of interest are consistent with the analyticity result. Furthermore, the approach developed here is sufficiently general to be applied to other nonlinear problems in uncertainty quantification.
Introduction

Nonlinear elliptic partial differential equations (PDEs) are frequently used as models for applications in electrostatics. In particular, a salient problem in the field is the modeling of potential fields generated by molecules in solvents. The nonlinear Poisson-Boltzmann equation (NPBE) serves as an accurate representation of molecule-solvent interactions and is employed in molecular dynamics simulations and chemical applications [36,74]. It has been used in the modeling of electrode-electrolyte interfaces [60,75,76] and in solvers such as the Adaptive Poisson-Boltzmann Solver (APBS) for determining the electrostatic potential in biomolecular processes [6,48]. The NPBE can be written as −∇·(ε(x)∇u(x)) + κ²(x) sinh(u(x)) = f(x) for x ∈ D, where D ⊂ R³ is the domain, ε(x) > 0 is a dimensionless dielectric function, κ(x) ≥ 0 is the modified Debye-Hückel parameter, and f(x) gives the charge of the particles in the region. The desired solution u represents the (dimensionless) potential function. One typically assumes the domain is separated into three parts: the solvent, the molecular region, and the ion-exclusion layer. Under this assumption, ε(x) becomes discontinuous at the interfaces between these regions, and so this PDE is sometimes referred to as an elliptic interface problem. Variational methods can be used to show that a unique weak solution (i.e., a function u ∈ H¹) exists under certain conditions [43].
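In display form, the three-region coefficient structure just described can be written out as follows; the piecewise-constant values shown are illustrative assumptions (in general ε and κ may vary within each region):

```latex
-\nabla \cdot \bigl(\epsilon(x)\,\nabla u(x)\bigr)
  + \kappa^{2}(x)\,\sinh u(x) = f(x), \qquad x \in D \subset \mathbb{R}^{3},
\]
with, e.g.,
\[
\epsilon(x) =
\begin{cases}
  \epsilon_{\mathrm{mol}}, & x \in D_{\mathrm{molecule}},\\
  \epsilon_{\mathrm{sol}}, & \text{otherwise},
\end{cases}
\qquad
\kappa^{2}(x) =
\begin{cases}
  0, & x \notin D_{\mathrm{solvent}},\\
  \bar{\kappa}^{2}, & x \in D_{\mathrm{solvent}},
\end{cases}
```

so that the mobile-ion term sinh(u) acts only in the solvent and ε jumps across the molecular interface.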
When computing molecular dynamics (MD), the presence of thermal fluctuations and solvent interactions (among other factors) can lead to random conformations of the molecules, and more accurate models incorporate this stochasticity. For instance, stochastic initial velocities are used in [2,61] when computing MD. Other approaches to MD that account for stochasticity include Langevin dynamics [22,37,47] and Markov random models [80]. In this paper we assume that the random domain conformations are represented using a finite-dimensional model with N random variables, such as Karhunen-Loève expansions or similar random field representations.

Quadrature methods can compute stochastic measures of the Quantity of Interest (QoI) of the potential field under random configurations of the molecules. However, for each quadrature point we must compute the solution of the NPBE, and as the number of dimensions N increases, the calculation quickly becomes intractable. One strategy for reducing the cost of the calculations is to show that the QoI varies analytically with respect to the stochastic parameters. In this case, one can compute the N-dimensional quadrature using a sparse grid [63], which gives sub-exponential or algebraic decay of the error as a function of the number of interpolation points. Thus the "curse of dimensionality" in N can be ameliorated, and the problem becomes tractable. If the QoI depends analytically on the solution u, then it is sufficient to prove that the solution varies analytically with respect to the stochastic parameters.
Previous studies in uncertainty quantification (UQ) have explored the analyticity of solutions to linear partial differential equations with random domains [18,21]. The authors in [39] explore the use of stochastic collocation and Galerkin methods for the NPBE. There, the NPBE is treated as a semi-linear stochastic boundary value problem, and the existence of a unique solution is proved. This extends the existence and uniqueness result of the deterministic case [43] and follows a similar proof strategy. However, in the absence of analytic regularity results, convergence rates for the stochastic collocation method are not derived. For elliptic interface problems, the regularity of point evaluations of solutions and their approximation by Deep Neural Networks (DNNs) have been studied [68]. Furthermore, in [65] the authors show that, given a holomorphic map such as an analytic extension of the solution of a PDE, there exists a DNN with exponential accuracy with respect to the dimensionality of the DNN.

In the case of the NPBE, our objective is to demonstrate that analytic deformations of the domain result in analytic variation of the solution u. This investigation introduces two challenges not addressed in previous research. (1) Nonlinearity. Previous results have been proved by showing that the solutions satisfy the Cauchy equations and are thus analytic. This becomes difficult with the nonlinearity introduced by sinh(u). A more subtle complication due to the nonlinearity is that outputs of the function might not lie in the desired space: for potential weak solutions u ∈ H¹, it is not guaranteed that sinh(u) ∈ L². (2) Interfaces. In [26] the analyticity properties of the NPBE are studied under the assumption that ε exhibits some degree of regularity. In our case, however, the assumption that ε is Lipschitz continuous on the entire domain is relaxed to account for the interface problem.
The main strategy of this paper is to use the implicit function theorem and the domain mapping method (introduced in [19]) to show that u is analytic with respect to the stochastic parameters. This avoids having to verify the Cauchy equations, and it is a general strategy that can be applied to other nonlinear PDEs. In order to apply the implicit function theorem, we have to specify a function space for our solution. As noted above, we cannot take u ∈ H¹, since this does not imply that sinh(u) is in L². If u were in H², then the fact that this Sobolev space is a Banach algebra (see [1, Thm. 4.39] for a proof) would let us conclude that the nonlinearity is in L², but the discontinuity of ε in the problem means that u is not (weakly) differentiable across the interfaces. However, we can instead define a "piecewise H²" space in which u naturally lies. We can then apply the implicit function theorem to obtain analyticity. From there, estimates of the rate of convergence of the sparse grid method can be obtained by deriving a priori bounds on the region of analyticity of the solution. The results here are also notable in that they can easily be generalized to other UQ problems arising from nonlinear PDEs with interfaces.
The paper is structured as follows. In section 2, we introduce the problem of a linear elliptic PDE with interfaces. We introduce a suitable Banach space in which a strong solution naturally exists: a "piecewise H²" space denoted by H(U). It is then shown that the linear problem has unique strong solutions, inducing an isomorphism between H(U) and the Banach space of the forcing functions. In section 3, the NPBE is reformulated on a reference domain with solutions in H(U). From there, the implicit function theorem is used to obtain an analytic mapping from the parameter space to the solutions of the NPBE. Furthermore, we give details on how to obtain a priori bounds on the size of the region of analyticity after applying the implicit function theorem. Section 4 gives an overview of applying sparse grids to the efficient computation of integrals of analytic functions. Finally, numerical experiments are performed in section 5 to demonstrate the convergence results.

2 The Linear Elliptic PDE with Interfaces

Definitions and notations

We first consider a linear elliptic PDE with possibly discontinuous coefficients at interfaces. For our problem, the domain is split into three subdomains. The coefficients of the PDE are sufficiently regular on each subdomain, and the subdomains are nested within each other. The boundaries between the subdomains form the interfaces of our problem. It is straightforward to generalize this to an arbitrary number of nested subdomains.

Definition 1. We say that a connected, bounded open set U ⊂ R³ is properly decomposed into l subdomains U_1, U_2, ..., U_l if the subdomains arise as successive differences of a sequence of compactly embedded open sets, so that U_1 is the innermost subdomain and U_l the outermost shell. We define the interfaces I_1, ...
..., I_{l−1} by I_k = ∂U_k ∩ ∂U_{k+1} for k = 1, ..., l − 1.

Remark 1. The results of this section could also be generalized to the case where the subdomains are no longer strictly nested within each other. However, we will use the above definition, since it is sufficient for our application and it keeps the notation simple.

For convenience, we shall refer to ∂U as I_l. We now assume that U ⊂ R³ is properly decomposed into l subdomains U_1, ..., U_l, where the interfaces and the boundary of U are all of class C^{1,1}. We choose our domain to be in R³ for our application; other dimensions are possible, but the choices of Sobolev spaces would be affected. Denote by ν_k the outward-facing normal of the surface I_k. On each of these surfaces we can define trace operators; for 1 ≤ k ≤ l − 1 there are two trace operators, depending on which side of the interface we take as the domain. Let γ⁺_k be the trace operator from the domain U_k to its outer boundary (for 1 ≤ k ≤ l), and similarly let γ⁻_{k+1} be the trace operator from the domain U_{k+1} to its inner boundary (for 1 ≤ k ≤ l − 1).

Figure 1: Here a set U is properly decomposed into the three subdomains U_1, U_2, and U_3. The interface I_1 is where the boundaries of U_1 and U_2 meet, and the interface I_2 is where the boundaries of U_2 and U_3 meet. The boundary ∂U is also referred to as I_3.

Define a second-order elliptic operator P_k on each U_k in divergence form, P_k u = −Σ_{i,j=1}^{3} ∂_{x_i}(a_{ij} ∂_{x_j} u) + c u, where a_{ij} ∈ C^{0,1}(U_k) for each i, j = 1, 2, 3 and k = 1, 2, ..., l, and c ∈ L^∞(U). We assume that a_{ij} = a_{ji} for all i and j. We further assume that each P_k satisfies a uniform ellipticity condition on U_k, i.e., there exists a constant θ > 0 such that Σ_{i,j=1}^{3} a_{ij}(x) ξ_i ξ_j ≥ θ|ξ|² for a.e. x ∈ U_k and all ξ ∈ R³. We can choose θ independently of k. The operator P_k is naturally associated with the bilinear map Φ_k given by Φ_k(u, v) = ∫_{U_k} Σ_{i,j=1}^{3} a_{ij} (∂_{x_j} u)(∂_{x_i} v) + c u v dx. The operators P_k define co-normal derivatives on the interfaces and boundary: B±_k u = Σ_{i,j=1}^{3} a_{ij} γ±(∂_{x_j} u)(ν_k)_i, where (ν_k)_i denotes the i-th component of the normal vector ν_k and the trace is taken from U_k for B⁺_k and from U_{k+1} for B⁻_k. This gives maps from H²(U_k) and H²(U_{k+1}) into H^{1/2}(I_k).
By specifying the value f ∈ H^{−1}(U_k) of P_k u, the conormal derivative can be extended to H¹(U_k) functions. If the choice of f ∈ H^{−1}(U_k) is clear, then we will simply say that the distribution B±_k u ∈ H^{−1/2}(I_k) is the conormal derivative of u. These definitions allow us to use Green's identities relating ∫_{U_k}(P_k u) v dx to the bilinear form Φ_k(u, v) and to duality pairings of the conormal derivatives with the traces of v on the inner and outer boundaries of U_k: eq. (3) for 2 ≤ k ≤ l, and eq. (4) for the case k = 1, where U_1 has no inner boundary.

Weak and strong forms of the elliptic problem with interfaces

As in standard elliptic PDE theory, the existence and uniqueness of the weak solution of the discontinuous interface problem will first be established; this result will then be used to show existence and uniqueness of the strong solution. However, to motivate the weak formulation, we first define the strong form of the problem. Normally, the strong solution of an elliptic PDE would lie in the space H²(U), but this cannot be the case for our problem, since regularity can be lost at the interfaces. The next best option is to require a "piecewise H² regularity" for the strong solution, where the function is H² when restricted to each subdomain. The resulting space is a Banach space, with a norm combining the H²(U_k) norms of the restrictions. This Banach space depends on our decomposition of U, but when the decomposition is clear we will simply write H(U) instead of H(U; U_1, U_2, ..., U_l). Throughout the paper, we will denote u|_{U_k} and f|_{U_k} by u_k and f_k, respectively, to lighten the notation.

Requiring the strong solution u to lie in H(U) is insufficient to define a unique strong solution of the elliptic problem. If the strong solution were only required to satisfy P_k u_k = f_k on each U_k along with a Dirichlet boundary condition, then infinitely many solutions would be possible; for instance, in the case where c ≡ 0, adding a constant value to u_1 on U_1 would give another solution to the problem. Unique solutions exist if certain jump conditions are satisfied at the interfaces. For u ∈ H(U; U_1, U_2, ...
..., U_l), define the jumps of the traces and of the conormal derivatives of u across each interface I_k. Then the strong form of the elliptic PDE with interfaces is stated as follows.

Problem 1 (Strong form of the elliptic PDE with interfaces). Suppose U is properly decomposed into subdomains U_1, U_2, ..., U_l, where the interfaces and boundary are of class C^{1,1}. Let P_k be defined as in eq. (2), and fix data f and g_1, ..., g_l. Then u ∈ H(U) is a strong solution of the elliptic PDE with interfaces if P_k u_k = f_k on each U_k, the jumps across each interface I_k equal the prescribed data g_k, and the Dirichlet condition γ⁺_l u_l = g_l holds on ∂U.

The boundary condition can be reduced to zero by choosing w ∈ H²(U) such that γ⁺_l w_l = g_l; we then split u as u = ũ + w, where ũ ∈ H¹_0(U) ∩ H(U). The weak formulation of the problem is derived by taking P ũ = f − Pw (where P is the differential operator that is locally P_k on each U_k), multiplying each side of the equation by v ∈ H¹_0(U), and integrating over U. Applying eqs. (3) and (4) and summing the terms yields the variational identity, eq. (8), in which the bilinear forms Φ_k(ũ_k, v_k) are balanced by the forcing term and by duality pairings of the interface data with the traces of v. Note that since v ∈ H¹_0(U), the functions γ⁺_k v_k and γ⁻_{k+1} v_{k+1} are equal, which allows us to combine terms in this identity. Equation (8) makes sense even when ũ is not in H(U), so we use it to define a weak solution in H¹_0(U).

Problem 2 (Weak form of the elliptic PDE with interfaces). Suppose U is properly decomposed into subdomains U_1, U_2, ..., U_l, where all interfaces and the boundary are of class C^{1,1}. Let P_k be defined as in eq. (2), and let Φ_k be the bilinear maps associated with the P_k; a weak solution is a function ũ ∈ H¹_0(U) satisfying eq. (8) for all v ∈ H¹_0(U).

Remark 2. The formulation of problem 2 agrees with the weak form of the linear Poisson-Boltzmann equation in the case where g_1, g_2, ..., g_{l−1} are set to zero (cf. [43]), which suggests that this is the appropriate formulation of the weak problem for our application. Although in practice there will be no forcing terms on the interfaces, allowing the possibility of non-zero g_k's is of theoretical importance when we later apply the implicit function theorem.
Showing that problem 2 has unique solutions follows from applying the Lax-Milgram theorem in much the same way as in standard linear elliptic theory.

Proposition 1. Suppose c ∈ L ∞ (U ) is non-negative. Then we have a unique solution to problem 2.

Remark 3. The regularity of the data can be loosened in proposition 1; for instance, we can let f ∈ H −1 (U ) and still have unique weak solutions. However, for our purposes we do not need these low-regularity results.

The uniqueness of weak solutions can now be used to show the existence and uniqueness of strong solutions.

Theorem 1. Suppose that c ∈ L ∞ (U ) is non-negative. Then there exists a unique solution u ∈ H(U ) to problem 1. Moreover, problem 1 defines an isomorphism between H(U ) and by associating solutions u with data (f, g 1 , g 2 , . . ., g l−1 , g l ).

Proof. From proposition 1, it follows that the weak solution u ∈ H 1 (U ). To show that u ∈ H(U ), we can appeal to [57, Thm. 4.20]. Namely we have that . ., l and thus u ∈ H(U ). Applying the Green's identities in eqs. (3) and (4) demonstrates that u is a solution of problem 1. Since solutions of problem 1 are also solutions of problem 2, the strong solution u is unique. The isomorphism result follows immediately from the existence and uniqueness of the strong solution and the continuity of the solution map with respect to the boundary and forcing data.

3 The Nonlinear Poisson-Boltzmann Equation on a Reference Domain

Existence of a Region of Analyticity

We now return to the main focus of the paper: solutions of the NPBE. The NPBE given in eq. (1) is a nonlinear elliptic PDE. For our applications, the main domain is properly decomposed into three subdomains. We also allow for the possibility of random perturbations of the boundary and interfaces.
Let Ω be the sample space. Each outcome ω ∈ Ω designates a random domain D(ω) on which the NPBE will be evaluated. The domain D(ω) is properly decomposed into three subdomains D 1 (ω), D 2 (ω), and D 3 (ω) with interfaces I 1 (ω) and I 2 (ω). The parameters ϵ, κ 2 , g, and f will also depend on ω. From here one can define strong and weak solutions of the NPBE on the stochastic domain in a similar way to that in [19]. In practice, one usually assumes that the value of each parameter is given as a function of the random vector ) taking values on the compact set Γ ⊂ R N and with known density ρ : Γ → R ≥0 . Typically Γ = [−1, 1] N with ρ a truncated normal distribution, although the distribution can be more general. Often the parameters will vary analytically with respect to the value of Y and are usually polynomials in Y. Thus the NPBE can be stated as a problem with parameters y ∈ Γ ⊂ R N .

To parameterize the random domain, we assume that the random domain has a pullback onto some fixed open set for each ω. In particular, take U ⊂ R 3 to be a bounded, open set that is properly decomposed into three subdomains U 1 , U 2 , and U 3 with interfaces I 1 = ∂U 1 ∩ ∂U 2 and I 2 = ∂U 2 ∩ ∂U 3 . The interfaces and boundary are taken to have C 1,1 regularity. We assume for each y ∈ Γ that there is The Jacobian matrix of F (•; y) will be denoted by J(•; y). Note that since F is a C 2 diffeomorphism, the regularity of the interfaces and the boundary is preserved under the mapping. To distinguish between the coordinates in each domain, we will denote by r elements of U and by x elements of D(y). Similarly, ∇ r and ∇ x will be used to distinguish between the derivatives in U and D(y), respectively. Hence, we can define the strong form of the NPBE on the random domain. Again, we will use subscripts to denote restrictions to the subdomains (e.g. u k = u| D k ). The trace operators γ ± k are defined similarly to those in section 2.2. Also, we have the conormal derivative B k using the elliptic operator
defined by a ij (x; y) = ϵ(x; y)δ ij , where δ ij is the Kronecker delta. ) is a strong solution for the NPBE on the random domain if for each y ∈ Γ we have u(•; y) satisfies From the strong form, we can derive a weak version of the NPBE by integrating against a test function and using integration by parts. Here we define w(•; y) ∈ H 2 (D(y)) to be the inverse trace of g(•; y) so that γ + 3 (w) = g. The weak form of the NPBE given above agrees with the standard definition of the weak form for this equation (cf. [43, §2.1.5]). To pull back onto the reference domain, we construct an equivalent weak form of the NPBE for the pullback of u onto U given by u * (•; y) = u(F (•; y); y). Following results given in [19][20][21], we can write the weak form of the pullback onto the reference domain.

The strong form of the NPBE on the reference domain is defined in an analogous way to problem 1. It also corresponds to the weak problem given in problem 5, in that assuming sufficient regularity of the weak solution and integrating by parts gives the strong formulation. ) is a strong solution for the NPBE on the reference domain if for each y ∈ Γ we have u * (•; y) satisfies To summarize, we have four problems that we can consider for the NPBE, depending on whether we want the weak or strong solution and whether the domain is random or fixed. We can move from the random domain problems to the reference domain problems by using the pullback F * . Transitioning between weak and strong forms is done by integrating by parts or by increased regularity of the solution. Figure 2 illustrates the relationship between these problems. We will be working with the reference domain moving forward: first showing that a weak solution exists, and then applying that result for the strong solution. We can recapture results on the random domain by composing solutions on the reference domain with the diffeomorphism F .
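The change-of-variables identity underlying the pullback, ∫ D(y) v(x) dx = ∫ U v(F (r; y)) det J(r; y) dr, can be checked numerically in one dimension. The map F below is a hypothetical smooth perturbation of the identity, chosen only for illustration; it is not the paper's F.

```python
import math

# 1-D check of the change-of-variables formula behind the pullback:
# the integral of v over D = F([0,1]) equals the integral of
# v(F(r)) * F'(r) over the reference interval [0,1].

def F(r):
    # illustrative perturbation of the identity; F(0)=0 and F(1)=1,
    # so the physical domain is again D = [0,1]
    return r + 0.1 * math.sin(math.pi * r)

def dF(r):
    return 1.0 + 0.1 * math.pi * math.cos(math.pi * r)

def midpoint(fun, n=20000):
    # composite midpoint rule on [0,1]
    h = 1.0 / n
    return h * sum(fun((k + 0.5) * h) for k in range(n))

v = lambda x: x * x
direct   = midpoint(v)                            # integral of x^2 over D
pullback = midpoint(lambda r: v(F(r)) * dF(r))    # pulled-back integral
```

Both quadratures agree with the exact value 1/3 to quadrature accuracy, illustrating why working on the fixed reference domain U loses no information.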
To get solutions for the NPBE, we must make some assumptions on the parameters. The assumptions that ϵ * and (κ 2 ) * are positive and non-negative, respectively, come from the physics of the simulation and are generally satisfied. The function f * is used to model the point charges, and ideally would be a sum of Dirac deltas. However, there are few results for the nonlinear version of the Poisson-Boltzmann equation with forcing functions in H −2 (U ). Thus in practice, one approximates the point charges with L 2 functions (e.g. Gaussian functions centered at the location of each charge), and so we take f * ∈ L 2 . The Dirichlet boundary condition is typically taken to be the long-distance approximation of the potential from the point charges and is smooth on ∂U = I 3 , but simply requiring g * ∈ H 3/2 (I 3 ) gives sufficient regularity. Finally, we assume all the parameters vary analytically with respect to y ∈ Γ, which is reasonable when computing numerical solutions. Thus we make the following assumptions:

Assumption 1. For y ∈ Γ, we have that ) and the maps from Γ into the respective Banach spaces are analytic. Since the inverse trace operator is linear, we also have that y → w * (•; y) ∈ H 2 (U ) is analytic.

We will also need to assume that det J(r; y) is bounded away from 0 for y ∈ Γ and r ∈ U . This essentially assumes that the map F is non-singular and preserves the orientation of the domain, both of which are reasonable when considering small perturbations of the interface.

Proof. By assumption 4, we have that det J ≥ c 2 > 0 and so the matrix J −1 (r; y)J −T (r; y) det J(r; y) is symmetric positive definite for each r ∈ U and y ∈ [−1, 1] N . The proof of weak solutions to the NPBE given in [43,Thm.
2.14] can easily be adjusted to the case where the matrix a := J −1 (r; y)J −T (r; y) det J(r; y) is symmetric positive definite. That argument gives unique solutions to problem 5. The fact that sinh(u * (•; y)) is an L 2 function also follows from the proof in [43].

Remark 4. The same assumptions also imply that there is a unique solution for problem 4 which also satisfies sinh(u(•; y)) ∈ L 2 (D(y)) for each y ∈ Γ.

Remark 5. The next result follows from the analytic version of the Implicit Function Theorem on Banach spaces. The statement of this theorem is analogous to the finite-dimensional version, and an exact statement of it can be found in [79]. One key change from the finite-dimensional version to the infinite-dimensional version is that we now use Fréchet derivatives and require that the derivative is an isomorphism between Banach spaces.

Theorem 2. Suppose assumptions 1 to 4 hold. Then there exists a unique solution to problem 6. Furthermore, there exists a complex neighborhood of Γ given by N ⊂ C N such that there is a function y → u * (•, y) ∈ H(U ), for y ∈ N where (i) the map is analytic from N into H(U ), and (ii) the map agrees with the strong solution on Γ.
Proof. From proposition 2, there are unique weak solutions u * (•, y) ∈ H 1 (U ) for each y ∈ [−1, 1] N with sinh(u * (•; y)) ∈ L 2 (U ). Setting f (r; y) = f * (r; y) det J(r; y) − (κ 2 ) * (r; y) sinh(ũ * (r; y) + w * (r; y)) det J(r; y), we get that f (•; y) ∈ L 2 (U ) and eq. (14) can be written as which is in the same form as eq. (9). Then applying theorem 1 gives that u * (•; y) ∈ H(U ). To show analyticity, we apply the analytic version of the implicit function theorem. We define a mapping such that F(y, u * ) = (0, 0, 0, 0) if and only if u * is a strong solution on the reference domain for that fixed y. The first component of F is defined on each U k by ) and thus can be used to define an L 2 function on all of U . The second and third components of F(y, u * ) are defined to be [B k u * ] I k for k = 1 and k = 2, respectively. The final component of F(y, u * ) is given by γ + 3 u * 3 − g * . The second and third components define linear maps, and the first and fourth components are easily verified to be analytic. Thus F is an analytic map between Banach spaces. Fix y 0 ∈ Γ. Then since u * (•; y) is a strong solution we have We want to apply the implicit function theorem to eq. (18) to get u * as an analytic function of y in a neighborhood around y 0 . To do this, we must check that the derivative of F with respect to u * at (y 0 , u * (•; y 0 )) is an isomorphism between H(U ) and Z. One can compute that , where L is defined locally in eq.
(15). This linear operator is of the same form as problem 1, and so proposition 1 implies that D u * F(y 0 , u * (•; y 0 )) is in fact an isomorphism. Therefore the map y → u * (•; y) is analytic in a neighborhood of y 0 ∈ R N . This map can be extended to a complex neighborhood of y 0 ∈ C N . Applying this argument to every point in Γ gives the complex neighborhood N for which N ∋ y → u * (•; y) is analytic.

Estimates on the Region of Analyticity

Theorem 2 shows that there exists a region of analyticity for the strong solution of the NPBE. This will be used to prove convergence results relating to the quantity of interest by using sparse grids [63]. In this section, a quantitative bound on the size of the region of analyticity is derived, which is applied to obtain the aforementioned convergence rates; see Figure 3 and the discussion that follows. However, the rate of convergence depends on the size of the region of analyticity.

The typical application of the implicit function theorem does not give a priori bounds on the size of the region of analyticity. For the finite-dimensional case, an application of Rouché's theorem can give estimates of this region. The results in [24] give simple bounds on the radius of the region of analyticity. We will show that a similar result holds for Banach spaces.

Theorem 3. Let F : C n × X → Y be an analytic function with X and Y Banach spaces. Suppose that F(0, 0) = 0 and D x F(0, 0) : X → Y is an isomorphism. Let ∥D x F(0, 0) −1 ∥ L(Y,X) ≤ a and suppose that ∥F(z, x)∥ Y ≤ M on B where B = {(z, x) : |z|, ∥x∥ X ≤ R}. Also, suppose that D x F(z, x) −1 exists and is a bounded operator for each (z, x) ∈ B.
Then the analytic function z → x(z) is defined in a region containing the ball

Proof. The proof follows in the spirit of arguments from degree theory. We find r > 0 such that F(z, x) ̸ = 0 for z ̸ = 0 sufficiently small and ∥x∥ X = r. This, along with the assumption on the inverse of D x F, guarantees that the zero from the implicit function theorem does not leave the ball, bifurcate, or vanish. Clearly, z → x(z) is continuous, and so if ∥x(z 1 )∥ X > r for some z 1 then there must be some point z 0 where ∥x(z 0 )∥ X = r and F(z 0 , x(z 0 )) = 0, which contradicts our assumed bound. By repeatedly applying the implicit function theorem, one can show that the function cannot lose analyticity as long as x(z) does not leave the ball of radius r. Thus we can increase |z| up to the point where the zero can leave the ball of radius r, and this becomes our estimate for the region of analyticity.

We first find an r > 0 such that ∥F(0, x)∥ Y > 0 for all 0 < ∥x∥ X ≤ r. Because F is analytic, we can write F(0, x) as a power series centered at (0, 0): where the a k are k-linear maps. Using a Cauchy estimate we have that Then rearranging eq. (19), applying D x F(0, 0) −1 to both sides, and taking norms gives Thus we have that To guarantee the right-hand side is strictly greater than zero we need that 0 < ∥x∥ X < R 2 /(R + aM ). Thus we can choose any r such that 0 < r < R 2 /(R + aM ).

Now we want to find θ > 0 such that if |z| < θ then F(z, x) ̸ = 0 when ∥x∥ X = r. It will be sufficient to find a θ where for any |z| < θ By using a power series expansion around (0, x) with respect to z and the Cauchy estimate, we get that Then eq. (20) holds if Setting θ equal to the right-hand side gives the desired result. Furthermore, the value on the right-hand side is maximized for fixed values of a, M , and R when Thus the optimal radius can be given by plugging in this value for r, from which we get Θ(M, a, R; .
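As a quick numerical illustration of the constraint in the proof above, the sketch below (plain Python; the values of R, a, and M are made-up samples, not constants from the paper) evaluates the admissible-radius bound r < R 2 /(R + aM ) and checks that it tightens as aM grows.

```python
# Illustrative evaluation of the admissible radius bound
# r < R^2 / (R + a*M) from the proof of theorem 3.
# The constants below are sample values, not constants from the paper.

def r_max(R, a, M):
    # largest admissible r guaranteeing F(0, x) != 0 for 0 < ||x|| <= r
    return R * R / (R + a * M)

r1 = r_max(R=1.0, a=2.0, M=3.0)   # = 1/7
r2 = r_max(R=1.0, a=2.0, M=6.0)   # = 1/13
```

As expected, a larger bound M on ∥F∥ (or a larger bound a on the norm of the inverse derivative) shrinks the admissible radius, and hence the final estimate Θ(M, a, R; F).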
For the purposes of showing sub-exponential convergence of the sparse grid, we want to find the largest polyellipse in the region of analyticity. In one dimension, a Bernstein ellipse E σ is given by For multiple dimensions, we define a polyellipse to be a direct product of Bernstein ellipses: Applying theorem 3 directly using uniform estimates for each point in Γ gives the analytic domain where B Θ (y) is a ball of radius Θ = Θ(M, a, R; F) centered at y. Thus we want to fit the largest Bernstein ellipse into G Θ , as shown in fig. 3. The following is a simple result following from theorem 3.

Corollary 1. Take F to be the function designated in eq. (17). Set the positive constants R, M , and a such that the following hold: (ii) M > 0 is large enough so that ∥F(y 0 +y, u * (•, y 0 )+u * )∥ Z ≤ M whenever y 0 ∈ Γ and |y|, ∥u Then defining where Θ = Θ(M, a, R; F), we have that the polyellipse E σ1,σ2,...,σ N is inside the region of analyticity for the solution y → u * (•;

Proof. By applying theorem 3 to each point y 0 ∈ Γ, we get that there is a region of analyticity for the solution y → u * (•; y), where a ball of radius Θ centered at any y 0 ∈ Γ is contained in the region. The largest polyellipse E σ1,σ2,...,σ N with σ 1 = σ 2 = • • • = σ N in the region of analyticity can then be computed. From [19,21], we know the largest such polyellipse occurs when σ * is defined as in eq. (21).

Remark 6. Conditions (ii) and (iii) of corollary 1 are straightforward to arrange so that the conditions of theorem 3 hold. Condition (i) follows from the specific form of the PDE in question. A sufficient condition for the inverse [D u * F(y, u * )] −1 to exist is ϵ * (•; y) > 0, det J(•; y) > 0, κ 2 (•; y) ≥ 0, and Re cosh(u * ) ≥ 0. The first three inequalities can be satisfied by choosing y sufficiently close to Γ. The term Re cosh(u * ) will be strictly positive when u * is real-valued, and only becomes negative when Im u * is sufficiently large.

Remark 7.
The estimate for the size of the polyellipse takes each σ k for k = 1, 2, . . ., N to be equal to σ * . This only gives the optimal estimate of the decay rate when using an isotropic sparse grid. For anisotropic sparse grids, we would need to choose different values for each σ k .

Theorem 3 shows how we can obtain a priori bounds on the region of analyticity after applying the implicit function theorem. To apply these bounds, one needs to find the values for the constants a and M (for a fixed R). For our problem, there must be some choice of constants that works, but computing them is tricky. The constant a can be difficult to estimate because it involves bounding the solution of a backward problem. That is, we ultimately want to bound the norm of [D x F(0, 0)] −1 , which typically means solving a linear PDE. For simple linear operators and domains (for example, a Helmholtz operator −∆ + k 2 on the sphere), this norm can be calculated explicitly. However, our domain has interfaces, which makes estimation difficult. For the moment, we set aside the problem of bounding the constant a and leave the task of optimizing the bounds for that value to future work. The constant M can be more easily estimated since we are now solving a forward problem. That is, given some inputs to our (known) function F, we want to determine the size of the outputs. The remainder of this section is devoted to showing how the estimate for M can be obtained.

To get the explicit bounds needed to apply corollary 1, we will need to make assumptions on the parameters in the NPBE. Suppose that Γ = [−1, 1] N . To simplify the arguments, we will assume that ϵ * k and (κ 2 ) * are piecewise constant and that g * = 0.
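The Bernstein ellipse E σ used above can be made concrete: in the standard parameterization (an assumption here, since the defining display is elided in the extraction) it is the image of the circle |z| = σ under the Joukowski map z ↦ (z + z −1 )/2, an ellipse with semi-axes (σ + σ −1 )/2 and (σ − σ −1 )/2. A short Python check:

```python
import cmath, math

# Points of the Bernstein ellipse E_sigma as the image of |z| = sigma
# under the Joukowski map z -> (z + 1/z)/2 (standard parameterization,
# assumed here since the defining display is elided in the text).
def bernstein_point(sigma, t):
    z = sigma * cmath.exp(1j * t)
    return (z + 1.0 / z) / 2.0

sigma = 2.0
A = (sigma + 1.0 / sigma) / 2.0   # semi-major axis = 1.25
B = (sigma - 1.0 / sigma) / 2.0   # semi-minor axis = 0.75

# every sampled point satisfies the ellipse equation x^2/A^2 + y^2/B^2 = 1
residual = max(
    abs((p.real / A) ** 2 + (p.imag / B) ** 2 - 1.0)
    for p in (bernstein_point(sigma, 2 * math.pi * k / 64) for k in range(64))
)
```

The larger σ is, the fatter the ellipse around [−1, 1], which is why a larger region of analyticity (larger σ *) translates into a faster sparse-grid convergence rate.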
Suppose that F has the form and that the µ k 's are decreasing in value. Then the Jacobian J(r; y) has the form so that J(•; y) = I + By. Note that we can treat B as a linear map from We will also assume that f * has the form where ξ is a Gaussian function and each η k ∈ U is a fixed point. Suppose that F(y 0 , u 0 ) = 0 for some y 0 ∈ Γ and u 0 ∈ H(U ). To apply corollary 1, we want an estimate on ∥F(y 0 + y, u 0 + u) − F(y 0 , u 0 )∥ Z . Note that the norm on the last three coordinates of F (see eq. (17)) can be estimated by finding bounds for the trace operators and conormal derivatives. Let us focus on the first coordinate of eq. (23), which is given by Estimating the above term in the L 2 (U ) norm requires us to define the norms of several other spaces in order for the calculation to be tractable. Norms in finite-dimensional vector spaces (e.g. R n ) will be denoted with single bars, | • |, while norms in infinite-dimensional function spaces will be denoted with double bars, ∥ • ∥. Similar notation will be used for the norms induced on linear operators on normed spaces. In particular, | • | p for 1 ≤ p ≤ ∞ will be used to denote the typical ℓ p norms in R n or C n as well as the associated matrix norms. So if v ∈ C n and A ∈ C n×n , then |v| 2 will be the standard Euclidean norm of v and We will assume L 2 (U ; C n ) to have the norm We will also assume that H 1 (U ; C n ) has the norm For the space C 1 (U ; C 3×3 ), we introduce the norm Note that for any B ∈ C 1 (U ; C 3×3 ) and v ∈ H 1 (U ; C 3 ), we have Recall that B as defined in eq.
(22) is a linear map from C N into C 1 (U ; C 3×3 ), and so inherits a natural norm from being a bounded linear operator between two Banach spaces. We introduce a different norm for these maps that is easier to estimate and is an upper bound for the linear operator norm. Let Then for 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1, we have that For the following estimates on various parameters, the hypothesis is assumed, which defines J −1 (y 0 + y) in L(H 1 (U ; C 3 )).

Proposition 3. Let L = L(H 1 (U ; C 3 )) denote the space of bounded linear operators from H 1 (U ; C 3 ) to itself. Suppose that y 0 ∈ Γ and Then we have the following bounds:

Proof. See appendix A for the proof.

Let C k > 0 denote the constant associated with the Banach algebra Then we have Thus the L 2 (U ) norm is bounded by where Finally, the forcing term can be dealt with by finding the derivatives with respect to y k for k = 1, 2, . . ., N . We have that and so Combining the estimates in eqs. (34) to (36) gives a bound on the L 2 (U ) part of F(y 0 + y, u 0 + u) − F(y 0 , u 0 ). We still have constants that have not been explicitly given (such as C max and the norm of the solution u 0 ), but estimating these would be difficult to do in the space of this paper.
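Assumption 4 (det J bounded below by c 2 > 0) can be spot-checked numerically once the Jacobian has the form J(•; y) = I + By given above. The perturbation field in the sketch below is hypothetical, chosen small so the determinant stays positive; it is not the B from eq. (22), and the sample points and amplitudes are illustrative only.

```python
import itertools

# Numerical spot-check of assumption 4: det J(r; y) >= c2 > 0 for a
# Jacobian of the form J(r; y) = I + B(r) y.  The perturbation below
# is a hypothetical example, not the B defined in the paper.

def det3(m):
    # determinant of a 3x3 matrix given as nested lists
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def jacobian(r, y, mus):
    # J = I plus a small perturbation linear in y; purely illustrative
    J = [[float(i == j) for j in range(3)] for i in range(3)]
    for k, (yk, mu) in enumerate(zip(y, mus)):
        for i in range(3):
            J[i][(i + k) % 3] += yk * mu * r[i]
    return J

mus = [0.1, 0.05]                               # decreasing amplitudes
corners = list(itertools.product([-1.0, 1.0], repeat=2))   # extremes of Gamma
r_samples = [(0.2, 0.5, 0.8), (1.0, 1.0, 1.0)]
min_det = min(det3(jacobian(r, y, mus)) for y in corners for r in r_samples)
```

Because det J is a polynomial in y, checking it on a fine grid (here just the corners, for brevity) gives a practical sanity check that the chosen perturbation amplitudes keep F orientation-preserving.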
Sparse Grids

Sparse grids are a mathematical technique used to efficiently approximate functions and solve problems in high-dimensional spaces. They provide a way to reduce the computational cost associated with high-dimensional problems by exploiting the sparsity of the underlying function. In many real-world applications, such as optimization, machine learning, and scientific simulations, the dimensionality of the problem can be quite large. Traditional numerical methods often struggle to handle these high-dimensional scenarios due to the exponential growth of computational requirements with increasing dimensions. Sparse grids offer a solution to this problem by selectively evaluating the function only at a subset of points in the high-dimensional space. The idea is to concentrate computational effort on regions that contribute the most to the overall approximation accuracy, while ignoring or approximating the function in less significant regions.

Sparse grids are constructed from tensor products of Lagrange interpolation. Given a set of data points (ζ 0 , z 0 ), (ζ 1 , z 1 ), . . ., (ζ p , z p ) ∈ Γ × R, where we define Γ := [−1, 1] and the ζ i values are distinct, Lagrange interpolation constructs a polynomial P (ζ) of degree at most p that satisfies: The polynomial P (ζ) is defined as the linear combination of Lagrange basis polynomials l i (ζ), which are constructed to ensure that P (ζ i ) = z i for each data point: The Lagrange basis polynomials are defined as: These basis polynomials have the property that l i (ζ i ) = 1 and l i (ζ j ) = 0 for j ̸ = i, ensuring that the polynomial P (ζ) passes through the corresponding data point (ζ i , z i ). It is clear that P (ζ) ∈ P p ( Γ) := span{ζ m : m = 0, . . ., p}.

Consider the vector of approximations i = (i 1 , i 2 , . .
., i N ) ∈ N N 0 , and form the space P p (Γ) = N n=1 P pn ( Γ); then the N -dimensional Lagrange interpolation operator I N i : C 0 (Γ) → P p (Γ) can be built as More explicitly, for each dimension n = 1, . . ., N let {y n 1 , . . ., y n m(i) } ⊂ Γ be a sequence of abscissas for the Lagrange interpolation operator I m(in) n . Thus for any ν ∈ C 0 (Γ) where However, the dimensionality of P p explodes as N n=1 (p n + 1), making Lagrange interpolation intractable for even a moderate number of dimensions. In contrast, if there exists a complex analytic extension of ν(y) with respect to y, then sparse grids are a better choice [5,8,63,71]. They provide almost the same convergence accuracy as full tensor product grids, but with significant reductions in dimensionality. This is achieved by judiciously selecting a reduced set of monomials from the full tensor product.

Let m(i) = (m(i 1 ), . . ., m(i N )) ∈ Z N be the vector of the number of evaluation points for each dimension. For a given non-negative integer w, we define the index set Λ m,g (w) as follows: Λ m,g (w) = {p ∈ N N 0 , g(m −1 (p + 1)) ≤ w}.
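The one-dimensional Lagrange building block described above can be implemented directly from the formulas for l i and P ; the sketch below verifies the cardinality property l i (ζ j ) = δ ij and exactness on polynomials of degree at most p.

```python
# Direct implementation of the 1-D Lagrange interpolation formulas:
# basis polynomials l_i and interpolant P with P(zeta_i) = z_i.

def lagrange_basis(zetas, i, zeta):
    # l_i(zeta) = prod over j != i of (zeta - zeta_j) / (zeta_i - zeta_j)
    val = 1.0
    for j, zj in enumerate(zetas):
        if j != i:
            val *= (zeta - zj) / (zetas[i] - zj)
    return val

def lagrange_interp(zetas, zs, zeta):
    # P(zeta) = sum_i z_i * l_i(zeta)
    return sum(z * lagrange_basis(zetas, i, zeta) for i, z in enumerate(zs))

# interpolating z = zeta^2 on three distinct nodes reproduces it exactly,
# since the interpolant has degree at most p = 2
zetas = [-1.0, 0.0, 1.0]
zs = [z * z for z in zetas]
value = lagrange_interp(zetas, zs, 0.5)   # exact value: 0.25
```

Tensorizing this operator over the N dimensions, and then discarding most of the tensor-product indices via the restriction (m, g, w), is exactly the sparse-grid construction that follows.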
In this context, the function g : Z N → Z acts as a restriction function along each dimension of the complete tensor grid. The indices in Λ m,g (w) constitute the set of permissible polynomial moments P Λ m,g (w) (Γ), subject to the restrictions imposed by (m, g, w). Specifically, this polynomial set is defined as: Consider the difference operator along the n th dimension of Γ, denoted as By taking the tensor product of these difference operators across all dimensions, we can construct a sparse grid. In this context, w ∈ N 0 represents the desired approximation level. The sparse grid approximation of ν is then obtained as follows: We have the flexibility to choose different values for the parameters m and g. Our main objective is to achieve accurate results while controlling the increase in dimensionality of the space P Λ m,g (w) (Γ). To address this, we can utilize the well-known Smolyak sparse grid method, as described by Nobile et al. (2008), which can be constructed using the following formulas: For this particular choice, the index set Λ m,g (w) is defined as follows: Λ m,g (w) := {p ∈ N N 0 : n f (p n ) ≤ w} where Alternative choices, such as the Total Degree (TD) and Hyperbolic Cross (HC) grids, are described in [19].

The last component of the sparse grid is the selection of the abscissas {y n 1 , . . ., y n m(i) } ⊂ [−1, 1] along each dimension. One option is the extrema of Chebyshev polynomials: These popular abscissas are known as Clenshaw-Curtis abscissas. It is worth noting that not all stochastic dimensions need to be treated equally. Some dimensions may contribute more to the sparse grid approximation than others. By customizing the restriction function g according to the input random variables y n for n = 1, . .
., N , a more accurate anisotropic sparse grid can be obtained [62,69]. In this paper, for the sake of simplicity, we focus on isotropic sparse grids. However, extending the approach to an anisotropic setting is straightforward. Our current focus is on establishing error bounds for the sparse grid, specifically the norm ∥ν − S m,g [ν] ∥ L ∞ (Γ) . This bound can be controlled by three key factors. Firstly, the number of dimensions, denoted as N , influences the bound. Secondly, the number of knots, denoted as η, in the sparse grid plays a role. However, the most crucial factor is the size of the complex region in the analytic extension of ν(y) onto C N and the following bound on the polyellipse M (ν) := sup With the analytic extension parameters (σ * , M (ν)), the number of dimensions N , and the level of the sparse grid w, the error of the sparse grid ∥ν − S m,g [ν] ∥ L ∞ (Γ) can be bounded. Define the following constants

Theorem 4. Suppose that ν ∈ C 0 (Γ; R) admits an analytic extension on E σ1,...,σ N . Let S m,g w [ν] be the sparse grid approximation of the function ν with Clenshaw-Curtis abscissas. If w > N/ log 2 then Furthermore, if w ≤ N/ log 2 then the following algebraic convergence bound holds:

Proof. Theorems 3.10 and 3.11 in [63].

Numerical Results

We now test the complex analyticity result for the NPBE by computing the potential field of the Trypsin protein (PDB:1ppe [11], n = 1,852 atoms) submerged in a solvent (see Figure 4). The protein will now be shifted using a stochastic model inside the domain D. Since the boundary conditions are set to zero, the solution will not be a simple translation, but will depend nonlinearly on the shift of the atom locations given by C. For each shift, the domain of the Trypsin molecule is discretized and the potential field is solved using APBS. Let C(ω) correspond to the set of atom locations shifted by the event outcome ω ∈ Ω, i.e.
C(ω) ] and independent of each other. Thus for each event ω ∈ Ω each element of C is shifted as follows: with respect to stochastic domain shifts Y 1 , . . ., Y N . Using the sparse grid the error In Figure 4 (b) the convergence graphs are plotted for w = 2, 3, 4, 5 for N = 2. We assume that w = 7 gives the reference value for E [Q(û)]. Notice that the convergence rate decays algebraically. This is consistent with theorem 4. We observe similar algebraic decay for N = 3.

Conclusion

In this paper we show the existence of a complex analytic extension of the solution of the NPBE with respect to random perturbations of the domain, and its application to UQ is explored. This is a difficult problem due to the exponentially-growing nonlinear term and the discontinuity at the interface. The analyticity of solutions holds significant practical implications for efficiently computing quantities of interest. Any bounded linear map Q : H(U ) → R can be computed at an algebraic or sub-exponential rate, since such maps are analytic with respect to the solution ũ ∈ H(U ). Notably, the linearity of surface integrals allows for the direct application of these results in such cases.

We have also given estimates on the region of analyticity for the solution of the NPBE, which would allow for explicit calculations of the convergence rate of the sparse grid quadrature. However, the estimate of the region relies on constants whose exact values are difficult to determine, a topic that is left for future work.

More generally, the framework developed here is applicable to other problems in UQ. The strategy for applying these results to other problems is roughly as follows:

1. Rewrite the problem as a functional equation, F(y, u) = 0, where y are the stochastic parameters in R N or C N and u represents the solution to the nonlinear PDE or equation in an appropriate Banach space. If the problem involves an interface, then a space similar to H(U ) may be useful.

2.
Applying the implicit function theorem (similar to how it was done for theorem 2) gives a region of analyticity for the solution y → u(y). It is important that the Banach space for u is appropriately chosen so that the Fréchet derivative D u F is an isomorphism.

3. Theorem 3 can be applied to estimate the region of analyticity. This gives useful convergence results for numerical methods (e.g. sparse grid quadratures). To get explicit bounds, one needs to estimate the norm of the linear operator [D u F ] −1 , which for simple domains and operators can be computed.

Thus the results in this paper can be more broadly implemented in other nonlinear UQ problems in order to demonstrate the analyticity of solutions and observables, as well as to determine estimates on the region of analyticity.

A Computations for estimates

Proof of proposition 3. The space L is nice to work with since it is a Banach algebra, i.e., for A 1 , A 2 ∈ L we have We will use this fact frequently in the proof. From eq. (24), we have that where σ min (•) represents the smallest singular value of a matrix. We will need estimates on the norms J −1 (y 0 + y) and det J(y 0 + y), and so the following identity will be useful: J(y 0 + y) = J(y 0 )[I + J −1 (y 0 )By]. Lastly, we want to get the estimate eq. (33). We have that Using Jacobi's formula, we have that the corresponding derivative can be bounded as follows:

Figure 2: Moving from the random domain to the reference domain is done by using the pullback F * (y). Using integration by parts with test functions allows us to get the weak formulation of the NPBE from the strong formulation. If the solution of the weak problem is sufficiently regular, then it will also be a strong solution.

Assumption 4. There exists c 2 > 0 such that for any y ∈ Γ we have det J(r; y) ≥ c 2 , ∀r ∈ U.

Hence, we can prove the existence and uniqueness of weak solutions.

Proposition 2.
If assumptions 1 to 4 hold, then problem 5 has a unique solution y

In Figure 4 (a) the secondary structure of the Trypsin molecule is rendered with a meshed surface of the molecular boundary. This corresponds to the molecular boundary obtained by rolling a solvent atom around the molecule. This boundary corresponds to the interface I 1 . Inside the molecule the dielectric is set to ϵ = 70 and outside the boundary it is set to ϵ = 1 (e.g. the solvent dielectric). Note that these dielectric values are unit-less. The second boundary I 2 (not rendered) corresponds to the ion-accessible surface. The Debye-Hückel parameter, κ, is set to zero inside the surface and non-zero outside. The entire protein is contained in a cubic domain D measuring 70 × 70 × 70 Å and the Dirichlet boundary conditions are set to zero, i.e., u ≡ 0 on ∂D. The temperature of the solvent is set to T = 310 Kelvin. Let C := {x 1 , . . ., x n } correspond to the set of locations of the molecular atoms. This information is contained in the PDB file. In theory and in practice these locations represent point charges that are replaced with L 2 (Ω) functions, as this guarantees the existence of a unique solution for the NPBE [43]. The potential field, which corresponds to the solution of the NPBE, is then solved using APBS.

Figure 4: Trypsin protein NPBE test. (a) Visual representation of the Trypsin molecule's secondary structure, rendered as a meshed surface that outlines the molecular boundary. This boundary is obtained by rolling a solvent atom over the molecule and is referred to as interface I 1 . The potential field is computed with the APBS software for each knot in the sparse grid. (b) Convergence graphs of the error |E [Q(û)] − E [S m,g w [Q(û)]] | given stochastic shifts of the molecular domain. We notice that the error decays algebraically.
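The algebraic decay observed in Figure 4 (b) can be quantified by fitting a rate from (knots, error) pairs with a least-squares fit in log-log coordinates. The sketch below uses synthetic placeholder data with a known rate, not the measured Trypsin errors, to show the fitting procedure.

```python
import math

# Estimate an algebraic decay rate err ~ C * eta^(-rate) from a sequence
# of (number of knots, error) pairs via least squares on logarithms.
# The data below are synthetic placeholders, not the measured errors.

def fit_rate(etas, errs):
    xs = [math.log(e) for e in etas]
    ys = [math.log(v) for v in errs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope   # positive rate of algebraic decay

etas = [5, 29, 145, 849]                      # knot counts (synthetic)
errs = [0.9 * eta ** -1.5 for eta in etas]    # exact rate 1.5 by construction
rate = fit_rate(etas, errs)
```

A straight line in the log-log plot, as seen in the convergence graphs, is exactly the signature of the algebraic bound of theorem 4 in the regime w ≤ N/ log 2.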
Evaluation of Intensity Modulated Radiation Therapy Delivery System using a Volumetric Phantom on the Basis of the Task Group 119 Report of American Association of Physicists in Medicine

There are many articles and reports referring to the commissioning and the verification process of an intensity-modulated radiation therapy (IMRT) system consisting of a linear accelerator (linac) equipped with a multileaf collimator (MLC) and a treatment planning system (TPS) with an inverse planning option. The American Association of Physicists in Medicine (AAPM) 2003 Guidance Document [1] refers to the planning and delivery techniques that are used to enhance the prospects for more accurate and trustworthy results in IMRT commissioning. The document states that the verification measurements are initially implemented on a phantom plan, where the beam segments and the position of the MLCs have to be checked. In clinical routine and for patient-specific quality assurance (QA), a patient plan is transferred onto a phantom computed tomography study, recalculating the specific dose distribution on the phantom geometry to be delivered in the phantom. The resulting dose distribution is measured using either ionization chambers, films, or other detectors. The resulting measurement is then compared to the predicted dose to the phantom [1], and the objective is for the planned dose distribution to coincide with the measured one. In 2009, AAPM Task Group (TG) 119 [2,3] focused on the problem of quantifying the overall performance of an IMRT system. The current work describes the implementation of the American Association of Physicists in Medicine (AAPM)'s Task Group 119 report on a volumetric phantom (Delta 4, Scandidos, Uppsala, Sweden), following the stated dose goals, to evaluate the step-and-shoot intensity modulated radiation therapy (IMRT) system.
Delta 4 consists of diode detectors, lying on two crossed planes, measuring the delivered dose and providing two-dimensional dosimetric information. Seven plans of different goals and complexity were performed, with individual structure sets. TG119 structure sets and plans were transferred and implemented on the Delta 4 phantom, taking into account its cylindrical geometry. All plans were delivered with a 6 MV linear accelerator equipped with a multileaf collimator of 1 cm leaf width. Plan results for each test met the recommended dose goals. The evaluation was performed in terms of dose deviation, distance to agreement, and gamma index passing rate. In all test cases, the gamma index passing rate was measured >90%. The Delta 4 phantom has proven to be fast, applicable, and reliable for step-and-shoot IMRT commissioning following TG119's recommended tests. Although AAPM's TG119 report refers to the implementation of test plans that do not correspond to patient plans, it could be used as an evaluation tool of various IMRT systems, considering the local treatment planning system and the delivery system. The report recommends a set of test cases to assess the accuracy of the planning and delivery process of IMRT treatments. It states that those tests should be performed at a primary level in every institution before proceeding to IMRT treatments. TG119 recommends verification measurements with an ionization chamber for one-dimensional (1D) point measurements and film for two-dimensional (2D) measurements, all performed on a solid water phantom. It is essential to study many different cases and run multiple measurements during the commissioning process to be as accurate as possible and make comparisons between IMRT delivery systems. Performing treatment plans of different type and complexity may reveal unidentified inaccuracies of the local treatment and delivery system procedure and hence lead institutions to improve the IMRT process.
Although AAPM TG119 suggests 1D and 2D measurements, it has not been applied so far in the literature with a volumetric phantom. The aim of this study is the implementation of AAPM's TG119 report on a volumetric dosimetric phantom to evaluate the step-and-shoot IMRT procedure at our department.

Materials and Methods

All measurements were carried out at Aretaieion University Hospital on a Siemens Oncor Impression linac with a 6 MV photon beam. It is equipped with an MLC of 82 leaves in the X-axis and a nominal width of 1 cm at the isocenter. The Y-axis jaws could move independently, creating the desired field length. The treatment planning was done on the local TPS Oncentra V4.3 (Nucletron, Elekta). The dose distribution for all the test cases was calculated using a collapsed cone convolution algorithm, both for the optimization process and for the final dose calculation. The calculation grid used during the planning process was 0.15 cm.

Gamma index: A quantitative tool for dose distribution comparison

The idea of dose deviation (DD) was introduced by Van Dyk et al. [4] It referred to the comparison between two dose distributions at low dose gradient areas. Misalignments that may occur during setup and alignment of a detector system showed a great influence on the results of the comparison. To address those drawbacks, Van Dyk introduced another quantitative tool, described by the distance to agreement (DTA), for areas of steep dose gradient. It was then applied by Harms et al. [5] as a software tool, introducing the idea of the closest distance between two dose distributions which lie at the same dose level. To achieve more precise and accurate results, Low [6] combined the two parameters, DD and DTA, into one factor, the gamma index. This factor is satisfied when the combined DD and DTA criteria are met. Many groups have attempted to develop methods for the improvement of the calculation of the gamma index, with Stock et al.
[7] in 2005 introducing 2D gamma analysis and gamma histograms for complex dose distributions. Hence, within the gamma index, DTA and DD were combined into one evaluation tool for the verification of a treatment plan. Gamma criteria of 3 mm DTA and 3% DD were used for the evaluation of IMRT treatment plans. The criteria's thresholds, as Low stated, could be modified depending on the clinical needs that are examined. [6] Moreover, it is possible to set an acceptable benchmark passing rate, below which the gamma index is unacceptable and above which it passes the criteria. In this work, a passing rate of 90% and above for the gamma index (3% DD/3 mm DTA) was considered acceptable. [8]

Delta 4 phantom

All test cases investigated in this work were planned and delivered on the Delta 4 phantom (Scandidos, Uppsala, Sweden). Delta 4 is a cylindrical, polymethylmethacrylate phantom, consisting of two orthogonal detector planes in a crossed array. It consists of 1069 p-type silicon diodes that can measure point doses and can be used for QA of patient-specific treatment delivery. The detector planes' spatial resolution is 5 mm in the central area of 6 cm × 6 cm and 10 mm in the outer area of each plane. The cylindrical phantom has a diameter and length of 22 cm and 40 cm, respectively. [9] For the verification of a patient treatment plan, the plan was applied to the Delta 4 phantom and the dose distribution inside the phantom was recalculated in the TPS. The comparison between the calculated and the measured radiotherapy (RT) dose was translated to DTA, DD, and gamma index. The evaluated gamma passing rate is only given for the measured points in the two detector planes; therefore, the analysis is limited to the measurement points. [9] Diodes that received a dose less than a certain percentage of the maximum absorbed dose were ignored in the analysis. These ignored readings were typically located in the low gradient regions where the diode response is less reliable.
[10] In this work, dose values below 10% of the dose maximum were excluded from the final result. It has to be emphasized that there is no direct calculation of the delivered dose for every point within the phantom. However, according to the manufacturer, a 3D calculation of the delivered dose is available, even if the planned control points are missing. This interpolation method theoretically uses depth dose distributions for different field sizes, calculated with the TPS for the Delta 4 phantom and processed by its software. The 3D dose determination for a single beam in the cylindrical Delta 4 phantom requires the planned dose to be known in the complete cylindrical volume, while the measured dose is known in the two orthogonal detector planes. The planned dose for each beam is renormalized using the ratio between the planned and the measured dose at the intersection point of the beam with the detector plane. Finally, the dose is calculated for all the radiation fields. [11] The above process has been carried out during the calibration of the phantom, which is recommended by the manufacturer to be performed once a year, including the wing uniformity response, directional dependence, and absolute dose calibration. Reference treatment planning data, such as DICOM RT objects, beam arrangement, and structures from the original plan, were transferred to the Delta 4 system. The software has a variety of tools for displaying the differences between the measured and the calculated dose. The gamma analysis was performed based on the formulae by Low et al. [12] The histograms of DD, DTA, gamma index, [6] and the passing rates were calculated by the Delta 4 software (Scandidos, Uppsala, Sweden). For the statistical calculation, all detectors from both detector boards were used.
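As a rough illustration of the gamma analysis described above, a minimal 1D sketch in the spirit of Low's formula could look like the following (this is our own brute-force illustration, not the Delta 4 software's actual implementation; function and parameter names are ours):

```python
import math

def gamma_passing_rate(ref_pos, ref_dose, meas_pos, meas_dose,
                       dd_pct=3.0, dta_mm=3.0, cutoff=0.10):
    """1D global gamma analysis in the spirit of Low et al.

    For each measured point, the gamma value is the minimum over all
    reference points of sqrt((dx/DTA)^2 + (dD/DD)^2); a point passes
    if gamma <= 1. Points below `cutoff` * max dose are excluded,
    mirroring the 10% low-dose threshold used in this work.
    """
    d_max = max(ref_dose)
    dd_abs = dd_pct / 100.0 * d_max          # global dose-difference criterion
    gammas = []
    for xm, dm in zip(meas_pos, meas_dose):
        if dm < cutoff * d_max:
            continue                          # low-dose exclusion
        g = min(math.sqrt(((xr - xm) / dta_mm) ** 2 +
                          ((dr - dm) / dd_abs) ** 2)
                for xr, dr in zip(ref_pos, ref_dose))
        gammas.append(g)
    passed = sum(1 for g in gammas if g <= 1.0)
    return 100.0 * passed / len(gammas)
```

For identical planned and measured distributions the passing rate is 100%; tightening the DD/DTA criteria or degrading the measurement lowers it, which is the trade-off the 3%/3 mm criteria encode.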
Before the evaluation of an IMRT plan, two more measurements were done by delivering 100 cGy with a 10 cm × 10 cm field at gantry angles of 0° and 90°, in order to check the phantom for positional corrections and linac output constancy. [13] These setup corrections were then applied to the rest of the measurements performed with the same phantom position.

American Association of Physicists in Medicine Task Group 119 report

The TG119 report suggests seven test cases for the evaluation of the IMRT procedure. Each case included a target and peripheral normal structure shapes in a DICOM format, which were imported into the TPS. Thereafter, the DICOM RT structures were fused and registered on the dosimetric phantom. Each test also included specific dose goals [Tables 1-5] and a beam arrangement to be applied. The seven tests were of varying type and complexity, and hence different optimization criteria had to be used. Each case required certain specific measurements to be performed for testing the accuracy of both the delivery and planning systems, through comparison of the results with the published values in the report.

Preliminary/forward planning test cases

Test P1: Anterior-posterior: Posterior-anterior and Test P2: Bands

Test P1 is a simple, parallel-opposed setup of the phantom using anterior-posterior: posterior-anterior (AP: PA), 10 cm × 10 cm fields to a dose of 200 cGy to the isocenter. Test P2 (Bands test) is also a simple, parallel-opposed setup using a series of adjacent AP: PA fields, to create a set of five bands, 3 cm wide, with a total field length of 15 cm. This was achieved by opening the jaws in the Y-axis up to 15 cm of field length, while the MLCs, which lie in the X-axis, were moving asymmetrically, gradually producing a field width of 15 cm (3 cm at a time). The proposed dose escalation to be achieved was 40-200 cGy, in five 40 cGy steps.
Inverse planning test cases

Test I1: Multitarget

The multitarget case consists of three cylindrical targets, which are stacked along the axis of rotation of the gantry. Each has a diameter of 4 cm and a length of 4 cm. The objective of the test is to deliver different doses to each target, with the central target receiving the largest dose per fraction, and the superior and inferior targets receiving 50% and 25% of the prescribed dose, respectively. Dose goals are specified as D99, referring to the dose to 99% of the volume, and D10, referring to the dose to 10% of the volume, for each and every target [Table 1].

Test I2: Mock prostate

In the mock prostate test case, the planning target volume (PTV) is expanded 0.6 cm around the clinical target volume of the prostate with a posterior concavity. The bladder and the rectum are also included in the structure set and need to be protected. Dose goals for the prostate PTV are specified as D95 and D5, referring to the dose to 95% and 5% of the volume, respectively. In addition, the rectum and bladder have to be protected, and the dose has to be kept under specific limits. D30 and D10, which refer to the dose to 30% and to the dose to 10% of the volume, respectively, have to be characterized [Table 2]. The dose goals for the mock head/neck test case (I3) are listed in [Table 3].

Test I4 and I5: C-shape

The last two IMRT plans refer to a C-shaped target that surrounds a central avoidance structure. The central core is a cylinder 1 cm in radius and is located in the inner arc of the PTV. Two versions of this test case, with different numerical goals but with the same beam arrangements, were examined. In the first and easier one, the central core must be kept under 50% of the target dose, while in the second and harder test case the central core has to be kept below 20% of the target dose. For both cases, PTV dose goals are specified as D95, i.e., the dose to 95% of the volume, and D10, i.e., the dose to 10% of the volume.
For the core, D10 needs to meet individual criteria, which are stricter for the harder case [Tables 4 and 5].

Results

The proposed dose goals from TG119 and the plan results from the local TPS are shown in Tables 1-5, individually for all the tests performed at our department. Mean values achieved by the institutions referred to in the TG119 report are also shown in Tables 1-5, respectively. Results refer to the doses in cGy of the different PTVs and to the organs at risk that have to be protected according to TG119. Planning parameters, including the number of fields, segments, and monitor units (MUs), and the gamma analysis results of each test case, including DD, DTA, and gamma index passing rates, are also presented in Table 6.

Test P1: Anterior-posterior: Posterior-anterior and P2: Bands

The gamma passing rate of test P1 is 99.3%, and for test P2 it is 99.7%. For the simple irradiation of parallel-opposed fields, AP: PA and bands test, the gamma index results were high. Those tests could also be performed as a primary accuracy check of a delivery system for everyday practice.

Test I1: Multitarget

The gamma passing rate for the multitarget case was calculated at 98.1%. Test case I1 represents a concomitant target IMRT, requiring the planner to achieve gradually different doses in the three targets. The DD pass percentage was 81.7% and the DTA pass percentage was 95.1%.

Test I2: Mock prostate

The gamma passing rate for test I2 was 99.6%, while the pass percentages of DD and DTA were 80.1% and 98.6%, respectively. The dose distribution in the axial projection and on the Delta 4 planes and the gamma index results are also provided [Figure 1].

Test I3: Mock head/neck

The gamma passing rate for the I3 test case was 98.6%, while the pass percentages of DD and DTA were 83.1% and 94.2%, respectively. The mock structures of the spinal cord and the parotid glands showed a decline in dose, while the target receives at least 95% of the prescribed dose [Figure 2].
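The Dxx dose-volume metrics quoted in these goals (D99, D95, D30, D10, D5) can be read off a structure's dose-volume histogram; a minimal sketch of that computation (our own illustration, not the TPS's algorithm) is:

```python
import math

def dose_at_volume(voxel_doses, volume_pct):
    """D_x: the minimum dose received by the hottest x% of the volume.

    `voxel_doses` is a flat list of doses sampled uniformly over the
    structure, so each entry represents an equal share of the volume.
    E.g. D95 is the dose that 95% of the structure receives at least.
    """
    ranked = sorted(voxel_doses, reverse=True)        # hottest voxels first
    k = max(1, math.ceil(volume_pct / 100.0 * len(ranked)))
    return ranked[k - 1]
```

With uniform doses of 1 to 100 cGy over 100 equal-volume voxels, D95 evaluates to 6 cGy and D10 to 91 cGy: D95 probes the coldest end of the target (coverage), while D10 probes the hottest end (overdose), which is why the TG119 goals constrain both.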
Tests I4 and I5: C-shape

For each case of the C-shape test, different objectives and constraints had to be achieved. For the easier case, the pass percentage of the gamma index was 95.2%, while those for DD and DTA were 48.8% and 87.8%, respectively, and are presented below [Figure 3]. For the harder case, the gamma index pass percentage was 90.3%. For DD and DTA, the pass percentages were 48.2% and 86.7%, respectively.

Discussion

For each TG119 test case, there were specific dose goals that had to be fulfilled by the IMRT system, including the TPS, the optimization algorithm, and the linac's MLC system. In Tables 1-5, the plan results of our IMRT system are presented. As can be seen, most of our plan results met the TG119 recommended dose goals. The multitarget test case satisfied most of the planning goals, except the D99 for the central target, with 3.1% lower dose coverage than the required constraint, and the D10 for the superior target, with a value 1.5% higher than TG119's constraint. These differences are small and could be considered acceptable, while the rest of the parameters satisfied the TG119 requirements. Comparing the results of the institutions with the dose goals of TG119, there was difficulty in achieving the goals of D99 and D10 of the central target, by 0.9% lower and 2.9% higher than the required values, respectively. Regarding the mock prostate test case, all the specified goals were met. Like the other institutions, we kept the doses to critical organs much lower than the proposed doses. Furthermore, the dose coverage of the PTV, both for D95 and D5, satisfied the stated goals. For the head/neck test case, there was a 1.4% deficiency in reaching the goal for D99 of the PTV and also a deficiency of 0.2% in achieving the goal of PTV D90, while PTV D20 stayed below the required dose by 2.2%. On the other hand, the published results in the report satisfied all the goals both for the PTV and for the organs at risk (OARs).
However, since the dose deviations that we measured are lower than 3% of the proposed dose, they could be considered acceptable. In the easy case of the C-shape, the objectives referring to the PTV were satisfied, with the core kept under the proposed dose. TG119 results for the easy case of the C-shape were also acceptable for the D95 and D10 of the PTV, and the core was also kept under the required limits. However, in the harder case of the C-shape, we were unable to keep PTV D10 lower than the goal, and there was a deficiency in meeting the benchmark value of D95 for the PTV. Nevertheless, the cord dose was kept lower than 1000 cGy. Unlike other institutions, we chose to preserve the core and underdose the PTV in order to stay below the core constraint. The report's results, on the contrary, satisfied the dose coverage of PTV D95 by remaining 0.2% above the proposed goal, but at the same time, PTV D10 exceeded the proposed value by 3.7%, and D10 for the core exceeded the required constraint by 63%. Another basic purpose of the implementation of the tests was to check whether the measured dose from the diodes of the phantom agreed with the planned dose from the TPS. The comparison included measuring the percentages of DD, DTA, and, from those two, the percentage of the gamma index. It is essential that these three parameters be combined for more accurate results. Acceptance criteria of 3% DD, 3 mm DTA, and a 90% threshold for the gamma index passing rate were used. As presented in Table 6, in all test cases, the gamma index passing rate was measured >90%. Furthermore, in each test case DTA was measured closer to the gamma index value than the DD value. However, in the C-shape test cases, even though the gamma index was satisfied, DD and/or DTA were measured relatively low.
Low et al. [14] have shown that an acceptable gamma index percentage does not necessarily imply acceptable percentages of DD and DTA. In particular, it is mentioned that the DTA tool is more intense and variable in steep dose gradient regions, whereas the spatial discrepancy of dose distributions is rather low. In clinical practice, in order for a treatment plan to be accepted, it must be verified by comparing the measured dose with the planned dose. A different practice has to be followed for different cases, referring to the criteria that a plan needs to satisfy depending on the tumor/critical organs and taking into account the uncertainties of spatial resolution. Nevertheless, Low et al. [14] have shown that these two metrics tend to yield similar results. As an illustration, all the above measurements have shown this tendency of DTA toward the gamma index pass percentage. The DD pass percentage, on the other hand, shows a general decline in most of the test cases, despite the gamma index's acceptable percentage. The DD pass percentage was lowest for the harder case of the C-shape test, at about 50%. The complexity of the planning goals for the individual plans varied according to the objectives and the constraints given in TG119. The measured DD, DTA, and gamma index histograms of the forward plans resulted in high pass percentages. The two forward plans can be used as an initial cross-check of the planning and the delivery system, before the IMRT procedure. The IMRT plans resulted in acceptable pass percentages of the gamma index and met the criteria required by the report. In addition, the IMRT results revealed an increase in the planned MUs with the complexity of the plans and a corresponding decrease in the gamma passing rate. Summing up, Delta 4 provides thorough data analysis and a relatively fast way to take measurements without necessitating further QA systems.
Measurements take place on the two planes of the phantom, from which a 3D dose distribution can be computed by the phantom's software through an interpolation method. A major advantage of using a volumetric phantom for the implementation of TG119 is that one measurement is enough to calculate absolute doses at different points corresponding to points in PTVs and OARs. The measurement procedure with the volumetric phantom gives much more information than a point dose measurement with an ionization chamber. The Delta 4 phantom can be set up easily, and positional errors can be kept to a minimum. It is important that the initial calibration and the commissioning process of the Delta 4 be accomplished with rigorous and careful measurements to ensure patient-specific QA tests are accurate. [8] Last but not least, it should be noted that the TG119 report refers to the commissioning process of an IMRT system at a primary level. Consequently, it cannot be used directly in clinical practice, because it refers to methods that should be followed in order to test the IMRT planning and delivery system before performing IMRT plans on actual patients. In addition, the report's test cases refer to specific mock structures, but in real patient cases there might be multiple OARs, and the size of PTVs/OARs may differ. The requirements to fulfill according to the TG119 report were of high intricacy both for the planning and for the delivery system. Comparing our results with those of other institutions, it was concluded that even though they are similar in most of the test cases (except the harder C-shape test case), they cannot be compared directly. This is basically because the institutions referred to in this report used different delivery and planning systems. Results reported in the TG119 report refer to mean values of doses that were achieved by all institutions collectively and not individually.
Conclusion

The Delta 4 phantom has proven to be fast and reliable for step-and-shoot IMRT commissioning following TG119's recommended tests. The AAPM TG119 test cases have been applied successfully on the Delta 4 volumetric phantom, providing accurate results at a primary level and before any clinical use. It has to be noted that the test cases refer to theoretical objectives and constraints but not to practical guidelines on performing IMRT plans on actual patients. Nevertheless, the TG119 report could be used as an evaluation tool of different IMRT systems between institutions, in order to compare results for different combinations of planning and delivery techniques.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
Multiple-Access Quantum Key Distribution Networks

Mohsen Razavi, Member, IEEE

Abstract-This paper addresses multi-user quantum key distribution networks, in which any two users can mutually exchange a secret key without trusting any other nodes. The same network also supports conventional classical communications by assigning two different wavelength bands to quantum and classical signals. Time and code division multiple access (CDMA) techniques, within a passive star network, are considered. In the case of CDMA, it turns out that the optimal performance is achieved at a unity code weight. A listen-before-send protocol is then proposed to improve secret key generation rates in this case. Finally, a hybrid setup with wavelength routers and passive optical networks, which can support a large number of users, is considered and analyzed.

Index Terms-Quantum key distribution, multiple access, cryptography.

I. INTRODUCTION

Consider different branches of a bank within a metropolitan area. Suppose any two of them may need to exchange a secret key, a sequence of random bits, without trusting any other parties.
They need to be immune against any eavesdropping attacks, during the key exchange and also at any time in the future. Except for possibly some initial setup requirements, they also require the convenience of remotely exchanging the keys without relying on traditional meet-and-exchange protocols. Whereas most known cryptographic solutions to this problem rely on computational complexity, and are hence threatened by future technological advancements, there is one emerging technology that can meet all the above requirements. Quantum key distribution (QKD) is a future-proof protocol, whose security results from the laws of physics as modeled by quantum mechanics [1], [2]. This paper studies the above and similar multi-user scenarios, where QKD lends itself to network environments. QKD relies on single-photon technology, requiring us to deal with the challenges of generation, transmission, and detection of single photons. A quantum leap forward was recently taken by the introduction of decoy-state protocols [3], which relaxed the need for generating ideal single photons by replacing them with weak laser pulses. These weak laser pulses still need to go through lossy channels with possibly additional background noise. Nevertheless, throughout the past two decades, point-to-point QKD has successfully been demonstrated over fiber and free-space channels covering distances over 100 km [4], [5], and at gigahertz transmission rates over 60 km of optical fiber [6]. This distance is large enough to support applications in local and metropolitan areas without relying on quantum repeaters [7]-[10]. (This research has received funding from the Seventh Framework Programme under Grant Agreement 277110. M. Razavi is with the School of Electronic and Electrical Engineering, University of Leeds, Leeds, LS2 9JT, UK, e-mail: m.razavi@leeds.ac.uk.) Several essential steps have recently been taken to make the QKD technology available to the public.
The most important breakthrough is the implementation of QKD over existing fiber-optic infrastructures [11]-[16]. This is especially important because QKD protocols are extremely sensitive to background noise, and the crosstalk noise between classical and quantum channels substantially deteriorates QKD performance. Nevertheless, the integration between classical and quantum communications is inevitable. It is partly because all QKD protocols rely on conventional classical communications for their operation, but, more importantly, because it would be too costly to use dedicated channels for QKD applications. All setups proposed in this paper would therefore support both types of communications on a common platform. Another key requirement for a more versatile use of QKD is its expansion from point-to-point links to multi-user networks, which will be at the core of this work. The initial steps toward this end have already been taken. Key exchange between one central node and several access nodes has been studied and developed in the past few years [16], [17]; see Fig. 1(a). A real-time demonstration of key exchange over a mesh network of several nodes was also performed by the SECOQC project [18], and, more recently, in the Tokyo QKD network [15]. Such examples, despite being important breakthroughs in the field, suffer from a general limitation. In order to enable key exchange between any two network users, they often require trusting certain intermediate nodes. In the case of access networks, the central node must be trusted by all nodes, and, in the case of mesh networks, there is not necessarily a direct route between any two nodes, unless a full mesh network is used; see Fig. 1(b). The latter architecture is too costly, with the number of required channels scaling quadratically with the number of nodes. Instead of connecting all nodes directly together, as in a full-mesh network, one can use the concept of switching.
This idea has been used in a wavelength division multiplexing (WDM) network, in which any two nodes are assigned a certain wavelength for key exchange and a wavelength router links them together [14]; see Fig. 1(c). Such a system, however, requires a linearly increasing number of wavelength resources with the number of nodes, even if we efficiently reuse the channels [19]. In the field demonstration of [14], there is also a specific detection module required per incoming wavelength, which makes the system even more costly. In this paper, we look at other available dimensions, i.e., time and code, to be employed in time/code division multiple-access (TDMA/CDMA) QKD networks. They are particularly useful if combined with a WDM routing setup, in which case each WDM node can serve as a hub through which multiple TDMA/CDMA users can be supported. In particular, a hybrid WDM-CDMA setup does not require any network-wide time coordination and is suitable if the need for key exchange is sporadic. To this end, we first study a TDMA/CDMA-based multi-user star-topology QKD network, and find average lower bounds on the secret key generation rates when decoy-state protocols are used. For CDMA, we use optical orthogonal codes (OOCs) to address each node [20]. It turns out that the lower the code weight, the higher the key generation rate. It follows that a code with weight one is the best option for QKD. This case is equivalent to a TDMA system whose users are not synchronized. To get the advantage of both TDMA and CDMA, a listen-before-send (LBS) protocol is proposed, in which users attempt to pick a free time slot by first listening to the channel and then proceeding to key exchange only if no other user is detected in that slot. The rest of the paper is organized as follows. In the next section, we review the original and decoy-state QKD protocols and the existing lower bound on the secret key generation rate in the latter case. In Sec.
III, we study a passive star QKD network using TDMA and CDMA as its multiple-access methods. We also calculate the average key rate for the LBS protocol. Some numerical results will be presented in Sec. IV, before introducing hybrid WDM-T/CDMA QKD networks in Sec. V. Section VI concludes the paper. II. QUANTUM KEY DISTRIBUTION In this section, we first review the original BB84 protocol, which relies on ideal single photons [1], and then we present its decoy-state variant and the existing lower bounds for the secret key generation rate in the case of point-to-point QKD links. The BB84 protocol enables two parties, namely, Alice and Bob, to securely exchange, or, more precisely, extend a secret key sequence. The probability of failure, i.e., of ending up with non-identical or insecure keys, possibly due to the presence of eavesdroppers, can be made arbitrarily small. It performs this task through the following steps. In step one, Alice transmits a raw key by encoding and sending single photons to Bob. Encoding is done in two, randomly chosen, nonorthogonal polarization/phase bases, which creates maximum uncertainty in decoding if one does not know the basis used. The bases will be revealed later, via authenticated classical communications, so that Alice and Bob can turn their raw keys into sifted keys by keeping only the bits for which the same basis has been used for encoding and decoding. In the next step, Alice and Bob attempt to correct for possible discrepancies in their sifted keys using error-correction techniques. If they find the quantum bit error rate (QBER) too high, they abort the protocol; otherwise, they apply privacy amplification to their corrected keys to bring the amount of leaked information to eavesdroppers below a desired threshold. Since the introduction of BB84, it has been tempting to use weak laser pulses, with less-than-unity average photon numbers, instead of ideal single photons. 
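As an illustration only (not code from the paper), the sifting step just described can be sketched in Python; this toy model assumes a lossless, eavesdropper-free channel, and all names are ours:

```python
import random

def bb84_sift(n_pulses, seed=7):
    """Toy BB84 sifting: Alice encodes random bits in random bases, Bob
    measures in random bases, and both keep only the positions where the
    bases agree (assumes a lossless, noise-free, eavesdropper-free channel)."""
    rng = random.Random(seed)
    alice_bits  = [rng.randrange(2) for _ in range(n_pulses)]
    alice_bases = [rng.randrange(2) for _ in range(n_pulses)]
    bob_bases   = [rng.randrange(2) for _ in range(n_pulses)]
    # With matching bases Bob decodes Alice's bit; otherwise the outcome is random.
    bob_bits = [b if ab == bb else rng.randrange(2)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    sifted_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
                    if ab == bb]
    sifted_bob = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases)
                  if ab == bb]
    return sifted_alice, sifted_bob
```

On average about half of the raw bits survive sifting, since the two randomly chosen bases agree with probability 1/2.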
It can, however, be shown that by taking advantage of the multiple-photon components of a coherent state, an eavesdropper can obtain information about the key, hence reducing the secret key generation rate. In fact, if the channel transmissivity between Alice and Bob is denoted by η, the key generation rate will drop proportionally to η² if one uses coherent states, whereas it scales with η if ideal single photons are used [21]. It was not until the advent of the decoy-state protocol that it was shown that, by a simple trick, one could get back to the same scaling with η even if weak laser pulses are used. The trick is in Alice, occasionally, changing her laser intensity from its typical signal value to some decoy values. These random-power pulses provide Alice and Bob with extra information that helps them better detect potential eavesdroppers. The secret key generation rate per transmitted pulse, for a BB84 protocol that uses decoy coherent states and threshold detectors for its implementation, in the limit of an infinitely long key, is lower bounded by R ≥ q{−Q_µ f(E_µ)H(E_µ) + Q_1[1 − H(e_1)]}, (1) where H(x) = −x log₂ x − (1 − x) log₂(1 − x) is the binary entropy function, q is the sifting factor, f(E_µ) is the error-correction inefficiency, Q_µ = Y_0 + 1 − e^{−ηµ} and E_µ = [e_0 Y_0 + e_d(1 − e^{−ηµ})]/Q_µ (2) are, respectively, the overall gain and the QBER, Q_1 = Y_1 µ e^{−µ} and e_1 = (e_0 Y_0 + e_d η)/Y_1 (3) are, respectively, the gain and the error rate of a single-photon state, µ is the average number of photons in a signal pulse, Y_1 ≈ Y_0 + η is the yield of a single-photon state, η is the total transmissivity of the link including the efficiency of Bob's detectors, e_d represents the misalignment error, e_0 = 1/2 is the error rate of background clicks, and Y_0 is the probability of a click on Bob's side without having any incident photons from Alice. In a point-to-point link, Y_0, the yield of the vacuum state, models the photodetectors' dark current and the background noise. III. QKD OVER STAR NETWORKS Switching and routing devices are among the key components of modern communication networks. For quantum applications, such as QKD, one should also consider whether the employed switching scheme is compatible with the single-photon regime of operation. 
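The decoy-state lower bound above can be evaluated numerically. The following Python sketch uses the standard decoy-state expressions for the gains and error rates; the sifting factor q, the error-correction inefficiency f(E_µ) (taken as a constant 1.16), and the background error rate e_0 = 1/2 are common assumptions rather than values quoted in this text:

```python
from math import exp, log2

def h2(x):
    """Binary entropy function H(x)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def decoy_key_rate(eta, mu, Y0, e_d, q=0.5, f_EC=1.16, e0=0.5):
    """Lower bound on the decoy-state BB84 key rate per pulse (infinite-key
    limit), following the standard decoy-state analysis; clipped at zero."""
    Q_mu = Y0 + 1 - exp(-eta * mu)                          # overall gain
    E_mu = (e0 * Y0 + e_d * (1 - exp(-eta * mu))) / Q_mu    # overall QBER
    Y1 = Y0 + eta                                           # single-photon yield
    Q1 = Y1 * mu * exp(-mu)                                 # single-photon gain
    e1 = (e0 * Y0 + e_d * eta) / Y1                         # single-photon error
    R = q * (-Q_mu * f_EC * h2(E_mu) + Q1 * (1 - h2(e1)))
    return max(R, 0.0)
```

The rate grows with the transmissivity η and collapses to zero once the background yield Y_0 dominates the single-photon contribution, which is the behavior exploited throughout the network analysis below.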
In this section, we consider a simple, low-cost switching idea suitable for quantum optical communications. In our setup, N users, at an identical distance L from each other, are connected via an N × N star coupler. A star coupler combines the signals at its input ports and broadcasts them to all output ports. Each output port will then carry a shadow of each input signal. This results in a multiple-access scenario, which must be handled at the receiver. Here, we employ the two approaches of TDMA and CDMA to distinguish between different users' signals. TDMA is, in principle, interference-free, but it requires network-wide time synchronization. It, however, provides us with a reference against which we can compare the performance of other proposed techniques such as CDMA. We should also bear in mind that the splitting nature of star couplers will prevent us from supporting a large number of users. This transparent architecture can, however, be used as a private local/metropolitan area network, or as the access part of a larger optical network, an example of which will be discussed in Sec. V. QKD relies on both classical and quantum communications for its key exchange protocol. In a multi-user scenario, we should, therefore, support both types of communications for all users. Finding the best strategy for the coexistence of weak, single-photon quantum signals and strong, multi-million-photon classical pulses on a single strand of fiber is an ongoing area of research. The main problem is the crosstalk noise induced by classical signals over quantum channels, which can fully mask the information in the QKD single photons. Here, we use the scheme proposed in [13], in which the 1550 nm wavelength band is used for classical applications and the 1310 nm band for quantum signals. Depending on the employed band demultiplexer, additional filtering may also be required. 
In our analytical model, we account for the above crosstalk effect by considering a fixed background noise in our quantum channels. Each user, in Fig. 2(a), has an Alice box and a Bob box; see Fig. 2(b). Alice boxes include a QKD encoder using a faint laser source at the 1310 nm band, followed by a multiple-access (MA) encoder, which addresses, either in time or by a code, the intended receiver. The output of the MA encoder is multiplexed with classical channels in the 1550 nm band. The quantum and classical parts can be run independently of each other. Bob boxes include a band demultiplexer, which separates the quantum and the classical channels, an MA decoder, which attempts to extract the intended weak laser pulse from the received signal, followed by the necessary QKD measurements. [Fig. 3: (a) A TDMA time frame for QKD users. Each frame has a width T and is divided into N_c time slots (chips). The width of the laser pulse is denoted by τ_p and the gate width of single-photon detectors by τ_d. (b) An example of an OOC sequence. In CDMA QKD, in order to send a raw key bit, instead of sending a single pulse, one sends a sequence of pulses, corresponding to the designated code, to the intended receiver.] Circulators can be used to enable bidirectional transmission to/from Bob/Alice boxes. Only one QKD measurement module, in our setup, is needed per user. This is a cost-effective approach, as the most expensive elements of a QKD link are commonly the avalanche photodiodes (APDs) used for single-photon detection. Only one user at a time can then exchange a key with a certain Bob box. Proper medium-access control (MAC) protocols must then be used to avoid collisions between users attempting to exchange a key with the same user. This can, however, be coordinated via classical channels before two QKD users start their protocol, and is assumed throughout the paper. 
We also assume that the users employ the decoy-state variation of the standard BB84 protocol [3] and that channel phase/polarization distortions are compensated at the receiver. A. TDMA QKD Networks In TDMA QKD networks, each receiver is assigned a certain time slot out of N_c available time slots. The total frame length is denoted by T, which must be longer than the dead time of single-photon detectors; see Fig. 3(a). The width of each laser pulse is denoted by τ_p, which must not exceed T_c ≡ T/N_c, or the chip period. The MA encoder's task is to make sure that the transmitted weak laser pulses will arrive at the receiver at the right time. For instance, in order to send a raw key bit to user k, for k = 1, . . . , N, the transmitted signal state is as follows: ρ̂_k = e^{−µ} Σ_n (µ^n/n!) |n⟩_k⟨n|, (4) where |n⟩_k represents an n-photon Fock state corresponding to the kth chip, with no photons in any other chips. The MA decoder box removes all but the signal in the desired time slot by opening the detector gate during the expected arrival time. Under these conditions, the secret key generation rate per user is lower bounded by R_TDMA = (1/T) R(η, Y_0), (5) where R(η, Y_0) denotes the right-hand side of (1) evaluated at total transmissivity η and background yield Y_0 = (γ_dc + γ_xtalk B_opt) τ_d, (6) with γ_dc being the total dark count rate generated by photodetectors in a Bob box, η_d the quantum efficiency of single-photon detectors at Bobs' boxes at 1310 nm, γ_xtalk the background rate of crosstalk photons leaked from classical channels at the receiver per unit of bandwidth, B_opt the optical bandwidth of the receiver, and τ_d ≥ τ_p the photodetectors' gate width. Throughout the paper, we assume that γ_dc and γ_xtalk are fixed and identical for all users. In principle, because of practical constraints on photodetectors, τ_d can become greater than T_c, in which case we have to consider the interference from other users as well. Here, we assume that τ_d ≤ T_c so that the above TDMA rate, in (5), provides us with a reference for comparison with other methods. 
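The background yield and per-user rate described above can be assembled as follows. This is a hedged sketch with our own function names: `channel_transmissivity` assumes the N × N star coupler contributes a 1/N splitting loss on top of the fiber loss (an assumption consistent with a broadcast coupler, not a formula quoted verbatim here), and `rate_per_pulse` stands for any per-pulse lower bound such as the decoy-state one:

```python
def background_yield(gamma_dc, gamma_xtalk, b_opt, tau_d):
    """Y0 for a TDMA receiver: dark counts plus classical-channel crosstalk,
    integrated over the optical bandwidth and the detector gate width."""
    return (gamma_dc + gamma_xtalk * b_opt) * tau_d

def channel_transmissivity(alpha_db, length, eta_d, n_users):
    """Total link transmissivity: detector efficiency, fiber loss of
    alpha_db dB per unit length over `length`, and an assumed 1/N
    splitting loss for an N x N star coupler."""
    return (eta_d / n_users) * 10 ** (-alpha_db * length / 10)

def tdma_rate_per_user(rate_per_pulse, frame_period):
    """Per-user key rate in bit/s: one pulse per TDMA frame of width T."""
    return rate_per_pulse / frame_period
```

With a 16 ns frame, for instance, a per-pulse bound of 0.01 bit translates into 625 kbit/s per user before network penalties.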
The total channel transmissivity to be used in (1)-(3) and (5) is given by η = (η_d/N) 10^{−αL/10}, (7) where α is the channel loss factor in dB per unit of length and the factor 1/N accounts for the splitting loss of the star coupler. With N_A ≤ N active pairs of users operating in the network, the total secret key generation rate for the entire network is then given by R_TDMA^(tot) = N_A R_TDMA. (8) B. CDMA QKD Networks In CDMA QKD networks, each receiver is assigned a certain code of length N_c by which it can be addressed. In our case, we use OOCs, with minimal auto- and cross-correlation properties. Each code in an OOC family can be represented by an N_c-long sequence of zeros and ones. A bit one in position k of a code sequence represents a pulse in the kth chip in Fig. 3(b). Bits zero represent no pulses. In such zero-one codes, two code sequences A and B are overlapping whenever a pulse in code A is in the same time slot as a pulse in code B. The minimal cross-correlation criterion guarantees that once such an overlap between two pulses occurs, there will be no other overlapping pulses in codes A and B. That will also be the case if we shift one code sequence against the other, in which case the total extent of overlap between the two codes does not exceed one pulse duration. In our QKD setup, in order to send a raw key bit to a user with an OOC sequence of weight w and pulse positions 1 ≤ k_1 < k_2 < . . . < k_w ≤ N_c, the following state is generated in each CDMA frame: ρ̂ = ρ̂_{k_1} ⊗ · · · ⊗ ρ̂_{k_w}, (9) where each ρ̂_{k_j} is the state in (4) with mean photon number µ/w in chip k_j. For such a code, the decoder box will combine the received pulses in positions k_1, ..., k_w into one pulse and pass it to the next stage for QKD measurements; see Fig. 4. In the absence of any interfering users, and assuming that different components in (9) will be added incoherently [22], the decoded signal pulse will then be Poisson distributed with mean ηµ. Both the MA encoding and decoding steps in OOC CDMA can be performed transparently using passive optical elements, making them compatible with QKD applications. 
Figure 4 shows passive structures for CDMA encoders and decoders, where splitters and combiners, along with proper delay elements, have been used to split a single pulse into multiple pulses at the transmitter, and to recombine them again at the receiver. [Fig. 4: Passive encoders and decoders for CDMA QKD. In order to reduce the total splitting loss, one can use active switches at the decoder.] Note that the passive structure for the decoder results in an additional loss factor of w, which can be avoided if, instead of the splitter, a controllable switch is used in the decoder. The switch will direct each pulse in the incoming signal to the relevant delay branch in the decoder. For the rest of this section, we assume that such an active switching has been used in the decoder. It turns out, however, that, in the optimal regime of operation, such an assumption is not required. A new source of background noise in a CDMA setup is the interference from other QKD users. The minimal cross-correlation property guarantees that no two codes belonging to the same OOC family will, under arbitrary shifts, overlap at more than one pulse position. There will, however, be cases where a pulse from one code interferes with a pulse from another code. In such a case, the background noise in the QKD channel will increase. Assuming m such interfering users, the background noise will be given by Y_0^(m) = Y_0 + mηµ/w, (10) where we assumed that all users are chip synchronous [20]. This assumption is not required in practice, but it will simplify our analysis and help us overestimate the effect of noise, in line with the lower bound in (1). 
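The correlation property just described is easy to verify by brute force. A small Python sketch (helper names are ours) checks the maximum overlap between one code and all cyclic shifts of another, and evaluates the standard upper bound on the size of an OOC family:

```python
def cyclic_cross_correlation(code_a, code_b, n_c):
    """Maximum number of coinciding pulses between code_a and every cyclic
    shift of code_b; codes are given as sets of pulse positions in [0, n_c)."""
    return max(len(code_a & {(p + s) % n_c for p in code_b})
               for s in range(n_c))

def max_family_size(n_c, w):
    """Upper bound on the number of (n_c, w, 1) optical orthogonal codes,
    floor((n_c - 1) / (w * (w - 1))); defined for weight w >= 2."""
    if w < 2:
        raise ValueError("bound defined for code weight w >= 2")
    return (n_c - 1) // (w * (w - 1))
```

For N_c = 16 the bound reproduces the 7 and 2 codes quoted in the text for w = 2 and w = 3, respectively.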
The secret key generation rate per user, in the presence of m such interfering users, is then lower bounded by R_CDMA^(m), obtained from the TDMA rate in (5) by replacing Y_0 in (6) with the modified background noise in (10). Although, in the worst-case scenario, all active users might maximally contribute to the interference level experienced by each user, under typical operating conditions, and considering the asynchronous nature of a CDMA network, the number of interfering users follows a binomial distribution corresponding to N_int = N_A − 1 trials of a Bernoulli experiment with success probability p = w²/N_c [20]. The collision probability p represents w² cases of overlap between two codes once one shifts one code, chip by chip, viz., a total of N_c shifts, against the other code. We can then define an effective secret key generation rate per user as follows: R̄_CDMA = Σ_{m=0}^{N_int} P_m R_CDMA^(m), (12) where P_m = C(N_int, m) p^m (1 − p)^{N_int−m} is the probability of having m interfering users. The total effective rate will then be given by R̄_CDMA^(tot) = N_A R̄_CDMA. Figure 5(a) shows the effective key generation rate versus the number of active users for TDMA and CDMA QKD networks. In the latter case, we have considered three different values of the code weight w. Given that the number of available codes of length N_c and weight w is limited to ⌊(N_c − 1)/(w(w − 1))⌋ [20], for N_c = 16, a maximum of 7 and 2 users, respectively, can be supported at w = 2 and w = 3. In Fig. 5(a), however, we have assumed that optical orthogonal codes can be assigned to all users. It is interesting to see that, whereas in classical optical CDMA an increase in the code weight would ultimately result in improving the system performance [23], for QKD applications it has the opposite effect. To see the reason for this difference, note that there are two competing terms in (12) that affect R_CDMA. The parameter p, for the collision probability, increases quadratically with w, which favors lower weight values, whereas the interference noise in (10) is inversely proportional to w. 
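The binomial averaging just described can be sketched as follows; this is illustrative only, with `rates[m]` standing for the conditional key rate in the presence of m interfering users:

```python
from math import comb

def effective_rate(rates, n_active, w, n_c):
    """Average the conditional key rates over the binomial number of
    interferers: N_int = n_active - 1 trials, success prob p = w^2 / n_c."""
    p = w * w / n_c
    n_int = n_active - 1
    return sum(comb(n_int, m) * p ** m * (1 - p) ** (n_int - m) * rates[m]
               for m in range(n_int + 1))
```

When all conditional rates but the interference-free one vanish, the sum collapses to the (1 − w²/N_c)^(N_A−1) factor discussed in the practical-regime analysis.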
The winner of this trade-off turns out to be the former, mainly because, in a practical regime, the advantage we obtain from interference reduction by increasing w is close to none. For low values of w, the interference noise, mηµ/w, in (10), is comparable with the signal rate ηµ at the receiver, because of which R_CDMA^(m) is zero for m > 0. In fact, in order to get a nonzero value of R_CDMA^(m), for m > 0, w should be roughly greater than 100, which imposes impractical constraints on the system. In a practical regime of operation, where all terms in (12), but the first, vanish, we obtain R̄_CDMA = (1 − w²/N_c)^(N_A−1) R_CDMA^(0), (13) which is decreasing with w. The importance of collision probability in CDMA QKD networks is also noted in [24]. C. LBS QKD Networks In the previous section, we showed that our CDMA QKD network would achieve its optimal performance at w = 1. The case of w = 1 corresponds to code sequences with only one bit 1, representing a pulse, out of N_c chips. Considering the asynchronous nature of CDMA networks, in this case, any pair of users who wish to exchange a secret key can effectively choose a random chip for their QKD pulses. The performance of this scheme would be inevitably lower than that of TDMA, because, in the case of CDMA, collisions between pulses from different users are also allowed. This collision probability can, however, be reduced by employing an LBS algorithm. By using the LBS scheme, we achieve the simplicity of an asynchronous system, such as CDMA, with a performance approaching that of TDMA, as we show below. In our LBS QKD setup, each pair of users who need to exchange a key first pick a random chip for their pulse position. The receiver then listens to the channel for k periods, and if no photon is detected during the chosen chip, the two users proceed with the key exchange protocol. Otherwise, they repeat the same process again, until they find a free time slot. 
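The listen-before-send rule just described lends itself to a toy Monte Carlo estimate. In this sketch (ours, not the paper's model), a single pair is already active, `p_detect` is a stand-in for the probability of hearing that pair's pulse in one listening period, and a collision means the new pair settles in the occupied slot:

```python
import random

def lbs_collision_prob(n_c, p_detect, k, trials=200_000, seed=1):
    """Monte Carlo estimate of the probability that a new pair ends up in a
    slot already used by one active pair, under the listen-before-send rule:
    pick a random chip; if it is occupied, listen for k periods and proceed
    there only if nothing is heard, otherwise pick again."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(trials):
        occupied = rng.randrange(n_c)       # slot of the existing pair
        while True:
            slot = rng.randrange(n_c)       # candidate slot for the new pair
            if slot != occupied:
                break                       # free slot: nothing to hear
            # Occupied slot: k listening periods, each detecting w.p. p_detect.
            if all(rng.random() >= p_detect for _ in range(k)):
                collisions += 1             # heard nothing; collision occurs
                break
    return collisions / trials
```

With k = 0 the estimate approaches the plain random-chip collision probability 1/N_c, and it falls rapidly as k grows, mirroring the behavior discussed above.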
This scheme is very similar to the well-known carrier-sense multiple-access scheme, with the distinction that, in a QKD setup, we have to sense signals as weak as single photons. The collision probability in an LBS scheme can be approximated as follows. Suppose two users A and B are exchanging a key by sending QKD pulses in a certain time slot. Under chip-synchronous conditions, the probability that another pair, after following the LBS protocol, choose to use the same time slot is given by p_LBS = (1/N_c)(1 − P_d)^k, (14) where P_d = Y_0 + 1 − e^{−ηµ} is the probability of detecting a pulse from the existing users in a single listening period. The effective secret key generation rate can then be approximated by R̄_LBS = Σ_{m=0}^{N_int} P_m^(LBS) R_CDMA^(m), (15) where P_m^(LBS) is the binomial probability of m interfering users in (12) with p replaced by p_LBS; this rate approaches R_TDMA as we let k → ∞. Figure 5(b) shows how the key generation rate improves when the number of listening periods increases. With the nominal values used in Table I, it takes on the order of 1000 listening periods to achieve the same performance as the TDMA scheme. For the network at its full capacity, that is, when 16 active pairs of users are present, it takes each pair roughly 1 second to generate 15 kb of secret key. The extra 1000T = 16 µs overhead for the LBS scheme is then negligible compared to the time needed to generate the key. Using CDMA, with w = 1, only 6 kb/s of secret key can be generated, whereas the effective rate for TDMA is about 16.5 kb/s per user. IV. NUMERICAL RESULTS In this section, we study the performance of the system against variations in the length of the code, the number of users, and the distance, among other parameters. We consider the three cases of TDMA, CDMA with w = 1, and LBS with k = 500 for our comparative study, where the effective key generation rates can, respectively, be obtained from (5), (12), and (15). The nominal values used are listed in Table I, which are based on practical values affordable by today's technology. The average number of photons per signal pulse, µ = 0.48, maximizes the secret key generation rate for CDMA (w = 1) and TDMA QKD networks. The 6 dB path loss corresponds to a metropolitan area roughly 20-30 km in diameter. 
The chosen crosstalk rate is based on the numerical values reported in [13]. The designated dark count rate and quantum efficiency are also achievable by single-photon APDs at the 1300 nm band. Figure 6 depicts the effective secret key generation rate per user versus code length. Two cases have been considered. [Fig. 7: Effective key generation rate as a function of the maximum number of users, specified by the size of the star coupler switch, for TDMA, CDMA (w = 1), and LBS (k = 500) QKD networks. Solid lines represent rates per user, whereas dashed lines represent the total secret key generation rate in the network. It is assumed that the number of active users is at its maximum possible, N_A = N, and the code length is fixed at 128 in all cases. Other parameters are taken from Table I.] In Fig. 6(a), we have assumed that the chip duration τ_c is fixed at 1 ns, and we have increased the length of the code, or, equivalently, the number of chips, N_c. By doing so, the total frame period T = N_c τ_c would increase, and that would reduce the effective rate, inversely proportional to T, in the case of TDMA. By increasing N_c, the collision probability, in the case of CDMA, would, however, decrease, and this effect would, to some extent, balance the reduction in rate due to the increase in T. The LBS curve lies somewhere between the CDMA and TDMA curves. In Fig. 6(b), however, the frame period T is fixed at 16 ns, and increasing N_c implies using shorter chips. In this case, the TDMA rate remains constant, as it effectively represents a point-to-point system with a repetition rate of 1/T. The CDMA and LBS performances, however, improve as a result of the reduction in the collision probability. We can effectively make the chip duration as short as our detectors allow. 
Based on the above results, to push the performance of the system to its maximum, for a given photodetector with time resolution τ_d and dead time τ_D, one can choose τ_p = τ_c = τ_d, and, correspondingly, to minimize the crosstalk noise, B_opt = 1/τ_p. The period T will then be specified by the maximum of τ_D and N τ_c. The number of chips, N_c, is then given by T/τ_c. We use this prescription, as summarized in Table I, for the upcoming graphs. Figure 7 shows the effect of the splitting loss, or, equivalently, the maximum number of users supported by the network, on the secret key generation rate. As expected, by increasing the size of the star coupler, the splitting loss would increase, and that proportionally reduces the effective rate for each pair of users. At its maximum capacity, however, when all users are paired up to exchange secret keys, the total rate remains almost constant, as shown by the dashed line in Fig. 7. This implies that the total number of key bits distributed in the network is almost identical to the number of key bits generated in a point-to-point system with no splitting loss. This fair distribution of keys among users has been achieved with no loss in the case of TDMA, and with small penalties in the case of the CDMA or LBS schemes. Finally, Fig. 8 presents the effect of path loss on the effective key generation rate for the three schemes of interest. It can be seen that, by increasing the path loss, the LBS protocol approaches the CDMA one, as it becomes harder and harder to detect a single photon in only k = 500 listening periods. Another important factor shown in Fig. 8 is the dependence of the key generation rate on the crosstalk noise. It can be seen that, even for crosstalk rates three orders of magnitude higher than the nominal value in Table I, our multiple-access system is capable of supporting 16 users at 10 dB channel loss. 
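The design prescription at the start of this passage can be captured in a few lines of Python (function and key names are ours):

```python
def design_parameters(tau_d, tau_dead, n_users):
    """Prescription from the text: tau_p = tau_c = tau_d, B_opt = 1/tau_p,
    T = max(tau_dead, N * tau_c), and N_c = T / tau_c."""
    tau_p = tau_c = tau_d
    b_opt = 1.0 / tau_p
    frame_period = max(tau_dead, n_users * tau_c)
    n_c = round(frame_period / tau_c)
    return {"tau_p": tau_p, "B_opt": b_opt, "T": frame_period, "N_c": n_c}
```

For a 1 ns detector gate, a 10 ns dead time, and 16 users, this gives T = 16 ns and N_c = 16, matching the frame used in the numerical examples.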
This makes this approach promising for home users in a passive optical network (PON) setup in future generations of optical networks. In the next section, we propose a setup that combines the routing capabilities of WDM networks with the ease of access of PON systems to support a large number of QKD users. V. HYBRID WDM-T/CDMA QKD NETWORKS The passive star networks described in Sec. III support quantum and classical communications for a moderate number of users, offering certain key features. They enable secret key exchange between any pair of users without requiring them to trust any other nodes. The transparent nature of star couplers also enables the coexistence of classical and quantum signals over the same infrastructure. The proposed passive structure can then be made compatible with PON architectures for consumer users. Finally, whereas the repetition rate in a point-to-point QKD link is commonly limited by the after-pulse effect in single-photon APDs, in our proposed schemes, the detectors' dead time is efficiently split between multiple users, either synchronously by using TDMA, or asynchronously by using CDMA or LBS schemes. To support a large number of users, however, we need to bring in another degree of freedom, viz., the wavelength. Figure 9 illustrates a possible way for network expansion by combining WDM routing with each of the TDMA/CDMA/LBS techniques. Here, Alice and Bob boxes are similar to those of Fig. 2(b), except that Alice boxes now require tunable lasers. [Fig. 9: An expansion of a hybrid WDM-T/CDMA setup. Alice and Bob boxes are the same as those of Fig. 2(b). A total of M = N × W users are supported by such a setup, where each group of N users represents a PON with a single wavelength, and W is the number of available channels. Different users of a PON are separated in time, by code, or by the LBS method. The WDM routers must provide a clear path between the intended users. MUX and DeMUX, respectively, represent band multiplexers and demultiplexers that combine/separate classical and quantum channels. In the above setup, the two systems can be run independently of each other, while the classical system can be used to coordinate, at/above the physical layer, between quantum users.] Each Bob box is designated by one of the W wavelength channels in the 1310 nm band as well as a time slot corresponding to the employed TDMA/CDMA/LBS scheme. Classical communications can be independently handled, by a separate switch, in the 1550 nm band. They can, nevertheless, be used to coordinate between quantum users. All Bob boxes with the same wavelength are accessed via a 1 × N splitter, similar to a PON system [25]. The WDM router combines a maximum of N signals on the same wavelength and directs them to the corresponding output port. Proper MAC-layer protocols will be used to avoid/reduce collisions. In essence, the network in Fig. 9, for each wavelength, resembles a passive network as in Fig. 2(a). The same rate analysis, with slight changes to incorporate the crosstalk noise from quantum users on different wavelengths, will then apply to this new setup as well. The switching technology at the core network determines the extent of this new form of crosstalk. With the quantum network at its full capacity, the background rate in (6) will be modified to Y_0 = (γ_dc + γ_xtalk B_opt) τ_d + (W − 1) α_xt ηµ, (16) where α_xt represents the crosstalk factor between different wavelength channels, assumed identical for all wavelengths. Note that this assumption is in line with the lower-bound nature of (1) as, in many cases, the only significant crosstalk terms are from adjacent channels. Using (16), along with (5), (12), and (15), we can obtain the secret key generation rate for hybrid WDM-TDMA/CDMA/LBS systems. Figure 10 shows the secret key generation rate per user, for the network in Fig. 9, at its full capacity, for the WDM-TDMA scheme. 
We use the nominal values given in Table I for the subnetworks, and the horizontal axis represents the total number of users M = NW, where, here, N = 16. As shown in Fig. 10, an isolation factor of 30 dB is sufficient to support hundreds of QKD users. With 20 dB isolation, the total number of channels that can be supported drops to 8. In principle, however, we can always use additional optical filters in Fig. 9 to ensure that the crosstalk noise from WDM quantum users is below the desired level. [Fig. 10 caption, continued: parameters are taken from Table I for the TDMA subnetworks; different values of the inter-channel crosstalk factor, α_xt, are considered.] VI. CONCLUSIONS In this paper, well-known techniques in classical multiple-access optical communications were applied to quantum cryptography applications. That enabled multiple users to exchange secret keys, via an optical network, without trusting any other nodes. The proposed setups offered key features that would facilitate their deployment in practice. In all of them, classical communications services were integrated with quantum services on a shared platform, which would substantially reduce the cost for public and private users. More generally, by sharing network resources among many users, the total cost per user would shrink, making the deployment of such systems more feasible. Another cost-saving feature of our setups was their reliance on only one QKD detection module per user. The setups considered were inspired by existing optical access networks as well as future all-optical networks. A passive star-coupler network was first studied, in which multiple QKD users could pair up and simultaneously exchange secret keys via the network. Each user could independently use classical communications as well. Different users could be distinguished in time, using a TDMA scheme, or in the code space, using OOC CDMA. 
It turned out that, whereas TDMA QKD could offer an interference-free, or, effectively, a point-to-point QKD service, CDMA QKD had to deal with the interference effect. It was shown that the optimal performance for a CDMA QKD system could be achieved if codes with weight one were used, for which the interference probability would be minimal. In this case, the CDMA system was similar to the TDMA one, except that no time coordination was needed between the network users. To enjoy the benefits of both TDMA and CDMA systems, a listen-before-send protocol was proposed, whose performance could approach that of TDMA QKD once the number of listening periods was sufficiently large. To support a larger number of users, hybrid WDM-TDMA/CDMA architectures, potentially compatible with future all-optical networks and PON access systems, were proposed, and their performance in terms of secret key generation rates and numbers of users was studied.
Reappraisal of the Therapeutic Role of Celecoxib in Cholangiocarcinoma Cholangiocarcinoma (CCA), a lethal disease, affects many thousands of people worldwide yearly. Surgical resection provides the best chance for a cure; however, only one-third of CCA patients present with a resectable tumour at the time of diagnosis. Currently, no effective chemotherapy is available for advanced CCA. Cyclooxygenase-2 (COX-2) is a potential oncogene expressed in human CCA tissues and represents a candidate target for treatment; however, COX-2 inhibitors increase the risk of adverse cardiovascular events when applied for chemoprevention. Here, we re-evaluated the effectiveness and safety of celecoxib, one of the most widely used COX-2 inhibitors, in treating CCA. We demonstrated that celecoxib exhibited an anti-proliferative effect on CGCCA cells via cell cycle arrest at the G2 phase and apoptosis induction. Treatment for 5 weeks with high-dose celecoxib (160 mg/kg) significantly repressed thioacetamide-induced CCA tumour growth in rats, as monitored by animal positron emission tomography, through apoptosis induction. No obvious side effects were noted during the therapeutic period. A retrospective review of 78 intrahepatic mass-forming CCA patients showed that their survival was strongly and negatively associated with a positive resection margin and high COX-2 expression. Based on our results, we conclude that short-term high-dose celecoxib may be a promising therapeutic regimen for CCA, although its clinical application still requires more studies to prove its safety. Introduction Cholangiocarcinoma (CCA), which originates in the epithelial lining of the biliary tract, is the second most common malignant tumour in the liver after hepatocellular carcinoma [1][2][3]. CCA accounts for 10-15% of hepatobiliary neoplasms, and its incidence and mortality have recently increased [4,5]. The survival rate of patients with CCA is very poor, and surgical resection provides the best chance for a cure [6][7][8]. 
However, the difficulty of early diagnosis makes most patients poor candidates for surgery. For patients with unresectable CCA, the prognosis is dismal, and most die within 1 year [9]. Additionally, CCA is resistant to traditional chemotherapy and radiotherapy. Thus, advanced CCA represents a challenge for clinicians, and the establishment of a new therapeutic regimen for CCA is urgent and warranted. Local inflammation of the biliary tree may be the main cause of epithelial transformation of the biliary tract from dysplasia to subsequent malignancy. Reparative proliferation of the biliary epithelium after wounding, primary sclerosing cholangitis, clonorchiasis, hepatolithiasis, parasitic infestation, and other conditions is associated with an increased incidence of CCA [4,10,11]. Cyclooxygenase-2 (COX-2), the inducible form of prostaglandin-endoperoxide synthase, is constitutively expressed in some human CCA cell lines [12]. COX-2 also plays an important role in eliciting cholangiocarcinogenesis in human and rat models [13,14]. Moreover, the overexpression of COX-2, as determined by immunohistochemical staining of CCA, has been reported in humans [12,15,16]. Thus, COX-2 represents a potential oncogene in CCA, and a COX-2 inhibitor may represent a new treatment for CCA. Previously, the COX-2 inhibitors JTE-522 and NS-398 were shown to inhibit cell growth in 5 human COX-2-expressing CCA cell lines at concentrations of 100 µM and 200 µM, respectively [12]. Celecoxib, one of the most widely used clinical COX-2 inhibitors, represses rat C611B CCA cell growth markedly in a dose-dependent manner [14]. Further, celecoxib exerted this anti-proliferative effect on CCA in vivo in a xenograft animal model [17]. However, the clinical use of COX-2 inhibitors is hampered by their potential for causing serious cardiovascular events when applied for chemoprevention [18]. 
We previously generated a rat CCA cell line (CGCCA) [19] derived from the thioacetamide (TAA)-induced CCA rat model [20], in which CCA can be induced by administering TAA-containing water for 20 weeks. This model successfully recapitulates human CCA progression, and rat CCA tumour growth can be easily evaluated by animal PET [21]. In this study, we aimed to examine the presumed anti-proliferative effect of celecoxib on CCA in vitro and in vivo to re-examine its effectiveness. We also retrospectively reviewed patients with mass-forming CCA (MF-CCA) to investigate the relationship of COX-2 expression with the clinical characteristics and prognosis of CCA patients, in an effort to re-evaluate the role of COX-2 inhibitors in CCA treatment. Cell Culture The TAA-induced CCA cell line (CGCCA) established previously in our laboratory [19] was grown in a cell culture medium consisting of Dulbecco's Modified Eagle Medium with 100 U/ml penicillin and 100 U/ml streptomycin (basal medium) plus 10% foetal bovine serum. The medium was freshly prepared as indicated with a final concentration of 0.1% dimethyl sulfoxide (DMSO) and was changed every 2 days. Immunohistochemical Staining of CCA Cells and Rat Tissue for COX-2 CGCCA cells were seeded at 2000 cells/well in a culture slide (#154461 Lab-Tek II Chamber Slide System, Nalge Nunc International, USA) and incubated at 37°C overnight. After being rinsed with PBS twice, the cells were fixed with 4% PFA for 1 min and permeabilized with 1% Tween 20 for 1 min. The slide was then processed following the standard procedure for routine IHC staining. Slides were incubated with a primary monoclonal antibody against COX-2 (RB-9072-P, Lab Vision Corporation, Fremont, CA, USA; dilution 1:400) overnight at 4°C. The slides were then washed three times for 5 minutes in TBST before visualization with the LSAB2 system-HRP (No. K0675, Dako Cytomation, Carpinteria, USA). Control slides were incubated with secondary antibody only.
BrdU Assay The BrdU assay was performed using a BrdU ELISA kit (Roche Cell Proliferation ELISA, BrdU (colorimetric) #11647229001, Roche Diagnostics GmbH, Mannheim, Germany). Briefly, cells were plated in a 96-well plate, cultured to approximately 50% confluence, and treated with the indicated concentrations of celecoxib for 20 h. The harvest and labelling procedures adhered to those described in the kit manual. Cell Cycle Analysis by Flow Cytometry Flow cytometry for cell cycle analysis was performed using a FACSCalibur (BD Biosciences, San Jose, CA, USA) as described previously [22]. Propidium Iodide (PI) Staining After treatment with celecoxib at the indicated concentrations, the cultured cells were stained with a solution containing 4 µg/ml PI and 100 µg/ml RNase A in 1× PBS and incubated in the dark for 30 min. Apoptosis Analysis by TUNEL Assay The TUNEL assay procedure was described previously [23]. Cellular DNA was stained using an apoptosis detection kit (Millipore). Tissue DNA was stained using the TumorTACS In Situ Apoptosis Detection kit (TREVIGEN #4815-30-K). The assay was performed according to the manufacturer's instructions. Animal Studies All animal studies were approved by the experimental animal ethics committee at Chang Gung Memorial Hospital and conformed to the US National Institutes of Health (NIH) guidelines for the care and use of laboratory animals (Publication No. 85-23, revised 1996). A total of 40 adult male Sprague-Dawley (SD) rats with a mean weight of 250±14 g were used in the experiments. The animals were divided into 1 control group (n = 10) and 3 experimental groups (n = 30). The rats were housed in an animal room with a 12/12-hour light-dark cycle (light from 8:00 AM to 8:00 PM) at an ambient temperature of 22±1°C. SD rats were administered 300 mg TAA/L daily in their drinking water for up to 20 weeks [20].
Celecoxib in water was administered by gavage to the 3 experimental groups at doses of 40, 80, or 160 mg/kg (10 rats in each group) once daily between 9:30 and 10:00 AM. The 10 rats in the control group received water only according to the same schedule. Treatment was administered for 5 days/week over a 33-day period (from the 20th to the 25th week). Efficacy Evaluation with Positron Emission Tomography (PET) Imaging To evaluate glycolysis alteration in the liver tumours of living animals, all rats underwent 18F-fluorodeoxyglucose (18F-FDG) PET studies at the molecular imaging centre of Chang Gung Memorial Hospital. All 40 rats underwent serial PET scans at weeks 20, 21, and 25 after toxin treatment using the Inveon™ system (Siemens Medical Solutions Inc., Malvern, PA, USA). Details regarding radioligand preparation, scanning protocols, and the optimal scanning time are described in our previous report [21]. Quantification of 18F-FDG uptake in the largest liver tumour and in normal liver tissue was performed according to the recommendations of the European Organization for Research and Treatment of Cancer [24] by calculating the standardised uptake value (SUV) using the following formula: SUV = decay-corrected tissue activity (Bq/mL) / [injected dose (Bq) / body weight (g)]. The tumour regions of interest (ROIs) were determined according to the largest diameter of the selected tumour in transverse images, and the ROIs of apparently normal liver tissue were determined from the same transverse images. The mean SUVs (SUVmean) of the tumour and liver and the tumour-to-liver (T/L) radioactivity ratio were calculated for comparisons. Apoptosis Determined by DNA Laddering DNA ladders were analysed using the QIAamp DNA Mini Kit #51304 (QIAGEN, Valencia, CA, USA). Briefly, up to 25 mg of tissue was ground thoroughly in liquid nitrogen with a mortar and pestle. Buffer ATL (containing proteinase K) was added, and the sample was incubated at 56°C until complete lysis was achieved.
The RNA was removed using RNase A, and the sample was purified using a QIAamp Mini spin column. The DNA quantity was determined on a TECAN Infinite M200 PRO machine, and the DNA was analysed on a 2% agarose gel. Clinicopathological Features of 78 Patients with Intrahepatic MF-CCA From the archives of Chang Gung Memorial Hospital, 78 MF-CCA patients who had undergone hepatectomy between 1989 and 2006 were selected based on the availability of sufficient quantities of tumour cells. Intrahepatic CCA was defined as carcinoma arising from distal second-order (or higher) branches of the intrahepatic ducts. Curative resection was defined as a negative resection margin observed during histopathological examination. Surgical mortality was defined as death that occurred within 1 month of surgery. Laboratory tests were conducted on the day before surgery. The tumour stage was defined according to the pathological tumour node metastasis (pTNM) classification proposed by the American Joint Committee on Cancer (AJCC), 6th edition. This retrospective study was approved by the institutional review board at Chang Gung Memorial Hospital (clinical study No. 99-2886B). Written consent was given by the patients for their information to be stored in the hospital database and used for research. Statistics All data are presented as means with standard deviations (SD). Differences between experimental animals and controls were calculated using the Mann-Whitney U test or the Kruskal-Wallis test. Overall survival rates were calculated using the Kaplan-Meier method. Sixteen clinicopathological variables were selected for univariate analysis using the log-rank test. The Cox proportional hazards model was employed for multivariate regression analysis. SPSS statistical software for Windows was used for the statistical analysis (SPSS version 10.0, Chicago, IL, USA). P≤0.05 was considered statistically significant.
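The SUV formula and the tumour-to-liver ratio described in the PET methods above reduce to simple arithmetic; a minimal sketch follows (all numbers are hypothetical and illustrative, not data from the study):

```python
# Sketch of the SUV and tumour-to-liver (T/L) ratio calculations from the
# PET methods. Inputs: decay-corrected ROI activity, injected dose, weight.

def suv(tissue_activity_bq_per_ml: float, injected_dose_bq: float,
        body_weight_g: float) -> float:
    """SUV = decay-corrected tissue activity (Bq/mL)
             / (injected dose (Bq) / body weight (g))."""
    return tissue_activity_bq_per_ml / (injected_dose_bq / body_weight_g)

def tumour_to_liver_ratio(suv_tumour: float, suv_liver: float) -> float:
    """T/L ratio used to track tumour growth against liver background."""
    return suv_tumour / suv_liver

# Hypothetical example: 250 g rat, 10 MBq injected dose.
suv_t = suv(74_000.0, 10_000_000.0, 250.0)   # tumour ROI
suv_l = suv(40_000.0, 10_000_000.0, 250.0)   # normal-liver ROI
print(suv_t, suv_l, tumour_to_liver_ratio(suv_t, suv_l))
```

Because both SUVs share the same dose and weight normalization, the T/L ratio is simply the ratio of ROI activities, which is why it is robust to inter-animal dosing differences.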
Anti-proliferative Effect of Celecoxib on CGCCA Cells in Vitro The CGCCA cell line previously developed in our lab [19] showed prominent cytoplasmic expression of COX-2 as determined by immunohistochemical staining (Figure 1A). Therefore, we applied the selective COX-2 inhibitor celecoxib to investigate its presumed anti-proliferative effect on CGCCA cells. Figure 1B shows that 1-day celecoxib treatment inhibited the proliferation of CGCCA cells in a concentration-dependent manner as determined by the BrdU assay. No inhibitory effect of aspirin on CGCCA cell growth was noted (Figure 1B). As shown in Figure 1C, cultured CGCCA cells were treated with 12, 25, 50, and 100 µM of celecoxib or aspirin, and the MTT assay was performed on day 3. No significant effect was observed in CGCCA cells treated with 12 and 25 µM celecoxib; however, 50 µM celecoxib induced 41%±7% growth inhibition. Notably, almost all the cells treated with 100 µM celecoxib for 3 days died. As for aspirin, no growth inhibition of CGCCA cells was detected. Taken together, our results indicate that celecoxib induced concentration- and time-dependent inhibition of CGCCA cell proliferation. Cell Cycle Distribution Analysis To further understand the mechanism underlying the anti-proliferative effect of celecoxib on CGCCA cells, the treated cells were analysed by flow cytometry to determine the number of cells at each stage of the cell cycle. As shown in Figure 2A and Figure 2B, 1-day treatment with 12, 25, and 50 µM celecoxib increased the proportion of cells in the G2/M phase from 16.32% to 18.04%, 22.59%, and 30.61%, respectively. Most of the cells treated with 100 µM celecoxib were not viable after 2 days of treatment; thus, the distribution of cells at each stage could not be calculated at this concentration (data not shown). To verify whether cell cycle arrest occurred in the G2 or the M phase, PI staining was performed to identify M-phase cells with condensed chromatin.
As shown in Figure 2C and D, the percentage of M-phase cells did not differ between the control and treated groups, indicating that the cell cycle arrest induced by celecoxib in CGCCA cells occurs at the G2 phase rather than at the M phase. To clarify the possible mechanisms underlying this G2 arrest, we examined Cdc25C, a phosphatase that removes the inhibitory phosphorylation of cyclin-dependent kinases (CDKs), and cyclin-dependent kinase 1 (CDK-1), two important factors for cell cycle progression from G2 to M [25,26]. As shown in Figure 2E, celecoxib induced significant dose-dependent repression of CDK-1 expression in CGCCA cells. As for Cdc25C, no significant change was observed in CGCCA cells after celecoxib treatment. Apoptosis Analysis Previously, celecoxib was demonstrated to induce apoptosis in CCA [14,17,27,28]. To verify the ability of celecoxib to induce apoptosis in CGCCA cells, we treated CGCCA cells with celecoxib at 12.5, 25, 50, and 100 µM for 2 days. A TUNEL assay was then conducted to calculate the percentage of apoptotic cells. As shown in Figure 3A and B, celecoxib induced apoptosis in CGCCA cells in a concentration-dependent manner. The Effect of Celecoxib on Cholangiocarcinoma in vivo (A) Evaluation of the anti-tumour effect of celecoxib on CCA in vivo. After induction of CCA by TAA, tumours in the control and treated groups were evaluated by animal PET-computed tomography (CT) on transverse, sagittal, and coronal views. As shown in Figure 4A and B, both groups showed at least 1 FDG-avid tumour in the liver after 20 weeks of TAA treatment. In the experimental groups, rats were given celecoxib at doses of 40, 80, and 160 mg/kg. Results in rats treated with 40 and 80 mg/kg celecoxib were similar to those in the control group. Thus, we focused on the rats treated with 160 mg/kg celecoxib. The SUV values of tumours and livers and the T/L ratios for both groups are shown in Figure 4C-E.
The celecoxib-treated group clearly showed a lower T/L ratio at 21 and 25 weeks than the control group (Figure 4E) (p<0.05). Accordingly, 160 mg/kg celecoxib treatment resulted in partial but significant suppression of rat CCA tumour growth in vivo. (B) Celecoxib induced apoptosis in CCA tissues in vivo. After 25 weeks, the rats were sacrificed and TAA-induced CCA tissues were harvested. After immunohistochemical staining, the rat CCA tissues exhibited prominent cytoplasmic expression of COX-2 (Figure 5A), which agrees with the results shown in Figure 1A. CCA tissues from the group treated with 160 mg/kg celecoxib showed much more prominent DNA fragmentation than those from the control group, as demonstrated by the DNA laddering assay (Figure 5B). Similar results were obtained when the rat CCA tissues were examined by TUNEL assay (Figure 5C, D, and E). Clinicodemographic Features and COX-2 Expression Levels in Patients with MF-CCA 44 of 78 specimens (56.4%) obtained from MF-CCA patients revealed high expression of COX-2 (2+ and 3+ positive). Clinicodemographic features were similar between patients with low and high COX-2 expression (Table S1 in File S1). Survival and Prognostic Analysis of MF-CCA Patients who Underwent Hepatectomy A total of 78 MF-CCA patients who had undergone hepatectomy were enrolled in the survival analysis. The follow-up duration ranged from 1.4 to 94.1 months (median = 13.6 months). The overall survival (OS) rates at 1, 3, and 5 years were 55.1%, 22.9%, and 14.9%, respectively (data not shown). The clinicodemographic data for MF-CCA patients with low or high COX-2 expression were similar (Table S1 in File S1).
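Overall survival figures like the 1-, 3-, and 5-year rates above come from the Kaplan-Meier method named in the Statistics section. A minimal hand-rolled sketch is shown below; the follow-up times and event indicators are hypothetical, not the study's patient data, and in practice a statistical package (SPSS, as in the study) would be used:

```python
# Minimal Kaplan-Meier estimator: at each event time t, the survival
# probability is multiplied by (1 - deaths_at_t / n_at_risk). Censored
# observations (event = 0) leave the curve unchanged but reduce n_at_risk.

def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = death, 0 = censored.
    Returns a list of (time, survival probability) steps."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, steps = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        j = i
        while j < len(data) and data[j][0] == t:
            j += 1
        deaths = sum(e for _, e in data[i:j])
        if deaths:
            surv *= 1 - deaths / n_at_risk
            steps.append((t, surv))
        n_at_risk -= j - i
        i = j
    return steps

# Hypothetical six-patient cohort (months, death indicator).
print(kaplan_meier([2, 5, 5, 8, 12, 20], [1, 1, 0, 1, 0, 1]))
```

Reading a 1-year OS rate off such a curve means taking the survival value at the last step at or before 12 months.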
Univariate log-rank analysis identified the following factors as adverse influences on the OS rate of the 78 MF-CCA patients who had undergone hepatectomy: presence of symptoms, high preoperative alkaline phosphatase and carcinoembryonic antigen levels, tumour size >5 cm, high COX-2 expression, and positive surgical margin status (Table S2 in File S1). After examining these factors using multivariate Cox proportional hazards analysis, only negative margin status and low COX-2 expression independently predicted favourable OS for MF-CCA patients after hepatectomy (Table 1). Discussion CCA, which develops from the epithelial lining of the biliary tract, is a lethal disease that affects many thousands of people worldwide each year. CCA accounts for 10-15% of hepatobiliary neoplasms, and its incidence and mortality have recently been increasing [4,5,29]. Surgical resection provides the best chance of a cure for patients with this disease. For resectable CCA, the 5-year survival rate is approximately 22-44% after radical resection [30,31]. However, only one-third of CCA patients present with a resectable tumour at the time of diagnosis [32,33]. Currently, there is no effective chemotherapy regimen for CCA that provides a survival benefit. The currently recommended regimen is a combination of cisplatin and gemcitabine, as suggested by the UK NCRI ABC-02 trial [34]. Radiotherapy plays a role in palliation for advanced CCA [35,36]. Thus, the development of a new therapeutic regimen for CCA patients is urgently needed. Overexpression of COX-2 and the subsequent overproduction of prostaglandins have been implicated in a variety of neoplastic diseases, including pancreatic, breast, liver, lung, and other cancers [37][38][39]. Regarding CCA, COX-2 is expressed abundantly in cancerous bile ducts but not in normal bile ducts [12,16].
Moreover, because COX-2 and its corresponding prostaglandins, mainly PGE2, are able to induce cell proliferation and promote cell survival [39][40][41], targeting COX-2 may effectively aid in the treatment of CCA. As shown in Figures 1A, 5A, and Fig. S1 in File S1, COX-2 is strongly expressed in CGCCA cells, TAA-induced rat CCA tissue, and human CCA tissues based on immunohistological analysis. The selective COX-2 inhibitor celecoxib exhibited a concentration- and time-dependent anti-proliferative effect on CGCCA cells (Figure 1B-D). However, COX-2 expression in CGCCA cells after celecoxib treatment was not measured in the current study. To further understand the mechanisms by which celecoxib represses CGCCA cell growth, we analysed the distribution of CGCCA cells at various stages of the cell cycle after celecoxib treatment. Previously, celecoxib was shown to induce cell cycle arrest at the G1/S phase through upregulation of the cyclin-dependent kinase inhibitors p21 and p27 [42]. In this report, we found that 1 day of celecoxib treatment induced cell cycle arrest at the G2/M phase rather than the G1/S phase in CGCCA cells (Figure 2A and B). Because the G2 and M phases cannot be differentiated by flow cytometry with PI staining owing to their similar DNA content, we microscopically counted M-phase cells, which have condensed chromatin, after PI staining. As shown in Figure 2C and D, the number of cells in the M phase did not increase with celecoxib treatment, indicating that celecoxib-induced cell cycle arrest occurred at the G2 phase in CGCCA cells. We also examined the presumed target proteins of celecoxib, p21 and p27, in CGCCA cells after celecoxib treatment but found no difference in their expression (data not shown). Taken together, our data indicate that celecoxib induces cell cycle arrest at the G2 phase rather than at the G1 phase in CGCCA cells, suggesting that celecoxib acts in a highly cell-specific manner.
During cell cycle progression, the combination of specific cyclins and cyclin-dependent kinases (CDKs) plays a critical role. Among these, CDK-1 is crucial for G2/M progression [25,26]. The activation of cyclin-bound CDK-1 requires the removal of inhibitory phosphorylation by Cdc25C [25]. As shown in Figure 2E, celecoxib treatment induced a dose-dependent decrease in CDK-1 expression without significantly downregulating Cdc25C expression. Taken together, our data indicate that celecoxib induces downregulation of CDK-1 in CGCCA cells, which could lead to G2 arrest. In addition to disturbing cell cycle progression, COX-2 inhibition has been shown to induce apoptosis in CCA tissues through inactivation of Akt [14,17,27,28]. Zhang et al. further demonstrated that COX-2 inhibition also induced Bax translocation to the mitochondria, which resulted in the release of cytochrome c from the mitochondria, thus activating caspases 3 and 9, which are involved in the intrinsic apoptotic pathway [17]. Other COX-2-related apoptosis-inducing mechanisms include repression of PDK1 and PTEN [43][44][45]. In agreement with previous reports, our current data also showed that celecoxib induced CGCCA cell apoptosis in vitro in a concentration-dependent manner after 3 days of treatment, as determined by the TUNEL assay (Figure 3A and B). Although numerous studies have demonstrated the anti-tumour effect of COX-2 inhibitors, their clinical application is impeded by the severe cardiovascular side effects that occur during long-term administration for chemoprevention [18,46,47]. To investigate the effectiveness of short-term use of COX-2 inhibitors for the treatment of CCA, we used the previously established TAA-induced rat CCA model [19]. This model recapitulates the histological progression of human CCA [20], indicating that it is a good platform for investigating new CCA treatment regimens. As shown in Figure 5A, TAA-induced CCA presented increased COX-2 expression.
Animal PET was then used to measure tumour response to celecoxib treatment [21]. Because lesions <2 mm in size could not be detected on our animal PET and the border of invasive CCA was indistinguishable from the normal liver background, the T/L ratio of SUV was used to represent tumour growth [21]. The experimental rats were dosed with 40, 80, and 160 mg/kg of celecoxib for 5 weeks as indicated in Materials and Methods. The results for rats treated with 40 and 80 mg/kg celecoxib were similar to those of the control group (data not shown). The T/L ratio of SUV was significantly lower in rats treated with 160 mg/kg celecoxib than in control rats at different time points (Figure 4E), suggesting that tumour growth was repressed in rats treated with 160 mg/kg celecoxib. The CCA tissues in the group treated with 160 mg/kg celecoxib showed prominent DNA fragmentation (Figure 5B), indicating that celecoxib repressed tumour growth through apoptosis induction in vivo; this was also supported by the TUNEL assay results for the rat CCA tissues (Figure 5C, D, and E). Figure 4. Detection of rat CCA by animal PET and SUVs of tumour, liver, and the tumour/liver ratio. (A) Transverse, sagittal, and coronal views of fused CT and PET scans of representative control rats revealed CCA-expressing areas of the liver in which 18F-FDG uptake increased from baseline (week 20) to 1 and 5 weeks after the start of treatment (i.e., weeks 21 and 25). (B) Transverse, sagittal, and coronal views of fused CT and PET scans of representative celecoxib-treated rats revealed CCA-expressing areas of the liver in which 18F-FDG uptake was only slightly increased from baseline to 1 and 5 weeks after the start of treatment. The arrows indicate the "hottest point" with the highest 18F-FDG uptake. (C) The tumour SUVmean of control rats was initially elevated (week 21) and then decreased to a lower level at the last scan (week 25); in contrast, the tumour SUVmean of celecoxib-treated rats decreased initially (week 21) and remained constant until the last scan (week 25). (D) The liver SUVmean decreased gradually in both control and treated rats during the experiment. (E) The tumour-to-liver (T/L) ratio of SUV showed a trend of elevation until the last scan in the control group. In the treatment group, the T/L ratio of SUV was significantly decreased 1 week after celecoxib treatment (control, 1.85±0.12; celecoxib, 1.52±0.04; p<0.05). doi:10.1371/journal.pone.0069928.g004 Although the use of celecoxib for chemoprevention induces severe cardiovascular side effects [18,46,47], no obvious side effects were noted in the treated rats during the experimental period. However, the clinical application of short-term high-dose celecoxib still needs more studies to confirm its safety. Although COX-2 overexpression in CCA has been demonstrated in previous studies [12,16,48], the relationship between COX-2 expression and CCA survival remains unclear. Schmitz et al. demonstrated that COX-2 expression is associated with poor prognosis of intrahepatic CCA after hepatectomy. Additionally, COX-2 expression was also linked with lower proliferation and higher apoptosis as determined by Ki67 expression and TUNEL assay using immunohistochemistry [49]. As shown in Table 1, 56.4% of MF-CCA patients presented with high COX-2 expression, and the clinicodemographic data of MF-CCA patients with high and low COX-2 expression were similar. After hepatectomy, only resection margin status and COX-2 expression level independently influenced the OS of MF-CCA patients (Table 1). In conclusion, CCA is a devastating disease with a very dismal outcome despite available chemotherapy and radiotherapy.
Radical surgery is the only way to improve survival, but it is often not feasible because of late diagnosis. COX-2 is deemed an oncogene in a variety of cancers and is frequently overexpressed in CCA. The clinical application of COX-2 inhibitors is limited by their negative cardiovascular side effects. In this study, we demonstrated that celecoxib inhibited CCA growth in vivo and in vitro without causing obvious side effects. The strong correlation between high COX-2 expression and poor survival of MF-CCA patients further justifies the use of a COX-2 inhibitor for the treatment of CCA. On the basis of our results, we conclude that a short-term high dose of celecoxib may be a promising therapeutic strategy for CCA treatment. Further studies are warranted given the lack of effective treatment for CCA and its dismal prognosis. Supporting Information File S1 This file contains a supporting figure and supporting tables. Figure S1, COX-2 is diffusely expressed in the cytoplasm in human MF-CCA. Table S1, Clinicopathological features of mass-forming CCA patients with high versus low COX-2 expression who underwent hepatectomy.
Bowen's construction for the Teichmueller flow Let Q be a connected component of a stratum in the space of quadratic differentials for a non-exceptional Riemann surface of finite type. We show that the probability measure on Q in the Lebesgue measure class which is invariant under the Teichmueller flow is obtained by Bowen's construction. Introduction The Teichmüller flow Φt acts on components of strata in the moduli space of area one abelian or quadratic differentials for a non-exceptional surface S of finite type. This flow has many properties which resemble those of an Anosov flow. For example, there is a pair of transverse invariant foliations, and there is an invariant mixing Borel probability measure λ in the Lebesgue measure class which is absolutely continuous with respect to these foliations, with conditional measures which are uniformly expanded and contracted by the flow [M82, V86]. This measure is even exponentially mixing, i.e. exponential decay of correlations holds for Hölder observables [AGY06, AR09]. The entropy h of the Lebesgue measure λ is the supremum of the topological entropies of the restrictions of Φt to compact invariant sets [H10b]. For strata of abelian differentials, λ is the unique invariant measure of maximal entropy [BG07]. The goal of this note is to extend further the analogy between the Teichmüller flow on components of strata and Anosov flows. An Anosov flow Ψt on a compact manifold M admits a unique Borel probability measure µ of maximal entropy. This measure can be obtained as follows [B73]. Every periodic orbit γ of Ψt of prime period ℓ(γ) > 0 supports a unique Ψt-invariant Borel measure δ(γ) of total mass ℓ(γ). If h > 0 is the topological entropy of Ψt then µ is the (unique) weak limit as R → ∞ of the sequence of measures e^{-hR} Σ_{ℓ(γ)≤R} δ(γ). In particular, the number of periodic orbits of period at most R is asymptotic to e^{hR}/hR as R → ∞.
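The consistency between Bowen's normalization and the orbit-count asymptotic quoted above can be checked by a standard computation. This is not from the source; it is a heuristic sketch in the spirit of the Anosov case, ignoring lower-order terms:

```latex
% Assume the orbit count N(R) = \#\{\gamma : \ell(\gamma)\le R\} satisfies
% N(R) \sim e^{hR}/(hR). Then dN(t) \sim (e^{ht}/t)\,dt, so the total
% mass of \sum_{\ell(\gamma)\le R}\delta(\gamma) is
\[
  \sum_{\ell(\gamma)\le R} \ell(\gamma) \;=\; \int_0^R t\, dN(t)
  \;\sim\; \int_0^R e^{ht}\, dt \;=\; \frac{e^{hR}-1}{h}
  \;\sim\; \frac{e^{hR}}{h}.
\]
% Hence e^{-hR}\sum_{\ell(\gamma)\le R}\delta(\gamma) has total mass of
% order 1/h: an exponential growth rate h of periodic orbits is exactly
% what makes Bowen's normalization by e^{-hR} yield a finite, nonzero
% limit measure.
```

The computation shows why the growth rate of periodic orbits and the normalizing factor in Bowen's construction are two sides of the same asymptotic.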
For any connected component Q of a stratum of abelian or quadratic differentials, the Φt-invariant Lebesgue measure λ on Q can be obtained in the same way. For a precise formulation, we say that a family {µi} of finite Borel measures on the moduli space H(S) of area one abelian differentials or on the moduli space Q(S) of area one quadratic differentials converges weakly to λ if for every continuous function f on H(S) or on Q(S) with compact support we have ∫ f dµi → ∫ f dλ. Let Γ(Q) be the set of all periodic orbits for Φt contained in Q. For γ ∈ Γ(Q) let ℓ(γ) > 0 be the prime period of γ and denote by δ(γ) the Φt-invariant Lebesgue measure on γ of total mass ℓ(γ). We show that the measures µR = e^{-hR} Σ_{γ∈Γ(Q), ℓ(γ)≤R} δ(γ) converge weakly to λ as R → ∞. The theorem implies that as R → ∞, the number of periodic orbits in Q of period at most R is asymptotically not smaller than e^{hR}/hR. However, since the closure in Q(S) of a component Q of a stratum is non-compact, we do not obtain a precise asymptotic growth rate for all periodic orbits in Q. Namely, there may be a set of periodic orbits in Q whose growth rate exceeds h and which eventually exit every compact subset of Q(S). For periodic orbits in the open principal stratum, Eskin and Mirzakhani [EM08] showed that the asymptotic growth rate of periodic orbits for the Teichmüller flow which lie deeply in the cusp of moduli space is strictly smaller than the entropy h, and they calculated the asymptotic growth rate of all periodic orbits. Eskin, Mirzakhani and Rafi [EMR10] also announced the analogous result for any component of any stratum. The proof of the above theorem uses ideas which were developed by Margulis for hyperbolic flows (see [Mar04] for an account with comments). This strategy is by now standard, and the main task is to overcome the difficulty caused by the absence of hyperbolicity of the Teichmüller flow in the thin part of moduli space and by the absence of nice product coordinates near a boundary point of a stratum.
Absence of hyperbolicity in the thin part of moduli space is dealt with using the curve graph, similar to the strategy developed in [H10b]. Integration of the Hodge norm as discussed in [ABEM10] and some standard ergodic theory are also used. Relative homology coordinates [V90] define local product structures for strata. These coordinates do not extend in a straightforward way to points in the boundary of the stratum. In the case of the principal stratum, however, product coordinates about boundary points can be obtained by simply writing a quadratic differential as the pair of its vertical and horizontal measured geodesic laminations. Our approach is to show that there is a similar picture for strata. To this end, we use coordinates for strata based on train tracks which will be used in other contexts as well. The construction of these coordinates is carried out in Sections 3 and 4. The tools developed in Sections 3 and 4 are used in Section 5 to show that a weak limit µ of the measures µR is absolutely continuous with respect to the Lebesgue measure, with Radon-Nikodym derivative bounded from above by one. In Section 6 the proof of the theorem is completed. Section 2 summarizes some properties of the curve graph and geodesic laminations used throughout the paper. Laminations and the curve graph Let S be an oriented surface of finite type, i.e. S is a closed surface of genus g ≥ 0 from which m ≥ 0 points, so-called punctures, have been deleted. We assume that 3g − 3 + m ≥ 2, i.e. that S is not a sphere with at most four punctures or a torus with at most one puncture. The Teichmüller space T(S) of S is the quotient of the space of all complete finite volume hyperbolic metrics on S under the action of the group of diffeomorphisms of S which are isotopic to the identity. The fibre bundle Q^1(S) over T(S) of all marked holomorphic quadratic differentials of area one can be viewed as the unit cotangent bundle of T(S) for the Teichmüller metric dT.
We assume that each quadratic differential q ∈ Q^1(S) has a pole of first order at each of the punctures, i.e. we include the information on the number of poles of the differential in the number of punctures of S. The Teichmüller flow Φt on Q^1(S) commutes with the action of the mapping class group Mod(S) of all isotopy classes of orientation preserving self-homeomorphisms of S. Therefore this flow descends to a flow on the quotient orbifold Q(S) = Q^1(S)/Mod(S), again denoted by Φt. 2.1. Geodesic laminations. A geodesic lamination for a complete hyperbolic structure on S of finite volume is a compact subset of S which is foliated into simple geodesics. A geodesic lamination ν is called minimal if each of its half-leaves is dense in ν. Thus a simple closed geodesic is a minimal geodesic lamination. A minimal geodesic lamination with more than one leaf has uncountably many leaves and is called minimal arational. Every geodesic lamination ν consists of a disjoint union of finitely many minimal components and a finite number of isolated leaves. Each of the isolated leaves of ν either is an isolated closed geodesic and hence a minimal component, or it spirals about one or two minimal components. A geodesic lamination ν fills up S if its complementary components are topological discs or once punctured monogons, i.e. once punctured discs bounded by a single leaf of ν. The set L of all geodesic laminations on S can be equipped with the restriction of the Hausdorff topology for compact subsets of S. With respect to this topology, the space L is compact. The projectivized tangent bundle PTν of a geodesic lamination ν is a compact subset of the projectivized tangent bundle PTS of S. The geodesic lamination ν is orientable if there is a continuous orientation of the tangent bundle of ν. This is equivalent to stating that there is a continuous section PTν → T^1S, where T^1S denotes the unit tangent bundle of S. Definition 2.1.
A large geodesic lamination is a geodesic lamination ν which fills up S and can be approximated in the Hausdorff topology by simple closed geodesics. Note that a minimal geodesic lamination ν can be approximated in the Hausdorff topology by simple closed geodesics and hence if ν fills up S then ν is large. Moreover, the set of all large geodesic laminations is closed with respect to the Hausdorff topology and hence it is compact. The topological type of a large geodesic lamination ν is a tuple (m 1 , . . . , m ℓ ; −m) where 1 ≤ m 1 ≤ · · · ≤ m ℓ and ∑ i m i = 4g − 4 + m, such that the complementary components of ν which are topological discs are (m i + 2)-gons (i = 1, . . . , ℓ) and the remaining m complementary components are once punctured monogons.

A measured geodesic lamination is a geodesic lamination ν equipped with a translation invariant transverse measure ξ such that the ξ-weight of every compact arc in S with endpoints in S − ν which intersects ν nontrivially and transversely is positive. We say that ν is the support of the measured geodesic lamination. The geodesic lamination ν is uniquely ergodic if ξ is the only transverse measure with support ν up to scale. The space ML of measured geodesic laminations equipped with the weak∗-topology admits a natural continuous action of the multiplicative group (0, ∞). The quotient under this action is the space PML of projective measured geodesic laminations which is homeomorphic to the sphere S 6g−7+2m . Every simple closed geodesic c on S defines a measured geodesic lamination. The geometric intersection number between simple closed curves on S extends to a continuous function ι on ML × ML, the intersection form. We say that a pair (ξ, µ) ∈ ML × ML of measured geodesic laminations jointly fills up S if for every measured geodesic lamination η ∈ ML we have ι(η, ξ) + ι(η, µ) > 0. This is equivalent to stating that every complete simple (possibly infinite) geodesic on S intersects either the support of ξ or the support of µ transversely.

2.2. The curve graph. The curve graph C(S) of S is the locally infinite metric graph whose vertices are the free homotopy classes of essential simple closed curves on S, i.e.
curves which are neither contractible nor freely homotopic into a puncture. Two such curves are connected by an edge of length one if and only if they can be realized disjointly. The mapping class group Mod(S) of S acts on C(S) as a group of simplicial isometries. The curve graph C(S) is a hyperbolic geodesic metric space [MM99] and hence it admits a Gromov boundary ∂C(S). For c ∈ C(S) there is a complete distance function δ c on ∂C(S) of uniformly bounded diameter, and there is a number ρ > 0 such that δ c ≤ e ρd(c,a) δ a for all c, a ∈ C(S). The group Mod(S) acts on ∂C(S) as a group of homeomorphisms. Let κ 0 > 0 be a Bers constant for S, i.e. κ 0 is such that for every complete hyperbolic metric on S of finite volume there is a pants decomposition of S consisting of pants curves of length at most κ 0 . Define a map Υ T : T (S) → C(S) by associating to x ∈ T (S) a simple closed curve Υ T (x) of x-length at most κ 0 . Then there is a number c > 0 such that d(Υ T (x), Υ T (y)) ≤ c d T (x, y) + c for all x, y ∈ T (S) (see the discussion in [H10a]). For a number L > 1, a map γ : J → C(S) defined on a closed connected subset J of R is called an unparametrized L-quasi-geodesic if there is an increasing homeomorphism ψ : I → J of a closed connected subset I of R such that γ ◦ ψ is an L-quasi-geodesic. We say that an unparametrized quasi-geodesic is infinite if its image set has infinite diameter. There is a number p > 1 such that the image under Υ T of every Teichmüller geodesic is an unparametrized p-quasi-geodesic [MM99]. For each x ∈ T (S), the number of essential simple closed curves c on S whose x-length ℓ x (c) (i.e. the length of a geodesic representative in its free homotopy class) does not exceed 2κ 0 is bounded from above by a constant not depending on x, and the diameter of the subset of C(S) containing these curves is uniformly bounded as well. Thus we obtain for every x ∈ T (S) a finite Borel measure µ x on C(S) by defining µ x = ∑ {c : ℓ x (c) ≤ 2κ 0 } ∆ c where ∆ c denotes the Dirac mass at c. The total mass of µ x is bounded from above and below by a universal positive constant, and the diameter of the support of µ x in C(S) is uniformly bounded as well. Moreover, the measures µ x depend continuously on x ∈ T (S) in the weak∗-topology.
This means that for every bounded function f : C(S) → R the function x → ∫ f dµ x is continuous. For x ∈ T (S) define a distance δ x on ∂C(S) by δ x = ∫ δ c dµ x (c). The distances δ x are equivariant with respect to the action of Mod(S) on T (S) and ∂C(S). Moreover, there is a constant κ > 0 such that

(4) δ x ≤ e^{κd T (x,y)} δ y and κ^{−1} δ y ≤ δ Υ T (y) ≤ κδ y

for all x, y ∈ T (S) (see p.230 and p.231 of [H09b]). An area one quadratic differential z ∈ Q 1 (S) is determined by a pair (µ, ν) of measured geodesic laminations which jointly fill up S and such that ι(µ, ν) = 1. The laminations µ, ν are called vertical and horizontal, respectively. For z ∈ Q 1 (S) let W u (z) ⊂ Q 1 (S) be the set of all quadratic differentials whose horizontal projective measured geodesic lamination coincides with the horizontal projective measured geodesic lamination of z. The space W u (z) is called the unstable manifold of z, and these unstable manifolds define the unstable foliation W u of Q 1 (S). The strong unstable manifold W su (z) ⊂ W u (z) is the set of all quadratic differentials whose horizontal measured geodesic lamination coincides with the horizontal measured geodesic lamination of z. These sets define the strong unstable foliation W su of Q 1 (S). The image of the unstable (or the strong unstable) foliation of Q 1 (S) under the flip F : q → F (q) = −q is the stable foliation W s (or the strong stable foliation W ss ). By the Hubbard–Masur theorem, for each z ∈ Q 1 (S) the restriction to W u (z) of the canonical projection P : Q 1 (S) → T (S) is a homeomorphism. Thus the Teichmüller metric lifts to a complete distance function d u on W u (z). Denote by d su the restriction of this distance function to W su (z). Then d s = d u ◦ F , d ss = d su ◦ F are distance functions on the leaves of the stable and strong stable foliation, respectively. For z ∈ Q 1 (S) and r > 0 let moreover B i (z, r) ⊂ W i (z) be the closed ball of radius r about z with respect to d i (i = u, su, s, ss).
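As an aside, the pair (µ, ν) encoding of an area one quadratic differential interacts with the Teichmüller flow in a simple way: Φ t multiplies the vertical measured lamination by e^t and the horizontal one by e^{−t}, so the normalization ι(µ, ν) = 1 is preserved along flow lines. This standard scaling is used implicitly below; the following toy sketch models measured laminations as weight vectors with a fixed bilinear pairing, and all names and the pairing matrix are illustrative only, not structures from the paper.

```python
import math

# Hypothetical bilinear "intersection pairing" on weight vectors.
PAIRING = [[0.0, 1.0], [1.0, 0.0]]

def iota(mu, nu):
    """Bilinear pairing playing the role of the intersection number."""
    return sum(PAIRING[i][j] * mu[i] * nu[j]
               for i in range(2) for j in range(2))

def flow(mu, nu, t):
    """Teichmueller flow in this model: stretch the vertical lamination by
    e^t and contract the horizontal one by e^{-t}."""
    et = math.exp(t)
    return [et * x for x in mu], [x / et for x in nu]

mu, nu = [1.0, 0.5], [0.4, 0.8]
scale = iota(mu, nu)              # renormalize so that iota(mu, nu) = 1
mu = [x / scale for x in mu]
mu_t, nu_t = flow(mu, nu, 2.0)    # iota(mu_t, nu_t) is again 1
```

The only point is that the pairing is bilinear, so the factors e^t and e^{−t} cancel and the area one normalization survives the flow.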
Let Ã ⊂ Q 1 (S) be the set of all marked quadratic differentials q such that the unparametrized quasi-geodesic t → Υ T (P Φ t q) (t ∈ [0, ∞)) is infinite. Then Ã is the set of all quadratic differentials whose vertical measured geodesic lamination fills up S (i.e. its support fills up S, see [H06] for a comprehensive discussion of this result of Klarreich [Kl99]). There is a natural Mod(S)-equivariant surjective map F : Ã → ∂C(S) which associates to a point z ∈ Ã the endpoint of the infinite unparametrized quasi-geodesic t → Υ T (P Φ t z) (t ∈ [0, ∞)). Call a marked quadratic differential z ∈ Q 1 (S) uniquely ergodic if the support of its vertical measured geodesic lamination is uniquely ergodic and fills up S. A uniquely ergodic quadratic differential is contained in the set Ã [H06, Kl99]. We have (Section 3 of [H09b])

Lemma 2.2. (1) The map F : Ã → ∂C(S) is continuous and closed. (2) For z ∈ Ã the sets F (V ∩ Ã), where V runs through a neighborhood basis of z in Q 1 (S), form a neighborhood basis for F (z) in ∂C(S).

For z ∈ Ã and r > 0 let D(z, r) be the closed ball of radius r about F (z) with respect to the distance function δ P z . As a consequence of Lemma 2.2, if z ∈ Q 1 (S) is uniquely ergodic then for every r > 0 there are numbers r 0 < r and β > 0 such that

3. Train tracks

In this section we establish some properties of train tracks on an oriented surface S of genus g ≥ 0 with m ≥ 0 punctures and 3g − 3 + m ≥ 2 which will be used in Section 4 to construct coordinates near boundary points of strata. A train track on S is an embedded 1-complex τ ⊂ S whose edges (called branches) are smooth arcs with well-defined tangent vectors at the endpoints. At any vertex (called a switch) the incident edges are mutually tangent. Through each switch there is a path of class C 1 which is embedded in τ and contains the switch in its interior. A simple closed curve component of τ contains a unique bivalent switch, and all other switches are at least trivalent.
The complementary regions of the train track have negative Euler characteristic, which means that they are different from discs with 0, 1 or 2 cusps at the boundary and different from annuli and once-punctured discs with no cusps at the boundary. We always identify train tracks which are isotopic. Throughout we use the book [PH92] as the main reference for train tracks. A train track is called generic if all switches are at most trivalent. For each switch v of a generic train track τ which is not contained in a simple closed curve component, there is a unique half-branch b of τ which is incident on v and which is large at v. This means that every germ of an arc of class C 1 on τ which passes through v also passes through the interior of b. A half-branch which is not large is called small. A branch b of τ is called large (or small) if each of its two half-branches is large (or small). A branch which is neither large nor small is called mixed.

Remark: As in [H09], all train tracks are assumed to be generic. Unfortunately this leads to a small inconsistency of our terminology with the terminology found in the literature.

A trainpath on a train track τ is a C 1 -immersion ρ : [k, ℓ] → τ such that for every i < ℓ − k the restriction of ρ to [k + i, k + i + 1] is a homeomorphism onto a branch of τ . More generally, we call a C 1 -immersion ρ : A generic train track τ is orientable if there is a consistent orientation of the branches of τ such that at any switch s of τ , the orientation of the large half-branch incident on s extends to the orientation of the two small half-branches incident on s. If C is a complementary polygon of an oriented train track then the number of sides of C is even. In particular, a train track which contains a once punctured monogon component, i.e. a once punctured disc with one cusp at the boundary, is not orientable (see p.31 of [PH92] for a more detailed discussion).
A train track or a geodesic lamination η is carried by a train track τ if there is a map F : S → S of class C 1 which is homotopic to the identity and maps η into τ in such a way that the restriction of the differential of F to the tangent space of η vanishes nowhere; note that this makes sense since a train track has a tangent line everywhere. We call the restriction of F to η a carrying map for η. Write η ≺ τ if the train track η is carried by the train track τ . Then every geodesic lamination ν which is carried by η is also carried by τ . A train track fills up S if its complementary components are topological discs or once punctured monogons. Note that such a train track τ is connected. Let ℓ ≥ 1 be the number of those complementary components of τ which are topological discs. Each of these discs is an (m i + 2)-gon for some m i ≥ 1 (i = 1, . . . , ℓ). The topological type of τ is defined to be the ordered tuple (m 1 , . . . , m ℓ ; −m) where 1 ≤ m 1 ≤ · · · ≤ m ℓ ; then ∑ i m i = 4g − 4 + m. If τ is orientable then m = 0 and m i is even for all i. A train track of topological type (1, . . . , 1; −m) is called maximal. The complementary components of a maximal train track are all trigons, i.e. topological discs with three cusps at the boundary, or once punctured monogons. A transverse measure on a generic train track τ is a nonnegative weight function µ on the branches of τ satisfying the switch condition: for every trivalent switch s of τ , the sum of the weights of the two small half-branches incident on s equals the weight of the large half-branch. The space V(τ ) of all transverse measures on τ has the structure of a cone in a finite dimensional real vector space, and it is naturally homeomorphic to the space of all measured geodesic laminations whose support is carried by τ . The train track is called recurrent if it admits a transverse measure which is positive on every branch.
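The switch condition is a finite system of linear equations on branch weights, so V(τ ) is the cone of nonnegative solutions of an integer linear system. The following minimal sketch sets this up for a hypothetical graph with two trivalent switches joined by three branches a, b, c, with a large at both switches; the smooth structure and complementary components of a genuine train track on S are not modeled.

```python
import numpy as np

# Branch weights indexed as 0 = a, 1 = b, 2 = c.  At each trivalent switch
# the weight of the large half-branch equals the sum of the weights of the
# two small half-branches incident on it:
#   w(a) - w(b) - w(c) = 0   at switch s1
#   w(a) - w(b) - w(c) = 0   at switch s2
switch_matrix = np.array([[1, -1, -1],
                          [1, -1, -1]], dtype=float)

# Linear solution space of the switch conditions; in this model V(tau) is
# the cone of nonnegative solutions inside this kernel.
kernel_dim = switch_matrix.shape[1] - np.linalg.matrix_rank(switch_matrix)

# A weight vector which is positive on every branch and satisfies the
# switch conditions plays the role of a positive transverse measure.
positive_measure = np.array([2.0, 1.0, 1.0])
residual = switch_matrix @ positive_measure   # should vanish
```

A positive solution such as `positive_measure` is the toy analogue of the positive transverse measure witnessing recurrence.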
We call such a transverse measure µ positive, and we write µ > 0 (see [PH92] for more details). A subtrack σ of a train track τ is a subset of τ which is itself a train track. Then σ is obtained from τ by removing some of the branches, and we write σ < τ . If b is a small branch of τ which is incident on two distinct switches of τ then the graph σ obtained from τ by removing b is a subtrack of τ . We then call τ a simple extension of σ. Note that formally to obtain the subtrack σ from τ − b we may have to delete the switches on which the branch b is incident.

Lemma 3.1. (1) A simple extension τ of a recurrent non-orientable connected train track σ is recurrent. Moreover, dim V(τ ) = dim V(σ) + 1. (2) An orientable simple extension τ of a recurrent orientable connected train track σ is recurrent. Moreover, dim V(τ ) = dim V(σ) + 1.

Proof. If τ is a simple extension of a train track σ then σ can be obtained from τ by the removal of a small branch b which is incident on two distinct switches s 1 , s 2 . Then s i is an interior point of a branch b i of σ (i = 1, 2). If σ is connected, non-orientable and recurrent then there is a trainpath ρ 0 : [0, t] → τ − b which begins at s 1 , ends at s 2 and such that the half-branch ρ 0 [0, 1/2] is small at s 1 = ρ 0 (0) and that the half-branch ρ 0 [t − 1/2, t] is small at s 2 = ρ 0 (t). Extend ρ 0 to a closed trainpath ρ on τ − b which begins and ends at s 1 . This is possible since σ is non-orientable, connected and recurrent. There is a closed trainpath ρ ′ : [0, u] → τ which can be obtained from ρ by replacing the trainpath ρ 0 by the branch b traveled through from s 1 to s 2 . The counting measure of ρ ′ on τ satisfies the switch condition and hence it defines a transverse measure on τ which is positive on b. On the other hand, every transverse measure on σ defines a transverse measure on τ . Thus since σ is recurrent and since the sum of two transverse measures on τ is again a transverse measure, the train track τ is recurrent as well. Moreover, we have dim V(τ ) ≥ dim V(σ) + 1. Let p be the number of branches of τ . Label the branches of τ with the numbers {1, . . .
, p} so that the number p is assigned to b. Let e 1 , . . . , e p be the standard basis of R p and define a linear map A : R p → R p by A(e i ) = e i for i ≤ p − 1 and A(e p ) = ∑ i ν(i)e i where ν is the weight function on {1, . . . , p} defined by the trainpath ρ 0 . The map A is a surjection onto a linear subspace of R p of codimension one, moreover A preserves the linear subspace V of R p defined by the switch conditions for τ . In particular, the corank of A(V ) is at most one. The image under A of the cone of all nonnegative weights on the branches of τ satisfying the switch conditions is contained in the cone of all nonnegative weights on τ − b = σ satisfying the switch conditions for σ. Therefore the dimension of the space of transverse measures on σ equals the dimension of the space of transverse measures on τ minus one. This implies dim V(τ ) = dim V(σ) + 1 and completes the proof of the first part of the lemma. The second part follows in exactly the same way.

As a consequence we obtain

Corollary 3.2. Let τ be a recurrent train track of topological type (m 1 , . . . , m ℓ ; −m). If τ is non-orientable then dim V(τ ) = 2g − 2 + m + ℓ. If τ is orientable then m = 0 and dim V(τ ) = 2g − 1 + ℓ.

Proof. The disc components of a non-orientable recurrent train track τ of topological type (m 1 , . . . , m ℓ ; −m) can be subdivided in 4g − 4 + m − ℓ steps into trigons by successively adding small branches. A successive application of Lemma 3.1 shows that the resulting train track η is maximal and recurrent. Since for every maximal recurrent train track η we have dim V(η) = 6g − 6 + 2m (see [PH92]), the first part of the corollary follows. To show the second part, let τ be an orientable recurrent train track of type (m 1 , . . . , m ℓ ; 0). Then m i is even for all i. Add a branch b 0 to τ which cuts some complementary component of τ into a trigon and a second polygon with an odd number of sides. The resulting train track η 0 is not recurrent since a trainpath on η 0 can only pass through b 0 at most once.
However, we can add to η 0 another small branch b 1 which cuts some complementary component of η 0 with at least 4 sides into a trigon and a second polygon such that the resulting train track η is non-orientable and recurrent. The inward pointing tangent of b 1 is chosen in such a way that there is a trainpath traveling both through b 0 and b 1 . The counting measure of any simple closed curve which is carried by η gives equal weight to the branches b 0 and b 1 . But this just means that dim V(η) = dim V(τ ) + 1 (see the proof of Lemma 3.1 for a detailed argument). By the first part of the corollary, we have dim V(η) = 2g − 2 + ℓ + 2 which completes the proof.

Call a train track τ of topological type (m 1 , . . . , m ℓ ; −m) fully recurrent if τ carries a large geodesic lamination ν ∈ LL(m 1 , . . . , m ℓ ; −m), where LL(m 1 , . . . , m ℓ ; −m) denotes the set of all large geodesic laminations of topological type (m 1 , . . . , m ℓ ; −m). Note that by definition, a fully recurrent train track is connected and fills up S. The next lemma gives a first property of a fully recurrent train track τ . For its proof, recall that there is a natural homeomorphism of V(τ ) onto the subspace of ML of all measured geodesic laminations carried by τ . There are two simple ways to modify a fully recurrent train track τ to another fully recurrent train track. Namely, if b is a mixed branch of τ then we can shift τ along b to a new train track τ ′ . This new train track carries τ and hence it is fully recurrent since it carries every geodesic lamination which is carried by τ [PH92, H09]. Similarly, if e is a large branch of τ then we can perform a right or left split of τ at e as shown in Figure A below. A (right or left) split τ ′ of a train track τ is carried by τ . If τ is of topological type (m 1 , . . . , m ℓ ; −m), if ν ∈ LL(m 1 , . . . , m ℓ ; −m) is minimal and is carried by τ , and if e is a large branch of τ , then there is a unique choice of a right or left split of τ at e such that the split track η carries ν. In particular, η is fully recurrent. Note however that there may be a split of τ at e such that the split track is not fully recurrent any more (see Section 2 of [H09] for details).
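The dimension counts above are elementary bookkeeping: cutting an (m i + 2)-gon into trigons takes m i − 1 added branches, so a track of type (m 1 , . . . , m ℓ ; −m) is subdivided into a maximal track in ∑ i (m i − 1) = 4g − 4 + m − ℓ steps, each raising dim V by one until the maximal value 6g − 6 + 2m is reached. A small sketch checking this arithmetic; the sample values of g, m and the types are illustrative only.

```python
def subdivision_steps(ms):
    """Diagonals needed to cut each (m_i + 2)-gon into trigons:
    an n-gon needs n - 3 diagonals, and here n = m_i + 2."""
    return sum(m - 1 for m in ms)

def check_type(g, m, ms):
    """Verify the dimension bookkeeping of the subdivision argument."""
    assert sum(ms) == 4 * g - 4 + m, "not a valid topological type"
    steps = subdivision_steps(ms)
    # the step count equals 4g - 4 + m - l, where l = len(ms):
    assert steps == 4 * g - 4 + m - len(ms)
    # each step adds one dimension and a maximal track has dim V = 6g-6+2m,
    # so the starting dimension is the difference (= 2g - 2 + m + l):
    return (6 * g - 6 + 2 * m) - steps

print(check_type(2, 0, [2, 2]))      # genus 2, closed, type (2, 2; 0)
print(check_type(2, 1, [1, 2, 2]))   # genus 2, one puncture
```

Subtracting the number of steps from 6g − 6 + 2m recovers the starting dimension 2g − 2 + m + ℓ of the subdivision argument.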
Figure A

The following simple observation is used to identify fully recurrent train tracks.

Lemma 3.5. (1) Let e be a large branch of a fully recurrent non-orientable train track τ . Then no component of the train track σ obtained from τ by splitting τ at e and removing the diagonal of the split is orientable. (2) Let e be a large branch of a fully recurrent orientable train track τ . Then the train track σ obtained from τ by splitting τ at e and removing the diagonal of the split is connected.

Proof. Let τ be a fully recurrent non-orientable train track of topological type (m 1 , . . . , m ℓ ; −m). Let e be a large branch of τ and let v be a switch on which the branch e is incident. Let σ be the train track obtained from τ by splitting τ at e and removing the diagonal branch of the split. Then the train tracks τ 1 , τ 2 obtained from τ by a right and left split at e, respectively, are simple extensions of σ. If σ is connected and orientable then the train tracks τ 1 , τ 2 are not recurrent since no transverse measure can give positive weight to the diagonal of the split (compare the discussion in the proof of Lemma 3.1). However, since τ is fully recurrent, it can be split at e to a fully recurrent and hence recurrent train track. This is a contradiction. Now assume that σ is disconnected and contains an orientable connected component. Then again no transverse measure on τ i can give positive weight to the diagonal of the split, and hence once again, τ i is not recurrent. As above, this contradicts the assumption that τ is fully recurrent. The first part of the lemma is proven. The second part follows from the same argument since a split of an orientable train track is orientable.

Example: 1) Figure B below shows a non-orientable recurrent train track τ of type (4; 0) on a closed surface of genus two. The train track obtained from τ by a split at the large branch e and removal of the diagonal of the split track is orientable and hence τ is not fully recurrent.
This corresponds to the fact established by Masur and Smillie [MS93] that every quadratic differential with a single zero and no pole on a surface of genus 2 is the square of a holomorphic one-form (see Section 4 for more information).

Figure B

2) To construct an orientable recurrent train track of type (m 1 , . . . , m ℓ ; 0) which is not fully recurrent let S 1 be a surface of genus g 1 ≥ 2 and let τ 1 be an orientable fully recurrent train track on S 1 with ℓ 1 ≥ 1 complementary components. Choose a complementary component C 1 of τ 1 in S 1 , remove from C 1 a disc D 1 and glue two copies of S 1 − D 1 along the boundary of D 1 to a surface S of genus 2g 1 . The two copies of τ 1 define a recurrent disconnected oriented train track τ on S which has an annulus complementary component C. Choose a branch b 1 of τ in the boundary of C. There is a corresponding branch b 2 in the second boundary component of C. Glue a compact subarc of b 1 contained in the interior of b 1 to a compact subarc of b 2 contained in the interior of b 2 so that the images of the two arcs under the glueing form a large branch e in the resulting train track η. The train track η is recurrent and orientable, and its complementary components are topological discs. However, by Lemma 3.5 it is not fully recurrent.

To each train track τ which fills up S one can associate a dual bigon track τ * (Section 3.4 of [PH92]). There is a bijection between the complementary components of τ and those complementary components of τ * which are not bigons, i.e. discs with two cusps at the boundary. This bijection maps a component C of τ which is an n-gon for some n ≥ 3 to an n-gon component of τ * contained in C, and it maps a once punctured monogon C to a once punctured monogon contained in C. If τ is orientable then the orientation of S and an orientation of τ induce an orientation on τ * , i.e. τ * is orientable. Measured geodesic laminations which are carried by τ * can be described as follows.
A tangential measure on a train track τ of type (m 1 , . . . , m ℓ ; −m) assigns to a branch b of τ a weight µ(b) ≥ 0 such that for every complementary k-gon of τ with consecutive sides c 1 , . . . , c k and total mass µ(c i ) (counted with multiplicities) the following holds true. (The complementary once punctured monogons define no constraint on tangential measures). The space of all tangential measures on τ has the structure of a convex cone in a finite dimensional real vector space. By the results from Section 3.4 of [PH92], every tangential measure on τ determines a simplex of measured geodesic laminations which hit τ efficiently. The supports of these measured geodesic laminations are carried by the bigon track τ * , and every measured geodesic lamination which is carried by τ * can be obtained in this way. The dimension of this simplex equals the number of complementary components of τ with an even number of sides. The train track τ is called transversely recurrent if it admits a tangential measure which is positive on every branch. In general, there are many tangential measures which correspond to a fixed measured geodesic lamination ν which hits τ efficiently. Namely, let s be a switch of τ and let a, b, c be the half-branches of τ incident on s and such that the half-branch a is large. If β is a tangential measure on τ which determines the measured geodesic lamination ν then it may be possible to drag the switch s across some of the leaves of ν and modify the tangential measure β on τ to a tangential measure µ ≠ β. Then β − µ is a multiple of a vector of the form δ a − δ b − δ c where δ w denotes the function on the branches of τ defined by δ w (w) = 1 and δ w (a) = 0 for a ≠ w. For a large train track τ let V * (τ ) ⊂ ML be the set of all measured geodesic laminations whose support is carried by τ * . Each of these measured geodesic laminations corresponds to a tangential measure on τ .
With this identification, the natural pairing between V(τ ) and V * (τ ) is just the restriction of the intersection form on measured lamination space (Section 3.4 of [PH92]). Moreover, V * (τ ) is naturally homeomorphic to a convex cone in a real vector space. The dimension of this cone coincides with the dimension of V(τ ).

Remark: In [MM99], Masur and Minsky define a large train track to be a train track τ whose complementary components are topological discs or once punctured monogons, without the requirement that τ is generic, transversely recurrent or recurrent. We hope that this inconsistency of terminology does not lead to any confusion.

4. Strata

As in Section 2, for a closed oriented surface S of genus g ≥ 0 with m ≥ 0 punctures let Q 1 (S) be the bundle of marked area one holomorphic quadratic differentials with a simple pole at each puncture over the Teichmüller space T (S) of marked complex structures on S. For a complete hyperbolic metric on S of finite area, an area one quadratic differential q ∈ Q 1 (S) is determined by a pair (λ + , λ − ) of measured geodesic laminations which jointly fill up S and such that ι(λ + , λ − ) = 1. The vertical measured geodesic lamination λ + for q corresponds to the equivalence class of the vertical measured foliation of q. The horizontal measured geodesic lamination λ − for q corresponds to the equivalence class of the horizontal measured foliation of q. A tuple (m 1 , . . . , m ℓ ) of positive integers 1 ≤ m 1 ≤ · · · ≤ m ℓ with ∑ i m i = 4g − 4 + m defines a stratum Q 1 (m 1 , . . . , m ℓ ; −m) in Q 1 (S). This stratum consists of all marked area one quadratic differentials with m simple poles and ℓ zeros of order m 1 , . . . , m ℓ which are not squares of holomorphic one-forms. The stratum is a real hypersurface in a complex manifold of dimension h = 2g − 2 + ℓ + m. The closure in Q 1 (S) of a stratum is a union of components of strata.
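Since the labels (m 1 , . . . , m ℓ ) of strata over a fixed surface are exactly the partitions of 4g − 4 + m, they can be enumerated mechanically. The sketch below does this for a closed surface of genus 2 (so m = 0); by the Masur–Smillie result quoted in Section 3, the candidate (4) contains no quadratic differentials which are not squares, and [MS93] also rule out (1, 3), a fact not restated above.

```python
def partitions(n, max_part=None):
    """Yield all nondecreasing tuples of positive integers summing to n."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(1, min(n, max_part) + 1):
        # parts of the remainder are at most k, so appending k stays sorted
        for rest in partitions(n - k, k):
            yield rest + (k,)

g, m = 2, 0                       # closed genus 2 surface
types = sorted(partitions(4 * g - 4 + m))
print(types)                      # five candidate zero patterns for 4g-4 = 4
```

Every tuple printed is a candidate zero pattern; whether the corresponding stratum of non-square quadratic differentials is nonempty is a separate question, as the genus 2 examples show.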
Strata are invariant under the action of the mapping class group Mod(S) of S and hence they project to strata in the moduli space Q(S) = Q 1 (S)/Mod(S) of quadratic differentials on S with a simple pole at each puncture. We denote the projection of the stratum Q 1 (m 1 , . . . , m ℓ ; −m) by Q(m 1 , . . . , m ℓ ; −m). The strata in moduli space need not be connected, but their connected components have been identified by Lanneau [L08]. A stratum in Q(S) has at most two connected components. Similarly, if m = 0 then we let H 1 (S) be the bundle of marked area one holomorphic one-forms over the Teichmüller space T (S) of S. For a tuple k 1 ≤ · · · ≤ k ℓ of positive integers with ∑ i k i = 2g − 2, the stratum H 1 (k 1 , . . . , k ℓ ) of marked area one holomorphic one-forms on S with ℓ zeros of order k i (i = 1, . . . , ℓ) is a real hypersurface in a complex manifold of dimension 2g − 1 + ℓ. It projects to a stratum H(k 1 , . . . , k ℓ ) in the moduli space H(S) of area one holomorphic one-forms on S. Strata of holomorphic one-forms in moduli space need not be connected, but the number of connected components of a stratum is at most three [KZ03]. Recall from Section 2 the definition of the strong stable, the stable, the unstable and the strong unstable foliation W ss , W s , W u , W su of Q 1 (S). Let Q̃ be a component of a stratum Q 1 (m 1 , . . . , m ℓ ; −m) of marked quadratic differentials or of a stratum H 1 (m 1 /2, . . . , m ℓ /2) of marked abelian differentials. Using period coordinates, one sees that every q ∈ Q̃ has a connected neighborhood U in Q̃ with the following properties [V90]. The connected component containing q of the intersection W s (q) ∩ U is a smooth connected local submanifold of U of (real) dimension h which is called the local stable manifold W s Q̃,loc (q) of q in Q̃ (see [V90]). Similarly we define the local unstable manifold W u Q̃,loc (q) of q in Q̃. If two such local stable (or unstable) manifolds intersect then their union is again a local stable (or unstable) manifold.
The maximal connected set containing q which is a union of intersecting local stable (or unstable) manifolds is the stable manifold W s Q̃ (q) (or the unstable manifold W u Q̃ (q)) of q in Q̃. Note that W i Q̃ (q) ⊂ W i (q) (i = s, u). A stable (or unstable) manifold is invariant under the action of the Teichmüller flow Φ t .

Remark: There may be a component Q̃ of a stratum and some q̃ ∈ Q̃ such that W s (q̃) ∩ Q̃ has infinitely many components.

The (strong) stable and (strong) unstable manifolds define smooth foliations W s Q̃ , W u Q̃ of Q̃ which are called the stable and unstable foliations of Q̃, respectively. Define the strong stable foliation W ss Q̃ (or the strong unstable foliation W su Q̃ ) of Q̃ by requiring that locally the leaf through q is the intersection of Q̃ with the set W ss (q) (or W su (q)) of all marked quadratic differentials whose vertical (or horizontal) measured geodesic lamination equals the vertical (or horizontal) measured geodesic lamination of q. The strong stable foliation of Q̃ is transverse to the unstable foliation of Q̃. The foliations W i Q̃ (i = ss, s, su, u) are invariant under the action of the stabilizer Stab(Q̃) of Q̃ in Mod(S), and they project to Φ t -invariant singular foliations W i Q of Q = Q̃/Stab(Q̃).

4.1. Orbifold coordinates. In this technical subsection we describe for every component Q of a stratum in the moduli space of quadratic differentials and for every point q ∈ Q a basis of neighborhoods of q in Q with local product structures. The material is well known to the experts but a bit difficult to find in the literature. In the course of the discussion we introduce some notations which will be used throughout. For q ∈ Q 1 (S) and z ∈ W s (q) there is a neighborhood V of q in W su (q) and there is a homeomorphism ζ z of V onto a subset of W su (z) with ζ z (q) = z which is determined by the requirement that ζ z (u) ∈ W s (u) for all u ∈ V . We call ζ z a holonomy map for the strong unstable foliation along the stable foliation.
Similarly, for q ∈ Q 1 (S) and z ∈ W u (q) there is a neighborhood Y of q in W ss (q) and there is a homeomorphism θ z of Y onto a subset of W ss (z) with θ z (q) = z which is determined by the requirement that θ z (u) ∈ W u (u) for all u ∈ Y . We call θ z a holonomy map for the strong stable foliation along the unstable foliation. The holonomy maps are equivariant under the action of the mapping class group and hence they project to locally defined holonomy maps in Q(S) which are denoted by the same symbols. Recall from Section 2 the definition of the intrinsic path-metrics d i on the leaves of the foliation W i (i = s, u). These path metrics are invariant under the action of the mapping class group and hence they project to path metrics on the leaves of W i in Q(S) which we denote by the same symbols. For q ∈ Q(S), z ∈ W i (q) and any preimage q̃ of q in Q 1 (S), the distance d i (q, z) is the shortest length of a path in W i (q̃) connecting q̃ to a preimage of z. Let moreover d ss , d su be the restrictions of d s , d u to distances on the leaves of the strong stable and strong unstable foliation of Q 1 (S) and Q(S). Let Π : Q 1 (S) → Q(S) be the canonical projection. For q ∈ Q(S) and r > 0 let B i (q, r) ⊂ W i (q) be the closed ball of radius r about q in W i (q) (i = ss, su, s, u) with respect to the metric d i . Call such a ball B i (q, r) a metric orbifold ball centered at q if there is a lift q̃ ∈ Q 1 (S) of q with the following properties.

(1) The ball B i (q̃, r) ⊂ (W i (q̃), d i ) about q̃ of the same radius is contractible and precisely invariant under the stabilizer Stab(q̃) of q̃ in Mod(S).

(2) B i (q, r) = B i (q̃, r)/Stab(q̃) which means that the restriction of the map Π to B i (q̃, r) factors through a homeomorphism B i (q̃, r)/Stab(q̃) → B i (q, r).

We also say that B i (q, r) is an orbifold quotient of B i (q̃, r). Note that every metric orbifold ball B i (q, r) ⊂ W i (q) is contractible. There is also an obvious notion of an orbifold ball which is not necessarily metric.
For every point q ∈ Q(S) there is a number a(q) > 0 such that the balls B i (q, a(q)) are metric orbifold balls (i = ss, su) and that for any preimage q̃ of q in Q 1 (S) and any z ∈ B ss (q̃, a(q)) (or z ∈ B su (q̃, a(q))) the holonomy map ζ z (or θ z ) is defined on B su (q̃, a(q)) (or on B ss (q̃, a(q))). Now let W 1 ⊂ B ss (q, a(q)), W 2 ⊂ B su (q, a(q)) be Borel sets and let W̃ 1 ⊂ B ss (q̃, a(q)), W̃ 2 ⊂ B su (q̃, a(q)) be the preimages of W 1 , W 2 . Note that the map ξ : Then there is a continuous function

(12) σ : V (B ss (q, a(q)), B su (q, a(q))) → R

which vanishes on B ss (q, a(q)) ∪ B su (q, a(q)) and such that In particular, for every number κ > 0 there is a number r(κ) > 0 such that the restriction of the function σ to V (B ss (q, r(κ)), B su (q, r(κ))) assumes values in [−κ, κ]. Then for sufficiently small t 0 , say for all t 0 ≤ t(q), the following properties are satisfied. We call a set V (W 1 , W 2 , t 0 ) as in (13) which satisfies the assumptions a), b) a set with a local product structure. Note that every point q ∈ Q(S) has a neighborhood in Q(S) with a local product structure, e.g. the set V (B ss (q, r), B su (q, r), t) for r ∈ (0, a(q)) and t ∈ (0, t(q)). Moreover, the neighborhoods of q with a local product structure form a basis of neighborhoods. The above discussion can be applied to strata as follows. A connected component Q of a stratum Q(m 1 , . . . , m ℓ ; −m) or of a stratum H(m 1 /2, . . . , m ℓ /2) is locally closed in Q(S) (here we identify an abelian differential with its square). This means that for every q ∈ Q there exists an open neighborhood U of q in Q(S) such that Q ∩ U is closed in U . Using period coordinates [V90], one obtains that for every point q ∈ Q there is a number a Q (q) ≤ a(q) and a number t Q (q) ≤ t(q) with the following property. For r ≤ a Q (q) let B ss Q (q, r), B su Q (q, r) be the component containing q of the intersection B ss (q, r) ∩ Q, B su (q, r) ∩ Q (note that the intersection B ss (q, r) ∩ Q may not be closed and may have infinitely many components). We say that this neighborhood has a local product structure.
We say that a Borel set Y ⊂ Q has a local product structure if there is some q ∈ Y , if there are Borel sets W 1 , W 2 as above and if there is a number t > 0 such that Y = V (W 1 , W 2 , t). The Φ t -invariant Borel probability measure λ on Q in the Lebesgue measure class admits a natural family of conditional measures λ ss , λ su on strong stable and strong unstable manifolds. The conditional measures λ i (i = ss, su) are well defined up to a universal constant, and they transform under the Teichmüller geodesic flow Φ t via dλ ss • Φ t = e −ht dλ ss and dλ su • Φ t = e ht dλ su . Let F : Q(S) → Q(S) be the flip q → F (q) = −q and let dt be the Lebesgue measure on the flow lines of the Teichmüller flow. The conditional measures λ ss , λ su are uniquely determined by the additional requirements that F ∗ λ su = λ ss and that with respect to a local product structure, λ can be written as a product of λ ss , λ su and dt up to a continuous density. The measures λ u on unstable manifolds defined by dλ u = dλ su × dt are invariant under holonomy along strong stable manifolds. To summarize, we obtain the following. The natural homeomorphism Ψ maps dλ ss × dλ su × dt to the measure λ 0 on V , dλ 0 = Ψ ∗ (dλ ss × dλ su × dt), which is of the form e ϕ λ where ϕ is a continuous function on V which vanishes on ∪ t∈[−t Q (q),t Q (q)] Φ t B ss Q (q, a Q (q)) (see [V86]). 4.2. Train track coordinates. The goal of this subsection is to relate components of strata in Q(S) to large train tracks. This will be used to define product coordinates near points in the boundary of a stratum. Note that the natural product coordinates on strata are period coordinates. For a point q in the boundary of a stratum, some of the relative periods vanish and there is no canonical choice of a relative cycle near q which can be used for period coordinates in a neighborhood of q. We chose to construct product coordinates near boundary points of a stratum using train tracks even though similar coordinates can be obtained using the usual period coordinate construction. These train track coordinates will be used in other contexts as well.
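To keep the measure-theoretic facts of Section 4.1 at hand before passing to train tracks, here is a display-form summary. The scaling and flip relations restate the text above; the product formula is a hedged reconstruction of the display that follows "can be written in the form" there.

```latex
% Restatement of the scaling of the conditional measures under the flow
% and of the flip relation; h > 0 is the constant from the text.
d\lambda^{ss} \circ \Phi^t = e^{-ht}\, d\lambda^{ss}, \qquad
d\lambda^{su} \circ \Phi^t = e^{ht}\, d\lambda^{su}, \qquad
F_* \lambda^{su} = \lambda^{ss}.
% Hedged reconstruction of the local product formula, in terms of the
% homeomorphism \Psi of a local product structure:
d\lambda^{u} = d\lambda^{su} \times dt, \qquad
\Psi_* \bigl( d\lambda^{ss} \times d\lambda^{su} \times dt \bigr)
  = e^{\varphi}\, d\lambda .
```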
We continue to use the assumptions and notations from Section 2 and Section 3. For a large train track τ ∈ LT (m 1 , . . . , m ℓ ; −m) let V 0 (τ ) be the set of all measured geodesic laminations ν ∈ ML whose support is carried by τ and such that the total weight of the transverse measure on τ defined by ν equals one. Let Q(τ ) ⊂ Q 1 (S) be the set of all area one marked quadratic differentials whose vertical measured geodesic lamination is contained in V 0 (τ ) and whose horizontal measured geodesic lamination is carried by the dual bigon track τ * of τ . By definition of a large train track, we have Q(τ ) ≠ ∅. The next proposition relates Q(τ ) to components of strata. Proposition 4.1. Let τ ∈ LT (m 1 , . . . , m ℓ ; −m). Then there is a connected component Q̃ of the stratum Q 1 (m 1 , . . . , m ℓ ; −m) such that Q(τ ) is contained in the closure of Q̃ and such that for every δ > 0 the set {Φ t q | q ∈ Q(τ ), t ∈ [−δ, δ]} contains an open subset of Q̃. If τ is orientable then Q̃ consists of squares of abelian differentials. Proof. By [L83], the support ξ of the vertical measured geodesic lamination of a marked quadratic differential z ∈ Q 1 (S) can be obtained from the vertical foliation of z by cutting S open along each vertical separatrix and straightening the remaining leaves with respect to the hyperbolic structure P z ∈ T (S). In particular, up to homotopy a vertical saddle connection s of z is contained in the interior of a complementary component C of ξ which is uniquely determined by s. Let τ ∈ LT (m 1 , . . . , m ℓ ; −m). Assume first that τ is non-orientable. Let µ ∈ V 0 (τ ) be such that the support of µ is contained in LL(m 1 , . . . , m ℓ ; −m) and let ν ∈ V * (τ ). Then µ is non-orientable since otherwise τ inherits an orientation from µ. The measured geodesic laminations µ, ν jointly fill up S (since the support of ν is different from the support of µ and the support of µ fills up S) and hence if ν is normalized in such a way that ι(µ, ν) = 1 then the pair (µ, ν) defines a point q ∈ Q(τ ). Our first goal is to show that q ∈ Q 1 (m 1 , . . . , m ℓ ; −m). The support of the geodesic lamination µ is contained in LL(m 1 , . . . , m ℓ ; −m) and therefore the orders of the zeros of the quadratic differential q are obtained from the orders m 1 , . . . , m ℓ by subdivision.
There is a non-trivial subdivision, say of the form m i = Σ s k s , if and only if there is at least one vertical saddle connection for q. Assume to the contrary that there is a vertical saddle connection s for q. Let q̃ be the lift of q to a quadratic differential on the universal covering H 2 of S and let s̃ ⊂ H 2 be a preimage of s. Let μ̃ ⊂ H 2 be the preimage of µ. As discussed in the first paragraph of this proof, the saddle connection s̃ is contained in a complementary component C̃ of the support of μ̃. This component is an ideal polygon with finitely many sides. A biinfinite geodesic line for the singular euclidean metric defined by q̃ is a quasigeodesic in the hyperbolic plane H 2 and hence it has well defined endpoints in the ideal boundary ∂H 2 of H 2 . There are two vertical geodesic lines α 0 , β 0 for q̃ which contain the saddle connection s̃ as a subarc and which are contained in a bounded neighborhood of a side α, β of C̃, respectively. For a fixed orientation of s̃, the geodesics α 0 , β 0 are determined by the requirement that their orientation coincides with the given orientation of s̃ and that moreover at every singular point x, the angle at x to the left of α 0 (or to the right of β 0 ) for the orientation of the geodesic and the orientation of H 2 equals π. The ideal boundary of the closed half-plane of H 2 which is bounded by α (or β) and which is disjoint from the interior of C̃ is a compact subarc a (or b) of ∂H 2 . The arcs a, b are disjoint (or, equivalently, the sides α, β of C̃ are not adjacent). A horizontal geodesic line for q̃ which intersects the interior of the saddle connection s̃ is a quasi-geodesic in H 2 with one endpoint in the interior of the arc a and the second endpoint in the interior of the arc b. Now a carrying map F : S → S for µ with F (µ) ⊂ τ maps the support of µ onto τ and hence it induces a bijection between the complementary components of the support of µ and the complementary components of τ .
In particular, the projections of the geodesics α, β to S determine two opposite sides of the complementary component C τ of τ corresponding to the projection of C̃ to S. On the other hand, by construction of the dual bigon track τ * of τ (see [PH92]), if ρ : (−∞, ∞) → τ * is any trainpath which intersects the complementary component C τ of τ then every component of ρ(−∞, ∞) ∩ C τ is a compact arc with endpoints on adjacent sides of C τ . In particular, a lift to H 2 of such a trainpath is a quasigeodesic in H 2 whose endpoints meet at most one of the two arcs a, b ⊂ ∂H 2 . Since the support of the horizontal measured geodesic lamination ν of q is carried by τ * by assumption, every leaf of the support of ν corresponds to a biinfinite trainpath on τ * and hence a lift to H 2 of such a leaf does not connect the arcs a, b ⊂ ∂H 2 . This contradicts the assumption that q has a vertical saddle connection and hence we indeed have q ∈ Q 1 (m 1 , . . . , m ℓ ; −m). Let P(µ) ⊂ PML be the open set of all projective measured geodesic laminations whose support is distinct from the support of µ. Then the assignment ψ which associates to a projective measured geodesic lamination [ν] ∈ P(µ) the area one quadratic differential q(µ, [ν]) with vertical measured geodesic lamination µ and horizontal projective measured geodesic lamination [ν] is a homeomorphism of P(µ) onto a strong stable manifold in Q 1 (S). The projectivization P V * (τ ) of V * (τ ) is homeomorphic to a ball in a real vector space of dimension h − 1, and this is just the dimension of a strong stable manifold in a component of Q 1 (m 1 , . . . , m ℓ ; −m). Therefore by the above discussion and invariance of domain, there is a component Q̃ of the stratum Q 1 (m 1 , . . .
, m ℓ ; −m) such that the restriction of the map ψ to P V * (τ ) is a homeomorphism of P V * (τ ) onto the closure of an open subset of a strong stable manifold W ss Similarly, if q ∈ Q(τ ) is defined by µ ∈ V 0 (τ ), ν ∈ V * (τ ) and if the support of ν is contained in LL(m 1 , . . . , m ℓ ; −m) then q ∈ Q 1 (m 1 , . . . , m ℓ ; −m) by the above argument. Moreover, for every [µ] ∈ P V(τ ) the pair ([µ], ν) defines a quadratic differential which is contained in a strong unstable manifold W sũ Q (q) of a component Q of the stratum Q 1 (m 1 , . . . , m ℓ ; −m), and the set of these quadratic differentials equals the closure of an open subset of W sũ Q (q). The set of quadratic differentials q with the property that the support of the vertical (or of the horizontal) measured geodesic lamination of q is minimal and of type (m 1 , . . . , m ℓ ; −m) is dense and of full Lebesgue measure in Q 1 (m 1 , . . . , m ℓ ; −m) [M82, V86]. Moreover, this set is saturated for the stable (or for the unstable) foliation. Thus by the above discussion, the set of all measured geodesic laminations which are carried by τ (or τ * ) and whose support is minimal of type (m 1 , . . . , m ℓ ; −m) is dense in V(τ ) (or in V * (τ )). As a consequence, the set of all pairs (µ, ν) ∈ V(τ ) × V * (τ ) with ι(µ, ν) = 1 which correspond to a quadratic differential q ∈ Q 1 (m 1 , . . . , m ℓ ; −m) is dense in the set of all pairs (µ, ν) ∈ V(τ ) × V * (τ ) with ι(µ, ν) = 1. Thus the set Q(τ ) is contained in the closure of a compo-nentQ of the stratum Q 1 (m 1 , . . . , m ℓ ; −m). Moreover, by reasons of dimension, {Φ t q | q ∈ Q(τ ), t ∈ [−δ, δ]} contains an open subset of this component. This shows the first part of the proposition. Now if τ ∈ LT (m 1 , . . . , m ℓ ; −m) is orientable and if µ is a geodesic lamination which is carried by τ , then µ inherits an orientation from an orientation of τ . 
The orientation of τ together with the orientation of S determines an orientation of the dual bigon track τ * (see [PH92]), and these two orientations determine the orientation of S. This implies that any geodesic lamination carried by τ * admits an orientation, and if (µ, ν) jointly fill up S and if µ is carried by τ , ν is carried by τ * then the orientations of µ, ν determine the orientation of S. As a consequence, the singular euclidean metric on S defined by the quadratic differential q of (µ, ν) is the square of a holomorphic one-form. The proposition follows. If Q̃ is a component of a stratum Q 1 (m 1 , . . . , m ℓ ; −m) and if the large train track τ ∈ LT (m 1 , . . . , m ℓ ; −m) is such that Q(τ ) ∩ Q̃ ≠ ∅ then we say that τ belongs to Q̃, and we write τ ∈ LT (Q̃). The next lemma is a converse to Proposition 4.1 and shows that train tracks can be used to define coordinates on strata. Lemma 4.2. (1) For every q ∈ Q 1 (m 1 , . . . , m ℓ ; −m) there is a large nonorientable train track τ ∈ LT (m 1 , . . . , m ℓ ; −m) and a number t ∈ R so that Φ t q is an interior point of Q(τ ). Proof. Fix a complete hyperbolic metric on S of finite volume. Define the straightening of a train track τ to be the immersed graph in S whose vertices are the switches of τ and whose edges are the geodesic arcs which are homotopic to the branches of τ with fixed endpoints. The hyperbolic metric induces a distance function on the projectivized tangent bundle of S. As in Section 3 of [H09], we say that for some ǫ > 0 a train track τ ǫ-follows a geodesic lamination µ if the tangent lines of the straightening of τ are contained in the ǫ-neighborhood of the tangent lines of µ in the projectivized tangent bundle of S and if moreover the straightening of any trainpath on τ is a piecewise geodesic whose exterior angles at the breakpoints are not bigger than ǫ.
By Lemma 3.2 of [H09], for every geodesic lamination µ and every ǫ > 0 there is a transversely recurrent train track which carries µ and ǫ-follows µ. Let q ∈ Q 1 (m 1 , . . . , m ℓ ; −m). Assume first that the support µ of the vertical measured geodesic lamination of q is large of type (m 1 , . . . , m ℓ ; −m). This is equivalent to stating that q does not have vertical saddle connections. For ǫ > 0 let τ ǫ be a train track which carries µ and ǫ-follows µ. If ǫ > 0 is sufficiently small then a carrying map µ → τ ǫ defines a bijection of the complementary components of µ onto the complementary components of τ ǫ . The transverse measure on τ ǫ defined by the vertical measured geodesic lamination of q is positive. LetC ⊂ H 2 be a complementary component of the preimage of µ in the hyperbolic plane H 2 . ThenC is an ideal polygon whose vertices decompose the ideal boundary ∂H 2 into finitely many arcs a 1 , . . . , a k ordered counter-clockwise in consecutive order. Since q does not have vertical saddle connections, the discussion in the proof of Proposition 4.1 shows the following. Let ℓ be a leaf of the preimage in H 2 of the support ν of the horizontal measured geodesic lamination of q. Then the two endpoints of ℓ in H 2 either are both contained in the interior of the same arc a i or in the interior of two adjacent arcs a i , a i+1 . As a consequence, for sufficiently small ǫ the geodesic lamination ν is carried by the dual bigon track τ * ǫ of τ ǫ (see the characterization of the set of measured geodesic laminations carried by τ * ǫ in [PH92]). Moreover, for any two adjacent subarcs a i , a i+1 of ∂H 2 cut out byC, the transverse measure of the set of all leaves of the preimage of ν connecting these sides is positive. Therefore for sufficiently small ǫ, the horizontal measured geodesic lamination ν of q defines an interior point of V * (τ ǫ ). 
Now the set of quadratic differentials z so that the support of the horizontal measured geodesic lamination of z is large of type (m 1 , . . . , m ℓ ; −m) is dense in the strong stable manifold W ss Q,loc (q) of q. The above reasoning shows that for such a quadratic differential z and for sufficiently small ǫ, the horizontal measured geodesic lamination of z is carried by τ * ǫ . But this just means that τ ǫ ∈ LT (m 1 , . . . , m ℓ ; −m). Moreover, if r > 0 is the total weight which the vertical measured geodesic lamination puts on τ ǫ then Φ − log r q is an interior point of Q(τ ǫ ). Thus τ ǫ satisfies the requirement in the proposition. Note that τ ǫ is necessarily non-orientable. If q ∈ H 1 (k 1 , . . . , k s ) is such that the support of the vertical measured geodesic lamination of q is large of type (2k 1 , . . . , 2k s ; 0) then the above reasoning also applies and yields an oriented large train track with the required property. Consider next the case that the support µ of the vertical measured geodesic lamination of q fills up S but is not of type (m 1 , . . . , m ℓ ; −m). Then q has a vertical saddle connection. The set of all vertical saddle connections of q is a finite disjoint union T of finite trees. The number of edges of this union of trees is uniformly bounded. For ǫ > 0 let τ ǫ be a train track which ǫ-follows µ and carries µ. If ǫ is sufficiently small then a carrying map µ → τ ǫ defines a bijection between the complementary components of µ and the complementary components of τ ǫ which induces a bijection between their sides as well. Modify τ ǫ as follows. Up to isotopy, a vertical saddle connection s of q is contained in a complementary component C s of τ ǫ which corresponds to the complementary component of µ determined by s (see the proof of Proposition 4.1).
Since a carrying map µ → τ determines a bijection between the sides of the complementary components of µ and the sides of the complementary components of τ , the horizontal lines crossing through s determine two non-adjacent sides c 1 , c 2 of C s (see once more the discussion in the proof of Proposition 4.1). Choose an embedded rectangle R s ⊂ C s whose boundary intersects the boundary of C s in two opposite sides contained in the interior of the sides c 1 , c 2 of C s . Up to an isotopy we may assume that these rectangles R s , where s runs through the vertical saddle connections of q, are pairwise disjoint. Collapse each of the rectangles R s to a single segment in such a way that the two sides of R s which are contained in τ ǫ are identified and form a single large branch b s as shown in Figure C. The branch b s can be isotoped to the saddle connection s. Let η be the train track constructed in this way. Then η is of topological type (m 1 , . . . , m ℓ ; −m).

Figure C: collapsing a rectangle in the surface S turns the train track τ ǫ into the train track η for the quadratic differential q.

The train track τ ǫ can be obtained from η by splitting η at each of the large branches b s and removing the diagonal of the split. In particular, η carries τ ǫ and hence µ. The transverse measure on η defined by the vertical measured geodesic lamination of q is positive and consequently η is recurrent. Moreover, for sufficiently small ǫ, the horizontal measured geodesic lamination of q is carried by η * . As above, we conclude that if ǫ > 0 is sufficiently small then η is fully transversely recurrent and in fact large. There is a tangential measure on η which is defined by the horizontal measured geodesic lamination of q and which gives positive weight to each of the branches b s . Thus by possibly decreasing once more the size of ǫ, we can guarantee that for some t ∈ R the quadratic differential Φ t q is an interior point of Q(η). As a consequence, η satisfies the requirements in the proposition.
If the support µ of the vertical measured geodesic lamination of q is arbitrary then we proceed in the same way. Let ǫ > 0 be sufficiently small that there is a bijection between the complementary components of the train track τ ǫ and the complementary components of the support of µ. As before, we use the horizontal measured foliation of q to construct for every vertical saddle connection s of q an embedded rectangle R s in S whose interior is contained in a complementary component of τ ǫ and with two opposite sides on τ ǫ in such a way that the rectangles R s are pairwise disjoint. Collapse each of the rectangles to a single arc. The resulting train track has the required properties. We discuss in detail the case that the support of µ contains a simple closed curve component α. Then τ ǫ contains α as a simple closed curve component as well. There is a vertical flat cylinder C for q foliated by smooth circles freely homotopic to α. The boundary ∂C of C is a finite union of vertical saddle connections. Some of these saddle connections may occur twice on the boundary of C (if µ = α then this holds true for each of these saddle connections). Assume without loss of generality (i.e. perform a suitable isotopy) that α is a closed vertical geodesic contained in the interior of C. For each saddle connection s in the boundary of C choose a compact arc a s contained in the interior of s. Choose moreover a foliation F of C by compact arcs with endpoints on the boundary of C which is transverse to the foliation of C by the vertical closed geodesics and such that the following holds true. If u 1 , u 2 are two distinct half-leaves of F with one endpoint in the arc a s and the second endpoint on α then the endpoints on α of u 1 , u 2 are distinct. In particular, each arc a s which occurs twice in the boundary of the cylinder C determines an embedded rectangle R s in S. Two opposite sides of R s are disjoint subarcs of α; we call these sides the vertical sides. 
Each of the other two opposite sides consists of two half-leaves of the foliation F which begin at a boundary point of a s and end in a point of α. The interior of the arc a s is contained in the interior of R s . The rectangles R s are pairwise disjoint. Therefore each of the rectangles R s can be collapsed in S to the arc a s . The resulting graph is a train track which carries α and contains for every saddle connection s which occurs twice in the boundary of C a large branch b s . If s is a saddle connection on the boundary of C which separates C from S − C then the arc a s is contained in the interior of a rectangle R s with one side contained in α and the second side contained in the interior of a branch of the component of τ ǫ different from α. This branch is determined by the horizontal geodesics which cross through s. As before, the rectangle R s is collapsed to a single branch. To summarize, the train track τ ǫ can be modified in finitely many steps to a train track η with the required properties by collapsing for every vertical saddle connection of q a rectangle with two sides on τ ǫ to a single large branch. This completes the construction and finishes the proof of the proposition. Remark: In the proof of Lemma 4.2, we constructed explicitly for every quadratic differential q ∈ Q(S) a train track τ q belonging to the stratum of q. If q is a one-cylinder Strebel differential then the train track τ q is uniquely determined by the combinatorics of its vertical saddle connections on the boundary of the cylinder. This fact in turn can be used to obtain a purely combinatorial proof of the classification results of Kontsevich-Zorich [KZ03] and of Lanneau [L08]. Let again τ ∈ LT (m 1 , . . . , m ℓ ; −m). Then τ ∈ LT (Q) for a componentQ of Q 1 (m 1 , . . . , m ℓ ; −m). 
For every µ ∈ V 0 (τ ) and every ν ∈ V * (τ ) so that the pair (µ, ν) jointly fills up S there is a unique q ∈ Q(τ ) with vertical measured geodesic lamination µ and horizontal measured geodesic lamination ι(µ, ν) −1 ν. Thus if P V * (τ ) denotes the projectivization of the cone V * (τ ) then for all a < b there is a natural homeomorphism ψ from the subset of V 0 (τ ) × P V * (τ ) × [a, b] corresponding to pairs (µ, [ν]) which jointly fill up S onto C = ∪ t∈[a,b] Φ t Q(τ ). The set C is the closure in Q 1 (S) of an open subset ofQ. We say that the map ψ defines on C a train track product structure. If A ⊂ V 0 (τ ), B ⊂ P V * (τ ) are Borel sets then we also say that the image of A × B × [a, b] under the map ψ has a train track product structure. If q ∈ Q(τ ) and if C is a neighborhood of q with a train track product structure which is precisely invariant under the stabilizer of q in Mod(S) then we say that the projection of C to Q(S) has a train track product structure. The following proposition establishes product coordinates near boundary points of strata. For this let again Q be a component of the stratum Q(m 1 , . . . , m ℓ ; −m), with closure Q. Let λ be the Lebesgue measure on Q. Proposition 4.3. For every q ∈ Q − Q and every closed neighborhood A of q in Q there is a closed neighborhood K ⊂ A of q in Q with the following properties. (2) For each i, the set K i contains q and has a train track product structure. Proof. Our goal is to show that every point q ∈ Q − Q has a closed neighborhood W in Q with the following property. LetQ ⊂ Q 1 (S) be a connected component of the preimage of Q and letq be a lift of q contained in the closure ofQ. Then W lifts to a contractible neighborhoodW ofq in the closure ofQ which is precisely invariant under Stab(q). Moreover,W is contained in for some a j < b j where η j ∈ LT (Q) and whereq is contained in the boundary of Φ −sj Q(η j ) for some s j ∈ (a j , b j ) (j = 1, . . . , k) . 
For i ≠ j we have (14). For this assume that q ∈ Q(n 1 , . . . , n s ; −m) for some s < ℓ. Assume moreover for the moment that q does not have vertical saddle connections. Let (q i ) ⊂ Q be a sequence converging to q. Since the subset of Q of quadratic differentials without vertical saddle connection is dense in Q, we may assume that for each i, q i does not have a vertical saddle connection. Let q̃ i ∈ Q̃ be a preimage of q i such that q̃ i → q̃. For each i the support µ i of the vertical measured geodesic lamination of q̃ i is large of type (m 1 , . . . , m ℓ ; −m). We claim that up to passing to a subsequence, the geodesic laminations µ i converge in the Hausdorff topology to a large geodesic lamination ξ of topological type (m 1 , . . . , m ℓ ; −m). The lamination ξ then contains the support ν of the vertical measured geodesic lamination of q̃ as a sublamination. Since q does not have vertical saddle connections, ν fills up S and ξ can be obtained from ν by adding finitely many isolated leaves. These isolated leaves subdivide some of the complementary components of ν. The number of such limit laminations is uniformly bounded. To see that this claim indeed holds true it is enough to assume that s = ℓ − 1 and that n u = m j + m p for some j < p ≤ ℓ and some u ≤ s [MZ08]; the purpose of this assumption for our argument is to simplify the notations. Then for each sufficiently large i the quadratic differential q̃ i has a saddle connection s i connecting a zero x i 1 of order m j to a zero x i 2 of order m p whose length (measured in the singular euclidean metric defined by q̃ i ) tends to zero as i → ∞. More precisely, the saddle connections s i converge to a zero x 0 of q̃ of order n u ≥ 2. The length of any other saddle connection of q̃ i is bounded from below by a universal positive constant.
Sinceq i does not have vertical saddle connections, locally near x i 1 the interior of the saddle connection s i is contained in the interior of an euclidean sector based at x i 1 of angle π bounded by two vertical separatrices α i 1 , α i 2 ofq i which issue from x i 1 . The union α i = α i 1 ∪ α i 2 is a smooth vertical geodesic line passing through x i 1 , i.e. a geodesic which is a limit in the compact open topology of geodesic segments not passing through a singular point. There are two vertical separatrices β i 1 , β i 2 issuing from x i 2 so that the sum of the angles at x i 1 , x i 2 of the (local) strip bounded by α i 1 , s i , β i 1 equals π and that the same holds true for the angle sum of the (local) strip bounded by α i 2 , s i , β i 2 . The vertical length of s i is positive. The union β i = β i 1 ∪ β i 2 is a smooth vertical geodesic line passing through x i 2 . Equip S with the marked hyperbolic metric Pq ∈ T (S). For each i lift the singular euclidean metric on S defined byq i to a π 1 (S)-invariant singular euclidean metric on the universal covering H 2 of S. Lets i be a lift of the saddle connection s i . Sinces i is not vertical, the leaves of the vertical foliation ofq i which pass through s i define a strip of positive transverse measure in H 2 . This strip is bounded by the two liftsα i ,β i of the smooth vertical geodesics α i , β i which pass through the endpoints ofs i . As i → ∞, up to normalization and by perhaps passing to a subsequence, the vertical geodesicsα i ,β i converge in the compact open topology to vertical geodesicsα,β for the singular euclidean metric defined byq which pass through a preimagex 0 of the zero x 0 ofq of order n u = m j + m p ≥ 2. By construction, the geodesicsα,β coincide in a neighborhood ofx 0 . Since there are no vertical saddle connections forq, we necessarily haveα =β. 
Let γ̃ ⊂ H 2 be the hyperbolic geodesic with the same endpoints as α̃ in the ideal boundary of H 2 (see [L83] and the proof of Proposition 4.1). The projection of γ̃ to S subdivides the complementary component of ν containing x 0 into two ideal polygons with m j + 2 and m p + 2 sides, respectively. The union of ν with this geodesic is a large geodesic lamination ξ of type (m 1 , . . . , m ℓ ; −m). This lamination is the limit in the Hausdorff topology of the laminations µ i . Let ξ 1 , . . . , ξ k ∈ LL(m 1 , . . . , m ℓ ; −m) be the (finitely many) large geodesic laminations obtained in this way. Each of the laminations ξ s contains ν as a sublamination, and it is determined by a decomposition of a complementary n u + 2-gon of ν into an ideal m j + 2-gon and an ideal m p + 2-gon. The set ξ 1 , . . . , ξ k is invariant under the action of Stab(q). For sufficiently small ǫ > 0, a train track η j which carries ξ j and ǫ-follows ξ j (for the hyperbolic metric P q̃) is a simple extension of a train track τ which carries ν and ǫ-follows ν. The added branch is a diagonal of the complementary m j + m p + 2-gon of τ defined by the zero x 0 of q̃ of order m j + m p . It decomposes this component into an m j + 2-gon and an m p + 2-gon in a combinatorial pattern determined by ξ j . The vertical measured geodesic lamination ν of q defines a transverse measure on η j which gives full mass to the subtrack τ and hence it is contained in the boundary of the cone V(η j ). We also may assume that the horizontal measured geodesic lamination of q̃ is carried by the dual bigon track η * j (compare the proof of Lemma 4.2) and that the set η 1 , . . . , η k is invariant under the action of Stab(q). Since the set of geodesic laminations carried by a train track is open and closed in the Hausdorff topology [H09], for each j the train track η j carries a minimal large geodesic lamination of type (m 1 , . . .
, m ℓ ; −m) (namely, the support of the vertical measured geodesic lamination of a quadratic differential q̃ i ∈ Q̃ sufficiently close to q̃ from the sequence which determines η j ) and hence it follows as in the proof of Proposition 4.1 that η j ∈ LT (Q̃). Moreover, if s j ∈ R is such that Φ s j q̃ ∈ Q(η j ) then for every ǫ > 0 the set ∪ j ∪ t∈[−s j −ǫ,−s j +ǫ] Φ t Q(η j ) is a closed neighborhood of q̃ in the closure of Q̃. Now if i ≠ j then V(η i ) ∩ V(η j ) = V(τ ) and hence this intersection is contained in an affine subspace of codimension one. Since the measure class of the conditional measures λ u of λ coincides with the Lebesgue measure class defined by the linear coordinates for the cone V(η j ), the equation (14) holds true. As a consequence, for suitable numbers a j < b j , the set is a Stab(q)-invariant closed neighborhood of q̃ in the closure of Q̃. In other words, there is a Stab(q)-invariant finite collection of closed sets with train track product structures which cover a neighborhood of q̃ in the closure of Q̃. This completes the proof of the proposition in the case that the support of the vertical measured geodesic lamination of q̃ is large of type (n 1 , . . . , n s ; −m). If the support of the vertical measured geodesic lamination of q̃ is not large of type (n 1 , . . . , n s ; −m) then we argue in the same way. In this case q̃ has a vertical saddle connection whose horizontal length is positive. Consider the action of the group SO(2) on the space of quadratic differentials by rotation. There is a sequence θ j ∈ (0, π/2) with θ j → 0 such that the quadratic differential e iθ j q̃ does not have any vertical or horizontal saddle connection. Then the supports of the horizontal and the vertical measured geodesic laminations of e iθ j q̃ are large of type (n 1 , . . . , n s ; −m). Let τ ∈ LT (n 1 , . . . , n s ; −m) be a train track as in Lemma 4.2 so that for some σ > 0, Φ σ q̃ is an interior point of Q(τ ).
For sufficiently small θ, say whenever 0 < |θ| < ǫ, we have e iθq ∈ ∪ s∈[σ−b,σ+b] Φ s Q(τ ) where b > 0 is a fixed number. If θ ∈ (−ǫ, ǫ) is such that e iθq does not have any vertical saddle connection then the argument in the beginning of this proof shows that up to passing to a subsequence, for sufficiently large j the vertical measured geodesic lamination of e iθq j is carried by a simple extension of τ which is large of type (m 1 , . . . , m ℓ ; −m). As before, there are only finitely many such simple extensions, and these simple extensions define train track coordinates on a neighborhood ofq in the closure ofQ as before. From this the proposition follows. As an immediate consequence, we obtain the following. Let q ∈ Q − Q and let K = ∪ k i=1 K i be as in Proposition 4.3. Then for each i ≤ k there is an open subset U i ⊂ K i of a strong unstable submanifold of Q whose closure A i contains q. The set is a compact subset of W su (q) which contains the intersection with W su (q) of every sufficiently small neighborhood of q in Q. Moreover, λ su (A i ∩ A j ) = 0 for i = j. Absolute continuity Let again Q be a connected component of a stratum in Q(S). Then Q is invariant under the Teichmüller flow Φ t . For a periodic orbit γ ⊂ Q for Φ t , the Lebesgue measure supported in γ is a Φ t -invariant Borel measure σ(γ) on Q whose total mass equals the prime period ℓ(γ) of γ. If we denote for R > 0 by Γ(R) the set of all periodic orbits for Φ t of period at most R which are contained in Q then we obtain a finite Φ t -invariant Borel measure µ R on Q by defining Let µ be any weak limit of the measures µ R as R → ∞. Then µ is a Φ t -invariant Borel measure on Q(S) supported in the closure Q of Q (which may a priori be zero or locally infinite). The purpose of this section is to show Proposition 5.1. The measure µ on Q satisfies µ ≤ λ. This means that µ(U ) ≤ λ(U ) for every open relative compact subset U of Q. 
In particular, the measure µ is finite and absolutely continuous with respect to the Lebesgue measure, and it gives full mass to Q. A point q ∈ Q is called forward recurrent (or backward recurrent) if it is contained in its own ω-limit set (or in its own α-limit set) under the action of Φ t . A point q ∈ Q is recurrent if it is forward and backward recurrent. The set R ⊂ Q of recurrent points is a Φ t -invariant Borel subset of Q. It follows from the work of Masur [M82] that a forward recurrent point q ∈ Q(S) has uniquely ergodic vertical and horizontal measured geodesic laminations whose supports fill up S. As a consequence, the preimage R̃ of R in Q 1 (S) is contained in the set Ã defined in (5) of Section 2. Using the notations from Section 2, there is a number p > 1 such that for every q ∈ Q 1 (S) the map t → Υ T (P Φ t q) is an unparametrized p-quasi-geodesic in the curve graph C(S). If q is a lift of a recurrent point in Q(S) then this unparametrized quasi-geodesic is of infinite diameter. Recall from (3) of Section 2 the definition of the distances δ x (x ∈ T (S)) on ∂C(S) and of the sets D(q, r) ⊂ ∂C(S) (q ∈ Ã, r > 0). The following lemma is a version of Lemma 2.1 of [H10b]. Lemma 5.2. There are numbers α 0 > 0, β > 0, b > 0 with the following property. Let q ∈ R̃ and for s > 0 write σ(s) = d(Υ T (P q), Υ T (P Φ s q)); then The map F : Ã → ∂C(S) defined in Section 2 is equivariant under the action of the mapping class group on Ã ⊂ Q 1 (S) and on ∂C(S). In particular, for q ∈ Ã and r > 0 the set D(q, r) ⊂ ∂C(S) is invariant under Stab(q), and the same holds true for F −1 D(q, r). Let Q̃ ⊂ Q 1 (S) be a component of the preimage of Q and let Stab(Q̃) < Mod(S) be the stabilizer of Q̃ in Mod(S). The Φ t -invariant Borel probability measure λ on Q in the Lebesgue measure class lifts to a Stab(Q̃)-invariant locally finite measure on Q̃ which we denote again by λ.
The conditional measures λ ss , λ su of λ on the leaves of the strong stable and strong unstable foliation of Q lift to a family of conditional measures on the leaves of the strong stable and strong unstable foliation W ss Q̃ , W su Q̃ of Q̃, respectively, which we denote again by λ ss , λ su (see the discussion in Section 4). Lemma 5.3. For every q̃ ∈ Q̃ ∩ R̃ and for all compact neighborhoods W 1 ⊂ W 2 of q̃ in W su Q̃ (q̃) there are compact neighborhoods K ⊂ C ⊂ W 1 of q̃ in W su Q̃ (q̃) with the following properties. (2) There are numbers 0 < r 1 < r 2 < α 0 /2 such that Proof. Let q ∈ Q be a recurrent point and let q̃ ∈ Q̃ be a lift of q. Let W 1 ⊂ W 2 ⊂ W su Q̃ (q̃) be compact neighborhoods of q̃ and let r > 0 be such that B su Q̃ (q̃, 2r) ⊂ W 1 ⊂ W su Q̃ (q̃) is precisely invariant under Stab(q̃) and projects to a metric orbifold ball in W su Q (q). By Lemma 2.2, the map F : Ã → ∂C(S) is continuous and closed, and the sets F (B su (q̃, ν) ∩ Ã) (ν > 0) form a neighborhood basis of F q̃ in ∂C(S). Thus there is a number u 0 > 0 such that For u ≤ u 0 let K u ⊂ W su Q̃ (q̃) be the closure of the set Then K u is a closed neighborhood of q̃ in W su Q̃ (q̃) which is precisely invariant under Stab(q̃). Moreover, K t ⊂ K u for t < u, and Lemma 2.2 shows that ∩ u>0 K u = {q̃}. Since the conditional measure λ su on W su Q̃ (q̃) is Borel regular, for every ǫ > 0 there are numbers r 1 < r 2 < u 0 so that For these numbers r 1 < r 2 , all requirements in the lemma hold true. This shows the lemma. Remark: Since Ã is dense in Q 1 (S) and the map F : Ã → ∂C(S) is continuous and closed, the sets K ⊂ C ⊂ W su Q̃ (q̃) have dense interior. Moreover, we may assume that their boundaries have vanishing Lebesgue measures. Let again Q̃ ⊂ Q 1 (S) be a component of the preimage of Q.
For q ∈ Q let q̃ be a preimage of q in Q̃ and let |Stab(q̃)| be the cardinality of the quotient of Stab(q̃) by the normal subgroup of all elements of Stab(q̃) which fix Q̃ pointwise (for example, the hyperelliptic involution of a closed surface of genus 2 acts trivially on the entire bundle Q 1 (S)). We note Proof. The mapping class group preserves the Teichmüller metric on T (S) and hence an element h ∈ Mod(S) which stabilizes a quadratic differential q̃ ∈ Q 1 (S) fixes pointwise the Teichmüller geodesic with initial cotangent q̃. Therefore the set S is Φ t -invariant, moreover it is clearly open. Since the Teichmüller flow on Q has dense orbits, either S is empty or dense. However, Mod(S) acts properly discontinuously on T (S) and consequently the first possibility is ruled out by the fact that the conjugacy class of an element of Mod(S) which fixes an entire component of the preimage of Q does not contribute towards |Stab(q̃)|. For a control of the measure µ we use a variant of an argument of Margulis [Mar04]. Namely, for numbers R 1 < R 2 let Γ(R 1 , R 2 ) be the set of all periodic orbits of Φ t which are contained in Q, with prime periods in the interval (R 1 , R 2 ). For an open or closed subset V of Q and numbers R 1 < R 2 define H(V, R 1 , R 2 ) = Σ γ∈Γ(R 1 ,R 2 ) ∫ γ χ(V ) dt where χ(V ) is the characteristic function of V . To obtain control on the quantities H(V, R 1 , R 2 ) we use a tool from [ABEM10]. Namely, every leaf W ss (q) of the strong stable foliation of Q(S) can be equipped with the Hodge distance d H (or, rather, the modified Hodge distance, [ABEM10]). This Hodge distance is defined by a norm on the tangent space of W ss (q) (with a suitable interpretation). In particular, closed d H -balls of sufficiently small finite radius are compact, and balls about a given point q define a neighborhood basis of q in W ss (q). We also obtain a Hodge distance on the leaves of the strong unstable foliation as the image under the flip F of the Hodge distance on the leaves of the strong stable foliation.
These Hodge distances restrict to Hodge distances on the leaves of the foliations W ss Q , W su Q which we denote by the same symbol d H . The following result is Theorem 8.12 of [ABEM10]. Theorem 5.5. There is a number c H > 0 such that d H (Φ t q, Φ t q ′ ) ≤ c H d H (q, q ′ ) for all q ∈ Q(S), q ′ ∈ W ss (q) and all t ≥ 0. The next lemma provides some first volume control for the measure µ. Lemma 5.6. For every recurrent point q ∈ Q with |Stab(q)| = 1, for every neighborhood V of q in Q and for every ǫ > 0 there is a number t 0 > 0 and there is an Proof. We use the strategy of the proof of Lemma 6.1 of [Mar04]. The idea is to find for every recurrent point q ∈ Q with |Stab(q)| = 1, for every neighborhood V of q in Q and for every ǫ ∈ (0, 1) some number t 0 > 0 and closed neighborhoods Z 1 ⊂ Z 2 ⊂ Z 3 ⊂ V 0 ⊂ V of q in Q with dense interior such that for all sufficiently large R > 0 the following properties hold. (1) V 0 is connected and has a local product structure. (3) Let z ∈ Z 1 and assume that Φ τ z = z for some τ ∈ (R − t 0 , R + t 0 ). Let E be the component containing z of the intersection Φ τ V 0 ∩ V 0 and let and the length of the connected orbit subsegment of (∪ t∈R Φ t z) ∩ Z 1 containing z equals 2t 0 . (4) There is at most one periodic orbit for Φ t of prime period σ ∈ (R − t 0 , R + t 0 ) which intersects E, and the intersection of this orbit with E is connected. The construction is as follows. Let q ∈ Q be recurrent with |Stab(q)| = 1 and let V be a neighborhood of q in Q. Using the notations from Subsection 4.1, for ǫ > 0 there are numbers a 0 < a Q (q), t 0 < min{t Q (q)/4(1 + ǫ), log(1 + ǫ)/h} such that is a set with a local product structure. Since periodic orbits for Φ t are in bijection with conjugacy classes of pseudo-Anosov elements of Mod(S), up to making a 0 smaller we may assume that the following holds true.
For every r > 8t 0 , every component of the intersection Φ r V 0 ∩ V 0 is intersected by at most one periodic orbit for the Teichmüller flow with prime period contained in the interval [r − 2t 0 , r + 2t 0 ], and if such an orbit exists then its intersection with Φ r V 0 ∩ V 0 is connected. As in (11) of Section 4, for z ∈ V 0 let θ z : B ss Q (q, a 0 ) → W ss Q,loc (z) be defined by the requirement that θ z (u) ∈ W u Q,loc (u) for all u. Similarly, as in (10) of Section 4, let ζ z : B su Q (q, a 0 ) → W su Q,loc (z) be defined by ζ z (u) ∈ W s Q,loc (u). We claim that for sufficiently small a 1 < a 0 and for every z ∈ V 1 = V (B ss Q (q, a 1 ), B su Q (q, a 1 ), t 0 ) the following holds true. a) The Jacobian of the embedding θ z : B ss Q (q, a 1 ) → W ss Q,loc (z) and of the embedding ζ z : B su Q (q, a 1 ) → W su Q,loc (z) with respect to the measures λ ss and λ su , respectively, is contained in the interval [(1 + ǫ) −1 , 1 + ǫ]. b) The restriction to V 1 of the function σ defined in (12) takes values in the interval [−(log(1 + ǫ))/h, (log(1 + ǫ))/h]. c) If z ∈ V (B ss Q (q, a 1 ), B su Q (q, a 1 )) and if t > 8t 0 is such that Here and in the sequel, for z ∈ V 1 we denote by V 1 ∩ W s Q,loc (z) the connected component containing z of the intersection V 1 ∩ W s Q (z). To verify the claim, note first that property b) can be fulfilled since σ is continuous and Φ t -invariant and equals one at q. Property a) is fulfilled for sufficiently small a 1 since the measures λ s (or λ u ) are invariant under holonomy along the strong unstable (or the strong stable) foliation and since dλ s = dλ ss × dt and dλ u = dλ su × dt, and hence the Jacobians of the maps θ z , ζ z are controlled by the function σ. By Property b) above and by Theorem 5.5, the last property is fulfilled if we choose a 1 > 0 small enough so that for some r > 0 and every z ∈ V 1 the following is satisfied.
For every u ∈ V 1 the diameter of θ u (B ss Q (q, a 1 )) with respect to the Hodge distance does not exceed r, and the Hodge distance between θ u (B ss Q (q, a 1 )) and the boundary of θ u (B ss Q (q, a 0 )) is not smaller than c H r. Since h ≥ 1, Property b) implies the following. For all closed sets A i ⊂ B i Q (q, a 1 ) (i = ss, su) and for every z ∈ V (B ss Q (q, a 1 ), B su Q (q, a 1 )) we have Moreover, we have By the estimate (4) in Section 2, there is a number κ > 0 such that for any two points u, x ∈ T (S) with d T (u, x) ≤ 1 the distances δ u , δ x on ∂C(S) are e κ -bilipschitz equivalent. LetQ be a component of the preimage of Q in Q 1 (S). Letq ∈Q be a lift of q. Choose closed neighborhoods K ss ⊂ C ss ⊂ B ss Q (q, a 1 ) ⊂ B ss Q (q, a 0 ) ofq whose images under the flip F satisfy the properties in Lemma 5.3 for some numbers 0 < r 1 < r 2 < α 0 /2e κ where α 0 > 0 is as in Lemma 5.2. Choose also closed neighborhoodsK su ⊂C su ⊂ B sũ Q (q, a 1 ) ⊂ B sũ Q (q, a 0 ) ofq with the properties in Lemma 5.3 for some numbers 0 <r 1 <r 2 < α 0 /2κ. By the choice of the set V 0 , for any two points u, z ∈ V (C ss ,C su , t 0 (1 + ǫ)) the distances δ P u and δ P z are e κ -bilipschitz equivalent. As a consequence, for all u ∈ V (C ss ,C su , t 0 (1 + ǫ)) the δ P u -diameter of F (F C ss ∩ A) and F (C su ∩ A) does not exceed α 0 /2. Let ρ 0 ∈ (0, min{(r 2 − r 1 )/2, (r 2 −r 1 )/2}). By assumption, q is recurrent and hence by Lemma 5.2, applied to bothq and −q = F (q), there is a number R 0 > 0 so that for every R ≥ R 0 and for every z ∈ B sũ Q (q, a 1 ) with d T (P Φ R z, P Φ Rq ) ≤ 1 we have δ P Φ R z ≤ ρ 0 δ P z /α 0 on F (F C ss ∩Ã) and (20) Moreover, there is a mapping class h ∈ Stab(Q) and a number R 1 > R 0 such that Φ R1q is an interior point of hV (K ss ,K su ). By equivariance under the action of the mapping class group, for every u ∈ hV (C ss ,C su ) the δ P u -diameter of F (hV (C ss ,C su ) ∩Ã) is smaller than α 0 /2. 
In particular, the δ P Φ R 1q -diameter of F (hC su ∩Ã) is smaller than α 0 /2. The second part of inequality (20) then implies that the δ Pq -diameter of F (hC su ∩Ã) does not exceed ρ 0 . Thus by Property c) above, by the choice of ρ 0 and by Lemma 5.3, we have F (hC su ∩Ã) ⊂ F (C su ∩Ã). Define Thenq is an interior point of K su (as a subset of W sũ Q,loc (q)), and K su , C su are precisely invariant under Stab(q) (since a non-trivial element of Stab(q) fixesQ pointwise). The conditional measures λ su are invariant under holonomy along the strong stable foliation and transform under the Teichmüller flow by λ su • Φ t = e ht λ su . Moreover, λ su (K su ) ≥ λ su (C su )(1 + ǫ) −1 and hence properties a) and b) above and the definition of the function σ imply that and let Z i be the projection ofZ i to Q. Note that we have Z 1 ⊂ Z 2 ⊂ Z 3 and by the choice of K ss , C ss , by the estimate in a) above, by invariance of λ under the flow Φ t (which implies that λ(Z 3 ) ≤ λV (C ss , C su , t 0 )(1 + ǫ) 2 ) and by the fact that Z i is mapped homeomorphically onto Z i for i = 1, 2, 3. Moreover, each of the sets Z i is closed with dense interior. Let R > R 1 + t 0 and let z ∈ Z 1 be a periodic point for Φ t of period r ∈ [R − t 0 , R + t 0 ]. Since every orbit of Φ t which intersects Z 1 also intersects V (K ss , K su ) we may assume that z ∈ V (K ss , K su ). LetÊ be the component containing z of the intersection Φ r V 0 ∩ V 0 and let We claim that To see that this is indeed the case, letz ∈Z 1 be a lift of z. By the choice of the set C su and by the first part of the estimate (20), the δ P Φ Rz -diameter of the set F (F Φ R C ss ∩Ã) does not exceed ρ 0 . 
In particular, since z ∈ Z 1 and Property c) above holds true, we have Let D ⊂ C ss be such that Then by the estimate (18) and by (23), we have Now by the estimate (19) and the fact that Φ r preserves the stable foliation and contracts the measures λ s by the factor e −hr , we conclude that λ(Q 1 ) ≥ e −hr λ ss (K ss )λ su (K su )/2t 0 (1 + ǫ) 6 and similarly λ(Q 2 ) ≤ e −hr λ ss (K ss )λ su (C su )(1 + ǫ) 6 /2t 0 . Together with the estimate (19) this implies the estimate (21). On the other hand, if z ≠ z ′ ∈ Z 0 are periodic points of prime periods r, s ∈ [R − t 0 , R + t 0 ] then by our choice of V 0 the components containing z, z ′ of the intersection Φ r V 0 ∩ V 0 are disjoint. Thus there are at most such intersection arcs which are subarcs of periodic orbits of prime period in [R − t 0 , R + t 0 ]. However, since the Lebesgue measure λ is mixing for the Teichmüller flow [M82, V86], for sufficiently large R we have From this we deduce that for all sufficiently large R > 0. This shows the lemma. Now we are ready for the proof of Proposition 5.1. Proof of Proposition 5.1. Let µ be a weak limit of the measures µ R as R → ∞. Then µ is an (a priori locally infinite) Φ t -invariant Borel measure supported in the closure Q̄ of Q. This measure is moreover invariant under the flip F : q → −q. By Lemma 5.6 it suffices to show the following. Let A ⊂ Q be a closed Φ t -invariant set of vanishing Lebesgue measure. Then for all ǫ > 0, every q ∈ A has a neighborhood U in Q such that µ(A ∩ U ) < ǫ. First let q ∈ A ∩ Q. Choose compact balls B i ⊂ C i ⊂ W i Q,loc (q) about q for the Hodge distance of radius r 1 > 0, r 2 > 2c H r 1 > 0 (i = ss, su) and numbers t 0 > 0, δ > 0 such that V 3 = V (C ss , C su , t 0 (1 + δ)) is a set with a local product structure. In particular, for every preimage q̃ of q in Q 1 (S) the component of the preimage of V 3 containing q̃ is precisely invariant under Stab(q̃). Then are closed neighborhoods of q in Q.
Let moreover We may assume that for one (and hence every) component Ṽ 3 of the preimage of V 3 in Q 1 (S) the diameter of the projection P Ṽ 3 of Ṽ 3 to T (S) does not exceed one. As in the proof of Lemma 5.6 we require that moreover the following holds true. That this requirement can be met follows from Theorem 5.5 and the discussion in the proof of Lemma 5.6. If q ∈ Q̄ − Q then we choose closed neighborhoods of q in Q as in Proposition 4.3 such that ∪ i B j i and ∪ i C j i are the intersections with W j Q,loc (q) of closed balls for the Hodge norm. We require that property ( * ) above holds true (with a slight abuse of notation). Let u ∈ V 1 and let r > 0 be such that Φ r u = u. Let Y be the connected component containing u of the intersection V 3 ∩ Φ r (V 2 ). By the property ( * ), we have Y ⊃ Φ r (V 1 ∩ W s Q,loc (u)). Moreover, the connected component containing u of the intersection V 3 ∩ Φ r (V 2 ∩ W u Q,loc (u)) contains the component containing u of the intersection W u Q,loc (u) ∩ V 0 . Thus as in the proof of Lemma 5.6, we observe that for any point u ∈ V 0 and every r > 0 such that Φ r u = u the Lebesgue measure of the intersection Φ r V 2 ∩ V 3 is bounded from below by e −hr χ where χ > 0 is a fixed constant which only depends on V 1 , V 2 , V 3 . Moreover, the number of periodic points z ∈ V 1 of period s ∈ [r − t 0 , r + t 0 ] such that the intersection components Φ r V 2 ∩ V 3 , Φ s V 2 ∩ V 3 containing u, z are not disjoint is bounded from above by the cardinality of Stab(q̃) where q̃ is a preimage of q in Q 1 (S). For q, z ∈ Q and t > 0 write q ≈ t z if there are lifts q̃, z̃ of q, z to Q 1 (S) such that d(P Φ s q̃, P Φ s z̃) < 1 for 0 ≤ s ≤ t. Write moreover q ∼ u z if there are lifts q̃, z̃ of q, z to Q 1 (S) such that d(q̃, z̃) < 1, d(P Φ u q̃, P Φ u z̃) < 1. Note that if y ≈ t z then also y ∼ t z. For a subset D of Q define U t (D) = {z | z ≈ t y for some y ∈ D} and Y u (D) = {z | z ∼ u y for some y ∈ D}.
For j > 0 define Then for each j > 0 and k > 0, Z j is an open neighborhood of A ∩ V 1 in V 1 , and W j,k is an open neighborhood of A ∩ V 1 in Z j . Moreover, we have Z j ⊃ Z j+1 for all j and ∩ j Z j ⊃ A ∩ V 1 . If z ∈ ∩ j Z j − A then there is some y ∈ A and there are lifts z̃, ỹ of z, y to Q 1 (S) such that d(P Φ t (z̃), P Φ t (ỹ)) ≤ 1 for all t ≥ 0. However, up to removing from ∩ j Z j a set of vanishing Lebesgue measure, this implies that z ∈ W ss Q,loc (y) [M82, V86]. But λ(A) = 0 and therefore λ(∩ j Z j ) = λ(A ∩ V 1 ) = 0 by absolute continuity. Since λ is Borel regular, the Lebesgue measures of the sets Z j tend to zero as j → ∞. Similarly, we infer that λ(Z j ) = lim sup k→∞ λ(W j,k ). Thus for every κ > 0 there are numbers j 0 = j 0 (κ) > 0 and k 0 = k 0 (κ) > j 0 such that we have λ(W j,k ) < κ for all j ≥ j 0 , k ≥ k 0 . Now let R > k 0 + 2ǫ and let w ∈ V 1 ∩ Z j 0 be a periodic point for Φ t of prime period r ∈ [R − ǫ, R + ǫ]. Let Z be the component of Φ r V 2 ∩ V 3 containing w. Then every point in Z is contained in W j 0 ,R . By Lemma 5.6 and its proof, the Lebesgue measure of this intersection component is bounded from below by χe −hR where χ > 0 is as above. Moreover, the number of periodic points u ≠ w for which these intersection components are not disjoint is uniformly bounded. In particular, there is a number β > 0 not depending on R, j 0 such that the number of these intersection components is bounded from above by βe hR times the Lebesgue measure of W j 0 ,R , i.e. by e hR βκ. This implies that we have µ(Z j 0 ) ≤ βκ/2t 0 . Since κ > 0 was arbitrary we conclude that µ(A ∩ V 1 ) = 0. Proposition 5.1 follows.

Proof of the theorem

In this section we complete the proof of the theorem from the introduction. We continue to use the assumptions and notations from Sections 2-5. As before, let Q ⊂ Q(S) be a component of a stratum, equipped with the Φ t -invariant Lebesgue measure λ.
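The goal of this section can be recorded in a single display. The normalisation e −hN below is taken from the statement of Theorem 6.4 at the end of this section; the precise scaling convention is our reading of the definition of µ R in Section 5 and should be treated as an assumption:

```latex
% Bowen's construction for the Teichmueller flow on the stratum component Q:
% the normalised sums of the Lebesgue measures \sigma(\gamma) supported on the
% periodic orbits of period at most N converge (weakly) to the
% \Phi_t-invariant Lebesgue measure \lambda on Q.
\lambda \;=\; \lim_{N\to\infty} e^{-hN} \sum_{\gamma\in\Gamma(N)} \sigma(\gamma)
```

Here Γ(N) denotes the set of periodic orbits for Φ t in Q of period at most N, and σ(γ) is the Lebesgue measure supported on γ of total mass the prime period ℓ(γ), as in Section 5.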
Let S ⊂ Q be the open dense Φ t -invariant subset of full Lebesgue measure of all points q with |Stab(q)| = 1. Then S is a manifold. Let q ∈ S and let U ⊂ S be an open relative compact contractible neighborhood of q. For n > 0 define a periodic (U, n)-pseudo-orbit for the Teichmüller flow Φ t on Q to consist of a point x ∈ U and a number t ∈ [n, ∞) such that Φ t x ∈ U . We denote such a periodic pseudo-orbit by (x, t). A periodic (U, n)-pseudo-orbit (x, t) determines up to homotopy a closed curve beginning and ending at x which we call a characteristic curve (compare Section 4 of [H10b]). This characteristic curve is the concatenation of the orbit segment {Φ s x | 0 ≤ s ≤ t} with a smooth arc in U which is parametrized on [0, 1] and connects the endpoint Φ t x of the orbit segment with the starting point x. Recall from Section 5 the definition of a recurrent point for the Teichmüller flow on Q. Lemma 4.4 of [H10b] shows Lemma 6.1. There is a number L > 0 and for every recurrent point q ∈ S there is an open relative compact contractible neighborhood V of q in S and there is a number n 0 > 0 depending on V with the following property. Let (x, t 0 ) be a periodic (V, n 0 )-pseudo-orbit and let γ be a lift to Q 1 (S) of a characteristic curve of the pseudo-orbit. Then the curve t → Υ T (P γ(t)) is an infinite unparametrized L-quasi-geodesic in C(S). Remark: Lemma 4.4 of [H10b] is formulated for Q(S) rather than for a component of a stratum. However, the statement and its proof immediately carry over to the result formulated in Lemma 6.1. Note that β(q, t) depends on the choice of the map Υ T (and on the choice of the lift q̃). However, by Lemma 3.3 of [H10a], there is a continuous function β̃ : Q × [0, ∞) → R and a number a > 0 such that |β(q, t) − β̃(q, t)| ≤ a for all (q, t). In particular, the values lim inf t→∞ (1/t) β(q, t) and lim sup t→∞ (1/t) β(q, t) are independent of any choices made and coincide with the corresponding values for β̃.
We use this observation to show Lemma 6.2. There is a number c > 0 such that for λ-almost every q ∈ Q we have lim t→∞ (1/t) β(q, t) = c. Proof. It suffices to show the lemma for the continuous function β̃. By the choice of a > 0 and by the triangle inequality, we have for all q ∈ Q, s, t ∈ R. Therefore the subadditive ergodic theorem shows that for λ-almost all q ∈ Q the limit lim t→∞ (1/t) β̃(q, t) exists and is independent of q. We are left with showing that this limit is positive. By Lemma 2.4 of [H10a], there is a number r > 0 such that for every z ∈ Q 1 (S) and all t ≥ s ≥ 0 we have Let q ∈ Q be a periodic point for Φ t . Then there is a number b > 0 such that for every lift q̃ of q to Q 1 (S) the map t → Υ T (P Φ t q̃) is a biinfinite b-quasi-geodesic in C(S) [H10a]. Thus by inequality (2) and continuity of Φ t we can find an open neighborhood U ⊂ Q of q and a number T > 0 such that β(u, T ) ≥ 3r + 3a for all u ∈ U . Now if z ∈ Q and if n > k > 0 are such that the cardinality of the set of all numbers i ≤ n with Φ T i z ∈ U is not smaller than k then β̃(z, nT ) ≥ kr. The measure λ is Φ T -invariant and ergodic, and λ(U ) > 0. Thus by the Birkhoff ergodic theorem, the proportion of time a typical orbit for the map Φ T spends in U is positive. The lemma follows. The next proposition is the main remaining step in the proof of the theorem from the introduction. Proof. Let q ∈ S be recurrent and let V be an open neighborhood of q which satisfies the conclusion of Lemma 6.1 for some n 0 > 0. Let ǫ > 0. With the notations from Section 4, let a 0 < a Q (q), t 0 < min{t Q (q), log(1+ǫ)/2h, ǫ/4} be such that V 0 = V (B ss Q (q, a 0 ), B su Q (q, a 0 ), t 0 ) ⊂ V .
Choose a number a 1 < a 0 which is sufficiently small that for every z ∈ V 1 = V (B ss Q (q, a 1 ), B su Q (q, a 1 ), t 0 ) the Jacobian at z of the homeomorphism V (B ss Q (q, a 1 ), B su Q (q, a 1 ), t 0 ) → B ss Q (q, a 1 ) × B su Q (q, a 1 ) × [−t 0 , t 0 ] with respect to the measures λ and λ ss × λ su × dt is contained in the interval [(1 + ǫ) −1 , 1 + ǫ]. We may assume that any two points in a component Ṽ 1 of the preimage of V 1 can be connected in Ṽ 1 by a smooth curve whose projection to T (S) is of length at most ǫ/2. Let α 0 > 0 be as in Lemma 5.2. Let q̃ be a lift of q to a component Q̃ of the preimage of Q in Q 1 (S). Recall from Section 2 the definition of the map F : Ã → ∂C(S). Since q is recurrent, the horizontal and the vertical measured geodesic laminations of q̃ are uniquely ergodic [M82]. Let be neighborhoods of q as in the proof of Lemma 5.6 and let Z̃ 1 ⊂ Z̃ 2 ⊂ Z̃ 3 ⊂ Ṽ 1 be components of lifts of Z 1 ⊂ Z 2 ⊂ Z 3 ⊂ V 1 to Q̃ which contain q̃. These sets have the following property. (4) There is a number ρ > 0 with the following property. If z ∈ Z̃ 1 and if C ⊂ B su Q̃ (z, a 1 ) (or C ⊂ B ss Q̃ (z, a 1 )) is an open neighborhood of z such that the δ P z -diameter of F (C ∩ Ã) (or of F (F (C) ∩ Ã)) is not bigger than ρ then C ⊂ Z̃ 3 and the Φ t -orbit of every point of C intersects Z̃ 3 in an arc of length 2t 0 . Let Π : Q̃ → Q be the canonical projection. By Lemma 6.2 and Lemma 5.2, there is a number T > 0 and there is a Borel subset Z 0 ⊂ Z 1 ∩ Π(Ã) with λ(Z 0 ) > λ(Z 1 )/(1 + ǫ) such that for every z ∈ Z̃ 0 = Z̃ 1 ∩ Π −1 (Z 0 ) and every t ≥ T we have δ P z ≤ ρδ P Φ t z /e κ on D(Φ t z, α 0 ) where κ > 0 is as in the estimate (4). We may assume that Z 0 = V (A 0 , K su , t 0 ) for some Borel set A 0 ⊂ K ss . In particular, we conclude as in the proof of Lemma 5.6 (see the estimate (21)) that (with some a priori adjustment of the constant ǫ) the following holds true. Let z ∈ Z 0 and let t ≥ T be such that Φ t z ∈ Z 1 .
Let Ê be the connected component containing Φ t z of the intersection Φ t V 1 ∩ V 1 . Then the Lebesgue measure of the intersection Φ t Z 2 ∩ Z 3 ∩ Ê is not bigger than e −ht λ(Z 1 )(1 + ǫ) 3 ≤ e −ht λ(Z 0 )(1 + ǫ) 4 . Together this implies that the number of such intersection components is at least e ht λ(Z 0 )/(1 + ǫ) 5 . Next we claim that for sufficiently large n ≥ T and for a point z ∈ Z 0 with Φ n z ∈ Z 1 there is a periodic orbit for the flow Φ t which intersects Z 3 in an arc of length at least 2t 0 and whose period is contained in the interval [n − ǫ, n + ǫ]. To this end let n 1 > max{n 0 , T }; then the conclusion of Lemma 6.1 is satisfied for every periodic (Z 1 , n 1 )-pseudo-orbit beginning at a point z ∈ Z 0 ⊂ V . Let u ∈ Z 0 be such that Φ n u ∈ Z 1 for some n > n 1 . Let γ be a characteristic curve of the periodic (Z 1 , n 1 )-pseudo-orbit (u, n) which we obtain by connecting Φ n u with u by a smooth arc contained in Z 1 . Up to replacing n by R = n + τ for some τ ∈ [−2t 0 , 2t 0 ] ⊂ [−ǫ/2, ǫ/2] we may assume that u ∈ V (K ss , K su ), Φ R u ∈ V (K ss , K su ). Let γ̃ be a lift of γ to Q̃ with starting point γ̃(0) ∈ Z̃ 0 . Then γ̃ is invariant under a mapping class g ∈ Mod(S) whose conjugacy class defines the homotopy class of γ in S. A fundamental domain for the action of g on γ̃ projects to a smooth arc in T (S) of length at most R + ǫ/2 < n + ǫ. By Lemma 6.1 and the choice of Z 0 , R the curve t → Υ T (P γ̃(t)) is an unparametrized L-quasi-geodesic in C(S) of infinite diameter. Up to perhaps a uniformly bounded modification, this quasi-geodesic is invariant under the mapping class g ∈ Mod(S), and g acts on the quasi-geodesic Υ T (P γ̃) as a translation. As a consequence, g acts on C(S) with unbounded orbits and hence it is pseudo-Anosov. By invariance of γ̃ under g, the attracting fixed point of g is just the endpoint of Υ T (P γ̃) in ∂C(S).
Since g is pseudo-Anosov, there is a closed orbit ζ for Φ t on Q(S) which is the projection of a g-invariant flow lineζ for Φ t in Q 1 (S). The length of the orbit is at most R + ǫ. The image under the map Υ T P of the orbitζ in Q 1 (S) is an unparametrized p-quasi-geodesic in C(S) which connects the two fixed points for the action of g on ∂C(S). Together this implies the above claim. As a consequence, the attracting fixed point ξ for the action of the pseudo-Anosov element g on ∂C(S) is contained in the ball D(γ(0), ρ), moreover it is contained in the closure of the set F (W sũ Q (q) ∩Ã) ⊂ F (à ∩Q). The same argument also shows that the repelling fixed point of g is contained in the intersection of D(−γ(0), ρ) with the closure of F (F W ss Q (q) ∩Ã) ⊂ F (à ∩Q). Since the map F is closed we conclude that the axis of g is contained in the closure ofQ. Sinceγ(0) ∈ Z 1 , by property 4) above, this axis passes through the liftZ 3 of Z 3 containingq. In other words, the projection of this axis to Q passes through Z 3 , and, in particular, it is contained in Q. Moreover, it intersects the component of Φ R Z 1 ∩ Z 3 which contains Φ R u. As a consequence, the length of the axis is contained in [R − ǫ/2, R + ǫ/2] ⊂ [n − ǫ, n + ǫ]. To summarize, there is an injective assignment which associates to every R > n 0 and to every connected component of the intersection Φ R Z 1 ∩ Z 1 for R > n 0 > T which contains points in Φ R Z 0 ∩ Z 0 a subarc of length 2t 0 of the intersection with Z 3 of a periodic orbit for Φ t whose period is contained in [n − ǫ, n + ǫ]. Together with the above discussion, this completes the proof of the proposition. We use Proposition 6.3 to complete the proof of our theorem from the introduction. Theorem 6.4. The Lebesgue measure on every stratum Q is obtained from Bowen's construction. Proof. By Proposition 5.1 and Proposition 6.3, it suffices to show the following. Let q ∈ Q be birecurrent and let ǫ > 0. 
For R > 0 let Γ(R) be the set of all periodic orbits of Φ t in Q of period at most R. Then there is a compact neighborhood K of q in Q and there is a number n > 0 such that for every N > n the measure µ N = e −hN Σ γ∈Γ(N ) σ(γ) assigns the mass µ N (K) ∈ [(1 − ǫ)λ(K), (1 + ǫ)λ(K)] to K. However, this holds true by Proposition 5.1 and Proposition 6.3. This completes the proof of the theorem. Acknowledgement: This work was carried out in fall 2007 while I participated in the program on Teichmüller theory and Kleinian groups at the MSRI in Berkeley. I thank the organizers for inviting me to participate, and I thank the MSRI for its hospitality. I also thank Juan Souto for raising the question which is answered in this note.
Relationships of temperature and biodiversity with stability of natural aquatic food webs Temperature and biodiversity changes occur in concert, but their joint effects on ecological stability of natural food webs are unknown. Here, we assess these relationships in 19 planktonic food webs. We estimate stability as structural stability (using the volume contraction rate) and temporal stability (using the temporal variation of species abundances). Warmer temperatures were associated with lower structural and temporal stability, while biodiversity had no consistent effects on either stability property. While species richness was associated with lower structural stability and higher temporal stability, Simpson diversity was associated with higher temporal stability. The responses of structural stability were linked to disproportionate contributions from two trophic groups (predators and consumers), while the responses of temporal stability were linked both to synchrony of all species within the food web and distinctive contributions from three trophic groups (predators, consumers, and producers). Our results suggest that, in natural ecosystems, warmer temperatures can erode ecosystem stability, while biodiversity changes may not have consistent effects. more robust to parameter changes, which is indicative of a higher structural stability 5,6 . Temporal stability focuses on temporal variation of species abundance; a smaller temporal variation indicates higher temporal stability 7,8 . At present, these two stability indices are often examined separately, and their joint responses are largely overlooked. Temperature increases are known to alter ecological parameters of a system such as species interactions and intrinsic growth rates. When, for example, temperature increases consumption rates [9][10][11] , a change of structural stability can ensue, as shown by a recent study on a coastal community of competing species 5 . 
These findings might not readily translate to multitrophic community types such as food webs, where different trophic levels have different sensitivities to temperature changes 12,13 . To our knowledge, no study examines the effects of temperature on the structural stability of food webs. Ecological parameter changes following a temperature increase can also lead to effects on species abundance 12,13 , and lead to different species showing different degree of variability in population abundance 14,15 . Indeed, temperature can alter temporal stability of food webs, and negative 14 , neutral 15 , and positive 16 effects have been documented in experimental and simulation studies 14,16,17 . Changes of synchrony of all species within food webs 18,19 can explain these effects, and sometimes there are disproportionate changes in temporal stability of specific trophic groups (e.g., producers, consumers, and predators) 15 that drive these effects. How consistent these mechanisms are across food webs is uncertain, as available studies have focused on single food webs. Warming goes hand in hand with biodiversity change [20][21][22] , another key factor shaping ecological stability [23][24][25] . To our knowledge, direct evidence for effects of biodiversity metrics in general on the structural stability in food webs is absent, while higher biodiversity has been shown to have positive 7,26 or neutral 27 effects on temporal stability. How warmer temperatures and biodiversity changes jointly affect ecological stability is uncertain, as most studies only focus on one of these two factors. The co-occurrence of both global changes in natural ecosystems warrants an integrative approach to studying their joint effects [20][21][22] , so as to realistically forecast stability, and thus related functional ecosystem features and services [28][29][30] . 
Current conclusions on the effects of warming or biodiversity on ecological stability have mostly been based on short-term experiments or model simulations that consider a limited range of temperatures 31,32. Translating these results to natural ecosystems is challenging. First, species interactions and their responses to temperature can change through time 33,34. Focusing on short-term effects therefore precludes capturing the adaptation of thermal reaction norms and the waxing and waning of species interactions through time 32,35,36, which has been repeatedly observed in natural ecosystems in response to environmental change 34,37,38. Second, structural stability is traditionally computed from predefined model equations 39, often assuming systems to converge to point equilibria. However, communities in natural ecosystems often exhibit more complex dynamics 24, with species interactions varying with system state, making structural stability a dynamic property 5. Techniques to study this dynamic behaviour are now available, and their application to field data has made it possible to study the effect of environmental variables on structural stability 5,40,41. In this study, we quantified the structural and temporal stability of natural food webs by collating a total of 19 long-term data sets from Europe and North America (seven from freshwater lakes, three from marine systems, and nine from river estuaries), each spanning 10 to nearly 30 years of data. First, we applied empirical dynamic modelling (EDM) to the time series of the 19 planktonic food webs to infer trophic interactions among consumers and resources from recorded population dynamics, thereby reconstructing the interaction networks. Next, we estimated time-specific net species interactions by using the multiview distance regularised S-map, thereby reconstructing the dynamics of the Jacobian matrix and showing how structural stability changes through time (Fig. 1).
Temporal stability in each of the 19 food webs was estimated as the coefficient of variation of total community abundance. Finally, we examined the relationships between temperature/biodiversity and the two stability indices (structural and temporal stability). We found that higher temperatures were associated with lower structural and temporal stability, while biodiversity indices had no consistent associations. We interpret these associations as evidence of temperature and biodiversity effects, and we use both terms ('associations' and 'effects') hereafter to represent the associations. While we acknowledge the correlational nature of temperature and biodiversity effects on stability in our study, we believe that our interpretation is supported by existing knowledge on temperature and biodiversity effects and the nature of our study design (Fig. 1). Finally, different trophic groups (predators, consumers, or producers) had different contributions to structural and temporal stability. Synchrony of all species within the food web had a positive effect on the food web's temporal stability.

Results

We first quantified the time-varying Jacobian matrix of each food web with the multiview distance regularised S-map 41, from which the structural stability of each food web was measured as the volume contraction rate (VCR), which is the divergence of a vector field and is equivalent to the trace (Tr(J)) of the Jacobian matrix 5,6 (see Methods). Smaller values of Tr(J) (i.e., VCR) indicate lower sensitivity to parameter perturbations 6, i.e., a higher structural stability 5,6. Next, we computed the temporal stability of each food web as the coefficient of variation of total community abundance, by using a time window of 1.5 years. A larger coefficient of variation indicates lower temporal stability.
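Both stability indices can be computed directly from the inferred Jacobians and the abundance time series. A minimal numpy sketch (function names are ours, not the authors'):

```python
import numpy as np

def structural_stability(jacobians):
    """Volume contraction rate at each time point: VCR = Tr(J).
    A larger trace means lower structural stability."""
    return np.array([np.trace(J) for J in jacobians])

def temporal_stability_cv(abundance, window=6):
    """Coefficient of variation of total community abundance over a
    moving window (6 seasonal points ~ 1.5 years).
    A larger CV means lower temporal stability."""
    total = abundance.sum(axis=1)  # total community abundance per sample
    return np.array([
        total[t:t + window].std(ddof=1) / total[t:t + window].mean()
        for t in range(len(total) - window + 1)
    ])

# toy example: 12 seasonal samples of a 3-species community
rng = np.random.default_rng(0)
abund = rng.uniform(1.0, 10.0, size=(12, 3))
jacs = [rng.normal(size=(3, 3)) for _ in range(12)]
vcr = structural_stability(jacs)
cv = temporal_stability_cv(abund, window=6)
```

The toy Jacobians here stand in for the matrices inferred by the MDR S-map; in the actual analysis one trace value and one windowed CV value are obtained per time point per food web.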
Across food webs, we found that temperature was consistently associated with lower structural and temporal stability of food webs, as temperature increases resulted in a higher Tr(J) and a higher coefficient of variation (Fig. 2a, d). In contrast, species richness was associated with lower and higher structural and temporal stability, respectively (Fig. 2b, e). Simpson diversity was only associated with higher temporal stability (Fig. 2c, f). These trends were robust to changing the length of the time window used to compute temporal stability of food webs (Fig. S1), and to the inclusion of rare species, which are normally excluded from similar analyses (Fig. S2). Within food webs, temperature and biodiversity effects on structural and temporal stability were system dependent (Fig. 2). In 13 (11) out of 19 food webs, temperature had negative effects on structural (temporal) stability (Fig. 2a, d). Similarly, in 14 (17) out of 19 food webs, species richness had negative (positive) effects on structural (temporal) stability (Fig. 2b, e). Simpson diversity had positive effects on temporal stability in 13 out of 19 food webs (Fig. 2f). Structural stability of food webs did not vary along latitudes, while temporal stability was higher at higher latitudes (Fig. S3). Finally, we found that temperature had no effect on either biodiversity index (Fig. S4). Thus, warmer temperatures mainly reduced stability directly, and less so indirectly by changing biodiversity (e.g., temperature → biodiversity → stability). The effects of temperature on structural stability of food webs were mostly driven by temperature effects on the contribution from predators. This contribution, which is the sum of those diagonal elements of the Jacobian matrix J that belong to predators and includes the aggregated effects of other species on predator species, increased (Fig. 3a), thus increasing Tr(J) and therefore decreasing structural stability.
We did not find temperature effects on contributions from consumers (Fig. 3d) or producers (Fig. S5a). Species richness increased the contributions from consumers (Fig. 3e), while we found species richness had no effects on the contributions from predators (Fig. 3b) or producers (Fig. S5b). Effects of warmer temperatures and biodiversity on temporal stability of food webs were related to the synchrony of all species within the food web (Fig. 3g-i). Warmer temperature increased this synchrony (Fig. 3g), while species richness and Simpson diversity decreased it (Fig. 3h-i). The increase or decrease of this synchrony was associated with lower or higher temporal stability of food webs, respectively (Fig. S5d). Furthermore, we found that temperature and species richness mostly affected the temporal stability of producers (Fig. S6a-b), which then altered the temporal stability of the whole food web (Fig. S7a). Conversely, Simpson diversity comparably increased the temporal stability of all trophic groups (producers, consumers, predators) (Fig. S6c, f, i), which then increased the temporal stability of the whole food web (Fig. S7a-c). In addition, the contrasting effects of temperature and biodiversity on the temporal stability of trophic groups (Fig. S6a-c, f, i) were also related to contrasting effects on the species synchrony of the corresponding trophic groups (Fig. S8a-c, f, i-l). Moreover, the synchrony of producers determined the synchrony of the whole food web more strongly than did that of consumers or predators (Fig. S7d-f). Finally, temperature and biodiversity also altered specific structural aspects (i.e., link density L/S, with number of links L and number of network nodes S), which in turn affected temporal food web stability (Fig. S9a, b, d). Higher mean temperature was associated with a lower link density, which reduced temporal stability of food webs (Fig. S9a, d).
In contrast, higher mean species richness was related to higher link density, which then increased temporal stability of food webs (Fig. S9b, d). We did not find that other structural aspects (i.e., connectance L/S² and food chain length) had effects on the temporal or structural stability of food webs (Fig. S10).

Discussion

Our results show that warmer temperatures are associated with lower structural and temporal stability, while we found biodiversity had no consistent effects on either stability property. The contributions from predators and consumers, the synchrony of all species within the food web, and the temporal stability of different trophic groups explain these results. Temperature has been found to be a strong driver of structural stability in a competitive coastal community 5; here, we show that temperature reduces structural stability across 19 planktonic food webs. Within food webs, temperature effects on structural stability were system dependent, albeit mostly negative. Lower structural stability at warmer temperatures indicates a lower robustness to parameter perturbations. Temperature effects on those diagonal entries of the Jacobian matrix that belong to predators are possible explanations for the decrease of structural stability we found (Fig. 3a). The biological mechanism explaining the temperature effect on the diagonal entries for predators could involve effects on density-dependent (e.g., consumption rates and self-limitation) or density-independent contributions (e.g., intrinsic growth rates, generally negative for predators) to the per-capita growth rate of predators (see Supplementary Information, Part 1).
[Fig. 1 caption] We reconstruct the interaction network using convergent cross mapping (CCM). c We compute structural stability for each time point as the trace of the local Jacobian matrix Tr(J). The Jacobian matrices are inferred by the multiview distance regularised S-map (MDR S-map). The size of the Jacobian is fixed over time, as s × s (row × column), where s is the number of network nodes in the food web. d The temporal stability of the food web and of each trophic group is computed as the coefficient of variation (CV) of the total community abundance and of each trophic group, respectively, using a time window of 1.5 years. Species asynchrony of the food web and of each trophic group is computed using the moving window. e Time-varying species richness (i.e., the sampled species richness based purely on species presence and absence) and Simpson diversity are calculated from the data, and combined with time-varying temperature data.

Increases of the per-capita growth rate of predators could be attributed to 1) increases of predators' consumption rates, or 2) decreases of predators' intrinsic growth rates, or 3) decreases of predators' self-limitation, or 4) increases of predators' consumption rates that are greater than the increases of the other parameters 2)-3). Numerous studies have shown that temperature can increase predator consumption rates 9,42-44, decrease self-limitation 45, or increase predator consumption rates more than it does other parameters 46,47. For example, Lang et al. (2012) 48 have shown that warming increased consumption by a predacious ground beetle to a greater extent than it increased energy losses. Simulation studies have further shown that such parameter changes can drive predator-consumer systems from stable equilibria to a limit cycle 46,49 or chaos 50,51. We found that species richness decreased structural stability.
A similar result is found when adopting an alternative approach to compute structural stability in ecological communities 52: the dimensionality of parameter space grows as more species are added, which reduces the fraction of that space leading to positive population densities. In addition, the negative effects of species richness on structural stability found in this study (Fig. 2b) can be explained by positive effects of species richness on the contribution from consumers (Fig. 3e). The mechanism behind this result could again be attributed to changes of growth and consumption rates, and self-limitation (see Supplementary Information, Part 2). In contrast, Simpson diversity did not affect structural stability or the diagonal entries of any trophic group (Figs. 2c, 3c, 3f; Fig. S5c). Previous experimental and simulation studies have shown that warmer temperatures can lead to lower temporal food web stability 14,16,17. Here we show that this finding extends to natural food webs, analysing multiple long-term data sets. Lower temporal stability at warmer temperatures indicates a greater degree of variability in species abundance with respect to its mean. Our finding that the synchrony of all species within the food web, the temporal stability of producers, and link density underpin this result supports previous empirical findings 15,18,19,[53][54][55][56]. Warmer temperature was linked to higher synchrony of all species within the food web, which might be caused by warmer temperature intensifying consumer-producer interactions, tightening the control of consumers on producers and resulting in synchronous changes in abundance dynamics 57,58. In addition, higher mean temperature was associated with a lower link density, which could be expected when higher temperature reduces the number of links by favouring predators that are specialists rather than generalists 56.
Higher temperature favouring specialisation in predators is supported by evidence that increasing temperature can increase predator attack rates more than it decreases handling time, by altering the activation energy in the Arrhenius function 56,59. In contrast to temperature, we found that biodiversity reduced the synchrony of all species within the food web, but increased the temporal stability of trophic groups and link density, which therefore increased temporal food web stability, again supporting previous experiments, field studies, and theory 7,26,60-62. Finally, species synchrony (and temporal stability) of producers determined the synchrony (and temporal stability) of the whole food web more than did consumers or predators, supporting recent findings established in short-term experiments 15. Natural food webs are inevitably under-sampled. Most notably, our study focuses on planktonic species, reproducing on time scales of days. Larger (e.g., fish, mammals) and smaller biota (e.g., bacteria) are excluded from our analyses because they were not reported, or were only measured infrequently. We acknowledge that these biota can play critical roles in ecosystem functioning and can mediate planktonic species dynamics 63. Because their abundance could not be accounted for in a consistent way, their contribution in this study is only implicitly present. Given that some studies showed warmer temperatures can reduce the synchrony between fish and lower-trophic planktonic species 64,65, we expect that the explicit inclusion of fish in this analysis might weaken the negative effect of temperature on temporal stability.

[Fig. 2 caption] a-f The bold black lines and error bands depict the significant best-fit trendline and the 95% confidence interval in the linear mixed model (two-sided) across 19 food webs, respectively. The non-bold coloured lines indicate the best-fit trendline in the linear models (two-sided) within each food web. For statistical results see Table S1.
We have applied time series analysis to quantify the effects of temperature and biodiversity on two types of stability in complex food webs. We found that warmer temperatures, in natural ecosystems, were associated with lower structural and temporal stability, while biodiversity had no consistent effect on stability. This suggests that methods assuming ecosystems to exhibit static equilibria may not be sufficient to evaluate how global change affects the stability of networks. Given the increasing amount of available data from natural ecosystems [66][67][68], our work paves the way for the application of long-term monitoring data sets to investigate the effects of (a)biotic factors on ecosystems' structure, function, and stability.

Time series data of food webs

We used 19 long-term time series data sets representing 7 freshwater ecosystems (lakes), 3 marine ecosystems (Western English Channel, Wadden Sea, and Narragansett Bay), and 9 river estuaries to test the responses of food web stability to biodiversity and temperature (Table S2). The data sets were obtained from publicly available sources 74 and the Waikato Regional Council 75. The last dataset, from the Western English Channel, is archived at the British Oceanographic Data Centre (www.bodc.ac.uk) and was obtained upon request from Plymouth Marine Laboratory. We selected the data sets using the following search criteria: (1) the number of trophic levels was at least 2, (2) taxa were identified to species level or to the finest taxonomic level possible (generally species level), and (3) temperature was available. Next, we removed the data sets that were only sparsely sampled (e.g., yearly or semi-annually). We only kept data sets with seasonal sampling of all variables (if multiple samplings were conducted per season, e.g., in monthly and bimonthly sampling data sets, those were averaged).
Here, the seasonal resolution (trimonthly) is the common denominator that can be applied to all 19 data sets, and the seasonal average is also the most representative measure across all data sets, because an equal sampling interval is necessary for comparison across systems and also for EDM analysis 76. After the seasonal averaging, there were two missing points across the whole dataset (accounting for 0.17% of the whole data). One missing point was from Narragansett Bay (winter of 1979), and the other was from the Wadden Sea (summer of 1989). Those 2 missing data points were linearly interpolated using the na.approx function in the zoo package 77. Then, the abundance of each species across all data sets was scaled to the same unit (individuals per litre). Note that fish species were excluded from the analysis, because they were either not reported or sampled only yearly, and because the unit (catch per unit effort) of fish species changed over sampling time. We thus obtained 19 long-term seasonal data sets, consisting mainly of plankton species, spanning from 10 to 30 years, and originating from the continents of North America and Europe (Table S2). Recent work showed that plankton species from natural ecosystems had a greater proportion of chaotic time series than others (e.g., fishes) 78, which indicates that plankton species are well suited to EDM analysis. Then, we divided all species into producers, herbivores, omnivores, and predators by their diet 79,80. Next, we retained the species which were encountered at least once per 1.5 years (at least 1 nonzero abundance data point out of 6 data points) for the analyses, to exclude the low-frequency species whose time series include too many zero values. An excess of zero values is a general statistical problem in time series analysis 76.
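The preprocessing steps described here (averaging to seasonal resolution, then linearly interpolating the two missing points, as zoo::na.approx does in R) can be sketched in numpy; function names are illustrative:

```python
import numpy as np

def seasonal_average(x, samples_per_season):
    """Average consecutive samples down to one value per season,
    ignoring NaNs within a season."""
    n = len(x) // samples_per_season * samples_per_season
    blocks = np.asarray(x, float)[:n].reshape(-1, samples_per_season)
    return np.nanmean(blocks, axis=1)

def interpolate_gaps(x):
    """Linearly interpolate missing values (NaN), mirroring zoo::na.approx."""
    x = np.asarray(x, float)
    idx = np.arange(len(x))
    ok = ~np.isnan(x)
    return np.interp(idx, idx[ok], x[ok])

monthly = [1.0, 3.0, 2.0, 2.0, 4.0, 6.0]       # two seasons of monthly samples
seasonal = seasonal_average(monthly, 3)         # -> [2.0, 4.0]
filled = interpolate_gaps([1.0, np.nan, 3.0])   # -> [1.0, 2.0, 3.0]
```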
Before EDM analysis, all time series data of abundance and temperature were normalised to have zero mean and unit variance, while raw data were used to compute temporal stability because the mean and variance are parts of the equation to compute temporal stability.

Inferring causal interactions among species

For each dataset, we identified the causal links (e.g., competition and predation) across all potential species pairs using convergent cross mapping (CCM) 40 (see three brief videos for a summary introduction: http://tinyurl.com/EDM-intro). CCM is based on Takens's theorem, which proves that, as a generic property, it is possible to construct a shadow version of the original attractor of a dynamical system by substituting time lags of the observable variables for the unknown variables 40,81. An important consequence of this is that if the causal variable X and the affected variable Y belong to the same dynamical system, information on the causal variable X is encoded in the affected variable Y. Thus one can predict the states of the causal variable X using the affected variable Y. CCM infers causality by measuring the extent to which the causal variable X has left signatures in the time series of the affected variable Y, a procedure known as cross-mapping (cross-prediction) 40. In this study, the appropriate embedding dimensions E for cross-mapping were determined by univariate simplex projection 82, examining values of E from 2 to the square root of n, where n is the length of the time series 77. Following Deyle et al. 83, we examined the same range of E across all studied data sets. Thus, we computed n as the geometric mean time series length across all data sets. E was finally examined from 2 to 9 across all data sets. We used simplex projection to select the best E, i.e., the one that gave the lowest mean absolute error 34,77.
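The choice of embedding dimension by simplex projection can be illustrated with a compact leave-one-out implementation; this is a sketch of the standard algorithm, not the authors' exact code:

```python
import numpy as np

def embed(x, E, tau=1):
    """Time-delay embedding: row t is (x(t), x(t-tau), ..., x(t-(E-1)tau))."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[(E - 1 - j) * tau:(E - 1 - j) * tau + n]
                            for j in range(E)])

def simplex_mae(x, E, tau=1, tp=1):
    """Leave-one-out simplex projection: predict x(t+tp) from the E+1
    nearest neighbours in the reconstructed state space; return the MAE."""
    V = embed(x, E, tau)
    targets = x[(E - 1) * tau + tp:]        # x(t+tp) aligned with rows of V
    V = V[:len(targets)]
    errors = []
    for i in range(len(V)):
        d = np.linalg.norm(V - V[i], axis=1)
        d[i] = np.inf                        # leave the focal point out
        nn = np.argsort(d)[:E + 1]           # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn].min(), 1e-12))
        errors.append(abs(np.sum(w * targets[nn]) / w.sum() - targets[i]))
    return float(np.mean(errors))

# chaotic logistic-map series as a stand-in for a plankton time series
x = np.empty(80)
x[0] = 0.4
for t in range(79):
    x[t + 1] = 3.8 * x[t] * (1 - x[t])
best_E = min(range(2, 10), key=lambda E: simplex_mae(x, E))
```

The best E is the one minimising the leave-one-out mean absolute error, matching the selection rule described above.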
The first necessary criterion to test for causality among variables requires that cross-mapping skill in the real time series needs to be higher than that in null surrogates: generated time series containing associations or temporal patterns that are conservatively asserted to be non-causal. Because our data sets come from seasonal field monitoring, following Deyle et al. 83, we used null surrogates designed to factor out seasonality as a contributing factor in CCM. Specifically, for any causal variable X, we calculated the yearly averages of X and seasonal anomalies as the difference between the observed X and this yearly average. Then, we randomly shuffled the seasonal anomalies and added them back to the yearly averages to generate surrogates with randomised time dependence between anomalies. Thus, the new surrogate time series (X sur) have the same seasonal average as X, but with randomly shuffled anomalies. The conservative reasoning described by Deyle et al. 83,84 is that if X indeed causes Y in a manner that extends beyond the effects of seasonality, then Y should be sensitive not only to the seasonal components of X, but also to the anomalies; thus Y should cross-predict the real time series X better than the surrogate time series X sur 83. The analyses in this study are based on generating 100 null seasonal surrogates for X 85. The second necessary criterion for testing causality is convergence towards higher cross-mapping skill as library length (i.e., the number of points used for state space reconstruction) increases 40. Because a longer library length increases the density of points in the reconstructed attractor, the nearest neighbouring points used to make predictions from an attractor will be more accurately determined 82, which in turn leads to improved predictions 40.
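The surrogate construction (seasonal average plus shuffled anomalies) can be sketched as follows; we read the "seasonal average" as the per-season climatological mean, and assume four seasons per year to match the trimonthly resolution:

```python
import numpy as np

def seasonal_surrogates(x, seasons_per_year=4, n_surr=100, seed=0):
    """Null surrogates for CCM that preserve the seasonal cycle of x
    but randomise the timing of its anomalies (after Deyle et al.)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    n = len(x) // seasons_per_year * seasons_per_year
    years = x[:n].reshape(-1, seasons_per_year)    # years x seasons
    season_mean = years.mean(axis=0)               # per-season average
    anomalies = (years - season_mean).ravel()      # observed minus average
    cycle = np.tile(season_mean, n // seasons_per_year)
    return np.array([cycle + rng.permutation(anomalies)
                     for _ in range(n_surr)])

surr = seasonal_surrogates(np.arange(16.0), n_surr=5)
```

Each surrogate keeps the deterministic seasonal cycle (and hence the overall mean) of the original series, while the anomalies are permuted, which is the null hypothesis the cross-map skill is tested against.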
Convergence can be examined by testing whether there is a significant monotonic increasing trend in cross-mapping skill ρ with an increase of library length by Kendall's τ test 86, and whether ρ at the largest library length is significantly higher than that at the smallest library length by Fisher's Z test 86. In this study, library length was set from the smallest (10) to the largest length (i.e., the length of the entire time series). Throughout this paper, an interaction link (e.g., X→Y) was regarded as causal if both of the two criteria above were satisfied: (1) predictive skill ρ in the real time series was higher than the 95% confidence intervals of surrogates 34; (2) both Kendall's τ test and Fisher's Z test were statistically significant (P < 0.05) for testing convergence 86. As an additional consideration to (1) above, to accommodate the fact that causal variables (e.g., prey species) can exhibit time-delayed effects on the affected variables (e.g., predator species) 87, we carried out 0- to 6-month (0-2 time points) lagged CCM analyses 88, in which we retained the CCM with the time lag resulting in the highest ρ 89. Furthermore, because causality is transitive and can occur indirectly through a transitive causal chain 40, and to narrow our focus to direct linkages, we used the 0- to 6-month (0-2 time points) lagged CCM analyses to detect and remove suspected indirect links 90. Briefly, if the variable X unidirectionally causes Y (X→Y, e.g., producers→herbivores) and the variable Y unidirectionally causes Z (Y→Z, e.g., herbivores→predators), an indirect causality (X⇢Z, e.g., producers⇢predators) may thus emerge if the effect of X on Y is sufficiently strong 40,90. The indirect link (X⇢Z) is detected and removed when it has both: (1) a larger negative time lag, and (2) a lower predictive skill ρ than the direct link (X→Y), due to transitivity 40.
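The two convergence checks can be implemented with standard-library math only; the normal approximations below are common choices for these tests, not necessarily the authors' exact implementation:

```python
import math

def _sgn(a):
    return (a > 0) - (a < 0)

def norm_sf(z):
    """One-sided p-value P(Z > z) under a standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def kendall_tau_test(x, y):
    """Kendall's tau for a monotonic increase of y with x,
    with a one-sided normal-approximation p-value."""
    n = len(x)
    s = sum(_sgn(x[j] - x[i]) * _sgn(y[j] - y[i])
            for i in range(n) for j in range(i + 1, n))
    tau = s / (n * (n - 1) / 2)
    z = 3 * tau * math.sqrt(n * (n - 1)) / math.sqrt(2 * (2 * n + 5))
    return tau, norm_sf(z)

def fisher_z_test(rho_small, rho_large, n_small, n_large):
    """One-sided Fisher's Z test that cross-map skill at the largest
    library exceeds the skill at the smallest library."""
    z = (math.atanh(rho_large) - math.atanh(rho_small)) / \
        math.sqrt(1.0 / (n_large - 3) + 1.0 / (n_small - 3))
    return z, norm_sf(z)

# cross-map skill rising with library length: both tests should pass
tau, p_tau = kendall_tau_test([10, 20, 30, 40, 50],
                              [0.10, 0.30, 0.45, 0.55, 0.60])
z, p_z = fisher_z_test(0.10, 0.60, n_small=30, n_large=100)
```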
Overall, we found that the number of links per reconstructed interaction network was between 91 (estuary of the Magothy River) and 557 (Western English Channel), with an average of 207 links per food web (Figs. S11-S12). Link density (i.e., L/S, with L the number of links and S the number of network nodes 91; Figs. S11-S12) was between 4.55 (estuary of the Magothy River) and 13.92 (Western English Channel), and on average 7.91. Food-web connectance 61 (i.e., L/S²) was between 0.21 (Narragansett Bay) and 0.68 (Wadden Sea), and 0.32 on average.

Quantifying time-varying interaction strength among species

Once the causal links among variables were established, we attempted to quantify the time-varying strength and direction of effects among causal variables using the multiview distance regularised S-map (MDR S-map) 41. Here, the regression coefficients approximate the interaction strengths in the discrete-time Jacobian matrix, ∂x_i(t+1)/∂x_j(t) 41, where x_i(t+1) is the abundance of species i at time point t+1 and x_j(t) is the abundance of species j at time t. The MDR S-map was used here because it recovers the Jacobian matrix more accurately than other techniques 41, and because the embedding dimension E in this study was smaller than the number of species (causal variables) in each food web (Figs. S11-S12). The MDR S-map is a nonparametric method to reconstruct high-dimensional time-varying interaction networks for complex systems 41. It works by linking two methods (multiview embedding and the regularised S-map) 92,93, but shows a higher accuracy than each one alone 41. The MDR S-map consists of two steps. The first step is to recover the neighbourhood relationships among high-dimensional data points from numerous low-dimensional state space reconstructions (multiview SSR). Then, one computes Euclidean distances between data points under the optimal embedding dimension E.
Next, one collects these distances to obtain the multiview distances d_E and to further compute the data weights W_t^E. For the second step, the data weights W_t^E from the first step and regularisation are incorporated into the S-map to estimate high-dimensional interaction strengths. Specifically, the interaction strengths (B_t) are estimated by the regularised, locally weighted regression (reconstructed here from the symbols defined in the text)

B̂_t = argmin_{B_t} [ Σ_k W_t^E(t_k) (x_i(t_k + 1) − B_t x(t_k))² + λ (α ||B_t||_1 + (1 − α) ||B_t||_2²) ]   (1)

where W_t^E is the local weight matrix, obtained from an exponential decay function of the multiview Euclidean distance, with d_E(t_μ, t_ν) = Σ_c w_c d(X^c_{t_μ}, X^c_{t_ν}). w_c is proportional to the forecast skill ρ and Σ_c w_c = 1. The c denotes any combination of causal variables for a target network node in SSR. In practice, there are too many combinations, because the network dimensionality m is larger than E (m > E). Thus, in this study, we randomly generated 1000 SSRs from combinations of causal variables and a target network node, and finally kept the top 100 SSRs with the highest forecast skills regarding the target network node, with consideration of computational efficiency 41. In addition, λ is the penalty factor, and α is the adjusted parameter that balances the regularisation. Thus, the solution for the interaction strengths B_t in the MDR S-map algorithm depends on the state-dependent parameter θ, the penalty factor λ, and the adjusted parameter α. The elements of B_t in Eq. (1) approximate the interaction strengths among species, ∂x_i(t+1)/∂x_j(t). Finally, the best parameter combination (θ, λ, α) for each network node and the estimated interaction strengths are the ones that minimise the rMSE of the one-step forecast of the target network node at t + 1, based on cross-validation 41.

Computing structural and temporal stability

We first computed the time-varying structural stability, considering non-equilibrium time series generated by nonlinear dynamical systems of the form

dX/dt = f(X, η).

Here, f is an unspecified vector field (or dynamic model). X is a vector of state variables (i.e., species abundances).
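As a simplified illustration of the regression at the core of the S-map step, the ridge limit (α → 0) of the regularised fit has a closed form. This is a sketch of the idea only, not the full MDR S-map with elastic-net penalty and multiview weights:

```python
import numpy as np

def smap_ridge_step(states, target_next, weights, lam=0.1):
    """One locally weighted, ridge-regularised S-map regression:
    estimate one row of the Jacobian, d x_i(t+1) / d x(t).
    states: (T, s) abundances; target_next: (T,) species i at t+1;
    weights: (T,) locality weights (playing the role of W_t^E)."""
    W = np.diag(weights)
    A = states.T @ W @ states + lam * np.eye(states.shape[1])
    b = states.T @ W @ target_next
    return np.linalg.solve(A, b)

# recoverable toy system: x_i(t+1) is a fixed linear map of x(t)
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
beta = np.array([0.5, -0.2, 0.1])      # true Jacobian row
y = X @ beta
row = smap_ridge_step(X, y, np.ones(60), lam=1e-8)
```

With uniform weights and a vanishing penalty, the estimate recovers the true coefficients; in the MDR S-map, the weights instead emphasise nearby states, making the recovered Jacobian row time-varying.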
η is a vector of environment-dependent parameters (e.g., rates of interactions such as consumption rates). Structural stability is measured as the volume contraction rate (VCR) 5,6, which is the divergence of the vector field and equivalent to the trace Tr(J) of the Jacobian matrix (∇ · f(X, η) = Tr(J)) 5,6. Smaller values of Tr(J) (i.e., VCR) indicate lower sensitivity to parameter perturbations, i.e., higher structural stability 5,6. Based on the Jacobian matrices calculated by the multiview distance regularised S-map at each time point (see the previous section), structural stability (i.e., Tr(J)) was directly computed as the trace of the Jacobian matrices at each timestep 5,6. Second, we computed the temporal stability as the coefficient of variation (CV) of whole-community abundance (temporal variation). The CV was calculated using a moving window of 1.5 years (window width = 6 time points) for the species abundances of each food web. A smaller CV indicates larger temporal stability. We changed the width of the time window to 3 years (12 time points), and our results were robust (Fig. S1). Finally, we computed the temporal stability of each trophic group (i.e., producers, consumers, and predators), by using a moving window of 1.5 years (window width = 6 time points). Specifically, the temporal stability of producers was computed as the CV of the producers' population abundance. Similarly, the temporal stability of consumers was computed as the CV of the primary consumers' population abundance. The temporal stability of predators was calculated as the CV of the predators' population abundance (i.e., including omnivores and secondary and higher consumer trophic levels).

Quantifying the effects of temperature and biodiversity on stability

Before quantifying these effects, we computed biodiversity (i.e., Simpson diversity and species richness) over time. Simpson diversity at each time point was computed as 1 − Σ p_i², where p_i is the proportional abundance of species i.
Species richness at each time point was computed as the sampled species richness, i.e., the number of species observed to have positive abundance at that time point. Sampled species richness is useful as it reflects changes in the underlying relative abundances of species (historically referred to as community structure) [94][95][96]. We also considered the Shannon diversity index (−Σ p_i ln(p_i)), but it exhibited high correlations with Simpson diversity for all 19 data sets (0.91-0.98 in Pearson's correlation; Table S3) and was thus omitted. Next, we applied linear mixed models to test for the effects of temperature, species richness, and Simpson diversity on structural stability. We treated year, sampling location (e.g., Lake Mendota and Trout Lake), and season as random factors to exclude their potential confounding effects. We adopted the same approach for the analysis of temporal stability. Specifically, we applied linear mixed models to test for the effects of temperature, species richness, and Simpson diversity on the temporal stability of the whole community and of each trophic group, but only treated the sampling locations as a random effect. Because temporal stability was calculated using a moving window, year and season were factored out. For each of the two stability indices, temperature and species richness were natural log transformed before analysis. Given that values of temperature were negative in winter, we converted temperature at all 19 sites from Celsius to Fahrenheit (°C to °F) before natural log transformation. Temporal stability was also natural log transformed before analysis to minimise the variance across sampling sites.
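The per-time-point biodiversity metrics used here can be computed directly from an abundance vector; a short sketch (function name ours):

```python
import numpy as np

def diversity_indices(abundance):
    """Sampled richness, Simpson diversity 1 - sum(p_i^2), and
    Shannon diversity -sum(p_i ln p_i) for one sampling time point."""
    a = np.asarray(abundance, float)
    present = a[a > 0]                 # sampled richness uses presence only
    p = present / present.sum()        # proportional abundances
    return len(present), 1.0 - np.sum(p ** 2), -np.sum(p * np.log(p))

# two equally abundant species, one absent
richness, simpson, shannon = diversity_indices([5.0, 5.0, 0.0])
```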
Quantifying the contribution of trophic groups to structural stability

Since the structural stability Tr(J) of each food web was computed as the sum of the diagonal elements, that is, summed across all trophic levels (i.e., producers, consumers, and predators), the effects of temperature/biodiversity on structural stability can be understood through how the diagonal elements of each trophic group were changed by the two effects. Specifically, the contribution to Tr(J) by producers, including the aggregated effect of other species on producer species (hereafter called "contribution from producers"), was computed as the sum of those diagonal elements that belonged only to producers. Similar computations were made for the contribution to Tr(J) by the primary consumers' trophic group ("contribution from consumers"), and the contribution to Tr(J) by omnivores and secondary and higher consumer trophic levels ("contribution from predators"). Next, we applied linear mixed models to test for the effects of temperature, species richness, and Simpson diversity on the contribution from producers, by treating year, sampling location (e.g., Lake Mendota and Trout Lake), and season as random effects to exclude potential confounding effects. We adopted the same approach for the analysis of the contribution from consumers and the contribution from predators.

Quantifying the contribution of synchrony to the temporal stability of the whole community

The degree of synchrony φ of all species within the food web was quantified as φ = σ² / (Σ_i σ_i)², where σ² is the variance of whole-community abundance and σ_i is the s.d. of the abundance of species i in a food web with s species. Species synchrony φ ranges from 0 (perfectly asynchronous species fluctuations) to 1 (perfectly synchronised species fluctuations) 15,97,98. We did the same computation for the species synchrony of each trophic group (producers, consumers, or predators).
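The synchrony index cited here (refs. 15,97,98, Loreau-de Mazancourt form) and its trophic-group variants reduce to a single formula; a sketch assuming an abundance matrix of shape (time, species):

```python
import numpy as np

def community_synchrony(abundance):
    """Synchrony phi = var(total) / (sum_i sd_i)^2, ranging from
    0 (perfect asynchrony) to 1 (perfect synchrony)."""
    abundance = np.asarray(abundance, float)
    var_total = abundance.sum(axis=1).var(ddof=1)   # variance of community total
    sd_sum = abundance.std(axis=0, ddof=1).sum()    # summed per-species s.d.
    return var_total / sd_sum ** 2

x = np.array([1.0, 2.0, 3.0, 4.0])
phi_sync = community_synchrony(np.column_stack([x, x]))          # identical dynamics
phi_async = community_synchrony(np.column_stack([x, 5.0 - x]))   # mirror-image dynamics
```

Two identical series give phi = 1; two mirror-image series whose total is constant give phi = 0, matching the stated bounds. Applying the same function to the rows of one trophic group gives that group's synchrony.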
Then, we applied linear mixed models to test for the effect of the synchrony of all species within the food web on the temporal stability of the food web, treating sampling location (e.g., Lake Mendota and Trout Lake) as a random effect. Similarly, linear mixed models were employed to test for the effects of the species synchrony of each trophic group on the temporal stability of that trophic group, again treating sampling location as a random effect. Finally, we applied linear mixed models to test for the effects of temperature, species richness, and Simpson diversity on the synchrony of all species within the food web, or on the synchrony within each trophic group, again treating sampling location as a random effect.

Quantifying the effects of temperature on biodiversity
Across the 19 food webs, we applied linear mixed models to test for the effect of temperature on either species richness or Simpson diversity, treating year, sampling location (e.g., Lake Mendota and Trout Lake), and season as random effects. By doing so, one can infer the indirect effects of temperature on structural (or temporal) stability via the direct effects of temperature on biodiversity.

Sensitivity analysis
Given that previous studies showed that some low-frequency rare species may contribute to stability patterns 99,100, we extended the analysis to include a large number of rare species, namely species that appeared at least once per two years (at least one non-zero abundance record out of every two years). The inclusion of rare species in the analysis did not change the conclusions (Fig. S2).

Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability
The raw data used in this study are publicly available (see Methods and Table S2). Station L4 marine data (Western English Channel) are archived at the British Oceanographic Data Centre BODC (www.bodc.ac.uk) in their most recent versions and are freely available upon request to Dr.
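A minimal sketch of such a linear mixed model with statsmodels, fitted here on synthetic data (the column names and effect sizes are invented for illustration; for simplicity only sampling location enters as a random intercept, whereas the analyses above also treated year and season as random factors):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: 6 sites x 40 observations,
# with a random intercept per site and known fixed-effect slopes.
rows = []
for s in range(6):
    site_eff = rng.normal(0.0, 0.5)
    temp_F = rng.uniform(35.0, 80.0, 40)
    richness = rng.uniform(10.0, 60.0, 40)
    simpson = rng.uniform(0.3, 0.9, 40)
    stability = (1.5 * np.log(temp_F) + 0.4 * np.log(richness)
                 + 0.2 * simpson + site_eff + rng.normal(0.0, 0.3, 40))
    rows.append(pd.DataFrame({"site": s, "temp_F": temp_F,
                              "richness": richness, "simpson": simpson,
                              "stability": stability}))
df = pd.concat(rows, ignore_index=True)

# Fixed effects on log-transformed predictors (as in the text);
# random intercept for sampling location.
model = smf.mixedlm("stability ~ np.log(temp_F) + np.log(richness) + simpson",
                    df, groups="site")
fit = model.fit()
```

With the site effects absorbed by the random intercept, the fixed-effect estimates recover the slopes used to generate the data.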
Claire Widdicombe (clst@pml.ac.uk) and Dr. Angus Atkinson (aat@pml.ac.uk) at Plymouth Marine Laboratory. The structural and temporal stability data generated in this study have been deposited in the Zenodo database (https://doi.org/10.5281/zenodo.7877806) 102. These data are also available on GitHub (https://github.com/QZhao16/aquatic.foodweb.stability).
Two Cases of Postpartum HELLP Syndrome: A Rare Presentation of Preeclampsia-Associated Liver Disease and Hepatocellular Dysfunction
Background: HELLP syndrome (hemolysis, elevated liver enzymes, and low platelets) is a severe and sometimes fatal pregnancy condition characterized by hemolysis, increased liver enzymes, and low platelet count. While most cases occur before delivery, approximately 25% of cases manifest within 48 hours after delivery. One rare but life-threatening complication of HELLP syndrome is intraparenchymal liver hematoma. Case: In this report, we present two postpartum HELLP syndrome patients with diverse clinical manifestations. Case 1 involves a 30-year-old woman who presented with a significant subcapsular liver hematoma as a complication of postpartum preeclampsia-associated liver illness. Following conservative treatment, she improved significantly, and no surgical intervention was required. This type of subcapsular hematoma occurs in less than 2% of pregnancies complicated by HELLP. Case 2 describes an unusual HELLP presentation in a 38-year-old woman who was diagnosed with postpartum HELLP syndrome with acute liver injury and hepatic hemorrhage. Conclusion: These unique presentations underscore the significance of early detection and prompt treatment of postpartum HELLP syndrome, as it can significantly reduce risks to the mother's health and improve overall outcomes.

Introduction
HELLP syndrome (hemolysis, elevated liver enzymes, and low platelets) is a serious and potentially life-threatening condition that can affect postpartum women. It is a variant of preeclampsia, a hypertensive disorder that occurs during pregnancy. HELLP syndrome affects a small percentage of pregnancies, occurring in around 0.2% to 0.6% of all cases. Superimposed HELLP, which is a combination of preeclampsia or eclampsia with HELLP, develops in 4% to 12% of women with these conditions.
HELLP typically presents during the antepartum period in about 70% of cases, with the bulk of instances taking place between weeks 27 and 37 of pregnancy [1]. However, it can also present during the postpartum period in 25%-30% of cases, especially within the first 48 hours. Complications such as disseminated intravascular coagulation (DIC), subcapsular liver hematoma, renal failure, and pulmonary oedema are among the consequences that are more common in women with postpartum HELLP [2]. Postpartum HELLP syndrome presents significant complications, leading to considerable maternal morbidity, particularly during the immediate postpartum period. Therefore, timely detection and proper management are crucial to minimize potential difficulties and ensure optimal outcomes for affected women. In this report, we present two cases of acute postpartum HELLP syndrome with unique presentations that occurred after delivery following severe preeclampsia.

Case 1
A 38-year-old G2P0010 woman at 39 weeks of gestation presented with complaints of no fetal movements starting the previous evening. Her prenatal course to that point had been unremarkable; vital signs are shown in Table I and corresponding laboratory values in Table II. The patient was transferred to the intensive care unit (ICU) due to severely elevated liver enzymes, thrombocytopenia, and a positive hemolysis panel, raising concern for HELLP syndrome with impending organ failure (Fig. 1). She was started on empiric acyclovir for possible herpes simplex virus infection. A computed tomographic (CT) scan of the abdomen showed a heterogeneous nodular liver, raising suspicion of hepatocellular disease. Infectious disease workup, including blood cultures, cytomegalovirus (CMV) culture, and herpes simplex virus (HSV) polymerase chain reaction (PCR), was negative, and antiviral medication was discontinued.
The patient was managed conservatively as she was stable with no signs of bleeding or hemodynamic instability. The patient's condition continued to improve, with liver enzymes showing significant improvement on day 5. A repeat ultrasound at this time showed stable hemorrhage compared with the previous CT but with small changes around the liver due to hemorrhage or microabscesses. Given the patient's stable condition, hemodynamic stability, and absence of signs of infection, suspicion for an infectious cause was low. Repeat abdominal CT 3 days later showed only the known, stable hemorrhage. Her condition continued to improve, and she was discharged after 10 days. Outpatient abdominal ultrasound confirmed diffuse hepatocellular disease, likely hepatosteatosis, but revealed no other abnormalities.

Case 2
A 30-year-old woman, G2P1001, with a history of preeclampsia without severe features during pregnancy, presented to the labor and delivery unit and had a successful spontaneous vaginal delivery soon after. However, six hours post-delivery, the patient developed severe hypertension with arterial blood pressure of 195/91 mmHg, without tachycardia, tachypnea, or fever. She reported persistent right upper quadrant (RUQ) pain. On physical exam, tenderness in the RUQ was appreciated without rebound tenderness or guarding. Vital signs are shown in Table III and corresponding laboratory values in Table IV. The patient was admitted to the medical unit for a diagnosis of elevated liver enzymes and low platelets suspicious for HELLP syndrome. Transabdominal ultrasound revealed a complex multiseptated lesion in the right hepatic lobe and a hypoechoic lesion in the left hepatic lobe, along with the presence of a subcapsular hematoma. Computed tomography (CT) of the abdomen showed 2 large subcapsular hematomas along the right and left hepatic lobes, measuring 13 × 17 cm in size (Fig.
2). To rule out other liver-related disorders, additional testing, including an acute hepatitis panel, antinuclear antibody (ANA), anti-smooth muscle antibody, antimitochondrial antibody, ceruloplasmin, and alpha-1 antitrypsin (A1AT), was performed, all of which yielded negative results. The patient was hemodynamically stable and no active intra-abdominal bleeding was observed. As the subcapsular hematomas were stable on imaging and the patient's hemoglobin levels were steady, she was managed conservatively. On day 3 of admission, a repeat ultrasound showed stable subcapsular hematomas. Her transaminases improved on day 4, along with her platelets and hemoglobin. The patient was discharged from the hospital after the 2-week postpartum period. Outpatient magnetic resonance imaging (MRI) of the abdomen with and without contrast performed one month later showed a decreasing size of the chronic subcapsular hepatic hematomas. No other adenomas or lesions were appreciated.

Discussion
HELLP syndrome and hypertensive disorders are leading causes of maternal mortality. HELLP syndrome affects around 0.5%-0.9% of all pregnancies and accounts for 10%-20% of severe preeclampsia cases. The pathogenesis of liver involvement in HELLP syndrome is complex and unclear. An interesting theory proposes that intravascular fibrin deposition may lead to liver sinusoidal obstruction, which is associated with hypovolemia in patients with preeclampsia and later develops into HELLP syndrome. Hepatic ischemia from this process may cause hepatic infarction, subcapsular hematomas, and intraparenchymal hemorrhage, with potential hepatic rupture in severe cases [3]. Spontaneous rupture of subcapsular liver hematoma during pregnancy occurs in about 1 in 40,000 to 1 in 50,000 births and affects 1% to 2% of individuals with HELLP syndrome [4], presenting a significant risk, with over 50% of maternal and fetal deaths associated with this complication [4].
Treatment approaches for subcapsular liver hematoma (SLH) can be either conservative or invasive, guided by the American Association for the Surgery of Trauma (AAST) staging. In hemodynamically stable patients, conservative management with intravenous fluids and blood products corrects coagulopathy effectively. Serial imaging investigations are necessary to monitor SLH size [5]. For patients with poor resolution and uncontrolled bleeding, surgical management may be necessary, involving perihepatic packing, surgical site drainage, ligation of portal vein or hepatic artery branches, omentum patching, and partial liver resection; even liver transplantation has been considered [5]. Case 1 is intriguing as the patient developed SLHs in the setting of elevated liver enzymes and thrombocytopenia; however, there was no definitive evidence of hemolysis to confirm a diagnosis of HELLP syndrome. Subcapsular liver hematomas as a complication of preeclampsia alone are rare, making this case even more uncommon. To our knowledge, two prior cases similar to ours have been reported. Anyfantakis et al. report a case of a 32-year-old primiparous female who developed preeclampsia and fetal distress requiring an emergent cesarean delivery [6]. Following delivery, the patient experienced acute hemolysis and mildly elevated liver enzymes without thrombocytopenia, with an associated subcapsular hematoma of the right hepatic lobe observed on abdominal CT. Luhning reports a more complicated case of a 40-year-old female who was also diagnosed with preeclampsia at 39 weeks' gestation requiring a cesarean delivery [5]. She developed a 16 cm subcapsular hepatic hematoma postpartum. Her hospital course was also complicated by an infected pleural effusion, requiring video-assisted thoracoscopic surgery. All patients eventually made a complete recovery.
The literature review on stable subcapsular hematomas reveals several relevant studies. Sibai conducted a 13-year retrospective review of three patients with subcapsular liver hematoma (SLH) [7]. Two of them were treated conservatively and discharged from the hospital, while the third patient underwent hepatic resection but unfortunately succumbed to multiple organ failure. In another study, Wicke et al. [8] conducted a 10-year retrospective review of 5 patients with subcapsular liver hematoma. Of these, three patients were treated conservatively, while two required urgent surgical intervention, one of them undergoing liver transplantation. Carlson et al. [9] reported a case of a hemodynamically stable patient with ruptured subcapsular liver hematoma during pregnancy; in their report, the patient received non-surgical conservative management.

In severe cases, subcapsular liver hematoma may rupture, a feared complication carrying a mortality that ranges from 16% to 59%. Shames et al. [10] report a total of 8 liver transplants in the United States performed for complications related to HELLP syndrome between 1987 and 2003. As of the most recent follow-up, 6 of the 8 patients were alive, with both deaths occurring within 1 month of transplantation, and 2 patients required retransplantation. HELLP syndrome sometimes presents as diffuse generalized liver hemorrhage and necrosis, as in case 2. Our patient had controlled hemorrhage with minimal necrosis of the liver and made a complete recovery. In contrast, Mikolajczyk et al. [11] report a case of hepatic infarction in a 30-year-old woman diagnosed with HELLP syndrome; the patient subsequently developed multi-organ failure and underwent liver transplantation with a complete recovery. Simic et al. [12] report another case of HELLP syndrome complicated by ruptured subcapsular liver hematoma causing shock and multi-organ failure. The patient was taken to emergent surgery and the liver bleeding was stopped, but despite resuscitation attempts and continuous transfusion, the outcome was lethal.

Invasive radiology techniques may be considered in unstable patients. In cases of severe bleeding or hemorrhage, transarterial embolization may be used to block blood vessels supplying the bleeding site [13]. This technique can help control bleeding and prevent further complications. Furthermore, interventional radiology-guided drainage can be used for hematoma or liver infarction. If there are thrombotic complications leading to blood vessel blockages, angioplasty (with or without stenting) may be considered to restore blood flow.

Conclusion
Postpartum HELLP syndrome is a relatively rare yet clinically significant condition that can manifest in diverse ways. Timely identification and appropriate management are crucial to reducing severe maternal morbidity and mortality. The cases presented in this study underscore the importance of considering postpartum HELLP syndrome as a potential differential diagnosis in postpartum patients presenting with elevated liver function tests and thrombocytopenia. To ensure accurate diagnosis and optimal treatment for affected individuals, healthcare practitioners should remain vigilant for atypical presentations and conduct thorough evaluations.

Fig. 1. Patient 1: CT scans and ultrasound of the abdomen-pelvis showing a heterogeneous infiltrative appearance throughout the right hepatic lobe, possibly reflecting hepatic necrosis and/or hemorrhage in the setting of HELLP syndrome, with hepatomegaly and serpiginous hypodensity extending throughout the majority of the right hepatic lobe. L, liver.

TABLE I: Patient 1. Vitals on Admission and 6 Hours Postpartum
TABLE II: Patient 1. Laboratory Values on Admission and 6 Hours Post-Delivery
TABLE III: Patient 2. Vitals on Admission and 6 Hours Post-Delivery
TABLE IV: Patient 2. Laboratory Values on Admission and 6 Hours Post-Delivery
Suppression of ventricular arrhythmias by targeting late L-type Ca2+ current
Angelini et al. show that reducing late L-type Ca2+ current with the purine analogue roscovitine is sufficient to suppress ventricular arrhythmias in myocytes and ex vivo hearts. By preserving the early component of ICa,L, this strategy is expected to largely maintain contractility, an advantage over Ca2+ channel blockers.

Introduction
Sudden cardiac death due to ventricular fibrillation (VF) is a worldwide public health problem estimated to account for 1-5 million deaths annually (Chugh et al., 2008; Cygankiewicz, 2020), with 300,000-450,000 cases in the US alone (Kong et al., 2011; George, 2013; Benjamin et al., 2018). VF is typically initiated by an abnormal electrical excitation of the myocardium associated with a premature ventricular complex (PVC). PVCs, which are events occurring at the tissue level, can be caused by oscillations of the cellular membrane potential, called early afterdepolarizations (EADs), that interrupt the normal repolarization phase of the cardiac action potential (AP), causing its prolongation. When EADs from a sufficiently large group of cells synchronize (Sato et al., 2009), they can cause triggered activity (and PVCs), torsades de pointes, polymorphic ventricular tachycardia (VT), and VF (Cranefield and Aronson, 1991; Morita et al., 2009; Qu et al., 2013; Weiss et al., 2015; Wit, 2018). In addition, the AP prolongation caused by EADs may amplify the heterogeneity of tissue repolarization, predisposing the heart to reentrant tachyarrhythmias (Antzelevitch and Burashnikov, 2011). Therefore, EADs are a potent cellular-level abnormality of AP repolarization that can have dire consequences for cardiac function.

Late I Ca,L is the current conducted by channels that are in quasi-steady state (Fig. 1 A). Therefore, the behavior and extent of late I Ca,L are largely determined by the I Ca,L steady-state activation and inactivation curves.
The overlapping of these two curves defines the "window current" region (Fig. 1): a range of membrane potentials in which LTCCs are not inactivated and are available for activation. Oxidative stress (experimentally mediated by H 2 O 2 perfusion; Ward and Giles, 1997; Xie et al., 2009; Madhvani et al., 2011; Karagueuzian et al., 2013; Madhvani et al., 2015) facilitates LTCC activation by altering LTCC steady-state activation and inactivation properties, causing an overall widening of the I Ca,L window current region (Song et al., 2010; Madhvani et al., 2011; Yang et al., 2013). Prolongation of the AP increases the time the membrane potential lingers within the I Ca,L window current voltage range, thus increasing the probability of inward I Ca,L flowing and causing EADs. It follows that a reduction of the window current region, achieved by altering I Ca,L steady-state properties, will reduce late I Ca,L, consequently suppressing EADs and their arrhythmogenic effects (Madhvani et al., 2011; Qu and Chung, 2012; Madhvani et al., 2015). Under dynamic clamp, by injecting a virtual, tunable I Ca,L into isolated myocytes, we recently demonstrated that a potent EAD-suppressing action can be produced by decreasing the pedestal component of the LTCC steady-state inactivation curve, effectively reducing the I Ca,L window current region (Fig. 1 and Fig. S4, A-C; Madhvani et al., 2015; Markandeya and Kamp, 2015). The efficacy of I Ca,L window current reduction as an anti-EAD strategy was recently confirmed by another dynamic-clamp study in atrial myocytes (Kettlewell et al., 2019), in a human anatomical computational model of long QT (LQT) syndromes (Liu et al., 2019), and in single-cell computational modeling (Qu and Chung, 2012; Kimrey et al., 2020), emphasizing the potency of this antiarrhythmic strategy.
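The window-current argument can be made concrete with a small numerical sketch (the Boltzmann parameters below are illustrative, not the fitted values from this study): lowering the inactivation pedestal shrinks the activation-availability overlap without touching channel availability at diastolic potentials.

```python
import numpy as np

F_RT = 1.0 / 25.3  # F/RT at ~294 K, in 1/mV

def ss_activation(v, vhalf=-5.0, z=3.0):
    # Boltzmann steady-state activation (illustrative parameters)
    return 1.0 / (1.0 + np.exp(-z * F_RT * (v - vhalf)))

def ss_inactivation(v, vhalf=-25.0, z=3.0, pedestal=0.10):
    # Boltzmann steady-state inactivation with a non-inactivating pedestal
    return pedestal + (1.0 - pedestal) / (1.0 + np.exp(z * F_RT * (v - vhalf)))

v = np.linspace(-60.0, 40.0, 501)

def window_area(pedestal):
    """Area of the activation x availability overlap (the 'window')."""
    dv = v[1] - v[0]
    return float(np.sum(ss_activation(v) * ss_inactivation(v, pedestal=pedestal)) * dv)

# Lowering the pedestal from 10% to 4% (as in Fig. 1) shrinks the window
# area; availability at hyperpolarized potentials is essentially unchanged.
shrinkage = window_area(0.04) / window_area(0.10)
```

At depolarized potentials the availability collapses onto the pedestal value, which is why the pedestal alone governs the quasi-steady-state (late) current.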
The abovementioned body of work supports a potentially groundbreaking antiarrhythmic strategy centered on the modulation of the I Ca,L window current that, unlike conventional LTCC blockers (class IV antiarrhythmics), does not block the LTCC channel but selectively reduces the late I Ca,L, largely preserving contractility. In the present study, we have tested and validated this idea pharmacologically, using roscovitine, a drug found to enhance voltage-dependent inactivation of LTCC channels (Yarotskyy and Elmslie, 2007; Yarotskyy et al., 2009; Yarotskyy et al., 2010; Yazawa et al., 2011), to effectively reduce the late I Ca,L. Remarkably, roscovitine preferentially decreases the late I Ca,L without affecting the peak I Ca,L (Yarotskyy and Elmslie, 2007; Yarotskyy et al., 2010), offering a unique opportunity to directly evaluate the antiarrhythmic action associated with late I Ca,L reduction. We found that not only did roscovitine potently suppress EADs and EATs (early after Ca 2+ transients) in isolated ventricular myocytes, but critically, it also suppressed and/or prevented EAD-mediated VT/VF in whole, isolated rabbit and rat hearts. While roscovitine, a purine analogue, is not clinically appropriate as an antiarrhythmic drug due to its kinase-inhibiting activity, our results provide a proof of concept at the preclinical level for a new class of antiarrhythmic drugs, LTCC gating modifiers, for treating EAD-mediated VT/VF. Unlike current, conventional class IV antiarrhythmics, the antiarrhythmic action of LTCC gating modifiers is expected to largely preserve cardiac inotropy.

Ethical approval
All animal protocols were approved by the University of California, Los Angeles (UCLA) Institutional Animal Care and Use Committee and conformed to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health.
Ventricular myocyte isolation
Hearts were removed from New Zealand young adult male rabbits (3-5 mo old) anesthetized by intravenous injection of 1,000 U heparin sulfate and 100 mg/kg sodium pentobarbital. Ventricular myocytes were enzymatically isolated using a retrograde Langendorff perfusion system at 37°C. Nominally Ca 2+ -free Tyrode's solution containing (in mM) 136 NaCl, 5.4 KCl, 0.33 NaH 2 PO 4 , 1 MgCl 2 , 10 glucose, and 10 HEPES, pH 7.4, and supplemented with ≈1.4 mg/ml collagenase (type II; Worthington Biochemical) and 0.1 mg/ml protease (type XIV) was perfused for 25-30 min. The ventricular tissue was then rinsed and minced in a culture dish with Tyrode's solution containing 0.2 mM Ca 2+ . [Ca 2+ ] was gradually increased to 1.8 mM, and the cells were stored at room temperature and used within ∼8 h.

Patch clamp
Ventricular myocytes were patch-clamped in the whole-cell configuration in voltage- or current-clamp mode. Voltage-clamp recordings were performed using square voltage steps or a modified ventricular AP waveform (AP clamp), described below. 1-2 MΩ borosilicate pipettes (Warner Instruments) were fabricated using a P97 micropipette puller (Sutter Instrument). Cells were superfused with a modified Tyrode's solution in which K + was replaced by Cs + to suppress K + conductance. This solution contains 1.8 mM Ca 2+ . Na V channels were blocked by 10 µM extracellular tetrodotoxin (TTX) and inactivated by a 50-ms depolarization to −40 mV. I Ca,L was isolated by subtracting the nifedipine-resistant current from the total current. The ionic currents were acquired in the absence or presence of 20 µM (R)-roscovitine (termed roscovitine throughout the manuscript). I Ca,L steady-state activation curves were constructed by dividing the peak I-V data by the driving force.
The data points were fitted to a Boltzmann function:

G/G_max = 1 / {1 + exp[−zF(V_m − V_half)/RT]},

where z is the effective charge, V_half is the half-activation potential, V_m is the membrane potential, F and R are the Faraday and gas constants, and T is the absolute temperature (294 K). Steady-state inactivation curves were constructed from a typical double-pulse protocol by plotting the normalized peak current during a test pulse at +10 mV against conditioning pulses (200 ms) ranging from −40 mV to +30 mV in 10-mV increments. Myocytes were held at a −80-mV holding potential. The data points were fitted to a Boltzmann function:

I/I_max = pdest + (1 − pdest) / {1 + exp[zF(V_m − V_half)/RT]},

where pdest represents the fraction of channels that do not inactivate, z is the effective charge, F and R are the Faraday and gas constants, T is the absolute temperature, V_half is the half-inactivation potential, and V_m is the membrane potential. The fitting parameters are reported in Table S1. To characterize I Ca,L under AP clamp, a rabbit ventricular AP waveform was modified by immediately preceding its onset with a 50-ms depolarization to −40 mV to inactivate Na V channels and was used as the voltage command. The analytical procedure to isolate the nifedipine-sensitive current under AP clamp is shown in Fig. S1. Briefly, to reconstruct the nifedipine-sensitive current in control and with 20 µM roscovitine, we subtracted the current after nifedipine from the current recorded at baseline and after roscovitine addition, respectively. We excluded from our analysis experiments in which rundown occurred during the first 5 min in whole-cell mode. All current recordings were done at room temperature.

Current-clamp recordings
APs were elicited by 2-ms-long supra-threshold depolarizing pulses. Stable EAD regimes were induced by exposing the myocytes to oxidative stress (600 µM H 2 O 2 ) or by combining oxidative stress (100 µM H 2 O 2 ) with hypokalemia (2 mM external K + ) using a pacing cycle length (PCL) of 6 s. These two stressors are known to promote EADs by favoring Ca 2+ and Na + overload and activating CaMKII (Madhvani et al., 2011; Madhvani et al., 2015; Pezhouman et al., 2015).
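The pedestal Boltzmann fit described above can be sketched with scipy on synthetic data (the parameter values, noise level, and variable names are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

F_RT = 1.0 / 25.3  # F/RT at 294 K, in 1/mV

def inactivation_boltzmann(vm, pdest, z, vhalf):
    """Steady-state inactivation with a non-inactivating fraction pdest."""
    return pdest + (1.0 - pdest) / (1.0 + np.exp(z * F_RT * (vm - vhalf)))

# Synthetic normalized-current data generated from known parameters
rng = np.random.default_rng(1)
vm = np.arange(-40.0, 31.0, 10.0)      # conditioning pulses, -40 to +30 mV
true_pdest, true_z, true_vhalf = 0.12, 4.0, -20.0
data = (inactivation_boltzmann(vm, true_pdest, true_z, true_vhalf)
        + rng.normal(0.0, 0.01, vm.size))

popt, _ = curve_fit(inactivation_boltzmann, vm, data,
                    p0=[0.05, 3.0, -15.0],
                    bounds=([0.0, 0.5, -60.0], [0.5, 10.0, 20.0]))
pdest_fit, z_fit, vhalf_fit = popt
```

The bounds keep pdest in [0, 0.5] so the fit cannot trade the pedestal against the slope factor when the plateau is sparsely sampled.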
Fig. 1 legend: I Ca,L activates rapidly, generating a "peak," and inactivates incompletely, generating a small but sustained current that persists until the end of the depolarization (late I Ca,L ). Late I Ca,L is enhanced under pathological conditions (e.g., under oxidative stress, red trace; Madhvani et al., 2011), promoting EADs. As such, late I Ca,L represents an ideal target to suppress EAD-mediated arrhythmias (Madhvani et al., 2015). The antiarrhythmic strategy tested in this study is based on a reduction of the late I Ca,L , as shown in A (green trace). (A and B) The two I Ca,L current traces (A) have been simulated using different parameter values for the I Ca,L steady-state (SS) inactivation (red curve = 10% pedestal; green curve = 4% pedestal) shown in B. In fact, as late I Ca,L flows in quasi-steady state, its amplitude is governed by the position and shape of the curves that describe its steady-state activation and inactivation properties (B). The area subtended by the overlapping steady-state activation and inactivation curves is traditionally referred to as the I Ca,L window current (B, color-filled area). A reduction of late I Ca,L (and I Ca,L window current) is achieved by lowering the I Ca,L steady-state inactivation pedestal component. Notably, lowering the I Ca,L pedestal (B) has no effect on the peak I Ca,L , but it limits the amplitude of late I Ca,L (A).

These interventions, along with the long PCL that mimics a condition of bradycardia, favor the formation of a stable EAD regime (Madhvani et al., 2011; Madhvani et al., 2015; Nguyen et al., 2015; Pezhouman et al., 2015). 20 µM roscovitine was perfused (in the presence of the stressors) to study its effect on EADs. The reported AP voltages were corrected for liquid junction potentials. All current-clamp experiments were performed at 34-36°C.
AP duration at 90% repolarization (APD 90 ) and EAD occurrence (defined as the percentage of APs that display a positive voltage deflection, dV/dt, of ≥5 mV; Fig. 4, Fig. 5, and Fig. 6) were reported as an average of seven consecutive APs measured at steady state (right before the start of the subsequent experimental condition). All voltage- and current-clamp recordings were performed using an Axopatch 200B amplifier (Molecular Devices) and acquired using custom-made software (G-Patch; Analysis). The internal solution contained (in mM) 120 K-glutamate and 10 HEPES, pH 7.0. The voltage dependence of channel opening (steady-state activation) and quasi-steady-state inactivation was determined using Boltzmann equations, as described above. The fitting parameters are reported in Table S2. All recordings were performed using a CA-1 amplifier (DAGAN Corp.) and acquired using custom-made software (G-Patch).

Intracellular Ca 2+ and cell shortening measurements
Changes in cytosolic [Ca 2+ ] of rabbit ventricular myocytes were recorded from cells incubated for ≈20 min with 10 µM of the Ca 2+ indicator Fluo-4 AM (Thermo Fisher Scientific) at room temperature. Cells were then washed, placed in a heated chamber on an inverted microscope, and field-stimulated by a pair of platinum electrodes carrying square-wave pulses of 2-ms duration, at 2 nA, every 5 or 6 s at 35°C. Intracellular calcium transients (Ca 2+ transients) were recorded using scientific CMOS (Hamamatsu Photonics) or electron-multiplying charge-coupled device (Princeton Instruments) cameras operating at ≈50-100 frames/s. Ventricular myocytes were paced for at least 2 min in Tyrode's solution before acquisition of the Ca 2+ transients. EAT occurrence is presented as the percentage of Ca 2+ transients displaying EATs. The Ca 2+ transient amplitudes were calculated as ΔF/F0. ΔF = Fmax − F0, where Fmax is peak fluorescence intensity and F0 is fluorescence intensity before stimulation.
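The amplitude (ΔF/F0) and duration (full duration at half-maximal amplitude) measurements can be sketched as follows; the frame rate, baseline window, and synthetic triangular transient are illustrative:

```python
import numpy as np

def transient_metrics(F, f0_frames=10, dt=0.02):
    """Amplitude (dF/F0) and full duration at half-maximal amplitude (in s)
    for a single fluorescence trace F sampled every dt seconds."""
    F = np.asarray(F, dtype=float)
    F0 = F[:f0_frames].mean()           # baseline fluorescence before stimulation
    dF = F - F0
    amp = dF.max() / F0                 # dF/F0
    above = np.flatnonzero(dF >= dF.max() / 2.0)  # frames at/above half-maximum
    fdhm = (above[-1] - above[0] + 1) * dt
    return amp, fdhm

# Synthetic triangular transient: 10 baseline frames at F = 100,
# a linear rise to 200, and a symmetric decay back to baseline.
F = np.concatenate([np.full(10, 100.0),
                    np.linspace(100.0, 200.0, 11),
                    np.linspace(200.0, 100.0, 11)[1:]])
amp, fdhm = transient_metrics(F, f0_frames=10, dt=0.02)
```

Baseline F0 is taken from the pre-stimulus frames, matching the definition of F0 as fluorescence intensity before stimulation.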
Ca 2+ transients shown in Fig. The values of ΔF/F0 in the presence of roscovitine or DMSO were normalized to the respective ΔF/F0 of Ca 2+ transients recorded before drug application and reported as a percentage of the control (no-drug) condition. Ca 2+ transient duration was measured as the full duration at half-maximal fluorescence amplitude. Cell shortening was measured in field-stimulated myocytes exposed to either 20 µM roscovitine or vehicle solution, as the percentage of shortening relative to resting cell length (% RCL), from acquired videos using ImageJ (Schneider et al., 2012).

Isolated perfused heart
To extensively evaluate the antiarrhythmic action of roscovitine, three ex vivo heart models were used: young and aged male Fisher 344 rats (young: 3-4 mo old; aged: 22-24 mo old) and New Zealand young adult male rabbits (6-8 mo old). Hearts were isolated from anesthetized animals and perfused in a Langendorff apparatus at 37°C with Tyrode's solution containing (in mM) 125 NaCl, 24 NaHCO 3 , 4.5 KCl, 1.8 NaH 2 PO 4 , 0.5 MgCl 2 , 1.8 CaCl 2 , and 5.5 glucose, pH 7.4, gassed with 95% O 2 -5% CO 2 . Aged rats have a high level of fibrosis in the heart tissue, a well-known pro-arrhythmic factor (Nguyen et al., 2014). In these ex vivo hearts, 100 µM H 2 O 2 alone was sufficient to induce EAD-mediated VT/VF. In young rat hearts, arrhythmia was induced by hypokalemia (2 mM K + ; Pezhouman et al., 2015), as it was difficult to induce a stable arrhythmia using H 2 O 2 , possibly due to lower levels of fibrosis. In addition, the antiarrhythmic action of roscovitine was evaluated in adult rabbit, a species with an AP morphology similar to that of human. In ex vivo rabbit hearts, a new model of arrhythmia was used, which combines hypokalemia (1 mM K + ) with oxidative stress (100 µM H 2 O 2 ).
Spontaneously beating hearts were instrumented to continuously record local bipolar left-atrial and right-ventricular electrograms and a pseudo-electrocardiogram (p-ECG; Morita et al., 2009). Microelectrode recordings were obtained from the left ventricle as previously described (Morita et al., 2011a; Morita et al., 2011b; Bapat et al., 2012; Pezhouman et al., 2015). Recordings were acquired using a Digidata 1440A interface and Axoscope 10 software (Molecular Devices).

Chemicals and reagents
Chemicals and reagents were purchased from Sigma-Aldrich Co. unless otherwise indicated. (R)-roscovitine (LC Laboratories) was dissolved in either ethanol or DMSO to make a 100-mM stock solution. Final ethanol or DMSO concentrations never exceeded 0.1%. TTX was from Tocris Bioscience.

Data and statistical analysis
Data are presented as means ± SEM. n is the number of experimental replicates, and N indicates the number of animals. Box plots indicate first and third quartiles, median (red line), and mean (X); whiskers indicate 5 and 95 percentiles. Statistical significance was determined using two-tailed paired or unpaired Student's t tests and the log-rank (Mantel-Cox) test for the Kaplan-Meier plot.

Online supplemental material
Fig. S1 shows the experimental and analytical procedure to isolate the nifedipine-sensitive current under AP clamp in ventricular myocytes. Fig. S2 shows that roscovitine reduces late I Ca,L in human Ca V 1.2 channels. Fig. S3 shows that roscovitine suppresses hypokalemia-induced VT/VF in young rat hearts. Fig. S4 shows that I Ca,L pedestal reduction selectively reduces late versus peak I Ca,L , unlike class IV antiarrhythmics. Table S1 lists fitting parameters ± SEM of I Ca,L steady-state activation and inactivation in ventricular myocytes (control versus roscovitine). Table S2 lists fitting parameters ± SEM of I Ca,L steady-state activation and inactivation for the human Ca V 1.2 complex expressed in oocytes (control versus roscovitine).
Results
Roscovitine selectively reduces late I Ca,L in rabbit ventricular myocytes without affecting peak current
Changes in the I Ca,L properties that increase the window current (Fig. 1) have been associated with an increased susceptibility to EADs and arrhythmia (e.g., LQT; January and Riddle, 1989; Antoons et al., 2007; Qi et al., 2009; Madhvani et al., 2011; Qu and Chung, 2012; Madhvani et al., 2015; Kettlewell et al., 2019; Liu et al., 2019). To evaluate the ability of roscovitine to reduce the I Ca,L window current by limiting the late I Ca,L , we first studied its action on native I Ca,L in rabbit ventricular myocytes exposed to oxidative stress (600 µM H 2 O 2 ) to enhance late I Ca,L . In fact, H 2 O 2 exposure was shown to alter I Ca,L steady-state activation and inactivation properties, causing an overall widening of the I Ca,L window current region (Madhvani et al., 2011). Under voltage clamp, we measured nifedipine-sensitive Ca 2+ currents in the absence (Fig. 2 A) and in the presence (Fig. 2 B) of extracellular 20 µM roscovitine, and we found that roscovitine significantly reduced the late I Ca,L (noninactivating component at the end of the 200-ms depolarization), leaving the I Ca,L peak unperturbed (Fig. 2 C). The effect of roscovitine on the I Ca,L window current can be well appreciated from the steady-state inactivation curve, which displays a significant reduction of the noninactivating component (pedestal) for potentials above −10 mV (Fig. 2 D, green triangles versus black diamonds). For example, the fraction of noninactivating I Ca,L following a depolarization to 10 mV was reduced from 12% ± 1% to 3% ± 1% in the presence of 20 µM roscovitine (P = 0.0001, two-tailed unpaired t test; Fig. 2 D, green triangle versus black diamond). The roscovitine effect on I Ca,L was not associated with substantial changes in the voltage dependence of activation or inactivation (see Table S1).
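The pedestal of the steady-state inactivation curve can be modeled as a Boltzmann function offset by a constant noninactivating fraction. The sketch below uses hypothetical V 1/2 and slope values (the paper's fit parameters are in Table S1) to reproduce the kind of pedestal reduction reported here, roughly 12% to 3% noninactivating current after a prepulse to +10 mV.

```python
import numpy as np

def ss_inact(v, v_half, k, pedestal):
    """Steady-state inactivation: Boltzmann curve plus a constant
    pedestal (fraction of channels that never inactivate)."""
    return pedestal + (1.0 - pedestal) / (1.0 + np.exp((v - v_half) / k))

v = np.linspace(-60.0, 30.0, 181)
# Hypothetical fit parameters; only the pedestal differs with drug,
# mirroring the report that V1/2 and slope were unchanged.
ctrl = ss_inact(v, v_half=-25.0, k=5.0, pedestal=0.12)
drug = ss_inact(v, v_half=-25.0, k=5.0, pedestal=0.03)

# Noninactivating fraction after a prepulse to +10 mV
f_ctrl = ss_inact(10.0, -25.0, 5.0, 0.12)   # ~12% of peak current
f_drug = ss_inact(10.0, -25.0, 5.0, 0.03)   # ~3% of peak current
```

At strongly depolarized prepulse potentials the Boltzmann term vanishes, so the curve flattens onto the pedestal; reducing the pedestal therefore shrinks the late current without touching availability at hyperpolarized potentials.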
Finally, we investigated the effect of roscovitine on late I Ca,L during an AP, using a modified AP waveform as the voltage command for AP-clamp experiments. Roscovitine reduced the amplitude of the Ca 2+ current flowing during the late phases of the AP, minimally affecting peak I Ca,L (Fig. 3 A and Fig. S1). On average, 20 µM roscovitine had no significant effect on the peak current (3% ± 2% reduction, P = 0.083) but significantly reduced late I Ca,L by 25% ± 5% (P = 0.001, measured at 150 ms during the AP clamp; Fig. 3 B). These results demonstrate that roscovitine selectively suppresses native late I Ca,L in ventricular myocytes even during a physiological stimulus. This selective action on late I Ca,L is expected to suppress EADs sustained by exaggerated activation of I Ca,L during phases 2 and 3 of the AP (Madhvani et al., 2011).

Roscovitine modifies the human Ca V 1.2 gating properties, reducing the late current with minimal effect on peak current
As the study of roscovitine has the potential to direct the development of derivatives with selective antiarrhythmic potency, we evaluated its activity on the human Ca V 1.2 channel, assembled with α 2 δ-1 and β 2b , the accessory subunits most abundantly expressed in the human heart (Hullin et al., 2003). Following the expression of this macromolecular complex in Xenopus oocytes, we found that extracellular application of roscovitine (100 µM) accelerated the rate of voltage-dependent inactivation, effectively reducing the noninactivating component, or late I Ca,L , with negligible effect on the peak current (Fig. S2 A). The roscovitine-induced reduction of the human Ca V 1.2 window current is evident in the quasi-steady-state inactivation curve, which approached values close to zero in roscovitine-modified channels (Fig. S2 B). Roscovitine had practically no effect on the voltage dependence of activation and/or inactivation (see V half of the steady-state curves in Fig. S2 B and Table S2).
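The peak-versus-late quantification used in the AP-clamp analysis (peak of the inward current versus the current read 150 ms into the AP) can be sketched as follows. The traces below are hypothetical bi-exponential stand-ins for a nifedipine-sensitive current, not recorded data; the 25% trimming of the late component is chosen only to echo the magnitude reported in the text.

```python
import numpy as np

dt = 1.0                        # ms per sample
t = np.arange(0.0, 300.0, dt)
# Inward current is negative by convention; fast activation, slow decay.
i_ctrl = -(np.exp(-t / 60.0) - np.exp(-t / 3.0))
i_drug = i_ctrl.copy()
i_drug[t >= 50.0] *= 0.75       # "drug" trims only the late component here

def peak_and_late(i, t, late_ms=150.0):
    """Peak inward current and the late current at late_ms into the AP."""
    peak = np.min(i)                       # most-negative (inward) value
    late = i[np.searchsorted(t, late_ms)]  # sample at t = late_ms
    return peak, late

p_c, l_c = peak_and_late(i_ctrl, t)
p_d, l_d = peak_and_late(i_drug, t)
late_reduction_pct = 100.0 * (1.0 - l_d / l_c)
```

Because the synthetic peak occurs well before 50 ms, the "drug" leaves the peak untouched while the late reading at 150 ms falls by 25%, the same dissociation the experiments report.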
Thus, the roscovitine effect on the human clone closely recapitulated the effects we observed on the rabbit native ventricular I Ca,L (Fig. 2) and previously reported for the rabbit Ca V 1.2 clone (Yarotskyy and Elmslie, 2007; Yarotskyy et al., 2010).

Extracellularly applied roscovitine suppresses EADs of different etiologies in isolated rabbit ventricular myocytes
We postulated that drugs reducing the late I Ca,L should suppress EADs (Madhvani et al., 2015). As roscovitine selectively reduced late I Ca,L (Fig. 2, Fig. 3, and Fig. S2), we evaluated its efficacy at suppressing EADs in rabbit ventricular myocytes. We recorded cardiac APs at a pacing cycle length (PCL) of 6 s, a condition of bradycardia that, in combination with oxidative stress (600 µM H 2 O 2 ), favors the initiation of a stable EAD regime (Madhvani et al., 2011; Madhvani et al., 2015; Nguyen et al., 2015). Under these conditions, EADs appeared in 85.4% ± 4.1% of the APs, prolonging the APD 90 from 246 ± 30 ms to 910 ± 121 ms. To further test the robustness of the "anti-EAD" effect of roscovitine, we added an additional stressor, combining oxidative stress (100 µM H 2 O 2 ) with hypokalemia (2 mM K + ). Reduced serum [K + ] (hypokalemia) is a clinically relevant condition known to cause significant QT interval prolongation with a subsequent risk of promoting lethal cardiac arrhythmias in cardiac patients (Osadchii, 2010; Weiss et al., 2017; Skogestad and Aronsen, 2018). The proarrhythmic action of hypokalemia is mainly due to the suppression of K + channel conductance (Sanguinetti and Jurkiewicz, 1992) and the inhibition of the Na + -K + adenosine triphosphatase, resulting in AP prolongation and reduced repolarization reserve along with progressive cellular Na + and Ca 2+ overload (Eisner et al., 1978; Pezhouman et al., 2015; Tazmini et al., 2020).
In isolated ventricular myocytes, the perfusion of this dual-stressor solution (H 2 O 2 + hypokalemia) caused a hyperpolarization of the diastolic membrane potential from approximately −80 mV to approximately −110 mV, increased APD 90 from 236 ± 34 ms to 772 ± 156 ms, and induced a stable EAD regime such that 97.6% ± 2.4% of the APs displayed at least one EAD within 6.7 ± 3.4 min (Fig. 5). Extracellularly perfused roscovitine (20 µM) completely suppressed EADs and decreased APD 90 toward normal levels (304 ± 33 ms; Fig. 5; control versus roscovitine, P = 0.202, two-tailed paired t test) in the continuous presence of the stressors. Thus, pharmacological reduction of late I Ca,L , brought about by roscovitine, can effectively suppress EADs in isolated ventricular myocytes (Fig. 4 and Fig. 5). We also found that the EAD-suppressing action of roscovitine is reversible: following roscovitine washout from the bath solution, EADs reappeared within minutes and could be suppressed by a second application of roscovitine (Fig. 5, A and B). On the other hand, intracellular application of roscovitine failed to prevent the occurrence of EADs, suggesting an extracellular site of action. We dialyzed isolated myocytes with 20 µM roscovitine carried by the pipette intracellular solution (Fig. 6). After rupture of the membrane patches and application of positive pressure to the patch pipette, the myocytes were paced at PCL = 6 s for ∼8 min to allow for the diffusion of roscovitine into the cytoplasm. The presence of intracellular roscovitine did not prevent H 2 O 2 -induced EADs or AP prolongation, and EADs appeared within 7.3 ± 1.4 min. APD 90 increased from 250 ± 16 ms in Tyrode's solution to 860 ± 200 ms in the presence of H 2 O 2 , with 76.2% ± 8.0% of APs showing EADs (Fig. 6 C). These effects are similar to those observed in the absence of intracellular roscovitine (Fig. 4 and Fig. 5).
However, in these myocytes, the extracellular application of 20 µM roscovitine rapidly suppressed EADs and restored normal APD 90 levels (211 ± 17 ms; Fig. 6 C; black square versus green dot, P = 0.230, two-tailed paired t test). The washout of roscovitine from the bath solution caused AP prolongation and reappearance of a stable EAD regime, which could again be suppressed by a second application of roscovitine (Fig. 6, A and B). Together, these findings are consistent with the existence of an extracellular site for the EAD-suppressing action of roscovitine; this view is also in agreement with other studies that found that roscovitine exerts its effect by binding extracellularly to repeat I of Ca V 1.2 (Yarotskyy and Elmslie, 2007; Yarotskyy et al., 2010). Furthermore, previous studies have revealed that in electrically stimulated rabbit ventricular myocytes, H 2 O 2 causes Ca 2+ oscillations during the late phase of the Ca 2+ transient; these events have been associated with EADs and reported as EATs (Zhao et al., 2012; Li et al., 2013). While the data in Fig. 4, Fig. 5, and Fig. 6 show that roscovitine corrects abnormal electrical signaling, abolishing an EAD regime, it remained to be established whether it also corrects aberrant Ca 2+ transients promoted by oxidative stress. In field-stimulated ventricular myocytes loaded with the Ca 2+ indicator Fluo-4 AM, exposure to 200-600 µM H 2 O 2 produced aberrant Ca 2+ transients displaying EATs within minutes (Fig. 7 A). The addition of roscovitine (20 µM or 40 µM) to the extracellular solution significantly reduced the incidence of EATs in Ca 2+ transients from 86% ± 3% to 15% ± 6%, despite the persistent presence of H 2 O 2 (Fig. 7, A and B; P < 0.001, two-tailed paired t test). The beneficial effect of roscovitine on EATs is likely to derive from its EAD-suppressing action associated with late I Ca,L reduction.
Roscovitine reduces late I Ca,L without affecting Ca 2+ transients and cell shortening in isolated rabbit ventricular myocytes
A recognized limitation of current class IV antiarrhythmics (LTCC blockers), which reduces their clinical value, is their negative inotropic effect (Elliott and Ram, 2011). This drawback is directly related to their pore-blocking action on the Ca V 1.2 channel, which produces an overall reduction of Ca 2+ influx, equally affecting both peak and late I Ca,L and resulting in decreased contractility. Since roscovitine suppressed EADs without reducing peak I Ca,L (Fig. 2, Fig. 3, and Fig. 4), we expected minimal effects on contractility and the Ca 2+ transient, which largely depend on peak I Ca,L . In fact, roscovitine did not alter resting cell length (RCL vehicle : 126.1 ± 4.8 µm, n = 21; RCL roscovitine : 126.9 ± 4.6 µm, n = 24; P = 0.907, two-tailed unpaired t test) or cell shortening (vehicle: 9.0% ± 0.83%; roscovitine: 9.22% ± 0.73%; P = 0.840, two-tailed unpaired t test), as shown in Fig. 7 C. In agreement with these data, roscovitine at the concentration that suppressed EADs (Fig. 4, Fig. 5, and Fig. 6) did not significantly alter Ca 2+ transient duration in cells exposed to roscovitine compared with vehicle solution (Fig. 7, D and E; duration at half-maximal amplitude: vehicle: 279 ± 22 ms; roscovitine: 280 ± 15 ms; P = 0.955, two-tailed unpaired t test). However, roscovitine caused a small but statistically significant reduction of the Ca 2+ transient amplitude (ΔF/F0) compared with the ΔF/F0 reduction caused by the vehicle (DMSO; Fig. 7 F; vehicle: 93.89% ± 1.43%; roscovitine: 88.74% ± 1.23%; P = 0.01, two-tailed unpaired t test). Based on these results, we speculate that the antiarrhythmic (EAD-suppressing) action of late I Ca,L reduction is unlikely to be complicated by negative inotropy, highlighting a potential advantage of Ca V 1.2 gating modifiers over class IV antiarrhythmics.
Assessing the antiarrhythmic potential of late I Ca,L reduction in ex vivo hearts
The data reported so far offer strong evidence that pharmacological reduction of late I Ca,L potently suppresses EADs in isolated myocytes. In the next series of experiments, we studied the antiarrhythmic potential of late I Ca,L reduction at suppressing and/or preventing EAD-mediated arrhythmias in intact hearts. To test the robustness of the intervention, we used the following animal models of EAD-mediated VT/VF: (1) aged rat hearts exposed to H 2 O 2 (100 µM), (2) young rat hearts exposed to hypokalemia (2 mM K + ), and (3) adult rabbit hearts exposed to hypokalemia (1 mM K + ) + H 2 O 2 (100 µM). These three experimental paradigms generate VT/VF within minutes under stressing conditions that we have shown to induce stable EAD regimes in single myocytes. Notably, in these models, EAD-induced VT/VF does not terminate spontaneously unless interrupted with drugs (Pezhouman et al., 2015).

Late I Ca,L reduction suppresses oxidative stress- and hypokalemia-induced VT/VF in rat hearts
Fig. 8 shows representative simultaneous recordings of bipolar electrograms from a perfused aged rat heart exposed to H 2 O 2 (100 µM). Three electrode pairs were placed in (1) the left atrium, (2) the right ventricle, and (3) the right atrium-left ventricle (p-ECG). Also, a glass microelectrode was inserted in the left ventricle to monitor the cell membrane potential during the experiment (Fig. 8 B). The microelectrode captured a series of EADs preceding the initiation of VT (Fig. 8 B): note that the sixth sinus beat suddenly gave rise to EAD-induced triggered activity, which then degenerated to VF. In eight out of eight hearts, the perfusion of 100 µM H 2 O 2 induced VT/VF within 23.7 ± 4.3 min.
[Figure legend fragment: Note that roscovitine suppressed EADs and restored a normal APD 90 , unlike vehicle alone (0.02% ethanol, light blue triangle), which had no significant effect on the EAD regime (roscovitine: n = 7, vehicle: n = 4, N = 9; mean ± SEM).]
Following the onset of VT/VF, the perfusion of roscovitine (20 µM) converted H 2 O 2 -induced VF to sinus rhythm in all aged rat hearts studied, within 13 ± 3 min (Fig. 8 C). Following a similar experimental strategy, we probed the antiarrhythmic efficacy of late I Ca,L suppression in an arrhythmia model induced by hypokalemia (2 mM K + ) in young rat hearts. We found that roscovitine (20 µM) suppressed hypokalemia-induced VT/VF in five out of six young rat hearts within 28 ± 9 min (Fig. S3).

Late I Ca,L reduction prevents oxidative stress- and hypokalemia-induced VT/VF in ex vivo perfused rabbit hearts
To further assess the antiarrhythmic action of late I Ca,L reduction in a species with a ventricular AP morphology closely resembling that of the human heart, we studied the effect of roscovitine in isolated rabbit hearts. We used a new model of arrhythmia, which combines hypokalemia (1 mM K + ) with oxidative stress (100 µM H 2 O 2 ). These stressors reliably induced EADs and VT/VF in young adult rabbits, allowing us to test the antiarrhythmic action of roscovitine both in isolated myocytes (Fig. 5) and in the whole heart. In this animal model, instead of attempting VT/VF suppression as in rat hearts, we tested the ability of pharmacological late I Ca,L reduction to prevent arrhythmia. In the absence of roscovitine, the exposure of isolated rabbit hearts to hypokalemia + H 2 O 2 induced VT/VF in five out of five rabbit hearts within 22 ± 6 min, which persisted beyond 60 min of observation (Fig. 9 B). To test whether pretreatment with roscovitine prevented VT/VF, we perfused the hearts with roscovitine for 15 min before exposure to the proarrhythmic stressors (Fig. 9 A). Pretreatment with 20 µM roscovitine prevented VT/VF during the subsequent 60 min in two out of five hearts studied (P = 0.058, log-rank test), after which the experiment was terminated (Fig. 9, C and D).
Raising the roscovitine concentration to 50 µM prevented VT/VF initiation in four out of five hearts (P = 0.015, log-rank test; Fig. 9, C and D). Thus, using three models of cardiac arrhythmias in two different species, these results provide the first evidence that selective pharmacological reduction of late I Ca,L suppresses and prevents EAD-mediated VT/VF induced by different stressors (hypokalemia and/or oxidative stress).

Discussion
The relevance of the I Ca,L window current to EAD formation was hypothesized almost three decades ago (January and Riddle, 1989), but surprisingly, no antiarrhythmic therapies based on this premise have been developed to date. In a dynamic-clamp-based study, we previously demonstrated that the selective reduction of noninactivating I Ca,L (late I Ca,L ), which essentially decreases the area of the I Ca,L window current region (Fig. 1), potently suppresses ventricular EADs (Madhvani et al., 2015). We also predicted that such a maneuver would have a minor effect on excitation-contraction coupling, thus representing an antiarrhythmic strategy with a significant advantage over the current class IV antiarrhythmics (LTCC blockers), which may cause adverse inotropic effects due to peak I Ca,L blockade (Szentandrassy et al., 2015; Godfraind, 2017).
[Figure legend fragment: In spite of the presence of intracellular roscovitine, 600 µM H 2 O 2 was still able to induce a robust EAD regime (A, b and B, b), which was reversibly suppressed by the application of 20 µM extracellular roscovitine (A, c and B, c). The acute effect of roscovitine was reversible: following a washout of the drug from the extracellular solution, the APs were prolonged and EADs reappeared within a minute (A, d and B, d). The EAD regime was again abolished by a second application of roscovitine (A, e and B, e). (C) The plot quantifies the average changes in APD 90 and EAD incidence for experiments performed as in A (n = 6, N = 4; mean ± SEM). Note that roscovitine acted reversibly from the extracellular side of the cells to potently suppress EADs.]
The antiarrhythmic effect associated with ventricular I Ca,L window current reduction has so far been supported by computational studies both at the cellular level (Madhvani et al., 2011; Madhvani et al., 2015) and, more recently, at the organ level, using a human anatomical ventricle model (Liu et al., 2019). However, an experimental validation of these concepts has never been pursued. The present work fills this gap in knowledge by experimentally testing the hypothesis that drug-induced reduction of late I Ca,L can effectively suppress EADs and EAD-mediated arrhythmias, providing a mechanistic link between the antiarrhythmic effect of roscovitine and its action on the I Ca,L window current. We demonstrated that a selective reduction of both late I Ca,L and the window current region can be achieved by extracellular perfusion of roscovitine in native rabbit ventricular I Ca,L and in the human Ca V 1.2 channel (Fig. 2, Fig. 3, Fig. S1, and Fig. S2). Pharmacological reduction of late I Ca,L was highly effective at suppressing EADs induced by different underlying mechanisms (oxidative stress and hypokalemia) in isolated ventricular myocytes (Fig. 4, Fig. 5, and Fig. 6), experimentally confirming previous predictions (Madhvani et al., 2015; Markandeya and Kamp, 2015). In addition to I Ca,L , the Na + /Ca 2+ exchanger has also been shown to contribute to EAD formation (Szabo et al., 1994; Wit, 2018). Therefore, it is conceivable that a reduction of the late I Ca,L could also decrease Na + /Ca 2+ exchanger activity during the AP, contributing to the beneficial suppression of EADs.
Furthermore, our findings establish that the cellular effects of roscovitine translate to the whole organ level, where it potently suppresses and prevents EAD-mediated VT/VF in isolated perfused hearts of two species with different AP characteristics (rabbit and rat; Fig. 8, Fig. 9, and Fig. S3). Since roscovitine largely preserves Ca 2+ transient amplitude and duration and myocyte shortening (Fig. 7, C-F), we expect that heart contractility will be minimally, if at all, affected. Consistent with these findings, the application of roscovitine in induced pluripotent stem cell-derived cardiomyocytes from patients with Timothy syndrome (LQT8) was found to restore normal electrical activity and Ca 2+ handling at the single-cell level (Yazawa et al., 2011;Song et al., 2015;Song et al., 2017). The efficacy and safety of this strategy in comparison with LTCC blockade (class IV antiarrhythmics) is illustrated in Fig. S4. Specifically, we simulated I Ca,L in a virtual myocyte exposed to oxidative stress and exhibiting EADs, as well as in a cell under normal conditions exhibiting normal APs. Under oxidative stress conditions, LTCCs with a reduced noninactivating component (i.e., with reduced late I Ca,L ; Fig. S4 A) conduct less current during late phases of the AP, when EADs occur (Fig. S4 B). Accordingly, were this a real cell (myocyte), the reduced inward current would likely facilitate AP repolarization and suppression of the EAD regime, as was shown in dynamic-clamp experiments (Madhvani et al., 2015) and the pharmacological reduction of the late I Ca,L (this work). Note that late I Ca,L reduction did not affect the early component of I Ca,L under oxidative stress conditions (Fig. S4 B) or the whole I Ca,L during normal conditions (Fig. S4 C). These results offer an explanation as to why Ca 2+ transients and myocyte contractility were largely preserved during roscovitine application (Fig. 7, C-F). In contrast, a simulated 20% LTCC blockade (Fig. 
S4 D) resulted in reduced I Ca,L during all AP phases in both oxidative stress (Fig. S4 E) and normal (Fig. S4 F) conditions, consistent with the action of LTCC blockers (class IV antiarrhythmics), which may potently abolish arrhythmias at the expense of compromised contractility (reduced early I Ca,L ; Markandeya and Kamp, 2015; Godfraind, 2017; Karagueuzian et al., 2017).

The roscovitine site of antiarrhythmic action
Our results have provided evidence for an extracellular action of roscovitine on Ca V 1.2, which reduces late I Ca,L and, in turn, suppresses EAD-sustained arrhythmias. Roscovitine has been reported to also inhibit intracellular cyclin-dependent kinases (CDKs; Meijer et al., 1997; Bach et al., 2005; Sánchez-Martínez et al., 2015; Song et al., 2017) and other kinases including CaMKII (Meijer et al., 1997; Bach et al., 2005; Sánchez-Martínez et al., 2015). While it is possible that some of the effects on LTCCs are mediated by intracellular effectors (e.g., CaMKII), our data from isolated myocytes show that (1) intracellular application of 20 µM roscovitine did not prevent or suppress EADs (Fig. 6) and (2) the suppression of EADs by roscovitine could be rapidly reversed by washing out the drug (Fig. 5 and Fig. 6). Together, this evidence suggests that possible kinase inhibition by roscovitine is not sufficient to account for its EAD-suppressing effects. These findings are in agreement with a previous study that found that intracellular application of 300 µM roscovitine did not modify LTCC gating properties, prompting the authors to suggest that the roscovitine-induced modification of Ca V 1.2 channels is not mediated by a kinase-dependent mechanism (Yarotskyy and Elmslie, 2007). In a chimera-based study, an extracellular roscovitine binding site responsible for enhanced LTCC inactivation (i.e., late I Ca,L reduction) has been located in Repeat I of the Ca V 1.2 pore-forming subunit (Yarotskyy et al., 2010). Roscovitine has been shown to block other cardiac ion channels. Heterologously expressed hERG channels were inhibited by roscovitine with a half-maximal inhibitory concentration (IC 50 ) of 27 µM in HEK cells and an IC 50 of ∼200 µM in oocytes (Ganapathi et al., 2009; Cernuda et al., 2019). K V 4.2 voltage-dependent potassium channels have also been shown to be sensitive to roscovitine (Buraei et al., 2007). Block of potassium channels is more likely to promote (rather than suppress) EAD-mediated arrhythmias by further reducing repolarization reserve and increasing AP duration (Sanguinetti and Tristani-Firouzi, 2006; Roden, 2016; Weiss et al., 2017). This suggests that the antiarrhythmic properties of roscovitine are largely due to its action on the LTCC.

Figure 9. Roscovitine prevents the induction of VT/VF by hypokalemia and oxidative stress in ex vivo rabbit hearts. (A) Experimental protocol used to test the ability of roscovitine to prevent VT/VF in ex vivo perfused aged rabbit hearts. (B) Bipolar electrograms from the indicated chambers of the heart. p-ECGs were obtained from right atrial-left ventricular leads. Left panel shows a representative experiment in control condition (Tyrode's solution). Right panel shows the initiation of VT/VF in the same heart 14 min after perfusion of a hypokalemic (HypoK; 1 mM K + ) Tyrode's containing 100 µM H 2 O 2 . (C) Representative bipolar electrograms in control condition (left), after 15-min pretreatment with 50 µM roscovitine (middle), and 60 min after the perfusion of hypokalemia + 100 µM H 2 O 2 in the presence of roscovitine (right). Note that roscovitine prevented the initiation of VT/VF. (D) Kaplan-Meier plot comparing time to onset of VT/VF for hearts exposed to hypokalemia and H 2 O 2 in control (black) or in the presence of 20 or 50 µM roscovitine (n = 5 hearts per condition; *, P < 0.05).
Notably, in clinical trials assessing its anti-cancer potential, roscovitine was well tolerated, and no instances of cardiac proarrhythmia or other cardiac side effects were reported (Fischer and Gianella-Borradori, 2003; Benson et al., 2007).

LTCC blockers versus gating modifiers: A new class of antiarrhythmics?
Currently, class IV antiarrhythmics, such as diltiazem and verapamil, are in clinical use for the treatment and prevention of various cardiac arrhythmias (Rosen et al., 1975; Grace and Camm, 2000; Szentandrassy et al., 2015; Godfraind, 2017). The primary action of these drugs is to block LTCC conductance, indiscriminately reducing both late and peak I Ca,L (Fig. S4, D-F). While suppressing EADs (January et al., 1988; Shimizu et al., 1995; Hensley et al., 1997), the overall suppression of Ca 2+ influx causes an adverse negative inotropic effect that limits their therapeutic value, especially in patients with compromised cardiac function (Rosen et al., 1975; Russell, 1988; Elliott and Ram, 2011). Importantly, suppression of peak I Ca,L is not only unsafe, it is also unnecessary for suppressing EAD-mediated arrhythmias. We propose that drugs selectively reducing late I Ca,L , or the LTCC window current, constitute a new class of antiarrhythmic drug action. Roscovitine, which selectively blocks late I Ca,L , represents a prototypical member of this class (Karagueuzian et al., 2017). In conclusion, we believe that the results of the present study set the framework for the development of a conceptually new class of antiarrhythmics (LTCC gating modifiers) that selectively reduce late I Ca,L and suppress EAD-mediated arrhythmias.
More generally, these results raise the exciting possibility that ion channel gating modification, without ion channel blockade, holds promise for the design of next-generation antiarrhythmics (Antzelevitch et al., 2004; Belardinelli et al., 2013; Pezhouman et al., 2014; Liin et al., 2015; Bengel et al., 2017; Bossu et al., 2018; Larsson et al., 2018; Salari et al., 2018).

Study limitations
Roscovitine is a purine-based compound that has been used in clinical trials for its anti-cancer action and displays a broad spectrum of effects. In its native form, its CDK-inhibiting activity might preclude its clinical use as an antiarrhythmic. However, we are encouraged that (1) the Ca 2+ channel gating modification is extracellular, whereas the CDK effects are intracellular (Sánchez-Martínez et al., 2015), and (2) roscovitine analogues with reduced CDK inhibition are already known, raising the possibility that the Ca 2+ channel and CDK effects could be separated in novel derivative compounds (Liang et al., 2012; Wu et al., 2018), providing effective antiarrhythmic action.

Figure S2. Roscovitine reduces late I Ca,L (noninactivating component) in human Ca V 1.2 channels. (A) Superimposed Ba 2+ currents from Ca V 1.2 channels expressed in oocytes (α 1C + β 2b + α 2 δ-1) before (control) and after 100 µM roscovitine. (B) Average steady-state (SS) activation and quasi-steady-state inactivation curves obtained before and after roscovitine extracellular application. Continuous lines are Boltzmann fits to the activation and inactivation data points (fitting parameters are reported in Table S2). Note that roscovitine selectively reduced the late I Ca,L , enhancing the extent of the steady-state inactivation (i.e., reduced pedestal of the steady-state inactivation curve, black diamond versus green triangle) without affecting the voltage dependence of activation (black circle versus green square). Data points are mean ± SEM; n = 7; *, P < 0.05.

Figure S3.
Roscovitine suppresses hypokalemia-induced VT/VF in young rat hearts. (A) Experimental protocol used to test the ability of roscovitine to suppress VT/VF in ex vivo perfused young rat hearts. (B and C) Bipolar electrograms and microelectrode recordings from the indicated chambers of the heart. p-ECGs were obtained from right atrial-left ventricular leads. Single-cell APs were measured by a glass microelectrode inserted in the left ventricle epicardium. (B) The recordings show the initiation of VF after 12 min of exposure to hypokalemia (HypoK; Tyrode's with 2 mM K + ). Note the initiation of VF by cellular EAD-mediated triggered activity, which arose suddenly during sinus rhythm (microelectrode, *). (C) Recordings from the same heart as in B, showing suppression of VF 8 min after the addition of roscovitine (20 µM) to the perfusate in the continuous presence of hypokalemia.

Provided online are two tables. Table S1 lists I Ca,L fitting parameters of steady-state activation and inactivation in ventricular myocytes. Table S2 lists fitting parameters of steady-state activation and inactivation for the human Ca V 1.2 complex expressed in oocytes.

Figure S4. I Ca,L pedestal reduction selectively reduces late versus peak I Ca,L , unlike class IV antiarrhythmics. (A) LTCC steady-state (SS) activation (Act) and inactivation (Inact) curves. The green inactivation curve demonstrates a 50% reduction of the I Ca,L pedestal. (B) The UCLA myocyte model (Mahajan et al., 2008) was modified to accept membrane potential input as in our previous dynamic-clamp studies (Madhvani et al., 2011; Madhvani et al., 2015). In addition, its LTCC parameters were modified to simulate the channel under oxidative stress conditions. The model was "voltage-clamped" with the AP waveform from an isolated rabbit ventricular myocyte under oxidative stress, exhibiting an EAD and increased AP duration (blue).
The black current trace is the UCLA model I Ca,L output produced by the blue AP without further modification; the green current is the UCLA model current output with 50% reduced pedestal (as in A and Madhvani et al., 2015). Under EAD-favoring conditions, pedestal reduction (green) results in marked decrease of I Ca,L flowing in phases 2 or 3 of the AP (late I Ca,L ) but did not affect peak I Ca,L flowing after the AP upstroke. (C) As above for the UCLA model under normal (non-oxidative stress) conditions clamped by a normal AP waveform (blue). Pedestal reduction (green current) does not affect I Ca,L . (D) The LTCC steady-state activation and inactivation curves were modified to simulate 20% blockade (red; as in class IV antiarrhythmics). (E and F) As in B and C. Red current represents the UCLA I Ca,L output with 20% blockade under EAD-favoring (E) or normal (F) conditions. Note that blockade indiscriminately reduced both peak (red arrow) and late I Ca,L , which in turn would result in negative inotropy (Godfraind, 2017;Karagueuzian et al., 2017). By contrast, pedestal reduction (green current in B and C) preserved peak I Ca,L , so it follows that Ca 2+ release and cell shortening would be unaffected.
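One way to see why pedestal reduction spares peak I Ca,L while uniform conductance block does not is to compare the overlap of the activation and availability curves (the window region) under the two maneuvers. The sketch below is purely illustrative, in the spirit of Fig. S4: the Boltzmann parameters, the 10% pedestal, and the 20% block are hypothetical values, not the UCLA model's parameters.

```python
import numpy as np

def boltz(v, v_half, k):
    """Rising Boltzmann (steady-state activation)."""
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

def avail(v, v_half, k, pedestal):
    """Steady-state availability with a noninactivating pedestal."""
    return pedestal + (1.0 - pedestal) * (1.0 - boltz(v, v_half, k))

v = np.linspace(-80.0, 40.0, 241)
dv = v[1] - v[0]
act = boltz(v, -15.0, 6.0)              # hypothetical activation curve
h_ctrl = avail(v, -30.0, 5.0, 0.10)     # 10% pedestal ("stress" case)
h_ped  = avail(v, -30.0, 5.0, 0.05)     # pedestal cut in half

def window_area(a, h, g=1.0):
    """Crude integral of the activation-availability overlap."""
    return np.sum(g * a * h) * dv

w_ctrl = window_area(act, h_ctrl)
w_ped  = window_area(act, h_ped)        # window shrinks markedly
w_blk  = window_area(act, h_ctrl, g=0.8)  # 20% uniform block

# Availability at a hyperpolarized holding potential (-80 mV), which
# sets the peak current on depolarization, is untouched by the
# pedestal maneuver but scaled down by the block.
peak_avail_ctrl, peak_avail_ped = h_ctrl[0], h_ped[0]
```

In this toy comparison, halving the pedestal cuts the window area far more than a 20% block does, yet leaves availability at the holding potential (a proxy for peak I Ca,L) essentially unchanged, mirroring the late-versus-peak dissociation illustrated in Fig. S4.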
NSF-Based Analysis of the Structural Stressing State of Trussed Steel and a Concrete Box Girder

This paper analyses the characteristics of the mechanical behavior of a trussed steel and concrete box beam under bending conditions based on the structural stressing state theory and the numerical shape function method. Firstly, the parametric generalized strain energy density was introduced to characterize the structural stressing state of trussed steel stud concrete box girders, and the strain energy density sum was plotted. Then the Mann-Kendall criterion was used to detect the leap point in the curve and to redefine the structural failure load. Analysis of strain and displacement again demonstrated the existence of a sudden change in the structural response during the load-bearing process. Afterwards, the numerical shape function method was used to extend the strain data, and further in-depth analyses of strain/stress fields and internal forces were carried out to show in detail the working characteristics of each member under load. Through in-depth analysis from different angles, the rationality of updating the failure load was verified. Finally, the effects of different structural parameters on the evolution of the structural stressing state of the members were analyzed in a transversal comparison. The analysis results reveal the working behavior characteristics of a steel-concrete truss structure from a new angle, providing a reference for the future design of such structures.

Introduction
Through a significant number of engineering practices, it has been found that adding section steel to concrete structures can effectively improve the load-bearing capacity, stiffness, and ductility of components [1,2], which is of great significance for the construction of modern engineering structures.
Hence, the steel reinforced concrete structure, a type of structure in which section steel is embedded in traditional reinforced concrete, can be used in the construction of large-span structures to improve the bearing performance [3][4][5]. As early as the beginning of the 20th century, experts and scholars began research on steel reinforced concrete to promote the application of this technology in engineering. Chen [6] analyzed the effects of the sectional steel disposition, the thickness of the concrete cover, and the concrete strength on the torsional performance of angle-steel concrete beams. Xu [7,8] applied prestressing technology to steel-reinforced concrete beams; through experiments and numerical simulation, it was found that the mechanical properties of prestressed steel-reinforced concrete beams are better than those of ordinary steel reinforced concrete beams. Kozlov [9] used a scale model test to explore the shear performance and normal stress of a single-span steel-reinforced concrete beam and found that the test results were in good agreement with the calculated data. Aiming at the problem of structural corrosion, Meng [10] discussed the application of stainless-steel concrete structures in depth. Wu [11] carried out a finite element analysis of a new steel-concrete composite Vierendeel beam using ABAQUS; the results showed that the bending capacity and deformation performance of the new steel-concrete composite Vierendeel beam were greatly improved compared with the ordinary reinforced concrete beam. Based on test results, Yong [12] proposed two new stiffness calculation methods for partially prefabricated steel-reinforced concrete beams. Nguyen [13] discussed the influence of the restraint of steel bars and section steel in concrete on the behavior of steel-reinforced concrete beams after yielding.
Yang [14] conducted an in-depth experimental study on the shear capacity of steel-reinforced concrete and revealed the shear failure mechanism of steel-reinforced concrete beams. Xue [15] proposed a theoretical model for predicting the shear strength of steel-reinforced concrete deep beams and short columns and verified the accuracy and safety of the model calculations against experiments. Jeong [16] analyzed the changes in the neutral axis of steel-reinforced concrete beams using a strain compatibility analysis method and proved its efficiency by comparing experimental and analytical values. Hong [17] proposed a new method that can more accurately predict the working behavior of steel-concrete mixed composite precast beams. In addition, domestic and foreign scholars have also carried out research on the performance of steel-reinforced concrete structures under special conditions [18]. From the above literature, it can be seen that traditional steel concrete structures simply superimpose section steel on reinforced concrete, which improves some of their properties but complicates the construction process. The truss steel-reinforced concrete box girder is a new type of steel-concrete composite structure, and it is necessary to study its mechanical properties. Mechanical property tests of large components are expensive, and the experimental data have not yet been fully exploited, so a large amount of implicit information on the structural working behavior is ignored. Due to limitations in the arrangement of measurement points, the measured data are often limited and insufficient to support an in-depth analysis of the members, which is not conducive to further research into steel-reinforced concrete.
Moreover, the ultimate load capacity of a steel-reinforced concrete structure is currently predicted using a semi-empirical, semi-theoretical approach, which often leads to increased costs and overly conservative structural designs based on safety considerations. In order to gain a deeper understanding of the mechanical properties of the truss-type steel-reinforced concrete box girder, this paper applies the structural stressing state theory to further reveal the working characteristics of truss-type steel-reinforced concrete box beams subjected to bending loads. The response data (strain, displacement, etc.) of the beams are processed and plotted so that the changing characteristics of the stressing state of the beams can be analyzed in depth from the curves. The Mann-Kendall criterion is used to identify the characteristic loads, and the strain/stress fields and internal forces constructed with the NSF method are used to further analyze the evolution of the structural performance. Based on the limited test data, this paper analyzes the stressing state evolution of the structure in depth, reveals the sudden-change characteristics of the response, and provides a reference for future improvements in structural design.

Method of Modeling a Structural Stressing State

An effective description of the structural stressing state is important for reflecting its changing characteristics under load. In nature, everything, including a structure, changes according to the law of quantitative to qualitative change; when the qualitative change occurs, things deviate from their previous trajectory and enter a completely new stage of development. The response of a structure usually includes displacement, strain, load, and failure pattern.
In contrast, displacement and strain are the most direct embodiments of the change in the stressing state of the structure and reflect, to a certain extent, its evolution under load. However, displacements and strains are vectors, and their embedded directions affect the accuracy of a numerical model. Hence, the generalized strain energy density (GSED) [19], associated with stress and strain, is adopted to describe the structural stressing state; converting a vector into a scalar also avoids having to analyze direction effects. The GSED at the i-th position at the j-th load step can be expressed as

E_ij = ∫_0^{ε_ij} σ dε,

where E_ij is the GSED value of the i-th position at the j-th load step, σ is the stress, and ε_ij is the strain at the i-th position at the j-th load step. The stressing state of the whole structure can then be expressed by accumulating the GSED values of each part:

E_j = Σ_{i=1}^{N} E_ij,

where E_j is the GSED value of the measured section at the j-th load step and N is the total number of measurement points. In order to exclude the influence of units, the GSEDs are normalized into the dimensionless form

E_{j,norm} = E_j / E_M,

where E_{j,norm} is the normalized GSED at load F_j and E_M is the largest GSED over the entire loading process. With this parameter, the E-F curve can be drawn to describe the variation characteristics of the stressing state of the structure.

The Application of the Mann-Kendall Criterion

The Mann-Kendall (M-K) criterion is a nonparametric method commonly used in trend analysis; it reasonably infers properties of the population distribution from sample data without assuming a distribution form. The Mann-Kendall test can be applied to determine whether there is a mutation (abrupt change) in a sequence and, if so, when the mutation occurs.
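As an illustration, the GSED accumulation and normalization above (E_ij, E_j, and E_{j,norm}) can be sketched in a few lines of Python; the array layout and the trapezoidal integration are assumptions of this sketch, not part of the paper.

```python
import numpy as np

def normalized_gsed(stress, strain):
    """Normalized GSED curve E_{j,norm} from stress/strain histories.

    stress, strain: arrays of shape (n_steps, n_points) holding the stress
    and strain at each of the N measuring points for every load step
    (a hypothetical layout chosen for this sketch).
    """
    # Incremental energy per point: 0.5*(sigma_j + sigma_{j-1}) * d_eps,
    # a trapezoidal approximation of the integral of sigma over epsilon.
    d_eps = np.diff(strain, axis=0)
    mid_sigma = 0.5 * (stress[1:] + stress[:-1])
    e_inc = mid_sigma * d_eps                       # (n_steps-1, n_points)

    # E_ij: cumulative GSED at point i up to step j (zero at the first step).
    E_ij = np.vstack([np.zeros(stress.shape[1]), np.cumsum(e_inc, axis=0)])

    E_j = E_ij.sum(axis=1)       # accumulate over the N measuring points
    return E_j / E_j.max()       # E_{j,norm} = E_j / E_M (dimensionless)
```

Plotting the returned vector against the load steps gives the E-F curve analyzed in the following sections.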
In order to find the mutation point of the structural stressing state from the E_j-F curve, the M-K method from statistics is introduced into the stressing state analysis. It is assumed that the {E(i)} sequence (load step i = 1, 2, . . . , n) is statistically independent. The cumulative number m_i is defined as

m_i = Σ_{j=1}^{i-1} 1[E(i) > E(j)],

where the indicator equals 1 whenever the inequality on the right side of the j-th comparison is satisfied. For the k-th load step, a new random variable d_k is then defined:

d_k = Σ_{i=1}^{k} m_i.

The mean value E(d_k) and variance Var(d_k) of d_k are calculated by

E(d_k) = k(k − 1)/4, Var(d_k) = k(k − 1)(2k + 5)/72.

Then a new statistic GF_k is defined by

GF_k = (d_k − E(d_k)) / √Var(d_k).

From this, the GF_k-F_j curve can be obtained, and the GB_k-F curve can be formed by applying the same process to the reversed sequence. The two curves intersect at the mutation point of the E_j-F_j curve, which provides the criterion for identifying the transition points of the structural stressing state.

Specimen Design

Liu Qiang [20] designed and fabricated six test beams for testing and verifying the performance of truss-type steel-reinforced concrete box beams, as shown in Figure 1. In order to ensure that the flexural failure of the box girder conforms to the failure mode of an appropriately reinforced beam, the truss joints adopt thickened gusset plates with three-side circumferential seam welding to improve the bearing capacity of the joint. The concrete grade is C30; the measured compressive strength of the concrete cube is 30.9 MPa and the modulus of elasticity is 3.00 × 10⁴ MPa. The prefabricated truss steel is Q235, and the measured mechanical properties are shown in Table 1. The thickness of the concrete cover over the angle steel is 30 mm, and the other specific parameters of the six beams are shown in Table 2.
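A minimal Python sketch of the forward/backward M-K statistics and the intersection test described above (function and variable names are my own; the E(d_k) and Var(d_k) expressions are the standard M-K ones):

```python
import numpy as np

def mk_forward(seq):
    """Forward Mann-Kendall statistic GF_k for the sequence {E(i)}."""
    seq = np.asarray(seq, dtype=float)
    n = len(seq)
    GF = np.zeros(n)
    d = 0
    for k in range(1, n):
        d += int(np.sum(seq[:k] < seq[k]))       # add m_k to the running d_k
        t = k + 1                                # number of samples so far
        mean = t * (t - 1) / 4.0                 # E(d_k)
        var = t * (t - 1) * (2 * t + 5) / 72.0   # Var(d_k)
        GF[k] = (d - mean) / np.sqrt(var)
    return GF

def mutation_points(seq):
    """Indices where the GF and GB curves cross (candidate leap points)."""
    seq = np.asarray(seq, dtype=float)
    GF = mk_forward(seq)
    GB = -mk_forward(seq[::-1])[::-1]            # backward statistic
    return np.where(np.diff(np.sign(GF - GB)) != 0)[0]
```

Applied to the normalized GSED sequence, the crossings correspond to characteristic loads of the kind labeled P and Q in the following analysis.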
Measurement Point Arrangement and Loading Scheme

The loading test method and measurement points were designed as shown in Figure 2. The test specimens were simply supported with a span of 3100 mm. Three-point loading was adopted, and the strain of the angle steel was measured by a TS3890 static resistance strain system (S1-S4). The resistance strain gauge model was BFH120-3AA-D100, with a resistance of 120 Ω and a sensitivity coefficient K = 2.0 ± 1%. B1-B5 were dial indicators (accuracy: 0.01 mm) used to measure the displacement at the loading points, mid-span, and supports, respectively. Q1-Q10 were dial indicators (accuracy: 0.001 mm) used to measure the concrete surface strain; they were symmetrically distributed, with an upper and lower spacing of 75 mm, as shown in Figure 2. The six beam specimens were simply supported; that is, one end rested on a fixed hinge support and the other on a movable hinge support, and 100 mm was reserved beyond each support to prevent the supports from sliding. The three-point loading was realized through a distribution beam and a jack, and the load was controlled by a pressure sensor. Steel plates were padded at the loading point and the supports to increase the local compression area. A pre-loading test was first carried out to check the test apparatus and to reduce the experimental errors it might introduce; in the formal loading stage, the beams were loaded in 10 kN increments until failure.
The E_j-F_j Curve of the Test Beam and Failure Load Analysis

In this section, taking beam A3 as an example, the variation characteristics of the stressing state of the structure during the whole loading process are analyzed using the above theoretical methods. The E_j-F_j curve of beam A3 and the characteristic points P and Q, corresponding to the intersections of the GF-F_j and GB-F_j curves obtained by the M-K method, are drawn in Figure 3. It can be seen that the development of the E_j-F_j curve is roughly divided into three stages by the two characteristic points P and Q. Before the load reached point P, the E-F curve changed very gently, which indicates that the concrete had not yet cracked and the beam was basically in a relatively stable linear elastic stage. The tensile performance of concrete is far lower than its compressive performance; therefore, with increasing load, the concrete in the tensile area soon reached its tensile strength limit, resulting in the first crack. Between P and Q, the curve rose slightly, more cracks appeared in the tensile zone, and the stress in the tensile angle steel increased. The tensile stress was transferred to the uncracked concrete through the bond force, leading to new cracks and the extension of existing cracks. Although the corresponding E_j-F_j curve no longer increased linearly and the beam entered the elasto-plastic stress stage, it still maintained a stable stressing state macroscopically. Compared with the linear elastic stage, parts of the beam began to develop plasticity and local damage occurred, so the beam could not completely recover its original state even if the load were removed. After point Q, the curve became very steep and the beam entered the unstable damage phase, with large deformations at very small load increments, which is not conducive to continued loading, until the final failure.
This abrupt change reveals that the structural stressing state of the beam jumped from the previous elasto-plastic state to a new, unstably developing stressing state, in which the beam does not recover after deformation under force. In other words, after point Q, the structural stressing state of the beam began to change qualitatively. The first characteristic point P reveals the transition of the structure from the elastic stressing state to plastic development, that is, the transition of the concrete from uncracked to cracked. After point P, the structure entered the elastic-plastic stress stage. The whole structure maintained a stable stressing state macroscopically before the characteristic load Q, and the stressing state of the beam always underwent quantitative rather than qualitative change. The second characteristic point Q reveals that the structure changed from the previous elastic-plastic state to a new, unstably developing stressing state, and the members will no longer recover after stress and deformation.
This is an inevitable working behavior feature of the structure under load; that is, the stressing state of the structure changed qualitatively at point Q, and the load at point Q was the starting point of the structural failure process and the critical point in the change of the stressing state of the members. After point Q, the structure entered the failure development stage. The critical load determined in this way is the result of an internal mutation in the characteristics of the structural stressing state, not an assumption. The whole change process represents the inherent leap characteristics of the structural stressing state and reveals the natural law of its development under load. In other words, the sudden-change load of the stressing state of the truss steel-reinforced concrete beam determined by this method can be used as a reference for determining the design load of truss steel-reinforced concrete structures. According to the analysis of the development characteristics of the E_j-F_j curve, the characteristic points P and Q played important roles in the loading process. These can be summarized as follows: (1) Characteristic point P represented the transition point from the elastic stressing state to plastic development for the beam. (2) Characteristic point Q was the starting point of structural failure and the critical point in the process of qualitative change of the structural stressing state of the beam, which essentially reflects the internal law of the structure under load.

Strain-Based Characterization of the Stressing State for Beam-A3

Figure 4 shows the change in the measured strain values of beam A3 with load; a strain pattern diagram was added to show the strain change more vividly before and after loading.
The characteristic loads are indicated by dashed lines, and the results show that the curves had the same trend as the E_j-F_j curve, being divided into three different stages of development by points P and Q. As can be seen from the strain-load curves, the strain values at each measurement point were very small before characteristic point P; the curves almost overlapped, were very close to each other, and remained essentially linear. The beam was thus in an elastic stressing state, and the tensile strain of the concrete at the tensile edge had not yet reached the ultimate tensile strain. After that, the strain curves of the beam grew significantly and separated, and the beam entered the plastic development stage. The angle steel in the tensile zone carried the tensile stress, and the cracked concrete withdrew from work. From characteristic point P to Q, the strain increased more rapidly than in the previous stage. The angle steel had good mechanical properties, and the surrounding concrete could effectively prevent local buckling of the angle steel, which effectively controlled the structural deformation; therefore, the strain-load curves changed smoothly. Beyond the characteristic load Q, the curves developed unstable trends with a certain abrupt-change characteristic. The beam entered an unstable stressing state and the failure development stage. Eventually, the concrete in the compression zone was crushed and the beam was destroyed.
Displacement-Based Characterization of the Stressing State for Beam-A3

The changing characteristics can also be reflected, to a certain extent, in the displacement; hence, the displacement-load curves at the mid-span and loading points of beam-A3 are drawn in Figure 5. In order to show the change law more vividly, the mid-span displacement increment diagram was supplemented. The displacement curves reflect the change trend of the displacement between adjacent load levels. With increasing load, the displacement of the mid-span section was greater than that of the loading point, which is consistent with the stress and deformation characteristics of the structure.
In addition, the displacement and its incremental curve had similar variation characteristics, which verifies the correctness and validity of the three stressing state stages divided by the two characteristic loads.

Stress State Analysis Based on Strain Interpolation

In structural analysis, the behavioral characteristics of a structure are often reflected by the measured data; describing the structural response with measured data is scientific and reasonable, and it is also one of the most accurate methods. However, due to the limitations of measuring instruments, conditions, and methods, the measured data are limited, and the finite data are often not enough to prove the response mechanism and state characteristics of the structure. Therefore, a method with precise physical significance is needed to expand the test data and obtain more information about the stressing state of the structure. The numerical shape function (NSF) interpolation method [21] serves this purpose.

Numerical Shape Function Method

Conventional interpolation methods do not consider the specific physical model and are mainly used to fill in missing data. The NSF method is based on the concept of the shape function in the finite element method. It uses experimental data as weights and node data as the basis.
The relevant data field of the entire component is obtained through interpolation, and the basic configuration of the stressing state of the entire structure is then determined. The method constructs, from a finite element numerical simulation and the shape function concept, a numerical shape function consistent with the physical characteristics of the model. It can not only overcome the shortcomings of traditional interpolation methods, but can also obtain data close to the real test data field, ensuring the accuracy of the in-depth test analysis. To apply this method, the general-purpose ANSYS software [22] is used for modeling and meshing; the element type is SHELL181 and the element size is 5 mm, as shown in Figure 6a. A unit z-direction strain is then applied at the corresponding measuring point of the section, z-direction constraints are imposed on the other nodes to limit rigid body displacement, and a static analysis is performed to obtain the z-direction strain field associated with that measuring point, as shown in Figure 6b,c. According to Castigliano's theorem, the constructed strain field is in this case independent of the load path, and the simulation results can be linearly superimposed. The strain field of the entire model is then obtained with Formula (9):

D(x_j) = Σ_{i=1}^{m} d_i N_i(x_j), j = 1, 2, . . . , n, (9)

where D is the interpolated data field of the section, d_i is the measured value at the i-th measuring point, N_i is the numerical shape function of the i-th measuring point, N_i(x_j) is its value at element node x_j, n is the total number of element nodes, and m is the total number of measuring points.
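The linear superposition in Formula (9) is straightforward to sketch. Here the unit-strain fields that the finite element runs would produce are replaced by small stand-in arrays, so the function name and data are illustrative assumptions only:

```python
import numpy as np

def nsf_interpolate(unit_fields, measured):
    """Expand m measured strains into a full-section field (Formula (9)).

    unit_fields: (m, n_nodes) array; row i is the strain field obtained by
        imposing a unit strain at measuring point i (in the paper these come
        from SHELL181 runs in ANSYS; here they are simply supplied).
    measured: (m,) array of test strains, used as superposition weights.
    """
    # D(x_j) = sum_i d_i * N_i(x_j): measured values weight the unit fields.
    return measured @ unit_fields
```

Because each unit field equals 1 at its own measuring point (and ~0 at the others), the interpolated field reproduces the measured values at the instrumented nodes and blends them in between.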
When constructing the constitutive relationship, the compression constitutive curve of concrete is taken from [23]:

σ = f_c [α_a x + (3 − 2α_a)x² + (α_a − 2)x³] for x ≤ 1; σ = f_c x / [α_d(x − 1)² + x] for x > 1, (10)

with x = ε/ε_c, where f_c is the axial compressive strength of concrete (N/mm²); ε_c is the peak compressive strain of concrete corresponding to f_c; α_a is the parameter of the rising section of the uniaxial compression stress-strain curve, and α_d is the parameter of the descending section under uniaxial compression.
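As a hedged sketch, the uniaxial concrete compression law implied by the parameters above can be coded as follows; the piecewise expressions are assumed to follow the GB 50010 form of reference [23], and the parameter values in the example are illustrative only:

```python
def concrete_compression_stress(eps, f_c, eps_c, alpha_a, alpha_d):
    """Uniaxial compressive stress of concrete (assumed GB 50010-type law).

    f_c: axial compressive strength (N/mm^2); eps_c: strain at f_c;
    alpha_a / alpha_d: rising / descending branch parameters.
    """
    x = eps / eps_c
    if x <= 1.0:
        # rising branch: y = alpha_a*x + (3 - 2*alpha_a)*x^2 + (alpha_a - 2)*x^3
        y = alpha_a * x + (3.0 - 2.0 * alpha_a) * x**2 + (alpha_a - 2.0) * x**3
    else:
        # descending branch: y = x / (alpha_d*(x - 1)^2 + x)
        y = x / (alpha_d * (x - 1.0)**2 + x)
    return f_c * y
```

Both branches give y = 1 at x = 1, so the curve passes through (ε_c, f_c) and is continuous at the peak.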
The tension constitutive curve of concrete is

σ = f_t (1.2x − 0.2x⁶) for x ≤ 1; σ = f_t x / [α_t(x − 1)^1.7 + x] for x > 1, (11)

with x = ε/ε_t, where f_t is the axial tensile strength of concrete (N/mm²); ε_t is the peak tensile strain of concrete corresponding to f_t, and α_t is the parameter of the descending section under uniaxial tension. Moreover, the constitutive curve of the angle steel is shown in Figure 7, and the relationship is given by Formula (12):

σ_a = E_s ε_a for −ε′_ya ≤ ε_a ≤ ε_ya; σ_a = f_ya for ε_a > ε_ya; σ_a = −f′_ya for ε_a < −ε′_ya, (12)

where f_ya is the tensile yield strength of the angle steel (N/mm²); f′_ya is the compressive yield strength of the angle steel (N/mm²); E_s is the elastic modulus of the angle steel (N/mm²); ε_ya is the tensile yield strain of the angle steel, and ε′_ya is the compressive yield strain of the angle steel.

Extended Data Accuracy Analysis

Measurement points 1 and 2 are taken as examples to observe the fit between the interpolated data and the measured data, as shown in Figure 8. It can be clearly seen in Figure 8a that the two curves achieved a high degree of fit throughout the loading process and even overlapped in most stages, which indicates that the data derived from this interpolation method have fairly high accuracy, fit the experimental results well, and have relatively small errors, meeting the application requirements. Figure 8b shows the statistical information of the errors in a box plot, where the fit can be seen even more clearly.
The interquartile-range boxes of measurement points 1 and 2 were relatively short, so the data show a certain concentration and the average error was small. This echoes the previous curve-fitting results and further verifies the rationality and effectiveness of the NSF interpolation method. By comparison, all measurement points were within the error tolerance range, indicating that the extension of the test data by this interpolation method is scientific and reasonable; it is an extension of the structural analysis method and can be used as an important tool for analyzing the stressing state of beams.

Strain/Stress Field Analysis

The experimental data can, to a certain extent, reflect the performance characteristics of the precast truss-type steel-reinforced concrete around the sudden-change load, but the limited data can only reflect the strain/stress distribution and development at each measurement point. The NSF method was therefore used to expand the experimental data over the section of the beam, and the stress was then calculated based on the constitutive relation models of the materials, including concrete and steel.
Therefore, the strain/stress fields of the section were constructed and used to reveal the changing characteristics of the structural stressing state of the beam. By integrating the data obtained, the evolution of the structural stressing state of the mid-span section under vertical load is displayed, and the jump in the stressing state before and after the characteristic points is verified, which intuitively shows the structural stressing state during loading. Figure 9 depicts the concrete strain field obtained by the interpolation method near the P and Q loads for the mid-span section; the same section uses the same color map and scale, with the main scale values and the zero position marked. The boundary line where the strain is 0 is marked with a magenta line, the peak tensile strain with a purple line, and the peak compressive strain with a red line. The specimen was directly subjected to the downward load, and the cross-section presents a state of compression above and tension below. The red area indicates the maximum tension, and the blue area indicates the maximum compression. It can be seen that the tension zone of the cross-section was larger than the compression zone.
Before load P, the color of the strain field was light and the concrete had not yet reached the ultimate tensile strain, meaning that the concrete was still in the elastic working stage and no cracks had occurred. At load P, the tensile strain peak line appeared on the contour diagram due to the appearance of cracks. Compared with the previous state, the strain field changed significantly, and the peak line gradually moved upward with increasing load, corresponding to the upward development of the crack. It can be concluded that the stressing state of the structure changed after 50 kN, although the development of the strain field remained relatively stable. In addition, in Figure 9d the appearance of the peak line on the angle steel indicates that the tension zone had reached the yield tensile strain, and as the load increased, the position of the peak line gradually moved upward. As the neutral axis moved upward, more and more concrete lost its ability to resist tension, and the tensile stress on the angle steel continued to increase. After the limit value and load Q, the color of the concrete gradually darkened and a compressive strain peak line appeared, indicating that the stressing state of the member was no longer stable. The maximum tensile/compressive strain of the cross-section increased with the load, the tensile area was continuously reduced, and all of these quantities changed abruptly after the Q value. A compressive strain peak line of the angle steel was also generated, and a very large tensile strain appeared near the lower part of the section, which shows that the structure was in an unstable state and had potential risks.
The structural stressing state of the beam jumped, and with continued loading the beam entered the failure stage. Finally, the concrete in the compression zone was crushed, resulting in the complete failure of the beam. To further observe the changing characteristics of the structural stressing state around the characteristic loads, the stress fields of the concrete and the angle steel were plotted in Figure 10. The maximum compressive stress of the concrete was 30.7 MPa, close to the cube compressive strength. The corresponding peak line was generated after load Q and developed downward from the upper edge of the section. After load P, the tensile stress in the concrete reached its maximum value (1.43 MPa), resulting in cracks. From then on, the concrete in the tensile zone stopped working and the stresses were redistributed. Some characteristic changes in the stress field can be observed before and after load P. The tensile stress in the angle steel continued to increase with the load. In the concrete stress field, it can be seen that as the load increased, the area enclosed by the maximum tensile strain peak line of the concrete and the zero-strain line gradually decreased. After load Q, the stress field showed abrupt changes, for example in the compressive zone of the concrete and the tensile zone of the angle steel. From the above, it was found that loads P and Q indeed define the changing characteristics of the structural stressing state of the beam accurately.
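The conversion from the interpolated strain fields to the stress fields of Figure 10 goes through the constitutive relations of concrete and steel. The sketch below illustrates that step; only the 30.7 MPa compressive and 1.43 MPa tensile limits are taken from the text, while the piecewise model forms, the moduli Ec and Es, and the yield strength fy are illustrative assumptions, not the paper's measured properties.

```python
# Sketch of the strain-to-stress step: piecewise constitutive models for
# concrete and angle steel. Only the 30.7 MPa / 1.43 MPa limits come from
# the text; the moduli (Ec, Es) and yield strength fy are assumed values.

def concrete_stress(strain, Ec=3.0e4, fc=30.7, ft=1.43):
    """Linear-elastic concrete, capped at fc in compression (MPa);
    cracked concrete (tension beyond ft) carries no stress."""
    sigma = Ec * strain
    if strain >= 0.0:                       # tension (positive strain)
        return sigma if sigma <= ft else 0.0
    return max(sigma, -fc)                  # compression capped at -fc

def steel_stress(strain, Es=2.06e5, fy=235.0):
    """Elastic-perfectly-plastic model for the angle steel (MPa)."""
    return max(-fy, min(fy, Es * strain))

# Apply to an interpolated strain profile over the section depth
strain_profile = [-1.2e-3, -4.0e-4, 2.0e-5, 3.0e-4, 1.5e-3]
concrete_profile = [concrete_stress(e) for e in strain_profile]
steel_profile = [steel_stress(e) for e in strain_profile]
```

Applied point by point to the NSF-interpolated strain field, such models yield the sectional stress field directly, including the cracked (zero-tension) zone of the concrete.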
Internal Forces Analysis

Under load, the test beam was mainly subjected to an axial force and an in-plane bending moment, and these were separated from each other so that the changing trend of the structural stressing state under the different internal forces could be studied. The axial force and the in-plane bending moment are direct manifestations of the structural stressing state, and they are plotted in Figure 11. It can be seen that the axial force first increased and then decreased, while the in-plane bending moment was present throughout. The maximum axial force and bending moment during the whole loading process were reached at load Q (about 1500 kN) and at the ultimate load (about −740 kN), respectively. Sudden deviations in the trends of the two curves can also be clearly identified before and after loads P and Q; hence, three structural stressing state stages were distinguished: the elastic stage, the plastic stage, and the failure stage. These changing characteristics appear in the Ej-Fj curve and once again verify the previous finding.
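The separation of axial force and in-plane bending moment from the sectional stress field follows N = ∫σ dA and M = ∫σ·y dA. A minimal numerical sketch is given below; the rectangular 200 × 400 mm section and the linear stress profile are illustrative assumptions, not the test beam's actual box section.

```python
# Sketch: separate axial force N and in-plane bending moment M from a
# sectional stress field via N = integral(sigma dA), M = integral(sigma*y dA).
# The 200 x 400 mm rectangular section and the stress profile are
# illustrative assumptions, not the test beam's actual box section.

def internal_forces(stress_at, width, height, n=1000):
    """Midpoint-rule integration over a rectangular section.
    y is measured from the centroid (mm); stress_at(y) in MPa.
    Returns (N in kN, M in kN*m)."""
    dy = height / n
    N = M = 0.0
    for i in range(n):
        y = -height / 2.0 + (i + 0.5) * dy
        dF = stress_at(y) * width * dy      # force on the strip, in N
        N += dF
        M += dF * y                         # moment contribution, N*mm
    return N / 1e3, M / 1e6

# Pure bending: a linear profile about the centroid gives N close to 0
N, M = internal_forces(lambda y: -0.1 * y, width=200.0, height=400.0)
```

For the test beam, the same integration applied to the reconstructed stress fields at each load step produces curves like those of Figure 11.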
Analysis of Different Truss-Type Steel-Reinforced Concrete Stressing State Patterns Based on GSED

With reference to the analysis method used for beam A3, the evolution of the structural stressing state of the other test beams under the same loading conditions was compared and analyzed in turn. Figure 12 shows the Ej-Fj curves of all the beams, and it is still evident that each curve can be divided into three stages over the entire loading process by its respective characteristic loads.
The size of the lower chord angle steel was an important factor affecting the strain-energy response of the members; however, comparing loads P for A1, A2, and A3, it had almost no effect on the elastic-plastic boundary point of the beams, indicating that the location of the mutation point may be related only to the material properties of the concrete. As for the second load Q, its value increased with the size of the lower chord angle steel. In other words, changing the size of the bottom chord angle steel affects the failure load of the member: the larger the size, the greater the bearing capacity. The vertical web spacing determines the number of vertical and diagonal webs in the pure bending section, and the number of webs affects the confinement of the concrete.
Comparing the three beams A1, A4, and A5 with different web member spacings, it can be found that their Q values were all 110 kN, but the values of load P decreased as the vertical web spacing decreased. This indicates that the spacing of the vertical web members in the pure bending section had little effect on the flexural bearing capacity of the truss-type steel-reinforced concrete box girder, but it affected the elastic phase of the concrete to some extent, for example by negatively affecting the crack resistance. Comparing A2 and A6, the only difference between them is whether there was oblique web angle steel in the pure bending section, which generally carries the shear in the specimen. In the Ej-Fj curves, the P values were both 50 kN, and the load Q of specimen A6 was 120 kN, slightly smaller than the 130 kN of specimen A2. Although there were some differences between the two beams, the influence of the oblique web members on the stressing state of the truss-type steel-reinforced concrete box girder under the same load was not obvious.
Analysis of Strains and Displacements of Different Steel-Reinforced Concrete Beams

To further study the influence of the steel frame design parameters on the performance of the beams under load, the following figure summarizes the changes in the relevant displacements and strains with load for the three groups of beams with different steel frame parameters. The displacement reflects the changing characteristics of the stressing state of the beam in another way. It can be seen in Figure 13 that the changing characteristics of the displacement were similar to those of the GSED, and the characteristic loads and stressing state stages could also be separated from the curves, which again verifies the accuracy and efficiency of the M-K method. However, there were still some differences between them around the ultimate loads, which reflect the different behavior of the beam types during the failure stage. From A1-A5 it can be seen that the maximum displacement increased with the web member spacing and with the decrease in the distance between vertical web members, but the beams had similar displacements at the characteristic loads. Comparing A2 and A6, it can also be found that the oblique web angle steel in the pure bending section could effectively improve the ductility and bearing capacity.
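The M-K criterion used throughout to locate the characteristic loads can be sketched as follows. This is a generic implementation of the forward UF statistic of the sequential Mann-Kendall test, assumed here to be the form applied to the Ej-Fj (and displacement) data; the paper's exact variant is not specified.

```python
# Sketch of the M-K criterion: forward UF statistic of the sequential
# Mann-Kendall test. Where UF departs sharply from its earlier level,
# the trend of the characteristic curve (e.g. Ej-Fj) has changed.
import math

def mk_uf(x):
    """Return the UF statistic sequence for series x (UF_1 = 0)."""
    uf = [0.0]
    s = 0
    for k in range(2, len(x) + 1):
        # accumulate the number of earlier values that x[k-1] exceeds
        s += sum(1 for j in range(k - 1) if x[k - 1] > x[j])
        e = k * (k - 1) / 4.0
        var = k * (k - 1) * (2 * k + 5) / 72.0
        uf.append((s - e) / math.sqrt(var))
    return uf

# A steadily rising series drives UF past the 95% bound (|UF| > 1.96)
uf = mk_uf(list(range(10)))
```

A jump in UF (or a crossing of the forward and backward statistics in the full test) at a given load step flags that step as a candidate characteristic load such as P or Q.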
Due to the lack of a qualitative analysis of structural failure, current structural design generally adopts measures such as the excessive use of materials to increase the safety factor, and the failure process of the structure cannot be accurately controlled. The theory of the structural stressing state is of great significance for judging damage to structures and can provide a reference for structural design. In structural health monitoring, it is necessary to track the whole process of the structural response, and the scientific evaluation of the structural bearing capacity is the focus of such monitoring; the present method can also serve as a basis for judging whether a structure can remain in normal use. In structural analysis, the critical point at which the structural response changes qualitatively can be evaluated scientifically with this method, which can be used as an important index for evaluating the mechanical properties of structures.
Based on the research idea of this paper, analyzing the whole working process of the structure and optimizing the seismic analysis model would help in judging the working state of the structure under different ground motion levels. Moreover, based on the theoretical method of this paper, expanding the test data helps in analyzing the overall performance of the structure in depth and in scientifically predicting the failure position, the degree of elastic-plastic deformation, and the degree of damage of the component.

Conclusions

In this study, six steel truss-reinforced concrete box beams with different parameters were tested. Based on the structural stressing state theory, the changing characteristics of the stressing state of the specimens during the entire loading process were analyzed, and the Generalized Strain Energy Density (GSED) parameter was introduced to describe the response of the structure. The application of the M-K criterion revealed the jump characteristics of the structural stressing state and redefined the failure load. The test data were extended by the NSF method to ensure the accuracy of the in-depth analysis of the structural test. Based on the NSF-interpolated data, the stress/strain fields of the section were analyzed, which further reflected the sudden change in the internal stressing state of the member before and after the characteristic loads and verified the rationality of the M-K criterion. In addition, the internal-force sub-modes of the stressing state showed the development trends of the axial compression and the in-plane bending moment. Finally, through a horizontal comparison, the influence of the different structural parameters on the characteristic loads of the specimens was shown. The results compensate for the limited test data, improve the accuracy of the analysis, and provide a new method for test data processing and analysis.
The characteristic loads judged according to the M-K criterion reflect inherent characteristics of the structural working process. The failure load was accurately determined, which can provide a technical reference for the design of truss-type steel-reinforced concrete in the future. In future work, I will continue to compare the structural performance of steel-concrete composite truss box girders and conventional structures through experiments and numerical simulation, and analyze in depth the performance advantages of steel-reinforced concrete composite truss structures over ordinary structures. To meet seismic requirements, we will further explore the ductility and provide an in-depth analysis of the seismic performance of steel-concrete composite truss structures under earthquakes (structural damage resistance, deformation capacity, energy dissipation capacity, etc.) by means of tests and numerical simulation. This test adopted simply supported members, with the angle steel connected by welding. If conditions permit, the mechanical properties under other connection modes (such as bolted connections) and under other boundary conditions will be explored.
Method Research of Earthquake Prediction and Volcano Prediction in Italy

This paper adopts the earthquake catalogue of the European-Mediterranean Seismological Centre (EMSC). In accordance with the principles of the Seismo-Geothermics theory and the concept of the seismic cone, it discusses the completeness of the earthquake catalogue and gives an overview of the Mediterranean seismic cones; it focuses on the structural details and features of the Italian branch of the Mediterranean seismic cone; it deduces the precursory process of subcrustal earthquake activity before two earthquakes of magnitude over 6 and before the eruptions of Etna volcano since 2005; it then summarizes the Seismo-Geothermics working method for estimating the general shell strength, the general period, and the rough location of future earthquake or volcano activity; and finally it discusses and explains some possible problems. The principle and working process of this method were tested in the author's prediction card No. 0419 of 2012 and can be applied to predict intracrustal strong earthquakes and volcano activity within the twenty-four global seismic cones. The purpose of this paper is to develop tools and methods for the prediction of future earthquakes and volcanoes.
Introduction

Earthquake prediction and volcano prediction are scientific problems recognized worldwide. Although seismic scientists around the world have developed various prediction theories and methods, there is still a long way to go, as tough problems remain to be solved. According to the principles of the Seismo-Geothermics theory and the concept of the seismic cone [1] [2], previously called the seismic cylinder or seismic mantle plume [3]-[5], the author has put forward a set of Seismo-Geothermics earthquake prediction theories and methods, scattered across the science net blog "Chen Lijun's blog". Using this method, on April 19, 2012 the author presented to the relevant government departments a prediction card of global strong crustal earthquakes (above magnitude 7 on coasts and above magnitude 6.5 or 6 inland) and volcanoes within the following three years, and has found some relatively good results through a three-year test [6] [7]. In this paper, using the European-Mediterranean Seismological Centre (EMSC) earthquake catalogue and taking the Italian seismic cone as an example, the author studies the spatial distribution image of the Italian seismic cone, the earthquake focal depth sequence diagram, the deduction of intracrustal strong earthquake prediction, and the relationship between the monthly frequency of seismic activity in the pillar and volcano eruptions, and gives a clear summary and explanation of the prediction method, hoping to contribute to research on methods for predicting future earthquakes and volcanoes.

The Seismic Activity in the Mediterranean Region since 2004

EMSC (European-Mediterranean Seismological Centre), founded in 1975, has provided an earthquake catalogue of the Mediterranean and surrounding areas to the public since October 2004. The distribution of all earthquakes of magnitude 2 and above obtained from the EMSC catalogue is shown in Figure 1.
Figure 1(a) shows the earthquake distribution area. The seismic surface distribution mainly accords with the Mediterranean giant latitudinal tectonic belt advocated by Li Siguang [8] [9], as does the distribution of intracrustal strong earthquakes of magnitude above 6. Each seismic branch is also a completely independent cone with its own physical and structural attributes, just like the other large seismic cones. Therefore, this paper focuses on the Italian branch, hereinafter referred to as the Italian Seismic Cone, in order to present the ideas and methods of earthquake prediction.

Study on the Italian Seismic Cone

The influence area of the Italian Seismic Cone is 3°E-18°E, 30°N-50°N.

The Spatial Distribution Image of the Italian Seismic Cone

In Figure 1(b), the relationship between the distribution of intracrustal strong earthquakes and volcano activity and the distribution of subcrustal earthquakes is clear. Figure 2 shows the distribution of the earthquakes of the Italian cone in Figure 1(c) in longitudinal and latitudinal profile images, with the earthquakes colored by depth so that the structure can be seen at a glance. Figure 2 shows the real features of the Italian Seismic Cone. In Figure 2(a), seen from south to north, the cone is funnel-shaped at a depth of 100 km, fully erect within the depth range of 100-300 km, and skirt-shaped at depths of more than 300 km. In Figure 2(b), seen from east to west, the image of the cone is similar to Figure 2(a) but slopes slightly northward within the depth range of 250-300 km. The distribution of earthquakes below 350 km depth is scattered, which may be caused by earthquake location error. There is evidence that the Italian Seismic Cone is an anomalous body of high P-wave velocity [3]. It can be inferred from the evolution of the surrounding volcano activity that the anomalous body is declining and moving slowly toward the southeast, which in any case would cause the deep cone to move westward and northward.
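Selecting the events that belong to the cone from a catalogue is a simple spatial filter. The sketch below uses the 3°E-18°E, 30°N-50°N influence area given in the text; the sample events are invented for illustration.

```python
# Sketch: extract Italian Seismic Cone events from a catalogue of
# (lon, lat, depth_km, magnitude) tuples using the influence area
# 3-18 E, 30-50 N given in the text. The sample events are invented.

def in_cone(event, lon=(3.0, 18.0), lat=(30.0, 50.0)):
    x, y, _, _ = event
    return lon[0] <= x <= lon[1] and lat[0] <= y <= lat[1]

catalogue = [
    (15.0, 37.7, 120.0, 4.1),   # subcrustal event near Etna
    (13.4, 42.3, 8.0, 6.3),     # shallow event, L'Aquila region
    (25.0, 35.0, 60.0, 5.0),    # Aegean event, outside the box
]
cone_events = [ev for ev in catalogue if in_cone(ev)]
# depth-sorted view, as used to draw the cone's profile images
profile = sorted(cone_events, key=lambda ev: ev[2])
```

Plotting the filtered events in longitude-depth and latitude-depth planes reproduces profile images of the kind shown in Figure 2.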
Figure 2 also shows a special phenomenon: there is an obvious lack of seismic activity below Etna volcano (37.73°N, 15°E) within the 50-150 km depth range, which is likely to be the location of the volcano's lava sac. Etna erupted on August 25, 2010 with a maximum eruption index VEI = 3, and then erupted again from February 19, 2013 to February 27, 2014.

The Earthquake Focal Depth Sequence Diagram in the Italian Cone

The earthquake focal depth sequence diagram of the Italian cone is shown in Figure 3. The two panels of Figure 3 were formed on the basis of the ANSS global earthquake catalogue and the EMSC Mediterranean earthquake catalogue, respectively. In spite of the inconsistent time spans, the two panels show the characteristic of deep seismic activity in cones "driving bottom-up, layer by layer" (wide dashed lines in the figures), because the heat energy released by deep earthquakes cannot be dissipated but is passed up layer by layer. According to this characteristic, one can look for precursory signs of intracrustal strong earthquakes in subcrustal earthquake activity, and one can also estimate the intensity of future activity of a seismic cone from its historical seismic activity.

Deduction of Intracrustal Strong Earthquake Prediction in the Italian Cone

In accordance with the principle mentioned above, the author tried to deduce the precursory process of the two earthquakes of magnitude 6 and above in Italy since 2009, as shown in Figure 4. Figure 4(a) and Figure 4(b) depict the earthquake of magnitude 6.3 on April 6, 2009 in L'Aquila [10]. Both panels show similar seismic activity, while the subcrustal earthquake activity under L'Aquila is dense, which may indicate an imminent strong earthquake. The earthquake activity under the lower crust, at depths of more than 20 km, selected by the same method appears to be more obvious, although the disturbance increases correspondingly (image omitted). This is a lesson from earthquake prediction in Italy [11].
The Relationship between the Monthly Frequency of Seismic Activity in the Pillar of the Italian Seismic Cone and Volcano Eruptions

The pillar of the Italian Seismic Cone is taken as 12°-18°E, 36°-42°N. The relation between the monthly frequency of seismicity in the cone and volcano eruptions is shown in Figure 5. In Figure 5, the black solid line is the total frequency (N1) of the pillar body, and the thin line is the subcrustal seismic frequency (N2) below 35 km depth. The general trends of the two curves agree. The thick bars mark the volcano eruption times and eruption indices (VEImax, see Table 1). The gray union sets of the black solid line and the thin line are the forerunners of volcano eruptions [12] [13]. From Figure 5 and Table 1, before each eruption of Etna the monthly frequency falls slightly after rising, and the falling time is relatively short, from 3 months to 1 month. In Figure 5, before the last eruption there was unusually strong seismic activity in the cone, and the eruption lasted much longer. According to the website http://www.sxdaily.com.cn/, the last eruption of Etna started on February 19, 2013, while Table 1 shows that the last eruption began on September 3, 2013 and ended on February 27, 2014, with its VEI (Volcano Eruption Index) undetermined.

Working Method of Earthquake and Volcano Prediction

The core idea of the earthquake and volcano prediction method in Seismo-Geothermics is mainly to observe cone activity, combined with research on the structural system theory put forward by Li Siguang.

Study on the Process of Defining Seismic Cones

The author divided global deep earthquakes into twenty-four seismic cones, of which the No. 19 and No. 20 seismic cones have already been confirmed by different earthquake catalogues. Every seismic cone has certain physical and structural properties [3].
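The monthly frequency curves N1 and N2 of Figure 5, described above, can be computed directly from the catalogue. A minimal sketch follows, using the pillar bounds 12°-18°E, 36°-42°N and the 35 km subcrustal threshold from the text; the sample event tuples are invented.

```python
# Sketch: monthly total frequency N1 and subcrustal frequency N2
# (depth > 35 km) for the pillar 12-18 E, 36-42 N. Events are tuples
# (year, month, lon, lat, depth_km); the sample values are invented.
from collections import Counter

def monthly_frequency(events, depth_min=None):
    counts = Counter()
    for year, month, lon, lat, depth in events:
        if not (12.0 <= lon <= 18.0 and 36.0 <= lat <= 42.0):
            continue                      # outside the pillar
        if depth_min is not None and depth <= depth_min:
            continue                      # keep subcrustal events only
        counts[(year, month)] += 1
    return counts

events = [
    (2013, 1, 15.0, 37.7, 150.0),
    (2013, 1, 14.2, 38.1, 10.0),
    (2013, 2, 15.1, 37.6, 80.0),
    (2013, 2, 2.0, 45.0, 200.0),   # outside the pillar, ignored
]
n1 = monthly_frequency(events)                  # total, N1
n2 = monthly_frequency(events, depth_min=35.0)  # subcrustal, N2
```

Plotting the two count series against time, together with eruption dates, gives curves of the kind compared in Figure 5.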
Study on the Activity of Seismic Cones

The activity intensity of a seismic cone is estimated from the activity rhythm of its historical earthquakes up to the last date in Figure 3, and the maximum activity intensity refers to the maximum magnitude of the historical earthquakes plus 0.5. The general period of cone activity is estimated from the "driving bottom-up, layer by layer" rhythm in Figure 3; the lower-limit earthquake in Figure 3 must clearly reflect this rhythm [14] [15]. The rough location of seismic cone activity is estimated from the areas of intensive subcrustal seismic activity in Figure 4 or from the lower-crust seismic activity. The volcano eruption time is estimated from the monthly frequency curve of the pillar body in the seismic cone. Information about the seismic cone layout, three-dimensional diagrams, sequence diagrams, Benioff section microtomy, and other methods is detailed in "Chen Lijun's blog" and will not be explained in this paper [16]-[18].

Study on the Intracrustal Structure System

The method in this paper takes the study of the intracrustal structure system as evidence for testing the subcrustal seismic activity areas. For intracrustal strong earthquakes, the structure system is the key to the occurrence of strong earthquakes in the areas of intensive subcrustal seismic activity [5] [19].

Study on the Prediction of Volcanoes

Intracrustal strong earthquakes and volcano eruptions are like twins, meaning there is a close relationship between them. If the intracrustal strong earthquake is stronger, the strength of the volcano eruption may be weakened, and vice versa. Therefore, the author has previously researched the relationship between earthquakes and volcanoes [20].
Most seismic cones are upright pillars; a few are inclined, pointing to a site on the ground. The significant difference between earthquake and volcano prediction is that active volcanoes lie close to this site (except mud volcanoes), whereas most intracrustal strong earthquakes deviate from it, even reaching the edge of the cone-affected area. If the lava-sac location can be found, as was done in Italy, then research can focus on the law relating seismic cone activity to volcano eruption.

Following the methods above, the author submitted a medium-term prediction card for global strong earthquakes and volcanoes on April 19, 2012, namely card No. 0419, which is on file with the relevant government departments [6]. The author has tested it over the past three years [7]. The results show that nearly all earthquakes of magnitude 6.5 and above and volcano eruptions worldwide in that period occurred within the given prediction circles or their surroundings, or at least within the cones the author estimated to be active.

Discussion and Conclusion

This paper has introduced the prediction principle and working method of Seismo-Geothermics. Some possible questions are discussed below.

Energy Source of Intracrustal Strong Earthquakes

It has long been believed that seismic energy derives mainly from the strain energy accumulated by crustal tectonic movement; however, more than 95% of intracrustal strong earthquakes and more than 85% of active volcanoes worldwide occur within the twenty-four seismic cones [5], suggesting a close relationship between intracrustal strong earthquakes and deep seismic activity. One may conclude that the energy of intracrustal strong earthquakes comes mainly from a deep energy supply, with the strain energy of crustal tectonic movement acting as a complement and reinforcement. An intracrustal strong earthquake may therefore occur suddenly, much like a volcano eruption.
The Vital Relationship between Seismic Cone Activity and Human Beings

The relationship between earthquakes and faults, as well as other surface structures, is well known; according to Seismo-Geothermics, however, a fault may be either the cause or merely the result of seismic activity. Since 1900, almost all global earthquakes that killed more than 1000 people have occurred within the twenty-four cones (Figure 6), which suggests that it is the seismic cone, rather than the surface structures covering the Earth, that bears a vital relationship to human beings. The seismic cones are the places where intense earthquakes have occurred, which can be seen clearly in the three-dimensional spatial distribution of seismicity built from the earthquake catalogue.

Study on Volcano Prediction

Card No. 0419 made a good first attempt at volcano prediction: the Santiaguito volcano in Guatemala, the Tongariro volcano in New Zealand and the Tungurahua volcano in Ecuador all erupted within the volcano circle regions predicted by the author [6]. Most of the twenty-four global seismic cones have a history of volcano eruption. Present global volcano forecasting focuses only on the observation and study of surface microseismicity, which is not comprehensive. The eruption history of a volcano and the regularity and influence of deep seismic activity in the seismic cone where it is located are two key elements that must be studied together.
Focal Depth

Precision and accuracy are two distinct factors in determining focal depth. The images of the twenty-four global seismic cones divided on the basis of the ANSS catalogue, and of the Mediterranean seismic cone based on the EMSC catalogue, are stable over the long term, and the results from the two catalogues are consistent, which is enough to demonstrate the precision of the depth determinations in both. The accuracy of depth determination is, of course, often in doubt. However, the prediction method in this paper is concerned mainly with differences in focal depth rather than absolute depth, and is therefore little affected by accuracy.

The Proportion of Earthquake Sizes

This research has not yet adopted a particular statistical method, but the proportion of earthquake sizes is calculated as exactly as possible, and the lower limit of earthquake magnitude is defined consistently within each seismic cone considered.

The Earthquake Catalogue

The ANSS catalogue, the EMSC catalogue and many other global earthquake catalogues are suitable. No strict requirement is placed on catalogue duration: given a thorough study of the global seismic cones, seismic cone activity can be estimated from a catalogue with a history of ten years or longer.

Conclusion

Intracrustal strong earthquakes and volcanic activity have brought great disasters to human beings. The lesson of the convictions that followed the earthquake misjudgment in Italy, and the criticism of earthquake prediction by society and the public, make earthquake prediction with social benefit an urgent task that brooks no delay. The Mediterranean region has world-class seismological observation data, and the opportunities for earthquake prediction research in Italy, Greece and Turkey may bring new hope to humanity.
In summary, this paper adopts the earthquake catalogue of the European-Mediterranean Seismological Centre (EMSC) and, in accordance with the principles of the Seismo-Geothermics theory and the concept of the seismic cone, discusses the completeness of the catalogue and gives an overview of the Mediterranean seismic cones; it focuses on the structural details and features of the Italian branch of the Mediterranean seismic cone; it traces the precursory subcrustal earthquake activity before two earthquakes of magnitude over 6 and before the eruptions of Etna since 2005; it then summarizes the Seismo-Geothermics working method for estimating the general strength, general period and rough location of future earthquake and volcanic activity; and finally it discusses some open problems. The principle and working process of this method were tested with the author's prediction card No. 0419 in 2012, and can be applied to the prediction of intracrustal strong earthquakes and volcanic activity within the global twenty-four seismic cones. The purpose of this paper is to develop tools and methods for the prediction of future earthquakes and volcanoes.

The ANSS catalog was accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC; the EMSC catalog was accessed through the European-Mediterranean Seismological Centre; and the GVP data was accessed through the Smithsonian Institution web site.

Figure 1(b) shows the focal depth distribution of earthquakes 50 km deep and deeper, which differs greatly from the distribution of surface seismicity and thus motivates the division into seismic cones. Based on the principle of dividing seismic cones, the deep seismic activity is divided into the Mediterranean seismic cone No. 19 and the West Mediterranean seismic cone No. 20. No. 19 is subdivided into the 191 Italian branch, the 192 Turkey branch and the 193 Iran branch (No. 193 is only partly shown in the figure).
Figure 1(c) shows the three-dimensional distribution of all earthquakes in Figure 1(b). Two upright seismic cones reaching 690 km depth can be seen in the Mediterranean region, located under Sicily, Italy and under the Aegean Sea respectively; there are also some small unnamed cones.

Figure 4(c) and Figure 4(d) relate to the magnitude 6.1 earthquake of May 20, 2012 near Parma. In Figure 4(d), the subcrustal seismic activity in the Parma area since January 2012 is clearly stronger than in Figure 4(c), which may indicate an impending strong earthquake.

Figure 5. The relationship between the monthly frequency of earthquake activity and volcanic activity in the Italian cone (EMSC earthquake catalog, M ≥ 2.0, 2004.10.1-2014.4.26).

Figure 6. The distribution of deadly earthquakes worldwide that have killed more than 1000 people since 1900.
Evidence of strong antiferromagnetic coupling between localized and itinerant electrons in ferromagnetic Sr2FeMoO6 Magnetic dc susceptibility ($\chi$) and electron spin resonance (ESR) measurements in the paramagnetic regime are presented. We found a Curie-Weiss (CW) behavior for $\chi(T)$ with a ferromagnetic $\Theta = 446(5)$ K and $\mu_{eff} = 4.72(9) \mu_{B}$/f.u., this being lower than that expected for either $Fe^{3+}$ ($5.9\mu_{B}$) or $Fe^{2+}$ ($4.9\mu_{B}$) ions. The ESR g-factor, $g = 2.01(2)$, is associated with $Fe^{3+}$. We obtained an excellent description of the experiments in terms of two interacting sublattices: the localized $Fe^{3+}$ ($3d^{5}$) cores and the delocalized electrons. The coupled equations were solved in a mean-field approximation, assuming for the itinerant electrons a bare susceptibility independent of $T$. We obtained $\chi_{e}^{0} = 3.7 \times 10^{-4}$ emu/mol. We show that the reduction of $\mu_{eff}$ for $Fe^{3+}$ arises from the strong antiferromagnetic (AFM) interaction between the two sublattices. At variance with classical ferrimagnets, we found that $\Theta$ is ferromagnetic. Within the same model, we show that the ESR spectrum can be described by Bloch-Hasegawa type equations. Bottleneck is evidenced by the absence of a $g$-shift. Surprisingly, as observed in CMR manganites, no narrowing effect of the ESR linewidth is detected in spite of the presence of the strong magnetic coupling. These results provide evidence that the magnetic order in $Sr_{2}FeMoO_{6}$ does not originate in superexchange interactions, but in a novel mechanism recently proposed for double perovskites.

The double perovskite Sr$_2$FeMoO$_6$ is known as a conducting ferromagnet (or ferrimagnet) with a relatively high transition temperature, $T_c > 400$ K, and is magnetoresistant at room temperature.$^1$ The structure of Sr$_2$FeMoO$_6$ is built of perovskite blocks where the transition-metal sites are alternately occupied by Fe and Mo ions.
In the simplest ionic picture, the Fe$^{3+}$ ($3d^5$, $S = 5/2$) ions were assumed to be antiferromagnetically (AFM) coupled to their six Mo$^{5+}$ ($4d^1$, $S = 1/2$) nearest neighbors, leading to a total saturation magnetization $M_S = 4\mu_B$/f.u. An alternative ionic description, giving the same $M_S$ value, assigned Fe$^{2+}$ ($3d^6$, $S = 2$) and Mo$^{6+}$ ($4d^0$), and assumed a ferromagnetic superexchange coupling between the Fe$^{2+}$ ions. This picture was only fairly consistent with the $\mu_{Fe}$ value indicated by neutron diffraction results.$^2$

Magnetic measurements in the paramagnetic (PM) phase should provide useful evidence for establishing the Fe and Mo valences in this compound and the possible interaction mechanisms. Niebieskikwiat et al.$^5$ found that the high-temperature magnetization $M$ displays a non-conventional behaviour, interpreted in terms of two contributions arising from localized ($\mu_{eff} = 6.7\mu_B$/f.u.) and itinerant electrons, respectively. Preliminary measurements$^6$ on samples obtained under different thermal treatments showed, for different applied fields, apparent $\mu_{eff}$ values varying from $5.9\mu_B$/f.u. to $4.5\mu_B$/f.u., and suggested that this behavior is not intrinsic but due to the existence of antisite (AS) defects and to the presence of a ferromagnetic (FM) impurity.

Electron spin resonance (ESR) experiments also help to understand the magnetic properties of these perovskites. The $g$ value carries information on the electronic structure of the ground state of the resonant ions, and the linewidth is an experimental probe of the spin dynamics. Niebieskikwiat et al.$^5$ observed a single ESR line whose intensity seemed to depart from a CW behavior, a result interpreted in terms of a progressive delocalization of the Fe$^{3+}$ electrons.

In this paper we present detailed dc magnetization and ESR measurements in the PM regime performed on a sample with an extremely low antisite defect concentration (AS $\cong$ 0.03 and $M_S = 3.7\mu_B$/f.u.)
and only a small amount of a FM impurity phase ($\leq 0.5\%$). This low AS value, determined by X-ray diffraction, was obtained by careful control of the synthesis conditions (thermal treatment at 1200 °C for 12 h in 5% H$_2$/Ar), as described in Ref. 3. Preparing samples free of Fe impurities is not an easy task, since the very stable SrMoO$_4$ phase readily forms above 800 K in the presence of even small O$_2$ traces in the processing or measuring atmosphere,$^7$ and thus severe reducing conditions are required.

We measured $M(T)$ vs $H$ for 300 K $\leq T \leq$ 1100 K and for $H \leq 12.5$ kG with a Faraday balance magnetometer. The measurements were made in air ($p < 1$ torr). In order to control reversibility, particularly in the high-temperature range, we increased $T$ in 20 K steps, repeating the measurements at 473 K after each step. Following this procedure we found irreversible changes for $T > 800$ K. We therefore considered reliable only the measurements in the range 300 K-800 K, and analyze these data. The ESR experiments were performed with a Bruker spectrometer operating at 9.5 GHz between 300 K and 600 K.

The high-field differential susceptibility $\chi(T)$, shown in Fig. 2, follows a CW law for $T > 500$ K and up to 800 K, the limit of the reversible behavior. The fast increase of $M_0(T)$ at low $T$ indicates the FM transition at $T_c = 400$ K, determined from an Arrott plot (see inset in Fig. 1). We note the presence of only a small ferromagnetic component well above $T_c$. This contribution is weakly $T$ dependent and varies between $M_0$(500 K) = 0.021 $\mu_B$/f.u. and $M_0$(800 K) = 0.013 $\mu_B$/f.u. This result is compatible with the presence of tiny amounts of Fe impurities, as observed by X-ray photoelectron spectroscopy in epitaxial films.

The ESR spectrum consists of a single line with Lorentzian shape and $g = 2.01(2)$ for $T \geq 430$ K, as described in a preliminary report.$^9$ Above 450 K the line broadens rapidly, and we show in Fig. 3 the peak-to-peak linewidth $\Delta H_{pp}(T)$.
The relative double-integrated intensity of the line, $I_{ESR}$, decreases with increasing temperature, as shown in the inset. Since the spectrum broadens rapidly with increasing $T$, it is important to separate the contribution of the impurities, observed at high temperatures as a $T$-independent secondary line.$^9$ In order to obtain accurate values of $\Delta H_{pp}(T)$ and $I_{ESR}(T)$ for the principal line, we separated the two contributions for all temperatures above 480 K by subtracting the impurity spectrum, which is almost temperature independent ($M_0$ varies by less than 5% between 480 K and 550 K) and is only visible at $T > 550$ K.

If both $\chi^0_e$ and $\chi^0_S$ were Curie-like, $\chi(T)$ would have a typical ferrimagnetic behavior. However, one of the coupled systems is delocalized and, therefore, its bare susceptibility $\chi^0_e$ is temperature independent. In this case the total susceptibility, for $\chi^0_S = C_S/T$ with $C_S = \mu^2_S N_A/3k_B$, may be written as

$\chi(T) = \chi^0_e + C'/(T - \Theta)$.   (5)

Here two terms can be identified. The first, temperature independent, is equal to $\chi^0_e$. The second one is CW-like, where the Curie constant is now $C' = C_S(1+\lambda\chi^0_e)^2$, renormalized because of the S-e coupling. The effective moment of the coupled system is then given by $\mu_{eff} = \mu_S|1+\lambda\chi^0_e|$. Therefore, a reduction of $\mu_{eff}$ is expected for $\lambda < 0$, due to the AFM coupling of the itinerant electrons to the localized Fe cores. The Curie-Weiss temperature in Eq. (5) is given by $\Theta = C_S(\lambda^2\chi^0_e + \alpha)$ and describes an effective interaction between the S moments mediated by the delocalized electrons. Independently of the sign of $\lambda$, and for small $\alpha$, it is always FM. In the double perovskite structure $\alpha$ would originate in superexchange interactions between second-neighbor Fe ions and is indeed expected to be small. The behaviour predicted by Eq. (5) is consistent with our experimental results, provided that $\chi^0_e$ is below the experimental resolution.
Based on the band structure calculations and the XAS results,$^4$ we can safely assume a $3d^5$ configuration for the localized Fe cores, and then $\mu_S = 5.9\mu_B$. From the measured $\mu_{eff} = 4.72\mu_B$ and $\Theta = 446$ K, we derive $\chi^0_e = 3.7 \times 10^{-4}$ emu/mol and $\lambda = -540$ mol/emu. The value obtained in this way for $\chi^0_e$ is thus fully compatible with our dc susceptibility measurements.

It is interesting to compare $\chi^0_e$ with available information in order to test its significance. The corrections to the Pauli susceptibility from Landau diamagnetism and the Stoner amplification parameter may be derived by comparison between measured and calculated values in other metallic perovskites, such as LaNiO$_3$.$^{12}$ By doing this we can estimate a density of states at the Fermi level, $N(\varepsilon_F) = 2.8$ states/eV-f.u., for Sr$_2$FeMoO$_6$. Interestingly enough, this value compares well with band structure calculations. With respect to $\lambda$, its negative value confirms the antiferromagnetic coupling between the Fe cores and the delocalized electrons, at variance with the double exchange (DE) mechanism, where localized and itinerant spins tend to be parallel. This result supports the novel, kinetically driven mechanism described by Sarma et al.$^4$ It should be emphasized that, unlike in typical ferrimagnets, the CW temperature is positive in spite of the AFM character of the interaction. Notice that $\Theta$, and consequently $T_c$, is proportional to $\chi^0_e$; within this picture a larger density of states at the Fermi level should promote a higher $T_c$.

We can now turn to the ESR results. We have found that $I_{ESR}(T)$ follows the same temperature behavior as $\chi(T)$ in the whole PM region (see inset of Fig. 3). This observation indicates that the same magnetic species contributes to the ESR spectrum and to the dc susceptibility. Our ESR spectrum should also shed light on the issue of the valence of the Fe ions.
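The extraction of $\chi^0_e$ and $\lambda$ from the measured $\mu_{eff}$ and $\Theta$ can be sketched numerically. This is a minimal check of the mean-field relations above, neglecting the small superexchange term $\alpha$ and assuming the CGS Curie constant $C_S = N_A \mu^2_S \mu^2_B/3k_B \approx 0.125\,\mu^2_S$ emu K/mol:

```python
# Mean-field relations (alpha neglected):
#   mu_eff = mu_S * (1 + lam * chi0_e)   -> fixes the product lam * chi0_e
#   Theta  = C_S * lam**2 * chi0_e       -> fixes lam**2 * chi0_e
MU_S = 5.9            # localized Fe3+ moment (mu_B)
MU_EFF = 4.72         # measured effective moment (mu_B)
THETA = 446.0         # measured Curie-Weiss temperature (K)
C_S = 0.125 * MU_S**2 # Curie constant (emu K/mol, CGS)

r = MU_EFF / MU_S     # 1 + lam * chi0_e
p1 = r - 1.0          # lam * chi0_e (dimensionless)
p2 = THETA / C_S      # lam**2 * chi0_e (mol/emu)
lam = p2 / p1         # coupling constant (mol/emu); negative -> AFM
chi0_e = p1 / lam     # itinerant bare susceptibility (emu/mol)

print(f"lambda = {lam:.0f} mol/emu, chi0_e = {chi0_e:.2e} emu/mol")
```

This reproduces the order of magnitude quoted above ($\lambda \approx -5 \times 10^2$ mol/emu, $\chi^0_e \approx 4 \times 10^{-4}$ emu/mol); the small deviation from the quoted $-540$ mol/emu presumably reflects the exact physical constants used by the authors.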
The measured gyromagnetic factor, $g = 2.01(1)$, is $T$ independent and may be identified with the spin-only ground state of Fe$^{3+}$ ions. This resonance corresponds, in the band picture, to the localized $3d^5$ Fe cores. The observed $g$ value discards the possibility of assigning the resonance to localized Fe$^{2+}$ ($g \cong 3.4$). Our dc susceptibility measurement indicates a strong coupling between these localized Fe cores and the itinerant electrons. The influence of this coupling on the spin dynamics is described by Bloch-Hasegawa (BH) type equations,$^{13}$ which may be written schematically as

$dM_e/dt = \gamma_e M_e \times H^{eff}_e - \delta M_e/T_{eL} - \delta M_e/T_{eS} + \delta M_S/T_{Se}$,   (6)

$dM_S/dt = \gamma_S M_S \times H^{eff}_S - \delta M_S/T_{SL} - \delta M_S/T_{Se} + \delta M_e/T_{eS}$,   (7)

where $H^{eff}_e$ and $H^{eff}_S$, defined by Eqs. (1) and (2), are now the instantaneous effective fields (including the rf field), and $\delta M$ denotes the deviation of each magnetization from its instantaneous equilibrium value. Here $1/T_{eL}$ and $1/T_{SL}$ are the spin-lattice relaxation rates for delocalized and localized spins, respectively, and $1/T_{Se}$, $1/T_{eS}$ are the cross-relaxation rates. In our case the values of $g_S$ and $g_e$ are both expected to be very close to $g \cong 2$.

The solutions of Eqs. (6) and (7) present two well-differentiated regimes, bottlenecked and non-bottlenecked, associated with the relative importance of the coupling between the equations.$^{13}$ In the non-bottlenecked case both systems tend to respond independently and two resonances should be observed, with $g$-shifts related to the corresponding effective fields. In the bottleneck limit the strong coupling of Eqs. (6) and (7) results in a single resonance line corresponding to the response to the rf field of the weighted sum $M = M_e/g_e + M_S/g_S$ (symmetric mode). The other solution of the BH equations (antisymmetric mode) has no coupling to the rf field and, therefore, does not contribute to the resonance spectrum. The symmetric mode has an effective $g$ value

$g = [g_e \chi_e(T) + g_S \chi_S(T)]/\chi(T)$,   (8)

where $\chi_S(T)$ and $\chi_e(T)$ were defined in Eqs. (3) and (4), respectively, and the linewidth is given by the correspondingly weighted average of the individual linewidths,

$\Delta H_{pp}(T) = [\chi^0_e \Delta H_e + \chi^0_S(T) \Delta H_S]/\chi(T)$.   (9)

In our case, where $g_S \cong g_e$, $M$ is the total magnetization of the system and a temperature-independent $g \cong 2$ is obtained from Eq. (8).
Since $\chi^0_S(T) = C_S/T \gg \chi^0_e$ in the whole $T$ range of our experiment, the linewidth may be approximated by $\Delta H_{pp}(T) \cong [C_S/(T\chi(T))]\Delta H^{\infty}_{pp}$, where $\Delta H^{\infty}_{pp}$, the linewidth in the high-$T$ limit, is dominated by the relaxation rate of the localized cores, $1/T_{SL}$. We obtained a good fit of the data using a temperature-independent $\Delta H^{\infty}_{pp} = 14(1)$ kG, as seen in Fig. 3. The relaxation rate for strongly localized interacting spins results from the balance between broadening (dipolar, antisymmetric exchange, crystal field), $\omega_a$, and narrowing (isotropic exchange), $\omega_e$, interactions between the Fe cores: $1/T_{SL} \propto \omega^2_a/\omega_e$. The value obtained here for $\Delta H_{pp}$ is very large compared with those found in other Fe$^{3+}$ oxides$^{10,14}$ with ordering temperatures around 200 K-750 K, where $\Delta H^{\infty}_{pp}$ varies between 0.5 kG and 1.7 kG. Since we do not expect large variations of $\omega_a$ in perovskite oxides, we assume that the larger linewidth must be due to a smaller degree of exchange narrowing (small $\alpha$). In CMR manganites, where the conventional double exchange mechanism is responsible for strong FM interactions, a similar behavior was observed: the increase in the ordering temperature is not accompanied by an enhancement of the exchange narrowing of the ESR line.$^{15}$ In the case of Sr$_2$FeMoO$_6$ the reason for this behavior can be rationalized in terms of the BH equations. The ordering temperature is determined by the combined effect of the S-e coupling ($\lambda$) and the S-S superexchange ($\alpha$); the narrowing of the linewidth, instead, depends only on $\alpha$. Taking into account that in the Fe compounds referred to before$^{10,14}$ the Fe$^{3+}$ ions are nearest neighbors, it is not surprising that the narrowing effect in Sr$_2$FeMoO$_6$ is smaller, because the Fe$^{3+}$ ions are second neighbors in this case.
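The linewidth model above can be sketched by evaluating $\Delta H_{pp}(T) \cong [C_S/(T\chi(T))]\Delta H^{\infty}_{pp}$ with $\chi(T)$ taken from the Curie-Weiss form of Eq. (5). The parameter values are those quoted in the text; treating $\chi^0_e = 3.7 \times 10^{-4}$ emu/mol and the renormalized Curie constant as fixed inputs is an assumption of this sketch:

```python
MU_S = 5.9                  # Fe3+ localized moment (mu_B)
C_S = 0.125 * MU_S**2       # Curie constant (emu K/mol, CGS)
CHI0_E = 3.7e-4             # itinerant bare susceptibility (emu/mol)
LAM_CHI = 4.72 / 5.9 - 1.0  # lam * chi0_e, from mu_eff / mu_S
C_PRIME = C_S * (1.0 + LAM_CHI)**2  # renormalized Curie constant
THETA = 446.0               # Curie-Weiss temperature (K)
DH_INF = 14.0               # high-T linewidth limit (kG)

def chi(T):
    """Total susceptibility of Eq. (5), in emu/mol (valid for T > Theta)."""
    return CHI0_E + C_PRIME / (T - THETA)

def dh_pp(T):
    """Peak-to-peak linewidth (kG) in the bottleneck regime."""
    return C_S / (T * chi(T)) * DH_INF

for T in (500.0, 600.0, 800.0):
    print(f"T = {T:.0f} K: dH_pp ~ {dh_pp(T):.1f} kG")
```

In this model the linewidth grows steeply with temperature above $T_c$ (from roughly 2 kG near 500 K toward the 14 kG limit), consistent with the rapid broadening reported in Fig. 3.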
In summary, we have obtained an excellent and consistent description of the experimental dc susceptibility and ESR results in terms of a system of two coupled equations for the localized Fe$^{3+}$ cores (indicated by $g \cong 2$) and the itinerant, delocalized electrons. Within the same picture we have described the spin dynamics of the strongly coupled system, extending the use of the Bloch-Hasegawa equations to magnetically concentrated materials, where $\chi^0_S(T) \gg \chi^0_e$ even at high temperatures. In this case the effective relaxation rate of the coupled system is dominated by $1/T_{SL}$, the relaxation rate of the localized spins. The absence of narrowing effects associated with the high $T_c$ is consistent with the fact that the dominant mechanism for magnetic ordering in Sr$_2$FeMoO$_6$ is a process where the FM coupling between Fe ions is mediated by the mobile electrons.

We thank Dr. B. Alascio for helpful comments. This work was partially supported by CONICET Argentina (PIP 4749), ANPCyT Argentina (PICT 03-05266), and other projects.
The Glass Ceiling of Automatic Evaluation in Natural Language Generation

Automatic evaluation metrics capable of replacing human judgments are critical to allowing fast development of new methods. Thus, numerous research efforts have focused on crafting such metrics. In this work, we take a step back and analyze recent progress by comparing the body of existing automatic metrics and human metrics altogether. As metrics are used based on how they rank systems, we compare metrics in the space of system rankings. Our extensive statistical analysis reveals surprising findings: automatic metrics, old and new, are much more similar to each other than to humans. Automatic metrics are not complementary and rank systems similarly. Strikingly, human metrics predict each other much better than the combination of all automatic metrics used to predict a human metric. This is surprising because human metrics are often designed to be independent, to capture different aspects of quality, e.g. content fidelity or readability. We provide a discussion of these findings and recommendations for future work in the field of evaluation.

Introduction

Crafting automatic evaluation metrics (AEM) able to replace human judgments is critical to guide progress in natural language generation (NLG), as such automatic metrics allow for cheap, fast, and large-scale development of new ideas. The NLG fields are thus heavily influenced by the set of AEM used to decide which systems are valuable. Therefore, a large body of work has focused on improving the ability of AEM to predict human judgments. Human judgment data is typically employed to decide which metric to select, based on correlation analysis with human annotations (Owczarzak et al., 2012; Graham, 2015). In this work, we take a step back and investigate the relationship between existing AEM and human judgments globally. We do not make metric recommendations but reflect upon the global progress in the field of automatic evaluation.
Our work is motivated by the findings of Fig. 1, which depicts the improvement over time, as new metrics were introduced, in the ability to fit human judgments when using all existing metrics as features. The fit is measured by the correlation with humans of a trained classifier in a 5-fold cross-validation setup. Surprisingly, we observe small marginal improvement and little progress over the years.

Recent works emphasized the importance of viewing metrics in terms of how they rank systems instead of just comparing score values (Novikova et al., 2018; Peyrard et al., 2021; Colombo et al., 2022a). Indeed, not only is ranking a more robust framework of comparison, it is also more aligned with the way metrics are used: identifying and extracting the "best system". Thus, we perform our analysis in the space of rankings, i.e., how do metrics rank systems? By analyzing 9 datasets covering 4 tasks and 270k scores, we made the following observations.

Findings. (i) Automatic metrics are much more similar to each other, in terms of how they rank systems, than they are to human metrics. This means that AEM, even the more recent transformer-based ones, behave in practice like the older ones (ROUGE and BLEU). (ii) This lack of complementarity results in the inability to fit human judgments even when all these metrics are taken together as features for a classifier predicting humans. (iii) Quite surprisingly, different human dimensions (different annotation guidelines such as readability or content fidelity) are very predictive of each other, whereas AEM are much less predictive of humans. This finding is striking because human metrics are designed to capture different and independent aspects of quality, whereas AEM have been selected precisely for their ability to match humans. We would expect human metrics to be uncorrelated and automatic metrics to be highly correlated with humans, but we observe the opposite.
First, it casts serious doubt on the ability of AEM to replace human judgments. Second, the correlation between independent human annotations of quality hints at some latent inherent goodness of systems: good systems are good in every aspect, whereas bad systems are bad across all aspects.

Our findings have several consequences that can inform future research. Newly introduced metrics are not complementary to previous ones, resulting in small global improvements. As a way forward, we propose that research, instead of crafting metrics that maximize correlation with humans, focus on making metrics that also aim to be explicitly complementary to the set of existing metrics. This would enforce maximal marginal gain and ensure that the field, as a whole, makes progress towards capturing the complexity of human annotations. For practitioners, it is common practice to report several AEM in the hope of getting a better view of system performance. However, reporting several metrics that all produce similar rankings does not bring useful additional information. With our proposal, reporting a set of complementary metrics would better serve the intended purpose. To help research build upon our work and use our measure of complementarity, we make our code available at github.

Methodology

Terminology. Let X be the space of possible outputs for an NLG task. An NLG metric is a function m : X × X → R+ which, from a given textual candidate C ∈ X and corresponding reference R ∈ X, computes a score m(C, R) reflecting the properties that C should satisfy (e.g. fluency, fidelity, ...). Of course, it is illusory to summarize subtle semantic properties in a single scalar, and one rather seeks metrics that are able to discriminate between different systems. In fact, crafted AEM are evaluated by comparison to human judgments: one usually computes rank correlations such as Kendall's τ, with higher correlation indicating that the AEM is a better replacement for the human metric.
Encoding metrics with rankings. Since the usage of NLG metrics is to rank systems, we choose to represent an NLG metric, automatic or human, by the ranking it induces on a set of systems or of utterances. More formally, for N ≥ 1 NLG systems evaluated on a dataset made of K ≥ 1 utterances, there exist natural ranking representations of m. Each utterance k ∈ {1, . . . , K} induces a ranking σ^m_k ∈ R^N of the N systems, seen as a vector where σ^m_k(S) is the rank of system S ∈ {1, . . . , N}. The system-level representation of a metric m, noted σ^{m,S}, is the sum of these rankings over the utterances:

σ^{m,S} = Σ_{k=1}^{K} σ^m_k.

Symmetrically, each system n ∈ {1, . . . , N} induces a ranking σ^m_n ∈ R^K of the K utterances, where σ^m_n(k) is the rank of utterance k. The utterance-level representation of m is the sum of these rankings over the systems:

σ^{m,U} = Σ_{n=1}^{N} σ^m_n.

Using the space of rankings has been shown to be more robust than using raw scores, as it is less sensitive to outliers and statistical variations (Novikova et al., 2017; Peyrard et al., 2021; Colombo et al., 2022a). Furthermore, this representation is closely tied to Borda counts, which enjoy theoretical properties: the ranking induced by σ^{m,S} is a 5-approximation of the Kemeny consensus, a good notion of average in the symmetric group (Kemeny, 1959; Young and Levenglick, 1978; Coppersmith et al., 2006). It is moreover the fastest approximation of the Kemeny consensus, whose exact computation is NP-hard (Ali and Meilȃ, 2012).

Complementarity. We measure the complementarity between two metrics, human or automatic, by the average over utterances of the distance between their rankings of systems. Formally, for two metrics m_0 and m_1, complementarity is given by:

C(m_0, m_1) = (1/K) Σ_{k=1}^{K} d_τ(σ^{m_0}_k, σ^{m_1}_k),   (3)

where d_τ is the normalized Kendall distance between the vectors of ranks.
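The system-level representation described above can be sketched in a few lines. This is an illustrative toy example, not the authors' released code; the convention that rank 1 denotes the highest-scoring system is an assumption:

```python
from typing import List

def rank_vector(scores: List[float]) -> List[int]:
    """Rank of each item under a metric (1 = highest score)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def system_level_representation(score_matrix: List[List[float]]) -> List[int]:
    """Sum over utterances of the per-utterance system rankings
    (a Borda-count-style aggregate); score_matrix[k][s] is the score
    of system s on utterance k."""
    n_systems = len(score_matrix[0])
    total = [0] * n_systems
    for utterance_scores in score_matrix:
        for s, r in enumerate(rank_vector(utterance_scores)):
            total[s] += r
    return total

# Toy metric scores: 3 utterances x 3 systems.
scores = [
    [0.9, 0.4, 0.1],
    [0.8, 0.5, 0.6],
    [0.7, 0.2, 0.3],
]
print(system_level_representation(scores))  # lower total = better-ranked system
```

Sorting systems by this summed-rank vector yields the Borda-style consensus ranking that the approximation guarantees above refer to.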
It is related to Kendall's rank correlation τ by:

d_τ = (1 − τ)/2.

Similarly, we define the complementarity between a metric m_0 and a set of other metrics m := {m_i}_{i=1,...,l} as the average pairwise complementarity:

C(m_0, m) = (1/l) Σ_{i=1}^{l} C(m_0, m_i).

Complementarity measures the extent to which a metric ranks systems differently than another metric or a set of other metrics. Whether comparing two metrics or a metric with a set, it is a number between 0 and 1, where 0 indicates that the metrics rank systems in the exact same order and 1 indicates the exact opposite order. In between, it counts the number of inversions between the two rank lists, normalized by the number of possible pairs of systems.

Dataset description

To ensure a wide coverage of NLG, we focus on four different problems, including dialogue generation (using PersonaChat (PC) and TopicalChat (TC) (Mehri and Eskenazi, 2020)) and image description. Among the automatic metrics we consider are the embedding-based metric of Ng and Abrecht (2015), BERTScore (Zhang et al., 2019) and MoverScore (Zhao et al., 2019). For MLQE we solely consider several versions of BERTScore, MoverScore and ContrastScore. The human evaluation criteria are specific to each dataset and will be identified by the prefix H:. Overall, our final datasets gather over 270k scores.

Figure 2: Complementarity. For each dataset, the pairwise complementarity between each pair of metrics, human and automatic, as computed by Eq. 3. In these matrix plots, symmetric by design, metrics are ordered with the human ones first and the automatic ones after; the red lines trace the limit between humans and AEM.

Experiments

Finding 1: Automatic metrics are similar to each other much more than they are to human metrics. In Fig. 2, we report the pairwise complementarity between each pair of metrics, human and automatic, as computed by Eq. 3. When aggregated over pairs and over datasets, we obtain an average complementarity between: (i) two human metrics of .16 ± .01, (ii) two AEM of .20 ± .01 and (iii) a human and an automatic metric of .35 ± .02.
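The complementarity measure itself reduces to counting discordant pairs. A minimal implementation of the normalized Kendall distance d_τ and of the utterance-averaged complementarity, on toy rank vectors (not data from the paper):

```python
from itertools import combinations
from typing import List

def kendall_distance(r0: List[int], r1: List[int]) -> float:
    """Normalized Kendall distance: fraction of item pairs ordered
    differently by the two rankings (0 = identical, 1 = reversed)."""
    n = len(r0)
    discordant = sum(
        1 for i, j in combinations(range(n), 2)
        if (r0[i] - r0[j]) * (r1[i] - r1[j]) < 0
    )
    return discordant / (n * (n - 1) / 2)

def complementarity(rankings0: List[List[int]],
                    rankings1: List[List[int]]) -> float:
    """Average over utterances of the Kendall distance between the
    per-utterance system rankings of two metrics."""
    pairs = list(zip(rankings0, rankings1))
    return sum(kendall_distance(a, b) for a, b in pairs) / len(pairs)

m0 = [[1, 2, 3], [1, 2, 3]]     # metric 0's system rankings on two utterances
m1 = [[1, 2, 3], [3, 2, 1]]     # metric 1 agrees once, fully disagrees once
print(complementarity(m0, m1))  # (0 + 1) / 2 = 0.5
```

With this convention, the reported averages (.16 between human metrics, .20 between AEM, .35 between a human and an automatic metric) are directly interpretable as average fractions of inverted system pairs.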
Importantly, we observe across datasets low complementarity, i.e., strong similarity, between AEM; low complementarity between human metrics; but high complementarity, i.e., low similarity, between automatic and human metrics. We draw two conclusions from this analysis: (i) AEM rank systems similarly to each other but (ii) differently than humans do. There are some nuances across datasets. The effect described above is particularly strong in the Dialog, MLQE and SUM-Eval datasets. In particular, we notice that the TAC datasets, from the summarization task, have lower complementarity in general, meaning that all metrics, human and automatic, are more similar there. Indeed, a lot of work has relied on these datasets to develop new metrics. Interestingly, the more recent REAL-SUM and SUM-Eval reveal much lower metric similarity.

Finding 2: Automatic metrics, even all combined, do not explain human metrics. If AEM are rather different from human metrics, we might wonder whether it is possible to obtain a good approximation of human judgments by combining existing AEM. To account for possible correlations, we rely on XGBoost regressors with 5-fold cross-validation to predict human judgments. The training is performed on three different feature spaces: (i) AEM only, (ii) other human metrics only and (iii) both sets of metrics combined. We compute Kendall's τ between predictions and ground truths and report the results in Fig. 3. The plot confirms that AEM struggle to capture the subtlety of human judgment: the correlation rarely exceeds .4 on held-out data. In contrast, human metrics are much more predictive of each other, even though they are often supposed to capture different concepts. Finally, it is worth noting that adding AEM to the human ones barely improves the prediction power. These findings cast a shadow over recent progress in the field. In the next section, we discuss the implications and make a proposition for future work.
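The cross-validated prediction protocol behind Fig. 3 can be sketched as follows. For a self-contained illustration we swap the XGBoost regressor for a trivial feature-averaging model passed in as a callable; all names here (`kendall_tau`, `kfold_tau`, `mean_fit`) are our own:

```python
import random
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau between two score vectors (ties ignored in the
    numerator, all pairs in the denominator -- a tau-a sketch)."""
    pairs = list(combinations(range(len(x)), 2))
    s = sum(
        (1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1)
        for i, j in pairs
        if (x[i] - x[j]) * (y[i] - y[j]) != 0
    )
    return s / len(pairs)

def kfold_tau(features, target, fit, k=5, seed=0):
    """k-fold CV: fit a regressor on k-1 folds, predict the held-out
    fold, and return Kendall's tau between predictions and targets."""
    idx = list(range(len(target)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    preds = [0.0] * len(target)
    for fold in folds:
        train = [i for i in idx if i not in fold]
        model = fit([features[i] for i in train], [target[i] for i in train])
        for i in fold:
            preds[i] = model(features[i])
    return kendall_tau(preds, target)

def mean_fit(train_X, train_y):
    # Trivial stand-in for an XGBoost regressor: predict the mean feature value.
    return lambda row: sum(row) / len(row)
```

In the paper's setting, `features` would be the AEM scores (or the other human scores) of each item and `target` the human metric being predicted.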
Discussion. Our analysis reveals that automatic metrics are not complementary: recent automatic metrics actually capture the same properties of human judgments as older ones. Furthermore, the existing metrics are not strong predictors of human judgments. Quite surprisingly, human metrics, which are often designed to be independent of each other, end up being more predictive of each other than automatic metrics are. This mutual predictability of human metrics can be explained by the available datasets: when a system is good at extracting content, it is also often good at making the content readable; when a system is bad, it is often bad across the board on all human metrics. However, the fact that automatic metrics are less predictive than other human dimensions casts a shadow over recent progress in the field. It shows that the current strategy of crafting metrics whose correlation with one of the human metrics is slightly better than the baselines' has reached its limit, and some qualitative change is needed. A promising strategy to address the limitations of automatic metrics is to report several of them, hoping that together they give a more robust overview of system performance. However, this makes sense only if automatic metrics measure different aspects of human judgments, i.e., if they are complementary. In this work, we have seen that metrics are in fact not complementary, as they produce similar rankings of systems.

Proposition for future work. To foster meaningful progress in the field of automatic evaluation, we propose that future research craft new metrics not only to maximize correlation with human judgments but also to minimize similarity with the body of existing automatic metrics. This would ensure that the field progresses as a whole, by focusing on capturing aspects of human judgments that are not already captured by existing metrics.
Furthermore, reporting several metrics that have been demonstrated to be complementary could again become a valid heuristic for obtaining a robust overview of model performance. In practice, researchers could re-use our code and analysis to enforce complementarity, for example by requiring new metrics to have low complementarity as measured by Eq. 3. Even though we have considered a representative set of automatic evaluation metrics, new ones are constantly introduced and could be added to such an analysis. Similarly, new datasets could be added to the analysis and affect the results. In an effort to make our findings relevant in the long run, we release an easy-to-use code base to replicate our analysis with new metrics and datasets. Like the majority of analyses of automatic evaluation metrics, ours relies on the assumption that human judgments are valid and meaningful. However, some works have questioned the quality of human judgments in standard datasets.

A.1 Utterance-level representation

In the main paper, we focus on the System-level representation. Each utterance k ∈ {1, . . . , K} induces a ranking σ^m_k ∈ R^N of the N systems, where σ^m_k(n) is the rank of system n. The System-level representation of m is the sum of these rankings over the utterances. In the supplementary material, we also provide an analysis at the Utterance level. Each system n ∈ {1, . . . , N} induces a ranking σ^m_n ∈ R^K of the K utterances, where σ^m_n(k) is the rank of utterance k. The Utterance-level representation of m is the sum of these rankings over the systems.

A.2 A remark on the rank representations

For a given family of l ≥ 1 objects, the formal mathematical object describing a ranking is a permutation σ ∈ S_l, which describes how the objects must be interchanged to be ordered. The set of permutations is a group in which the notion of mean is not straightforward, since the addition of two permutations is not a well-defined object. For a given family σ_1, . . .
, σ_p, the classical surrogate is called a Kemeny consensus: a permutation minimizing the sum of Kendall distances to the σ_i, where the Kendall distance d counts the number of pairwise disagreements between two permutations. However, computing a Kemeny consensus is an NP-hard problem (Bartholdi et al., 1989; Dwork et al., 2001). It turns out that the Borda count, defined as the sum of the ranks induced by the permutations, is a very good approximation of the Kemeny consensus (Ali and Meilă, 2012), justifying our choices (5) and (6).

B Extending Finding 1 using clustering analysis

In this section, we want to obtain a visual and interpretable representation of both automatic and human metrics to better understand their relationships. Formally, we study the abstract space of metrics when encoded at the System or Utterance level. We ask the two following questions:
• What is the effective dimension of this space?
• Are there clusters of metrics?

B.1 Representing the metrics in a 2D space

In Figure 4a and Figure 4b, we report the variance analysis given by a PCA (Jolliffe and Cadima, 2016) for each dataset at the System and Utterance levels, respectively. Analysis: We observe that only a few components (fewer than 6) are needed to explain over 80% of the variance. This behavior is common to all considered datasets and can be observed when studying the ranks at both the System and Utterance levels. Takeaways: The automatic and human metrics present in our datasets can be represented in a low-dimensional space. This confirms the low complementarity already observed in the main paper: the effective dimension of the space of metrics is small. We use the first two components in the next experiments to represent the metrics in a 2D space. In Figure 5 and Figure 6, we represent all the considered metrics (both human and automatic) in the 2D space spanned by the first two principal components. We cluster the metrics with the Louvain algorithm (Blondel et al., 2008) applied to the similarity matrix between the metrics.
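The 2D projection described above can be sketched with a plain-numpy PCA (a minimal stand-in for the implementation cited in the paper; `metric_space_2d` is an illustrative name of ours):

```python
import numpy as np

def metric_space_2d(rank_matrix):
    """PCA on the rank representations of metrics.

    rank_matrix: (n_metrics, n_items) array where row m is the System-
    or Utterance-level rank representation of metric m. Returns the
    explained-variance ratio of each component and the coordinates of
    each metric on the first two principal components.
    """
    X = rank_matrix - rank_matrix.mean(axis=0, keepdims=True)  # center columns
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    var_ratio = s**2 / np.sum(s**2)   # explained variance per component
    coords = U[:, :2] * s[:2]         # scores on the first two components
    return var_ratio, coords
```

The `coords` output is what would be scattered in a plot like Figures 5 and 6, and `var_ratio` corresponds to the variance analysis of Figures 4a and 4b.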
Analysis: From the figures, we observe a low number of clusters: two in most cases and at most three in the case of Utterance-level representations. When using the System-level representation, the human metrics form their own cluster in all configurations except FLICKR, where H:overall falls in the same cluster as JS-2. We observe a similar trend when studying the Utterance-level representation: human metrics often belong to the same cluster, which contains a small number of automatic metrics. It is also worth noting that in most figures, the human metrics are isolated. Takeaways: This experiment further validates Finding 1: automatic metrics are much more similar to each other than they are to human metrics. The proposed procedure could be used in the future to probe the properties of newly introduced metrics and to obtain visual representations of the metric space.

C Further results for Finding 2

In this section, we provide further experiments that validate Finding 2 and provide a method for future research to better understand newly introduced metrics. Specifically, we aim to answer the following research questions:
• In Finding 1 we showed that human metrics carry different information than automatic metrics. How can we measure the amount of information missing from the automatic metrics?
• Which metric or group of metrics is the most useful for predicting a given human metric?

C.1 Measuring the information missing from automatic metrics

In this subsection, we extend the results of Figure 3. We measure the ratio between the MSE of a linear regression trained with automatic metrics together with human metrics and that of a linear regression trained only with automatic metrics, for varying regularization coefficients. For each dataset, we provide the mean and variance corresponding to the prediction of the available human metrics. When only one human metric is available, the dataset is not considered.
Observations: From Figure 7, we observe a strong decrease in error when human metrics are added to predict another human metric. When α increases, all the coefficients are set to 0, and the relative MSE is thus 0. It is worth noting that these observations hold for both the System- and Utterance-level representations. When looking at the details per dataset, we observe a similar trend for all human metrics. Takeaways: When predicting a specific human metric, other human metrics contain useful predictive information that is not present in the automatic metrics.

Figure 7: Human metrics contain useful information that is not in automatic metrics for predicting other human metrics. In this plot, we report the ratio between the MSE of a linear regression trained with automatic metrics together with human metrics and that of a linear regression trained only with automatic metrics. For each dataset, we provide the mean and variance corresponding to the prediction of the available human metrics. (d) Detailed results for each dataset when using the Utterance-level representation.

C.2 Which metrics are the most useful to predict human judgment at the System level?

For this experiment we rely on a Lasso regression and denote the multiplier of the L1 term by α. For several values of α (x-axis), we report each metric's weights (y-axis) in Figures 8 and 9. Observations: When increasing the weight given to the L1 penalization term, we observe that the regression weights of the human metrics are the last to be set to 0: human metrics contain the most relevant information. It is worth noting that this phenomenon is generic across datasets and human criteria. Takeaways: Human metrics are the most useful metrics when predicting other metrics.

Figure 9: Human metrics are the most useful metrics when predicting other metrics.
Regression weights (y-axis) obtained by each metric when training a Lasso regression to predict a human metric for different regularization coefficients (x-axis), using the System-level representation of the metrics.
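The weight paths of Figures 8 and 9 can be reproduced in miniature with a coordinate-descent Lasso (a minimal stand-in for a library implementation; `lasso` and the toy data below are illustrative):

```python
def lasso(X, y, alpha, iters=500):
    """Lasso regression via cyclic coordinate descent.

    X is a list of feature rows (one column per metric), y the targets
    (the human metric to predict); alpha is the L1 multiplier. As alpha
    grows, weights are driven to exactly zero, revealing which metrics
    the regression keeps longest.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        for j in range(d):
            # Partial residual correlation, excluding feature j's contribution.
            rho = sum(
                X[i][j]
                * (y[i] - sum(w[k] * X[i][k] for k in range(d)) + w[j] * X[i][j])
                for i in range(n)
            )
            z = sum(X[i][j] ** 2 for i in range(n))
            # Soft-thresholding update.
            if rho < -alpha * n:
                w[j] = (rho + alpha * n) / z
            elif rho > alpha * n:
                w[j] = (rho - alpha * n) / z
            else:
                w[j] = 0.0
    return w
```

Sweeping `alpha` over a grid and recording `w` at each value yields exactly the kind of weight-path plot shown in Figures 8 and 9: the features whose weights survive the longest are the most informative predictors.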
Elucidating the Role of Extracellular Vesicles in Pancreatic Cancer

Simple Summary: Pancreatic cancer is one of the deadliest cancers worldwide. The chance of surviving more than 5 years after initial diagnosis is less than 10%. This is due to a lack of early diagnostics: often, at the time of initial detection, the tumour has already spread to different parts of the body and has developed a propensity for drug resistance. Therefore, to tackle this devastating disease, it is necessary to identify the key players responsible for driving pancreatic cancer. Numerous studies have found that small bubble-like packages shed by cancer cells, called extracellular vesicles, play an important role in the progression of the disease. Our knowledge of how extracellular vesicles aid in the progression, spread and chemoresistance of pancreatic cancer is the focus of this review. Of note, these extracellular vesicles may serve as biomarkers for earlier detection of pancreatic cancer and could represent drug targets or drug delivery agents for the treatment of pancreatic cancer.

Abstract: Pancreatic cancer is one of the deadliest cancers worldwide, with a 5-year survival rate of less than 10%. This dismal survival rate can be attributed to several factors, including insufficient diagnostics, rapid metastasis and chemoresistance. To identify new treatment options for improved patient outcomes, it is crucial to investigate the underlying mechanisms that contribute to pancreatic cancer progression. Accumulating evidence suggests that extracellular vesicles, including exosomes and microvesicles, are critical players in pancreatic cancer progression and chemoresistance. In addition, extracellular vesicles also have the potential to serve as promising biomarkers, therapeutic targets and drug delivery tools for the treatment of pancreatic cancer.
In this review, we aim to summarise the current knowledge on the role of extracellular vesicles in pancreatic cancer progression, metastasis, immunity, metabolic dysfunction and chemoresistance, and discuss their potential roles as biomarkers for early diagnosis and as drug delivery vehicles for the treatment of pancreatic cancer.

Pancreatic Cancer

Pancreatic cancer (PC) is a highly fatal neoplastic disease. It is considered a major cause of cancer-associated mortality, with an overall 5-year survival rate of less than 10% [1]. Once patients are diagnosed with advanced PC, their life expectancy does not exceed 6 months [2]. The extremely poor prognosis associated with PC can be attributed to a number of factors, including the rapid progression of the disease, high metastatic propensity, a lack of early diagnostic symptoms or biomarkers, and the development of resistance to both chemotherapy and radiotherapy [3,4]. Early resection of the tumour is considered to be the only successful treatment regime for this aggressive malignancy. However, by the time of diagnosis, the disease is metastatic in approximately 80% of patients, making PC treatment extremely daunting [5,6]. Despite several studies outlining various signalling pathways involved in PC progression, the underlying mechanisms behind this malignancy are poorly understood. Therefore, it is necessary to gain further insight into the cellular and molecular mechanisms contributing to PC progression and chemoresistance. Over the last two decades, the role of extracellular vesicles (EVs) in mediating intercellular communication has come into focus. These nano-sized, membrane-enclosed vesicles are routinely secreted by different cell types, including cancer cells, into their microenvironment [7,8]. Emerging evidence has implicated EVs in cancer progression, metastasis and chemoresistance across various malignancies, including PC [6,9,10].
In this review, we aim to discuss the current knowledge on the role of EVs in PC progression, metastasis and chemoresistance. In addition, we will discuss their applications as potential biomarkers for early diagnosis and as therapeutic targets and drug delivery vehicles for the treatment of PC.

Extracellular Vesicle Overview

1.2.1. EV Subtypes and Biogenesis

EVs are secretory, lipid bilayer membrane-bound nanovesicles that can be classified based on their size, biogenesis and mode of secretion. It is thought that all cells in the human body secrete EVs; they are like a snapshot of the secreting cell, encapsulating active and specific biomolecules from the parent cell. There are several classifications of EVs based on their size and biosynthesis, which include: exosomes (30-150 nm), ectosomes or shedding microvesicles (100-1000 nm), apoptotic bodies (1000-5000 nm), migrasomes (500-3000 nm) and large oncosomes (1000-10,000 nm) (Figure 1) [7,9]. All EVs contain a compendium of functional biomolecules such as proteins, nucleic acids, lipids and metabolites [10][11][12]. Once secreted into the extracellular space, these nano-sized particles serve as messengers, delivering their cargo into recipient cells both near and far from the secreting cell [13,14]. One subtype of particular interest to researchers is the exosome. Exosomes are classified by their biogenesis pathway: they are formed within the endocytic compartment by inward budding within the early-late endosome, resulting in the formation of intraluminal vesicles (ILVs) within the multivesicular body (MVB) (Figure 1). Once formed, these intraluminal vesicles can be released into the extracellular space by exocytosis (via fusion of the MVB with the plasma membrane) as exosomes (Figure 1). Alternatively, MVBs can be targeted for degradation and fuse with the lysosome for enzymatic digestion (Figure 1) [15]. However, the exact mechanism behind the sorting and fate of MVBs is poorly understood.
The Endosomal Sorting Complex Required for Transport (ESCRT) machinery, comprising ESCRT-0, -I, -II and -III, is thought to be the major player required for exosome biogenesis. These complexes act in series to bind, cluster and sort ubiquitinated proteins into late endosomes [16]. Interestingly, MVBs can also be formed by an ESCRT-independent pathway. This pathway involves the accumulation of sphingomyelin in lipid microdomains (lipid rafts), where the sphingomyelin is converted to ceramide by sphingomyelinases. These ceramide-rich domains are not structurally balanced between the lipid monoleaflets, which causes reverse budding of the membrane. This pathway is referred to as the ceramide-dependent pathway [17]. In addition, RAB GTPases and tetraspanins are also involved in the biogenesis and release of exosomes [18,19]. The other major class of EVs of interest are shedding microvesicles (MVs) or ectosomes, which are thought to be larger nanoparticles released through outward budding of the plasma membrane (Figure 1) [20,21]. Several driving factors for MV formation have been identified, including: rearrangement of the actin cytoskeleton and plasma membrane content, accumulation of intracellular calcium, externalization of phosphatidylserine and calpain activation, and alterations in local membrane curvature [22,23]. Interestingly, the ESCRT machinery that generates exosomes also participates in MV biogenesis.

Figure 1. Exosomes (30-150 nm) are formed within the endosomal system. Maturing from the early endosome to the late endosome, ESCRTs and other related proteins assist in wrapping recruited cytosolic cargoes with endosomal lipid membrane to form independently closed intraluminal vesicles inside multivesicular bodies (MVBs). MVBs are then either directed toward the lysosome for degradation or toward the plasma membrane, where they fuse to release the intraluminal vesicles into the extracellular space as exosomes by exocytosis.
Microvesicles or ectosomes (100-1000 nm) shed directly from the plasma membrane, encapsulating cytosolic contents. Apoptotic bodies (1000-5000 nm) and large oncosomes (1000-10,000 nm) are secreted from the membrane blebbing of apoptotic cells and cancer cells, respectively. Migrasomes (500-3000 nm) are generated specifically by migrating cells. EVs are transported via bodily fluids to their target recipient cells. EVs may directly fuse with the recipient cell plasma membrane to deliver their contents, can interact with surface receptors of the target cells, or can be taken up into the recipient cells through different endocytic mechanisms. Created with https://biorender.com/ (8 October 2021).

Exosomes and MVs are considered to be the major types of EVs secreted by healthy and cancerous cells; however, once secreted into the extracellular space, they are very difficult to differentiate from one another. This is because their size distributions heavily overlap, and there are no known protein markers present on one subtype that are absent from the other. It is thought that exosomes and MVs are differentially enriched in specific proteins representing the organelles they originate from, which may help in differentiating between these subtypes, but this is not absolute. For instance, exosomes originate from the endocytic pathway, and the MVB is thought to be enriched in the tetraspanin cluster of differentiation (CD) CD63, as well as ESCRT components such as tumour susceptibility gene 101 (TSG101), programmed cell death 6-interacting protein (PDCD6IP; also known as ALG-2-interacting protein 1, ALIX) and syntenin [24][25][26][27]. MVs are thought to be enriched in CD9 and CD81, as these tetraspanins are primarily located at the plasma membrane [27,28]. However, CD9 and CD81 are also present within the endocytic system (and on MVBs), owing to the nature of endocytosis originating from the plasma membrane.
Likewise, TSG101 has been reported to be involved in MV shedding, and it is unclear whether ALIX and syntenin are specific for exosomes [24][25][26][29]. Therefore, for the purpose of this review, from here on we refer to both exosomes and MVs as EVs, as we cannot be sure that they are specifically one subtype or the other and they are likely to comprise a heterogeneous population. Other EV subtypes include apoptotic bodies, migrasomes and large oncosomes (Figure 1). Apoptotic bodies are large EVs secreted from membrane blebs of apoptotic or dying cells [11,30,31]. Migrasomes have open, pomegranate-like structures, as they harbour multiple small vesicles within their lumen. They are generated at the edge of migrating cells by a process termed migracytosis [32,33]. Large oncosomes, on the other hand, are released from amoeboid cancer cells and are known to be carriers of oncogenic cargo [34].

Key Methodologies to Isolate and Characterize EVs

As described above, EVs are a heterogeneous population of membrane-bound nanoparticles of diverse size and density that are secreted by cells of all tissues and organs in both healthy and diseased conditions [7,9]. These EVs are packed with a variety of biomolecules, including surface receptors, proteins, lipids and nucleic acids, and analysis of their cargo can provide useful insights into the state of the parental cells, such as in PC [35]. A variety of methods are available and commonly used to isolate EVs for further analysis. Here, we provide a brief summary of some of the most commonly used techniques. EV isolation techniques are based on known physical properties of EVs, such as size, density and surface content. The most commonly utilized conventional EV isolation techniques include: differential ultracentrifugation [36,37], density gradient centrifugation [38,39], ultrafiltration [40], size-exclusion chromatography [41,42] and immunocapture [38].
Each of these methods comes with its own advantages and shortcomings, and while these techniques are good for the isolation of whole EVs, it is important to note that overlaps in their physical properties, such as size and density, make it difficult to separate EV subpopulations. Differential centrifugation has long been considered the gold standard for EV isolation since their discovery. It utilizes centrifugal force to separate vesicles based on their sedimentation rate. Larger particles such as cells, cell debris and apoptotic bodies are first sequentially removed by sedimentation at increasing centrifugal forces, followed by smaller particles (smaller EVs, thought to be MVs and exosomes) at much higher centrifugal forces [37,43,44]. While this method is standard for isolating EVs from cell culture or biological fluids, it is thought to be crude, and the resulting preparations contain protein contaminants. Density gradient centrifugation is another traditional method used for the isolation of EVs. This technique separates EVs and contaminants based on their buoyant density into specific layers in solutions of either sucrose, iohexol or iodixanol [9,38]. Compared to differential centrifugation alone, further purification using a density gradient is thought to yield a purer EV population containing fewer protein contaminants. Density gradients have also long been used for the separation of subcellular components, such as mitochondria, peroxisomes and endosomes, and for the isolation of viral particles [45]. Ultrafiltration is newer to the field but is gaining interest as an alternative to differential centrifugation. Ultrafiltration isolates vesicles based on their size by using porous membranes to retain particles of a specific size while allowing smaller particles to flow through the membranous filter [40,46]. This method is good for concentrating EV samples and usually requires further EV isolation techniques for purification [47].
Size-exclusion chromatography is emerging as an excellent method to remove protein contaminants, but it requires a small, concentrated sample to begin with; this suits biological fluids such as serum, but a concentration step is required prior to separation when used for cell culture-derived EVs. During this process, the solution is filtered through a column of porous beads whose pores are smaller than the desired EVs [41,42]. Finally, affinity-based techniques such as immuno-capture can be used to isolate EV subpopulations based on the expression of surface receptors. One common example is using antibody-conjugated beads to bind and pull down specific EVs [38,48]. There is now a trend toward using several EV isolation techniques together to obtain a purer EV preparation. These techniques are discussed in further detail elsewhere [49]. Following the isolation of EVs, a combination of multiple methods is required to characterise EVs and validate the accuracy of isolation and the purity of the EV preparation, in accordance with the MISEV guidelines [50]. Transmission electron microscopy and cryo-electron microscopy are used to confirm the morphology and purity of EVs [51]. EV size, concentration and zeta potential can be assessed on the basis of Brownian motion by nanoparticle tracking analysis (NTA) [52] and dynamic light scattering (DLS) [53]. Nanoscale flow cytometry (nanoFACS) is used for the analysis of EV populations of interest, using antibodies that specifically recognize EVs within a heterogeneous population [54]. To further verify the purity of isolated EVs, Western blotting can be used to detect the presence of EV markers such as CD63, CD9, CD81, TSG101 and ALIX, and to rule out contamination using negative controls [55,56].

EV Cell-to-Cell Signalling and Cargo Delivery

Once EVs are secreted into the extracellular space, they are carried via bodily fluids to recipient cells, where they interact and deliver their messages to elicit a variety of effects.
How EVs elicit their effects on recipient cells varies. In some instances, EVs mediate ligand-receptor interactions at the plasma membrane to stimulate signalling cascades within the recipient cell [57,58]. The other mechanism for mediating an effect in the recipient cell is for the EV to deliver its cargo of active biomolecules into the cytoplasm (or another relevant organelle/compartment). How cargo delivery is achieved remains unclear; some of the suggested mechanisms include (1) plasma membrane fusion and (2) endocytosis (Figure 1). Fusion at the plasma membrane implies that the EV membrane directly merges with the cell surface membrane. During this process, it is thought that the two lipid bilayer membranes come into close proximity and form a hemifusion stalk to facilitate the merging of the two membranes [59]. Proteins such as Rab, Lamp-1, Sec1/Munc-18 and SNAREs may help with the process of fusion, although it is unclear whether they would be available in this instance [60]. Alternatively, EVs may gain entry into the target cell by conjugating with membrane-bound receptors on the recipient cell, not only activating various signalling pathways but also leading to receptor-mediated endocytosis [61]. Both clathrin-mediated and caveolae/caveolin-1-dependent endocytosis have been shown to be crucial players in the uptake of EVs in some instances [62,63]. During clathrin-mediated endocytosis, clathrin-coated vesicles deform the structure of the plasma membrane to form a cave-like structure. These intracellular vesicles later shed their clathrin coat and fuse with the endosome to unload their cargo [64]. Caveolin-dependent endocytosis, on the other hand, is a lipid raft-mediated endocytosis. The interaction of caveolin-1 with cholesterol results in invagination of the membrane, which internalizes EVs into the cell [65]. EVs can also be endocytosed via non-receptor-mediated endocytic pathways, such as macropinocytosis and phagocytosis [63,66,67].
The current knowledge on EV uptake via endocytosis is covered in more detail elsewhere [68]. Regardless of the uptake mechanism, once in the endocytic system the EVs need to fuse with the limiting endocytic membrane (also known as back-fusion) to deliver their contents. Currently, direct evidence for both fusion at the plasma membrane and fusion within the endocytic compartment is limited, but it has been reviewed in detail [69].

The Role of EVs in Pancreatic Cancer

Cancer patients tend to have more EVs in their circulation than healthy individuals [70]. These EVs are enriched with proteins, lipids and nucleic acids, which play a pivotal role in intercellular communication [71]. Several studies have investigated EV-mediated and inferred cargo delivery in the modulation of tumorigenesis, metastasis, immune activation and therapy resistance in pancreatic cancer, as discussed below (Table 1).

EVs Aid PC Cell Proliferation and Angiogenesis

A number of studies have affirmed that EVs function to regulate PC development, progression and angiogenesis. For instance, myoferlin-rich EVs secreted by PC cells stimulate the proliferation and migration of PC cells [72,73]. Tumour-derived EVs containing miR-222 have been reported to induce PC proliferation and invasion by regulating the expression and re-localization of the tumour suppressor p27 (also known as Kip1 or cyclin-dependent kinase inhibitor B). On one hand, miR-222 can directly inhibit the expression of p27; on the other, miR-222 can also promote the phosphorylation of p27 via the miR-222/PPP2R2A/AKT pathway, which leads to active cytoplasmic p27 expression. Together, these effects contribute to PC proliferation and invasion (Figure 2) [74]. Moreover, in vitro studies by Richards et al. demonstrated that EVs derived from cancer-associated fibroblasts (CAFs) following treatment with gemcitabine increased the levels of Snail and miR-146a in recipient PC cell lines to promote proliferation and chemoresistance.
Conversely, inhibiting the release of EVs from gemcitabine-treated CAFs reduced proliferation and cell survival [75]. Interestingly, another study found that the uptake of CAF-derived annexin-A6-positive EVs by PC cells increases PC aggressiveness [76]. In the same study, annexin-A6-enriched EVs were also found to aid PC progression by modulating angiogenesis to supply adequate oxygen and nutrients for the growth of the tumour. Moreover, Shang et al. reported that PC cell-derived EVs containing miR-27a stimulate human microvascular endothelial cell proliferation and angiogenesis in PC by reducing the expression of BTG2 [77]. Another study, by Novizio and colleagues, demonstrated that annexin A1-enriched EVs promote PC progression by stimulating the activation of formyl peptide receptors (FPRs), which trigger epithelial-mesenchymal transition of the epithelial cells. This mesenchymal switch is important for stabilizing the neovasculature during angiogenesis [78]. Together, these studies strongly suggest a role for EVs in promoting PC cell proliferation, tumourigenesis and angiogenesis to support growth. Additional studies to further delineate the molecular mechanisms governing these effects will help us fully understand the role of both CAF- and PC-derived EVs in supporting tumourigenesis.

PC EVs Modulate Invasion and Metastasis

The median survival of patients with PC is less than 6 months [1]. PC is known to metastasize mostly to abdominal sites, especially the liver. Along with the anatomical position of the liver, which allows entry of pancreatic blood through the hepatic portal vein, the recruitment of PC-derived EVs by the liver makes it an ideal and common metastatic site in PC [6]. Accumulating evidence supports the notion that EVs promote the metastatic cascade in PC (Figure 2; Table 1).
For instance, one in vivo study showed that, 24 h after retro-orbital injection (behind the eye into the retro-orbital venous sinus), fluorescently labelled PC-derived EVs were taken up more efficiently by the liver than by the lungs and were therefore mostly detected in the liver [79]. Costa-Silva and colleagues have also shown that EVs derived from PC initiate the formation of a pre-metastatic niche in the liver that ultimately increases tumour burden [67]. EVs secreted by PC, enriched with high levels of macrophage migration inhibitory factor (MIF), are readily taken up by Kupffer cells, which alter the liver microenvironment by initiating an inflammatory response and activating fibrotic pathways. These changes assist metastasis in the liver by educating liver cells to recruit immune cells and upregulate the production of TGF-β and fibronectin [67]. Interestingly, PC-derived EVs containing integrin αvβ5 tend to travel to the liver, whereas EVs enriched with α6β4 and α6β1 are primarily found in the lungs (Figure 2) [79]. This specificity of PC EV targeting to different organs, primarily the liver, coincides with known metastatic sites in PC patients and suggests that EVs are important mediators of invasion and metastasis. However, further investigation is essential to understand the molecular mechanisms behind organotropic metastasis mediated by PC-derived EVs.

PC EVs Promote Chemoresistance in PC

Unfortunately, chemoresistance is a major obstacle in the treatment of PC. Prolonged exposure to gemcitabine, the standard chemotherapeutic agent for PC, results in the development of chemoresistance in the majority of patients [80]. Several recent studies have revealed that EV-mediated cell-to-cell interaction aids the development of drug resistance in multiple cancers by transporting drug resistance-related genes between cells.
For instance, proteins such as survivin [81], Snail [82], P-glycoprotein [83] and multidrug resistance-1 (MDR-1) [84] have been shown to be trafficked via EVs to recipient cells to promote drug resistance. PC-derived EVs have also been reported to deliver a variety of chemoresistance-associated miRNAs and proteins to recipient cells to diminish the efficacy of chemotherapeutics, as summarised in Figure 2 and Table 1. Continuous treatment with gemcitabine has been found to increase the expression of miR-155 in PC cells, which subsequently augments the secretion of miR-155-packaged EVs. When miR-155-enriched EVs are taken up by other cancer cells, gemcitabine resistance is increased through the activation of anti-apoptotic pathways and the suppression of the gemcitabine-metabolizing enzyme deoxycytidine kinase [85,86]. Another study showed that EVs derived from gemcitabine-resistant PC stem cells confer drug resistance to gemcitabine-sensitive PC cells by transmitting miR-210 [87]. MiR-21 is another chemoresistance marker found to be elevated in PC EVs; it promotes drug resistance by binding to apoptotic peptidase-activating factor 1 or activating the PI3K/AKT signalling pathway [88][89][90]. Furthermore, EVs have also been found to induce chemoresistance in PC cells by increasing the expression of superoxide dismutase 2, catalase and reactive oxygen species (ROS)-detoxifying genes, thereby stimulating ROS detoxification [86]. Long-term treatment with gemcitabine increases EV secretion and upregulates the levels of the chemoresistance-inducing factor Snail and miR-106b in both CAFs and CAF-derived EVs. The uptake of CAF-derived EV Snail and miR-106b by recipient PC cells facilitates gemcitabine resistance by directly targeting miR-146a and TP53INP1, respectively [62,91].
Taken together, EVs derived from PC cells or other cell types in the tumour microenvironment promote drug resistance by altering genes, RNAs, proteins, and signalling pathways, which limits the therapeutic management of PC.

EVs Regulate Tumour-Associated Immunity

The production of various cytokines and the recruitment of immunomodulatory cells such as myeloid-derived suppressor cells (MDSCs), tumour-associated macrophages (TAMs) and Tregs are responsible for making the pancreatic tumour microenvironment immunosuppressive [92,93]. Depending on EV cargo and the recipient cells, EVs exhibit opposing roles in PC-associated immunity. PC-derived EVs containing miR-203 and miR-212-3p have been shown to decrease the expression of Toll-like receptor 4 (TLR4) and regulatory factor X-associated protein (RFXAP) in dendritic cells, resulting in immune tolerance [94,95]. Furthermore, EVs secreted by PC cells can also suppress the immune system by reducing the expression of human leukocyte antigen D related (HLA-DR) in monocytes, diminishing the ability of natural killer cells to kill the tumour [96,97]. There is evidence that tumour suppressor SMAD4-deficient PC cells secrete miR-1260a- and miR-494-3p-positive EVs. When taken up by myeloid-derived suppressor cells, these EVs create an immunosuppressive tumour microenvironment by promoting their proliferation and glycolysis (Figure 2) [98]. On the contrary, PC-derived EVs positive for heat shock protein 70 (HSP70) are capable of activating the cytolytic activity of natural killer cells [99]. Circulating EVs have also been implicated in promoting an innate inflammatory response and pancreatitis-associated lung injury by activating nucleotide-binding oligomerization domain (NOD)-like receptor protein 3 (NLRP3) and inducing pyroptosis of alveolar macrophages [100].
Taken together, it is clear there is crosstalk between PC-derived EVs and the immune system; however, the underlying mechanisms are yet to be explored further to clarify the contrasting effects of EVs in tumour-associated immunity in PC.

EVs Participate in PC Metabolic Dysfunction

Two common paraneoplastic effects of PC are cachexia, a metabolic wasting syndrome, and diabetes [101]. The molecular mechanism behind the onset of PC-associated diabetes and weight loss is poorly understood. A previous study revealed that PC-derived EVs transfer adrenomedullin (AM) into β-cells, which triggers ER stress and defects in the unfolded protein response, resulting in the inhibition of insulin secretion [102]. In addition, PC-derived EV AM also promotes lipolysis, contributing to PC-associated weight loss. PC-derived EVs enriched in AM bind to AM receptors (ADMRs) on the surface of adipocytes, in turn activating the p38 and ERK1/2 MAP kinase signalling pathways and promoting lipolysis by increasing the expression of phosphorylated hormone-sensitive lipase (p-HSL) and phosphorylated perilipin 1, known markers of active lipolysis. Furthermore, blocking ADMR or p38 and MAPK has been shown to decrease lipolysis, suggesting that PC EV-associated AM contributes to the loss of body weight in PC through the loss of adipose tissue [103]. Yang and colleagues have shown that the zinc transporter ZIP4 also plays a significant role in PC-associated cachexia by promoting muscle wasting through stimulating RAB27B-mediated release of HSP70- and HSP90-positive EVs. ZIP4 augments EV secretion by PC via upregulation of RAB27B expression through the zinc-sensitive transcription factor CREB. The elevation in RAB27B expression in PC promotes the release of HSP70- and HSP90-positive EVs, which promote p38 MAPK-mediated muscle catabolism by activating Toll-like receptor 4 (TLR4) [104].
These studies advocate for a role of EVs in metabolic dysfunction in PC, particularly in PC-associated cachexia. However, additional investigation is necessary to further elucidate the role of EVs in mediating the metabolic reprogramming of PC. Overall, these findings reveal various functions of EVs in modulating pancreatic cancer progression, invasion, metastasis, chemoresistance, tumour immunity and PC-associated metabolic disorders.

EVs as Potential Diagnostic and Prognostic Biomarkers

Due to the lack of early diagnostic symptoms, screening methods and biomarkers, more than 80% of PC patients are diagnosed with metastatic or locally advanced cancer [105]. Currently, the only biomarker available for PC is CA19-9; however, its low sensitivity and specificity make it less reliable for early PC screening [2]. Therefore, novel early diagnostic markers are critical to improve the overall survival of PC patients. Since EVs secreted by cancer cells play an important role in disease progression, EVs and their contents have promising potential to serve as highly specific early diagnostic and prognostic biomarkers for PC (Table 1). Moreover, the abundance and stability of EVs in various biological fluids increase their potential as highly sensitive tools for PC diagnosis [106]. To elucidate whether EVs can be used as biomarkers for PC detection, several groups have isolated EVs from the plasma of PC patients and healthy control individuals and analysed the respective EV miRNA content using RT-PCR, a microfluidics-based approach, and localized surface plasmon resonance (LSPR)-based microRNA sensors (Figure 3A). The levels of miR-17-5p, miR-21 [107], miR-550 [108] and miR-10b [109] were found to be upregulated in PC patients, indicating that they may serve as potential biomarkers for the diagnosis of PC (Figure 2). In another study, Xu and colleagues reported high expression of miR-196a and miR-1246 in PC-derived EVs [110]. Madhavan et al.
conducted a study in which five PC-initiating cell markers (EpCAM, CD104, MET, Tspan8 and CD44v6) and four miRNAs (miR-4644, miR-4306, miR-3976, and miR-1246) in serum EVs were analysed. Evaluating the expression levels of PC-initiating cell markers and miRNA serum-EV markers significantly improved the sensitivity and specificity in detecting PC and in differentiating patients with PC from healthy individuals [111]. In addition to miRNA, EV proteins are also sensitive screening tools for PC diagnosis. Melo et al. suggested that glypican-1 (GPC1)-positive EVs secreted by PC could be used as a diagnostic marker for detecting early PC. Mass spectrometry analysis of circulating EVs isolated from 190 PC patients and 100 healthy individuals demonstrated that serum samples of PC patients specifically contain a significantly higher amount of GPC1-positive EVs than those of normal control individuals, indicating a strong correlation between GPC1-positive EVs and PC (Figure 2). The amount of GPC1-positive EVs in the circulation is reflective of tumour burden, and a decrease in the amount of GPC1-positive EVs correlates with improved survival [112]. It is postulated that EVs containing MIF could be a prognostic marker for PC progression and metastasis, since mass spectrometry showed MIF to be significantly upregulated in EVs isolated from the plasma of a PC mouse model and of PC patients compared to healthy control subjects. Furthermore, enzyme-linked immunosorbent assay (ELISA) showed MIF to be elevated in EVs isolated from PC patients whose disease progressed post-diagnosis compared to healthy controls. Importantly, MIF was markedly elevated in EVs from stage I PC patients prior to liver metastasis [113].
In another study, Hoshino and colleagues found that EVs isolated from PC patients with known liver metastasis have higher levels of integrin αvβ5 than those from patients without metastasis, hinting that αvβ5 could be another potential marker of PC metastasis [79]. Collectively, these findings indicate that evaluating the expression patterns of miRNAs and proteins in EVs could be used to discriminate between PC and non-malignant patients.

EVs as Drug Delivery Tools

In addition to being carriers of various biomolecules that are readily taken up by recipient cells, EVs are nontoxic and have low immunogenicity. These features make EVs attractive candidates for the targeted delivery of chemotherapeutics. Unlike synthetic nanoparticle drug delivery systems, these membrane-bound nanovesicles can deliver their cargo to recipient cells while avoiding the systemic toxicity associated with chemotherapy in off-target tissues, thereby improving overall therapeutic effects [114,115]. Moreover, the integrin-associated transmembrane protein CD47 protects EVs from clearance by monocytes [116]. Pascucci et al. demonstrated that treatment of mesenchymal stromal cells (MSCs) with the anticancer drug paclitaxel causes MSCs to package and secrete drug-containing EVs that inhibit the proliferation of PC cells in vitro (Figure 3B) [117]. Later, Kim et al. showed that EV-loaded paclitaxel can be used to treat multidrug-resistant cancer. When drug-resistant cancer cells are treated with EV-encapsulated paclitaxel, the drug accumulates in the cells, bypassing P-glycoprotein (Pgp)-mediated drug efflux. This reduces drug elimination, resulting in increased cytotoxicity and improved therapeutic efficacy in resistant tumours [118]. Furthermore, EVs have been successfully employed to transport curcumin to recipient PC cells to induce cytotoxicity [119]. Apart from chemotherapeutics, EVs can also carry siRNAs and proteins to recipient cells.
For instance, EVs were engineered to carry siRNA targeting KRAS G12D, a common mutation in PC, resulting in the suppression of PC and improvement of overall survival in mice (Figure 3B) [116]. Likewise, the delivery of the survivin blocker survivin T34A mutant via EVs to the PC cell line MiaPaCa-2 enhanced its sensitivity to gemcitabine [120]. Collectively, these studies indicate that EVs are promising novel drug delivery tools for the treatment of PC. However, some technical limitations need to be considered, for instance bulk exosome preparation, effective EV delivery, and target specificity of EVs.

EVs as Potential Therapeutic Targets

Since EVs contribute significantly to PC progression and chemoresistance, blocking EV release from cancer cells or inhibiting EV uptake by recipient cells could be potential therapeutic strategies for PC (Figure 3C). Arresting EV secretion from gemcitabine-treated CAFs with GW4869 significantly decreased the survival of PC cells, as it inhibited the transfer of chemoresistance [75]. In addition, a reduction in the number of EVs attenuated gemcitabine resistance induced via miR-155 [85]. Furthermore, blockade of EV MIF prevented the formation of the pre-metastatic niche in the liver and therefore metastasis (Figure 3C) [113]. All these studies suggest EVs could be a good target for treating PC. However, as discussed above, EVs are released by almost all cell types, and cell type-specific blockade of EV secretion or uptake will be a challenge that needs to be addressed for EVs to be a suitable therapeutic target. Taken together, there are promising studies focusing on the application of EVs in the diagnosis and treatment of PC. Nonetheless, this work is still in progress, and further inquiry in this field will advance PC treatment.

Conclusions

Pancreatic cancer is one of the most lethal and incurable malignancies.
Due to the lack of sensitive diagnostic tools, most PC patients are diagnosed with advanced or metastatic cancer, which develops chemoresistance. Numerous studies have confirmed that EVs play a pivotal role in PC progression, metastasis, inflammation, chemoresistance and immunosurveillance escape. As EVs are readily available in the circulation and actively participate in cell-to-cell communication by shuttling various biomolecules, EVs and their contents are now being investigated as specific biomarkers for PC diagnosis. EVs are also being explored as novel therapeutic targets for PC. Herein, we have endeavoured to summarise the role of EVs in PC along with their potential application in diagnosis and treatment. There are promising data suggesting that EVs are suitable for clinical applications; however, additional in-depth research is necessary to explore the molecular mechanisms by which EVs promote PC, which will open up new possibilities for PC treatment.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Protocol for Cloning Epithelial Stem Cell Variants from Human Lung

Summary

The plurality of clonogenic cells derived from human lung includes a spectrum of diverse p63+ stem cells responsible for the regeneration of normal epithelial tissue and disease-associated metaplastic lesions. Here, we report protocols for the cloning, expansion, and characterization of these stem cell variants, which in general assist in analyses of stem cell heterogeneity, genome editing, drug screening, and regenerative medicine. For complete details on the use and execution of this protocol, please refer to Kumar et al. (2011), Zuo et al. (2015), and Rao et al. (2020).

Filter the medium with a 0.22 μm filter. The culture medium can be stored at 4°C for up to 1 month.

Filter the medium using a 0.22 μm filter. Aliquot and store at −80°C for up to a year.

Filter the buffer with a 0.22 μm filter. The buffer can be stored at 4°C for up to 1 month.

Digestion Buffer

Measure and dissolve 1 mg collagenase in 1 mL tissue processing buffer. Filter with a 0.22 μm filter. Prepare fresh digestion buffer each time.

Neutralizing Medium

StemECHO100 is a ready-to-use neutralizing medium. The medium can be stored at 4°C for up to 1 month.

Stem Cell Growth Medium

To prepare complete growth medium, StemECHO™ PU Expansion Medium, add 0.25 mL StemECHObullet003 to 250 mL StemECHO103. Mix thoroughly and store at 4°C for up to 1 month. Filter the medium using a 0.22 μm filter. Aliquot and store at −80°C for up to a year.

STEP-BY-STEP METHOD DETAILS

The procedures described in this protocol can be conducted in a typical Class II biological hood using standard aseptic technique. Given their high clonogenicity and rapid proliferation, a single stem cell will yield more than 1 billion stem cells within 60-80 days in this system, which enables downstream applications in a timely manner (Figure 1).
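The expansion figure above is simple doubling arithmetic, and it can be sanity-checked: reaching 10^9 cells from a single cell takes about 30 population doublings (2^30 ≈ 1.07 × 10^9), so 60-80 days implies an average doubling time of roughly 2.0-2.7 days. A minimal sketch of that check (the function names are ours, not part of the protocol):

```python
import math

def doublings_needed(start_cells: int, target_cells: int) -> int:
    """Smallest number of population doublings taking start_cells to >= target_cells."""
    return math.ceil(math.log2(target_cells / start_cells))

def implied_doubling_time_days(days: float, doublings: int) -> float:
    """Average doubling time implied by completing `doublings` doublings in `days` days."""
    return days / doublings

doublings = doublings_needed(1, 1_000_000_000)    # 30, since 2**30 ~ 1.07e9
fast = implied_doubling_time_days(60, doublings)  # ~2.0 days per doubling
slow = implied_doubling_time_days(80, doublings)  # ~2.7 days per doubling
```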
Importantly, the intrinsic "immortality" of lung stem cell variants that underlies their unlimited proliferative potential is not due to mutations of pRB, CDKN2A, p53, or any of a host of tumor suppressors or proto-oncogenes. Moreover, lung stem cell variants in this system retain their capacity for multipotent, lineage-specified differentiation despite continuous cultivation for more than one year.

Reviving 3T3-J2 Cells from Frozen Stock

Timing: 4-5 days

1. Retrieve cryovials from the cell storage deep freezer and thaw the vial containing 10^6 cells/1 mL in a 37°C water bath.
2. Gently transfer the cells with a 1 mL pipette into a 50 mL Falcon tube and add 30 mL culture medium drop by drop while swirling the tube slowly.
3. After resuspension, seed the cells into a 150 mm tissue culture plate.
4. Gently shake the plate in the hood to distribute the cells evenly and transfer it to an incubator at 37°C with 7.5% CO2.
5. Change the medium the day after thawing the frozen vial and then change the medium every 2 days. In general, cells will be ready for passaging and expansion 3 days after thawing.

CRITICAL: Do not centrifuge to wash away DMSO. Change the medium the day after thawing the vial because the freezing medium contains DMSO, which is harmful to cells. Passage the cells when they become 70-80% confluent. If 3T3-J2 cells have grown past confluence, discard them and thaw a new vial.

Expanding 3T3-J2 Cells

Timing: 1 day

6. Transfer the plates from the incubator and check the confluency and morphology of the 3T3-J2 cells under a microscope.
7. Move the plate gently into a cleaned hood, remove the medium, wash the plate with 20 mL of DPBS to remove any serum, and add 5 mL of pre-warmed 0.05% Trypsin-EDTA evenly to the plate.
8. Incubate the cells with trypsin in an incubator at 37°C with 7.5% CO2 for 5-6 min.
9. Add 10 mL of warmed 3T3-J2 culture medium to neutralize the trypsin. Pipette up and down 10 times to make a single-cell suspension.
10. Spin down the cells at 300 g for 5 min, carefully remove the supernatant and resuspend the pellet in 5 mL of fresh 3T3-J2 culture medium. Count the cells using a hemocytometer and Trypan blue.
11. Seed 600,000-800,000 cells per plate in 20 mL 3T3-J2 culture medium, shake the plates to distribute the cells evenly and transfer the plates to the incubator at 37°C with 7.5% CO2.

CRITICAL: It is important to strictly follow the seeding cell numbers recommended here; either higher or lower seeding density will affect the quality of the 3T3-J2 cells.

12. Every 2-3 days after passaging, change the medium by removing the medium from the plate and adding 20 mL of fresh, pre-warmed medium.

Irradiating and Freezing Down 3T3-J2 Feeder Cells

Timing: 1 day

13. Repeat the cell passaging steps according to the number of cells needed. Collect the cell pellets, resuspend the cells in 50 mL Falcon tubes containing 40 mL 3T3-J2 culture medium at a density of less than 10^7 cells/mL and leave on ice before irradiation.
14. Irradiate 3T3-J2 cells at 2,000 rad. We use an X-ray irradiation machine (RadSource, RS1800, Cat No. 1087); 2,000 rad is calculated based on the irradiation standard for the machine.

CRITICAL: Although irradiation and mitomycin-C (MC) treatment appear to be qualitatively equivalent, some studies suggest that irradiation is more suitable and efficient than MC treatment for the preparation of nonreplicating feeder cells (Roy et al., 2001; Fleischmann et al., 2009; Llames et al., 2015). However, we have not directly compared these treatments in our laboratory, so it remains unclear whether MC treatment is suitable for preparing feeder cells for lung stem cell cloning.

15. After irradiation, put the container on ice and immediately count viable cell numbers with a hemocytometer and Trypan blue.
16. Spin down cells at 300 g for 5 min at 4°C and resuspend in freezing medium very gently to freeze down 1 × 10^7 cells/vial. Use 0.6 mL freezing medium per vial.
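Steps 15-16 pair a viable-cell count with a fixed per-vial target (1 × 10^7 cells in 0.6 mL freezing medium per vial). As a hypothetical bookkeeping sketch of that calculation (the function name and constants layout are ours; the numbers come from step 16):

```python
CELLS_PER_VIAL = 10_000_000   # 1 x 10^7 cells per vial (step 16)
MEDIUM_PER_VIAL_ML = 0.6      # 0.6 mL freezing medium per vial (step 16)

def freezing_plan(viable_cells: int):
    """Return (number of full vials to freeze, total freezing medium in mL)
    for a counted post-irradiation suspension; leftover cells below one
    vial's worth are not frozen."""
    vials = viable_cells // CELLS_PER_VIAL
    return vials, round(vials * MEDIUM_PER_VIAL_ML, 1)
```

For example, a count of 4.5 × 10^7 viable cells yields 4 full vials and 2.4 mL of freezing medium.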
CRITICAL: 3T3-J2 cells are extremely fragile after irradiation. Thus, cells need to be handled with extreme care. Do not pipet up and down to mix the cells; swirl the container of cells very gently for mixing. Put the cells in a Cool Cell Freezer box at −80°C for at least 24 hrs for gradual freezing. Transfer the frozen vials to a liquid nitrogen tank or deep freezer the next day for long-term storage.

Preparing Feeder Cells for Stem Cell Culturing

Timing: 1 day

18. Revive and seed irradiated 3T3-J2 cells in 3T3 culture medium and shake the plates to distribute the cells evenly (see Table 1 for recommended cell numbers and culture medium volumes), then transfer the plates to an incubator at 37°C with 7.5% CO2.

Cloning Stem Cell Variants from Human Lung

Timing: 10 days

CRITICAL: Fresh human normal or diseased lung tissue should be preserved in cold tissue processing buffer for short-term storage or shipping and be processed within 48 hrs for optimal yield. If lung tissues are properly minced and frozen in stem cell freezing medium, lung stem cells can be derived from these frozen tissues at a later time; we have successfully derived cells from tissues that had been frozen for 2 years. There is no obvious difference in the yield of clonogenic cells among different sections of the lung. In this protocol we used distal lung tissue, but the protocol is also applicable to proximal lung tissue. Improper storage or insufficient digestion will lead to an unsatisfactory yield of healthy, clonogenic human cells in this system.

20. Wash 1 cm³ of distal lung tissue with 30 mL cold tissue processing buffer in a 50 mL Falcon tube, centrifuge at 300 g for 10 min at 4°C, and then carefully remove the medium without disturbing the tissue. Repeat the washing procedure three times.
21. Carefully remove the tissue and put it into a 150 mm tissue culture dish.
Mince the tissue between two feather scalpels until it resembles a paste. The whole cutting process takes about 10-15 min.

CRITICAL: This mincing step is critical and will determine cloning efficiency. Thus, try to mince the tissue as thoroughly as possible. To keep the tissue from drying, add a few drops of tissue processing buffer during the process.

22. Digest the minced tissue in digestion buffer (10 mL digestion buffer per 1 cm³ lung tissue) for approximately 60-90 min in a 37°C rocker set to 150 rpm. Every 10 min, pipette the mixture up and down to break up any aggregated clumps.
23. After digestion, add 20 mL cold tissue processing buffer to the tubes containing digested tissue. Then mix the contents by inverting the tubes ten times.
24. After mixing, pipet the solution through a 100 μm cell strainer and collect the pass-through suspension in a new 50 mL Falcon tube.
25. Add tissue processing buffer to this filtered cell suspension until the total volume is 45 mL, centrifuge at 300 g for 20 min at 4°C, remove the buffer carefully and repeat the washing step 6 more times at 300 g for 10 min.
26. After the final wash, lyse the erythrocytes in the cell pellet by adding 5 mL ACK lysing buffer and incubating for 5 min at 20-22°C. Add 35 mL of tissue processing buffer and spin down at 300 g for 10 min at 4°C. Wash one more time with 10 mL cold neutralizing medium.
27. Following the last centrifugation, resuspend the cell pellet completely in 1 mL of stem cell growth medium by pipetting up and down ten times. Count the primary cells using a hemocytometer. Seed the cells into a tissue culture dish pre-seeded with irradiated 3T3-J2 feeder cells (see Table 2 for the seeding density of primary cells).
28. Change the stem cell growth medium every 2 days.

CRITICAL: Plates seeded with irradiated 3T3-J2 feeder cells need to be used within 5 days. Using aged feeder plates will lead to the loss of stemness in these lung stem cell variants.
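The hemocytometer counts in steps 10, 15 and 27 use the standard conversion: each 1-mm² large square under the coverslip holds 0.1 µL, so cells/mL = mean count per large square × dilution factor × 10^4, with a 1:1 Trypan blue mix giving a dilution factor of 2. A minimal sketch of that standard formula (the function name is ours, not part of the protocol):

```python
def cells_per_ml(square_counts, dilution_factor=2):
    """Estimate suspension concentration from hemocytometer counts.

    Each 1-mm^2 large square holds 0.1 uL (1 mm x 1 mm x 0.1 mm depth),
    hence the 1e4 factor to scale to 1 mL. dilution_factor=2 assumes the
    sample was mixed 1:1 with Trypan blue before counting.
    """
    mean_count = sum(square_counts) / len(square_counts)
    return mean_count * dilution_factor * 1e4
```

For example, live-cell counts of 50, 60, 55 and 55 in four large squares give 1.1 × 10^6 cells/mL.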
Passaging Human Lung Stem Cell Variants

Timing: 2-4 h

29. When small, round lung stem cell colonies with a relatively large nucleus and a high nucleus/cytoplasm ratio are visible under the microscope (this takes about 7-10 days, Figure 2), wash the plate with DPBS twice, add 1 mL TrypLE and incubate at 37°C in the incubator with 7.5% CO2 for 15 min. Check the cells once or twice and, if they are not totally dissociated, incubate another 5-10 min until the cells are fully detached from the plate.
30. Pipet the cells gently up and down 5-10 times to further dissociate them into single cells.
31. Quickly add 2 mL warmed neutralizing medium to the well-trypsinized cells and pipet up and down 20 times. Pipet the cells through a 30 μm pre-separation filter, centrifuge at 300 g for 6 min at 4°C and then discard the supernatant.

CRITICAL: Do not add neutralizing medium dropwise, as the cells will aggregate. Pipet vigorously to break down cell clusters into single cells.

32. Resuspend the cell pellet in 6 mL stem cell growth medium, count the cells, and seed them onto a plate pre-seeded with irradiated 3T3-J2 feeder cells (see Table 1 for recommended seeding density).
33. Change the medium every 2 days. In about 2-3 days, individual lung stem cell colonies will be observed. In about 4-6 days, the stem cell culture plate should be ready for passaging, freezing down, single-cell cloning or single-cell RNA sequencing.

No. of cells: 1,000,000-2,000,000 / 500,000-1,000,000 / 200,000-400,000
Volume of medium: 2 mL / 1 mL / 0.5 mL

CRITICAL: This system can support lung stem cell immaturity for at least 25 passages (250 days of continuous propagation). We recommend performing single-cell cloning and single-cell RNA sequencing as early as possible, preferably at passage 2.

Establishing Single Stem Cell-Derived Pedigrees

Timing: 6-8 weeks

34. At 2 days post seeding at a density of 100,000 cells per well of a 6-well plate, the stem cell clone library culture is approximately 60% confluent and each colony has about 50 cells. Trypsinize the stem cell clones following the steps above, pipette the neutralized cells up and down to achieve a single-cell suspension, and pass them through a 30 μm pre-separation filter.
35. Centrifuge at 300 g for 5 min and remove the 3T3-J2 feeder cells using the QuadroMACS Starting Kit (LS) following the manufacturer's protocol. Resuspend the cell pellet in 1 mL stem cell growth medium.
36. Use a cell sorter to sort single cells into individual wells of a 384-well plate previously seeded with feeder cells (Figure 3).
37. Change the medium every 3-5 days. It will take around 7-10 days to observe colonies in many wells of the plate.
38. Two weeks after plating, gently wash the wells containing colonies with 75 μL DPBS twice, add 30 μL TrypLE, and incubate for 15 min or longer in the incubator at 37°C with 7.5% CO2.

CRITICAL: Check each well during trypsinization and make sure that all colonies are completely dissociated before neutralizing.

39. Gently pipet up and down 5 times for each well. Neutralize the TrypLE by quickly adding 50 μL warmed stem cell growth medium to each well and pipet up and down another 10 times to dissociate the colonies.

CRITICAL: Change tips for each well and pipet carefully to avoid aerosols and cross-contamination.

40. For each well of dissociated cells, transfer the cell suspension into one well of a 24-well plate that has been seeded with feeder cells. Each well is marked as a single-cell-derived clone and can be expanded into billions of cells in a few weeks.
Characterization of Individual Clones In Vitro

Timing: 4-5 weeks

Expansion of individual stem cell clones for molecular genetics and functional analyses in vitro (air-liquid interface, ALI) and in vivo (xenografts) is essential to deconstruct the heterogeneity among the epithelial stem cells of the lung and airways. Growing individual clones at ALI allows assessment of the in vitro differentiation potential of each clone.

41. Expand individual clones in one well of a 6-well plate pre-seeded with feeder cells. Aspirate the culture medium, rinse thoroughly with DPBS, and add 1 mL TrypLE™ Express Enzyme (1×) for 10-20 min in the incubator at 37°C with 7.5% CO2.
42. Pipette up and down 5-10 times. Neutralize with 2 mL stem cell neutralizing medium, vigorously pipette up and down and pass through a 30 μm pre-separation filter to achieve a single-cell suspension. Remove mouse feeder cells using the QuadroMACS Starting Kit.
43. Count the cells and seed 200,000-300,000 cells in 200 μL complete stem cell growth medium per well of a 24-well Transwell insert. Add 700 μL complete stem cell growth medium into the lower chamber of the insert.
44. Incubate the Transwell insert in a 37°C incubator with 7.5% CO2 for 3-4 days until confluency, changing the medium of both upper and lower chambers every other day.
45. At confluency, remove the medium from the upper chamber of the insert by careful pipetting to create the ALI culture. Change the medium of the lower chamber to PneumaCult-ALI Medium, and keep for an additional 21 days in the incubator at 37°C with 7.5% CO2 to induce complete differentiation.

Characterization of Individual Clones In Vivo

Timing: 4-5 weeks

To assess the pathogenic potential of the lung epithelial clones, such as neutrophilic inflammation and fibrosis, they are transplanted subcutaneously into highly immunodeficient NSG (NOD-scid IL2Rγnull) mice (Figure 4 and Video S1).
A detailed phenotypic and molecular characterization of the stem cell variants in chronic obstructive pulmonary disease (COPD), for instance, has been presented previously (Rao et al., 2020).

46. Expand an individual clone in three wells of a 6-well plate pre-seeded with feeder cells.
47. At 4 to 5 days post seeding, the stem cells in each well should reach 90% confluency in the 6-well plate. Trypsinize and collect the cell pellets following the same procedure as described above.
48. Resuspend the cell pellets in 50 μL serum-free F12 medium and keep the suspension on ice.
49. Mix approximately 1 × 10^6 lung stem cells with 50 μL (1:1) growth factor-reduced Matrigel, and subcutaneously inject the cells into the back of 8- to 10-week-old NSG mice of either sex following inhalation anesthesia with isoflurane gas (Video S1).

CRITICAL: In this step, both the cell suspension and the GFR Matrigel should be kept on ice before injection. It is absolutely essential to use growth factor-reduced Matrigel, as the non-reduced version contains factors that might affect complete differentiation of the cells. The differentiation fates of these clones in xenografts have proved remarkably stable over 250 days of continuous propagation in vitro, suggesting that passaging of these clones is unlikely to affect the outcome of the xenograft experiment.

50. Clearly label the exact spots where cells were injected with a permanent marker.
51. Three weeks after the injection, euthanize the mice, collect the nodules, and fix them in 4% PFA for 24 hrs at 4°C (Figure 4).
52. Process the nodules as a small tissue sample in a tissue processor, embed in paraffin blocks, cut sections, and characterize the histology using the recommended antibodies.

CRITICAL: It is important to keep the xenografts in the mice for at least 3 weeks before collection to allow the stem cells to differentiate properly.

EXPECTED OUTCOMES

Using this protocol, libraries of epithelial stem cell variants from human normal lung or COPD lung have been established.
This library comprises multiple types of p63+ stem cells that are committed to distinct lineages (e.g., Clusters 1-4; Rao et al., 2020). Single stem cell-derived pedigrees can be established and further characterized in vitro (molecular genetics, air-liquid interface assays, etc.) and in vivo (mouse xenograft assay). These established pedigrees are also suitable for applications including RNA or DNA sequencing, genome editing, drug screening, and stem cell-based regenerative medicine.

QUANTIFICATION AND STATISTICAL ANALYSIS

We provide the seeding density of irradiated 3T3-J2 cells in various types of tissue culture dishes in order to generate the highest-quality feeder-seeded plates (Table 1). In addition, we provide the seeding density of lung stem cells for the optimal culture condition for maintaining the stemness of these cells (Table 2).

LIMITATIONS

We have successfully derived and cultured stem cell variants from the lungs of a large number of donors and observed very similar efficiency of cloning and long-term culturing independent of donor sex and age. The condition of the 3T3-J2 feeder layer can play a defining role in the success of human lung stem cell derivation, and this condition is ultimately dependent on adhering to the rigid parameters of 3T3-J2 growth and expansion defined in this protocol. Not every investigator in the laboratory can or will work within these parameters. Another important limitation of this method is the tendency of lung stem cells to spontaneously differentiate if colonies are allowed to merge to confluence. Thus, to maintain the stemness of lung stem cells, the seeding density and confluency of the cultures need to be strictly monitored. In addition, lung stem cells tend to differentiate if they are seeded as clusters of cells instead of single cells during passaging. Thus, thorough trypsinization and filtration or flow-sorting before seeding is essential to maintain the potential of these cells.
While we have endeavored to control the culture conditions, we note that these media require fetal bovine serum, a variable whose impact is difficult to estimate but whose lot numbers should be monitored. Finally, it is critical to ensure the quality of lung stem cells prior to seeding them on Transwell membranes for ALI differentiation, transplanting them as xenografts, or subjecting them to genome editing protocols. An important consideration in employing this technology is that the initial "libraries" of clonogenic cells from the lungs are complex and composed of heterogeneous stem cells with respect to their fate commitment. Thus, from COPD lungs, four major clone types were identified, all of which expressed high levels of the p63 transcription factor, a master regulator of all stratified epithelial stem cells (Senoo et al., 2007). Apart from this similarity, the four major classes of stem cells show distinct and absolute fate commitments (Cluster 1: distal airway: Club cells, type I and II pneumocytes; Cluster 2: goblet cell metaplasia; Cluster 3: squamous cell metaplasia; Cluster 4: inflammatory cell metaplasia; Rao et al., 2020). Given this complexity, and the possibility that one clone type might display proliferation advantages over another, it is likely that long-term expansion of the libraries could alter the clone distribution. We therefore recommend that analyses such as single-cell RNA sequencing or the generation of single-cell-derived clones be performed at early-passage stages, preferably at passage 2 or 3 of the library.

TROUBLESHOOTING

Problem
3T3-J2 cells lose contact inhibition and continue to proliferate at high density, resulting in a loss of lawn quality.

Potential Solution
Contact inhibition is a feature of the 3T3-J2 line that makes these cells suitable for use as feeder layers for cloning stem cells. Growth at high densities will select for cells that have lost this property.
If this happens, it is better to discard the cells and start over with a new, early-passage vial of 3T3-J2.

Problem
Increased saturation density of 3T3-J2 cells.

Potential Solution
Typically, a 150 mm dish of 3T3-J2 cells yields 8-10 million cells. If you get around 15-20 million cells, your 3T3-J2 cells have been grown improperly and, as detailed above, have lost contact inhibition. This will adversely affect your stem cell culture. In this case, it is highly recommended that you thaw a new vial of 3T3-J2 cells.

Problem
Poor health of feeder cells.

Potential Solution
The condition of feeder cells strongly depends on the quality of the 3T3-J2 culture. Ensure irradiation is performed on the collected 3T3-J2 cells as soon as possible. The cell suspension should be kept on ice at all times. After irradiation, the freezing medium must be added dropwise to the cells and gently mixed.

Problem
Stem cells lose clonogenicity.

Potential Solution
Loss of clonogenicity indicates a loss of stemness. This happens if the cells start to differentiate (Figure 5). To ensure immaturity of the cells, the seeding density guidelines should be strictly followed (Table 1). In addition, care must be taken not to let the cells grow to confluency, as this will trigger differentiation. If clonogenicity is significantly reduced, it is better to start a new culture from early-passage stem cells, preferably from passage 2 or 3.

Problem
ALI-derived epithelium is damaged.

Potential Solution
The loss of structural integrity may be a result of the poor condition of seeded lung stem cells or of damage induced during medium changes. The seeding density of lung stem cells on Transwell filters should be strictly adhered to (Table 1). The apical medium should be carefully removed after the culture attains confluency. If leakage in the ALI is observed, the experiment should be started again.

Problem
Xenograft nodule shows incomplete differentiation.
Potential Solution
The timing of in vivo differentiation is key for appropriate differentiation. Based on our experience, 3 weeks seems to be ideal for collecting the xenografts. One or two weeks of differentiation may not be sufficient for terminal differentiation. In addition, if nodules are left for longer than a month, the epithelia in these xenografts may diminish.

RESOURCE AVAILABILITY

Lead Contact
Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Wa Xian (wxian@uh.edu).

Materials Availability
The unique/stable reagents generated in this study are available from the Lead Contact with a completed Materials Transfer Agreement.

Data and Code Availability
This protocol does not include any datasets or code.

ACKNOWLEDGMENTS

This work was supported by grants from the Cancer Prevention Research Institute of Texas (CPRIT; RR150104 to W.X. and RR1550088 to F.M.), the National Institutes of Health (1R01DK115445-01A1 to W.X., 1R01CA241600-01 and U24CA228550 to F.M.), the US Dept. of Defense (W81XWH-17-1-0123 to W.X.), and the American Gastroenterology Association Research and Development Pilot Award in Technology (to W.X.). W.X. and F.D.M. are CPRIT Scholars in Cancer Research. We thank all the members of the Xian-McKeon laboratory for helpful discussions and support. We thank H. Green and J. Rosen for advice and support.

Figure legend: Comparison between immature lung stem cell clones (left) and clones undergoing spontaneous differentiation (right). Scale bar, 100 μm. The images were taken 7 days after the lung stem cells were seeded onto lawns of 3T3-J2 feeder cells. If the procedures described in this protocol are followed strictly, more than 80% of the stem cell colonies remain immature as shown in the left panel. If more than 50% of the colonies display the morphology shown in the right panel, the culture should be discarded.
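The yield-based rule of thumb in the troubleshooting section above (a healthy 150 mm dish of 3T3-J2 gives 8-10 million cells; 15-20 million signals lost contact inhibition) can be encoded as a quick QC check. This is an illustrative sketch, not part of the protocol; the 1.5-fold tolerance over the healthy upper bound is my assumption.

```python
# Healthy harvest range for a 150 mm dish of 3T3-J2, as stated in the protocol.
EXPECTED_YIELD_RANGE = (8e6, 10e6)

def feeder_qc(cell_yield, expected=EXPECTED_YIELD_RANGE, tolerance=1.5):
    """Flag a 3T3-J2 harvest whose yield suggests lost contact inhibition.

    'tolerance' (fold over the healthy upper bound before discarding)
    is an illustrative assumption, not a figure from the protocol.
    """
    low, high = expected
    if cell_yield > tolerance * high:
        return "discard: likely loss of contact inhibition; thaw a new early-passage vial"
    if cell_yield < low:
        return "warn: low yield; check culture health before irradiation"
    return "ok"
```

For example, an 18-million-cell harvest from a single 150 mm dish would be flagged for discard, matching the troubleshooting advice above.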
Trend in the incidence of type 1 diabetes (T1DM) among Qatari and Arab gulf children and adolescents over the past 20 years

Letter to the Editor.

To the Editor,

During the last decades, the world has seen a dramatic increase in the prevalence of diabetes mellitus, particularly in developed countries. More than 96,000 children and adolescents under 15 years are estimated to be diagnosed with T1DM annually, and when the age range covers up to 20 years, the number is estimated to be more than 132,600 (1). The incidence of T1DM varies greatly between different countries, within countries, and between different ethnic populations, with the highest incidence rates observed in Finland, Northern Europe, and Canada (2). We reviewed the literature to find the trend in T1DM incidence in children and adolescents (age 6 months to 18 years) in the State of Qatar over the past 20 years and compared it with the data reported in other Arab gulf countries. The Arab region appears to have a higher prevalence of diabetes than the global average. Five of the top 10 countries with the highest prevalence of diabetes (in adults, aged 20 to 79 years) are in the Arab gulf region: Kuwait (21.1%), Qatar (20.2%), Saudi Arabia (20.0%), Bahrain (19.9%) and UAE (19.2%). In Kuwait, the incidence of T1DM in children and adolescents doubled from 20.18/100,000 (in 1995-1999) to 40.9/100,000 (in 2011-2013). These data confirm a significantly high incidence and a markedly increasing trend of T1DM in children and youths in the Arab Gulf states (3). In the largest country in the Gulf area (KSA), the reported incidence of T1DM in children and adolescents rose from 18.05/100,000 (1990-1998) to 33.5/100,000 in 2017. The incidence and trend of T1DM in Arab Gulf countries is extremely high compared to other Asian countries, where the incidence of T1DM is very low. Different genetic/environmental interactions might operate in the etiology of T1DM between Caucasians, Arabs, and Asians (4).
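The Kuwait figures above imply a steady exponential rise; a quick back-of-envelope computation makes the annual growth rate explicit. Using the midpoints of the two reporting intervals (1997 and 2012, a 15-year span) is my assumption, not a figure from the letter.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two incidence estimates."""
    return (end / start) ** (1.0 / years) - 1.0

# Kuwait, T1DM incidence per 100,000 children (figures from the letter);
# 20.18 in 1995-1999 vs. 40.9 in 2011-2013, taken at interval midpoints.
kuwait_growth = cagr(20.18, 40.9, 15)
```

Under that assumption the doubling corresponds to roughly 5% growth per year, sustained for a decade and a half.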
Among different Arab countries, several non-HLA genes have been reported to be associated with susceptibility to T1DM, including CTLA4, CD28, PTPN22, TCRβ, CD3z, IL15, BANK1, and ZAP70 (5). Taken together, such marked variation in incidence trends is consistent with an etiologic understanding of T1DM as a disease that involves environmental triggers acting with genetic susceptibility to initiate autoimmune destruction of pancreatic β-cells. It would be of great interest to investigate to what degree genetic determinants influence the well-known regional differences in incidence, so that we can identify environmental risk factors that may either initiate the autoimmune process or promote already ongoing β-cell damage in different countries. An increased prevalence of β-cell autoimmunity [anti-glutamic acid decarboxylase (GAD) antibodies (Ab), anti-islet cell Ab (ICA) and anti-insulin Ab (IAA)] was found in Qatari children and adolescents with T1DM in 2020 compared with 2012-2016 (82.7% vs. 75.5%; p = 0.009). This could be related to increased autoimmune aggression secondary to environmental inciting factor(s), the larger number of antibodies screened in recent years, and the different age ranges of subjects included in the different studies (0.5-14 years in 2016 versus 0.5-18 years in 2020). In conclusion, data from the Arab gulf show a markedly increasing trend in the incidence of T1DM in children and adolescents over the last two decades. The high and increasing prevalence of positive autoimmunity, as well as the genetic susceptibility evidenced by the inheritance of HLA-susceptible loci, can explain the high incidence and increasing trend in the Arab gulf population. In Qatari children with T1DM, an association with HLA haplotypes DQA1*03:01:01G (OR = 2.46; p = 0.011) and DQB1*03:02:01G (OR = 2.43; p = 0.022) has been identified.
Moreover, additional risk factors such as obesity, rapid urbanization and its associated changes in dietary habits, and lack of physical activity are also important. Epidemiological studies are necessary to identify risk determinants that may be useful for primary prevention strategies.
Counter examples for bilinear estimates related to the two-dimensional stationary Navier--Stokes equation

In this paper, we are concerned with bilinear estimates related to the two-dimensional stationary Navier--Stokes equation. By establishing concrete counter examples, we prove the bilinear estimates fail for almost all scaling critical Besov spaces. Our result may be closely related to an open problem whether the two-dimensional stationary Navier--Stokes equation on the whole plane $\mathbb{R}^2$ is well-posed or ill-posed in scaling critical Besov spaces.

Introduction

Let $n \geq 2$ be an integer and let $1 \leq p, q \leq \infty$. We consider the following bilinear estimate in the homogeneous Besov space $\dot{B}^{\frac{n}{p}-1}_{p,q}(\mathbb{R}^n)$: for real valued vector fields $u, v \in \dot{B}^{\frac{n}{p}-1}_{p,q}(\mathbb{R}^n)$ satisfying $\operatorname{div} u = \operatorname{div} v = 0$,
$$\|(-\Delta)^{-1}\mathbb{P}\operatorname{div}(u \otimes v)\|_{\dot{B}^{\frac{n}{p}-1}_{p,q}(\mathbb{R}^n)} \leq C\,\|u\|_{\dot{B}^{\frac{n}{p}-1}_{p,q}(\mathbb{R}^n)}\|v\|_{\dot{B}^{\frac{n}{p}-1}_{p,q}(\mathbb{R}^n)} \tag{1.1}$$
with some positive constant $C = C(n, p, q)$ independent of given functions $u$ and $v$. Here
$$\mathbb{P} := I + \nabla \operatorname{div}(-\Delta)^{-1} = \bigl( \delta_{j,k} + \partial_{x_j}\partial_{x_k}(-\Delta)^{-1} \bigr)_{1 \leq j,k \leq n}$$
denotes the Helmholtz projection onto the divergence free vector fields. It is proved in [5] that the estimate (1.1) in the higher-dimensional cases $n \geq 3$ holds for $1 \leq p < n$, $1 \leq q \leq \infty$, whereas [13] showed that it fails for $p = n$, $2 < q \leq \infty$ or $n < p \leq \infty$, $1 \leq q \leq \infty$. The aim of this paper is to reveal that the two-dimensional case is completely different from the higher-dimensional cases and to show that the estimate (1.1) with $n = 2$ fails for almost all $1 \leq p, q \leq \infty$.

Before we state the main result precisely, we mention the background of this study and the motivation for starting it. The estimate (1.1) plays a significant role in the analysis of the $n$-dimensional stationary Navier--Stokes equation
$$\begin{cases} -\Delta u + (u \cdot \nabla)u + \nabla p = f, & x \in \mathbb{R}^n,\\ \operatorname{div} u = 0, & x \in \mathbb{R}^n, \end{cases} \tag{1.2}$$
where $u = u(x) : \mathbb{R}^n \to \mathbb{R}^n$ and $p = p(x) : \mathbb{R}^n \to \mathbb{R}$ denote the unknown velocity field and pressure of the fluid, respectively, whereas $f = f(x) : \mathbb{R}^n \to \mathbb{R}^n$ is a given external force.
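As a quick consistency check on the critical regularity indices quoted below (a standard computation, not a display reproduced from the paper): for $u_\lambda(x) := \lambda u(\lambda x)$,

```latex
\|u_\lambda\|_{\dot{B}^{s}_{p,q}(\mathbb{R}^n)}
  = \lambda^{\,1+s-\frac{n}{p}}\,\|u\|_{\dot{B}^{s}_{p,q}(\mathbb{R}^n)},
```

so the norm is scaling invariant precisely when $s = \frac{n}{p}-1$; the same computation for $f_\lambda(x) := \lambda^3 f(\lambda x)$ gives the exponent $3+s-\frac{n}{p}$ and hence invariance at $s = \frac{n}{p}-3$.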
We note that, applying $(-\Delta)^{-1}\mathbb{P}$ to the first equation of (1.2) and using $\mathbb{P}u = u$, $(u \cdot \nabla)u = \operatorname{div}(u \otimes u)$ and $\mathbb{P}(\nabla p) = 0$, we see that the stationary Navier--Stokes equation (1.2) is reformulated as
$$u = (-\Delta)^{-1}\mathbb{P}f - (-\Delta)^{-1}\mathbb{P}\operatorname{div}(u \otimes u). \tag{1.3}$$
In general, it is well-known as the Fujita--Kato principle (see [3]) that it is important to consider the solvability of a partial differential equation in the critical function space with respect to the scaling transform that keeps the equation invariant. In the case of the stationary Navier--Stokes equation, since (1.2) is invariant under the scaling $u_\lambda(x) := \lambda u(\lambda x)$, $p_\lambda(x) := \lambda^2 p(\lambda x)$, $f_\lambda(x) := \lambda^3 f(\lambda x)$, and the corresponding norms are invariant for all dyadic numbers $\lambda > 0$, we see that $f \in \dot{B}^{\frac{n}{p}-3}_{p,q}(\mathbb{R}^n)$ and $u \in \dot{B}^{\frac{n}{p}-1}_{p,q}(\mathbb{R}^n)$ are the scaling critical classes. Kaneko--Kozono--Shimizu [5] established the bilinear estimate (1.1) in the case of $n \geq 3$, $1 \leq p < n$ and $1 \leq q \leq \infty$. Making use of the contraction mapping principle via this bilinear estimate, they [5] considered the well-posedness of (1.3) in the scaling critical Besov spaces framework and proved that for $1 \leq p < n$ and $1 \leq q \leq \infty$, (1.3) with $n \geq 3$ possesses a unique small solution $u \in \dot{B}^{\frac{n}{p}-1}_{p,q}(\mathbb{R}^n)$ provided that $\|f\|_{\dot{B}^{\frac{n}{p}-3}_{p,q}(\mathbb{R}^n)}$ is sufficiently small. On the other hand, Tsurumi [10,13] proved that (1.3) with $n \geq 3$ is ill-posed for $n < p \leq \infty$, $1 \leq q \leq \infty$ and $p = n$, $2 < q \leq \infty$, in the sense that the solution map $\dot{B}^{\frac{n}{p}-3}_{p,q}(\mathbb{R}^n) \ni f \mapsto u \in \dot{B}^{\frac{n}{p}-1}_{p,q}(\mathbb{R}^n)$ is discontinuous. See [6,7,11,12,14,15] for other related studies.

In contrast to the higher-dimensional case $\mathbb{R}^n$ with $n \geq 3$, the question of the well-posedness and ill-posedness of the two-dimensional stationary Navier--Stokes equation remains an open problem for all $1 \leq p, q \leq \infty$. This is because it is quite hard to show the two-dimensional bilinear estimate (1.1) with $n = 2$ for any $1 \leq p, q \leq \infty$. The aim of this paper is to show that the two-dimensional bilinear estimate (1.1) with $n = 2$ fails for almost all $1 \leq p, q \leq \infty$. More precisely, for any $1 \leq p \leq \infty$, $1 \leq q < \infty$ or $2 \leq p \leq \infty$, $q = \infty$, we prove that the following bilinear estimate fails:
$$\|(-\Delta)^{-1}\mathbb{P}\operatorname{div}(u \otimes v)\|_{\dot{B}^{\frac{2}{p}-1}_{p,q}(\mathbb{R}^2)} \leq C\,\|u\|_{\dot{B}^{\frac{2}{p}-1}_{p,q}(\mathbb{R}^2)}\|v\|_{\dot{B}^{\frac{2}{p}-1}_{p,q}(\mathbb{R}^2)} \tag{1.4}$$
with real valued vector fields $u, v \in \dot{B}^{\frac{2}{p}-1}_{p,q}(\mathbb{R}^2)$ satisfying $\operatorname{div} u = \operatorname{div} v = 0$ and some positive constant $C = C(p, q)$ independent of given functions $u$ and $v$.
Furthermore, for the remaining case $1 \leq p < 2$, $q = \infty$, we show that the following bilinear estimate (1.5) fails, with real valued vector fields satisfying $\operatorname{div} u = \operatorname{div} v = 0$ and some positive constant $C = C(p)$ independent of given functions $u$ and $v$. Here, we note that the estimate (1.5) is slightly stronger than (1.4) due to the embedding $\dot{B}^{\frac{2}{p}-1}_{p,1}(\mathbb{R}^2) \hookrightarrow \dot{B}^{\frac{2}{p}-1}_{p,\infty}(\mathbb{R}^2)$. Now, our main result of this paper reads as follows. Here, $p' := p/(p-1)$ denotes the Hölder conjugate index of $p$.

(1) As a related study of Theorem 1.1, we refer to the work of Tsurumi [14], where he constructed counter examples for the fractional Leibniz estimates for the product $fg$ of two scalar functions. Compared with [14], the main difficulty of our problem is that we need to estimate the more complicated product terms $\mathbb{P}\operatorname{div}(u \otimes v)$ for divergence free vector fields $u$ and $v$. In [10,13], Tsurumi overcame this problem by finding vector fields $u$ and $v$ on $\mathbb{R}^n$ with $n \geq 3$ for which the bilinear term $\mathbb{P}\operatorname{div}(u \otimes v)$ has a simple structure. However, this idea cannot be directly applied to our problem, since these good examples $u$ and $v$ need at least three components. To prove Theorem 1.1, we find that if we choose $u(x) = \nabla^{\perp}(\psi(x)\cos(Mx_1))$ with $M \gg 1$ and some smooth function $\psi$ satisfying (2.1) below, then the leading term of the low frequency part of $\mathbb{P}\operatorname{div}(u \otimes u)$ has a simple structure. This is the key ingredient of our analysis.

(2) By our result, we may conjecture that the two-dimensional stationary Navier--Stokes equation (1.2) with $n = 2$ is ill-posed in the scaling critical framework for all $p$ and $q$ satisfying (i) or (ii) in Theorem 1.1. Indeed, following the standard ill-posedness argument proposed in studies such as [2,10,13,16], we see that constructing a sequence of functions which gives a counter example of the bilinear estimate (1.4) is a key ingredient in the proof.
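In that standard ill-posedness scheme, the first two Picard iterates for the integral formulation typically take the following form (a sketch of the usual construction; the paper's exact displays are not reproduced here):

```latex
u_N^{(1)} := (-\Delta)^{-1}\mathbb{P} f_N, \qquad
u_N^{(2)} := -(-\Delta)^{-1}\mathbb{P}\,\operatorname{div}\bigl(u_N^{(1)} \otimes u_N^{(1)}\bigr),
```

and the failure of the bilinear estimate is what allows the second iterate $u_N^{(2)}$ to be large in the critical norm while $\|f_N\|_{\dot{B}^{\frac{2}{p}-3}_{p,q}}$ stays bounded.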
Theorem 1.1 shows that there exists a sequence $\{f_N\}_N$ of external forces giving a counter example; let $u_N^{(1)}$ and $u_N^{(2)}$ denote the first and second iterations. Then, we formally decompose the solution $u_N$ to (1.2) with the external force $f_N$ as $u_N = u_N^{(1)} + u_N^{(2)} + w_N$. To complete the proof of the ill-posedness, it is necessary to show the existence of $w_N$ obeying (1.6) and establish some estimates for it. However, since it is quite hard to find a Banach space $X(\mathbb{R}^2)$ in which the corresponding bilinear estimate holds for all $u, v \in X(\mathbb{R}^2)$ with $\operatorname{div} u = \operatorname{div} v = 0$, we cannot control the perturbation $w_N$ in any function space, and thus the question of the well-posedness and ill-posedness for (1.2) in critical Besov spaces is still a challenging problem.

We prepare the notation used in this paper. Throughout this paper, we denote by $c$ and $C$ constants which may differ in each line. In particular, $C = C(*, ..., *)$ denotes a constant which depends only on the quantities appearing in parentheses. Let $\mathscr{S}(\mathbb{R}^2)$ be the set of all Schwartz functions on $\mathbb{R}^2$ and let $\mathscr{S}'(\mathbb{R}^2)$ be the set of all tempered distributions on $\mathbb{R}^2$. We consider a family $\{\phi_j\}_{j \in \mathbb{Z}}$ of functions in $\mathscr{S}(\mathbb{R}^2)$ satisfying $\operatorname{supp} \phi_j \subset \{\xi \in \mathbb{R}^2 ;\ 2^{j-1} \leq |\xi| \leq 2^{j+1}\}$ and $\sum_{j \in \mathbb{Z}} \phi_j = 1$ on $\mathbb{R}^2 \setminus \{0\}$. As a related notion, we define the family $\{\Delta_j\}_{j \in \mathbb{Z}}$ of Littlewood--Paley frequency localized operators by $\Delta_j f := \mathcal{F}^{-1}[\phi_j \widehat{f}\,]$. Then, we define the homogeneous Besov space
$$\dot{B}^{s}_{p,q}(\mathbb{R}^2) := \Bigl\{ f \in \mathscr{S}'(\mathbb{R}^2)/\mathscr{P}(\mathbb{R}^2) \ ;\ \|f\|_{\dot{B}^{s}_{p,q}} := \bigl\| \{ 2^{sj} \|\Delta_j f\|_{L^p} \}_{j \in \mathbb{Z}} \bigr\|_{\ell^q(\mathbb{Z})} < \infty \Bigr\},$$
where $\mathscr{P}(\mathbb{R}^2)$ is the set of all polynomials on $\mathbb{R}^2$. In what follows, for a space $X(\mathbb{R}^2)$ of functions on $\mathbb{R}^2$, we use the abbreviation $\|\cdot\|_X = \|\cdot\|_{X(\mathbb{R}^2)}$. We refer to [1,9] for the basic properties of Besov spaces.

This paper is organized as follows. In Section 2, we prepare some key lemmas for the proof of Theorem 1.1. In Section 3, we prove Theorem 1.1.

Remark 2.2. As $M \gg 1$, we regard the first term of the right hand side of (2.2) as the leading term. This leading term is so simple that we may obtain the appropriate lower bound estimate. See Lemma 2.4 for details.

Proof of Lemma 2.1. Let $j \in \mathbb{Z}$ satisfy $j \leq 0$.
By the direct calculations, we see that and By making use of the elementary properties of the trigonometric functions, we have Here, since the supports of the Fourier transforms of the second and third terms of the above right hand side are included in $\{\xi \in \mathbb{R}^2 ;\ 2M - 4 \leq |\xi| \leq 2M + 4\}$, they vanish by applying $\Delta_j$. Thus, we have Similarly, we obtain Hence, it follows from the above calculations and (2.3) that and we complete the proof.

Proof. Let $\varphi_j(x) := \psi(x)\cos(2^{j^2}x_1)$. By the definition of $u_N$ and $v_N$, we see that Since there holds, for integers $10 \leq j, \ell \leq N + 10$ with $j \neq \ell$, Hence, from Lemma 2.1, we obtain which completes the proof.

Lemma 2.4. Let $\psi \in \mathscr{S}(\mathbb{R}^2)$ satisfy (2.1). Then, for any $1 \leq p \leq \infty$, there exists a positive constant $c = c(p)$ such that

Proof. Let $j \in \mathbb{Z}$ satisfy $j \leq -2$. We first consider the case $1 \leq p \leq 2$. Let Then, we have Since $\widehat{\psi}(2^j\xi - \eta) = \widehat{\psi}(\eta) = 1$ for all $\xi \in A_0$ and $\eta \in \mathbb{R}^2$ with $|\eta| \leq 1/2$, we see that which implies Next, we consider the case of $2 \leq p \leq \infty$. A portion of the following calculations is based on [4, Proposition 3.1]. Using the Bernstein and Hölder inequalities and the Plancherel theorem, it holds Since it holds This completes the proof.

3. Proof of Theorem 1.1

Now, we are in a position to present the proof of Theorem 1.1. The proof is separated into three parts.

(i) The case of $1 \leq p \leq \infty$ and $1 \leq q < \infty$. Let $\psi \in \mathscr{S}(\mathbb{R}^2)$ satisfy (2.1) and let where $M \geq 10$ is a constant to be determined later. We note that $u_N$ is a divergence-free real valued vector field. Since $\operatorname{supp} \widehat{u_N} \subset \{\xi \in \mathbb{R}^2 ;\ M - 2 \leq |\xi| \leq M + 2\}$, we see that By Lemma 2.1, we have We see by Lemma 2.4 that It follows from the Bernstein inequality and the embedding of $L^1(\mathbb{R}^2)$ that Hence, we obtain by (3.1) and (3.2) that for some positive constants $c_0 = c_0(p, q)$ and $C_0 = C_0(p, q)$. Then, choosing $M$ so large that $c_0 M^2 - C_0 \|\psi\|^2_{\dot{H}^1} \geq c_0$, we complete the proof of this case.

(ii) The case of $2 < p \leq \infty$ and $q = \infty$.
Let $\psi \in \mathscr{S}(\mathbb{R}^2)$ satisfy (2.1) and let Then, by the similar calculation as in the previous step (i), we see that for $N \geq 10$. By Lemma 2.1, we may decompose the bilinear term as It follows from the Bernstein inequality and the embedding Thus, we obtain the required bound on $(-\Delta)^{-1}\mathbb{P}\operatorname{div}(u_N \otimes u_N)$ for $N \geq 10$, which completes the proof of this case.

(iii) The case of $1 \leq p \leq 2$ and $q = \infty$. Let $\psi \in \mathscr{S}(\mathbb{R}^2)$ satisfy (2.1). Following the idea in [14], we define Then, since the support of the Fourier transform of $\nabla^{\perp}(\psi(x)\cos(2^{j^2}x_1))$ is included in $\{\xi \in \mathbb{R}^2 ;\ 2^{j^2} - 2 \leq |\xi| \leq 2^{j^2} + 2\}$, we have By the similar calculation as in the proof of Lemma 2.1, we see that for all $k \leq -2$. Then, by (3.3) and (3.4), we obtain Thus, we complete the proof.

Conflict of interest statement. The author has declared no conflicts of interest.
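For reference, the elementary facts invoked repeatedly in the computations above are the perpendicular gradient and the product-to-sum formulas (standard identities, stated here for convenience; the sign convention for $\nabla^{\perp}$ is one common choice and may differ from the paper's):

```latex
\nabla^{\perp} g := (-\partial_{x_2} g,\ \partial_{x_1} g), \qquad
\cos a \cos b = \tfrac{1}{2}\bigl(\cos(a-b) + \cos(a+b)\bigr), \qquad
\sin a \cos b = \tfrac{1}{2}\bigl(\sin(a+b) + \sin(a-b)\bigr).
```

In particular, $\operatorname{div} \nabla^{\perp} g = -\partial_{x_1}\partial_{x_2} g + \partial_{x_2}\partial_{x_1} g = 0$, which is why fields of the form $u = \nabla^{\perp}(\psi(x)\cos(Mx_1))$ are automatically divergence free, and the product formulas split $u \otimes u$ into a low-frequency part and parts supported near frequency $2M$, matching the support statement $\{2M - 4 \leq |\xi| \leq 2M + 4\}$ in the proof of Lemma 2.1.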
Comparison of environmental inference approaches for ecometric analyses: Using hypsodonty to estimate precipitation

Abstract

Ecometrics is the study of community-level functional trait-environment relationships. We use ecometric analyses to estimate paleoenvironment and to investigate community-level functional changes through time. We evaluate four methods that have been used or have the potential to be used in ecometric analyses for estimating paleoenvironment, to determine whether there have been systematic differences in paleoenvironmental estimation due to choice of the estimation method. Specifically, we evaluated linear regression, polynomial regression, nearest neighbor, and maximum-likelihood methods to explore the predictive ability of the relationship for a well-known ecometric dataset of mammalian herbivore hypsodonty metrics (molar tooth crown to root height ratio) and annual precipitation. Each method was applied to 43 Pleistocene fossil sites and compared to annual precipitation from global climate models. Sites were categorized as glacial or interglacial, and paleoprecipitation estimates were compared to the appropriate model. Estimation methods produce results that are highly correlated with log precipitation and estimates from the other methods (p < 0.001). Differences between estimated precipitation and observed precipitation are not significantly different across the four methods, but maximum likelihood produces the most accurate estimates of precipitation. When applied to paleontological sites, paleoprecipitation estimates align more closely with glacial global climate models than with interglacial models regardless of the age of the site. Each method has constraints that are important to consider when designing ecometric analyses to avoid misinterpretations when ecometric relationships are applied to the paleontological record. We show interglacial fauna estimates of paleoprecipitation more closely match glacial global climate models.
This is likely because of the anthropogenic effects on community reassembly in the Holocene.

| INTRODUCTION

Functional traits are measurable features that influence an organism's interaction with its environment (McGill et al., 2006; Violle et al., 2007). When measured in the fossil record, functional traits can be used for a thorough understanding of biotic responses to corresponding environmental changes (Eronen, Polly, et al., 2010), which can contribute to improved predictions of future faunal communities as they face severe impacts from environmental change (Barnosky et al., 2011; Ceballos et al., 2005, 2015). With climate expected to continue changing at unprecedented rates (Intergovernmental Panel on Climate Change, 2014; Wuebbles et al., 2017), it is important to better understand the past so that we can anticipate future faunal responses. Ecometric analyses were developed to estimate paleoclimatic conditions from fossil assemblages by providing a linkage between paleontological data, modern data, and projections of functional responses to impending climate change (Polly et al., 2011; Polly & Head, 2015). These studies use the trait-environment relationship to study assemblage-level responses over spatial and temporal scales (Eronen, Polly, et al., 2010; Polly et al., 2011; Polly & Head, 2015). When there is a strong trait-environment relationship, the traits can act as predictors of environment (Eronen, Polly, et al., 2010; McGill et al., 2006), and paleontology can inform conservation efforts by providing a long-term record of change (Barnosky et al., 2017; Dietl & Flessa, 2011; Dietl et al., 2015).
Previous research has demonstrated relationships between community-level trait composition and environmental variables, including for plant leaf margins (Nicotra et al., 2011; Peppe et al., 2011; Royer et al., 2012; Wolfe, 1979), herbivore teeth (Eronen, Polly, et al., 2010; Eronen et al., 2010a; Evans, 2013; Fortelius et al., 2016), and locomotor skeletal elements of bovids (Barr, 2017), carnivorans (Polly, 2010), and snakes (Lawing et al., 2012), but the estimation methods have varied. Wolfe (1979) used linear regression to demonstrate that areas with high mean annual temperatures are dominated by leaves with entire margins while areas with low temperatures are dominated by leaves with nonentire margins. Eronen et al. (2010b) used linear regression and regression tree analysis to estimate Eurasian paleoprecipitation from large mammal hypsodonty values. Barr (2017) used general linear models to study the relationship between bovid postcranial elements and vegetation cover and precipitation. Fortelius et al. (2016) used regression and k-nearest neighbor (kNN) analyses on dental characters to investigate paleoenvironment in the Turkana Basin between 7 and 1 million years ago. Polly (2010) and Lawing et al. (2012) used maximum-likelihood estimation to explore the ecometric value of carnivoran calcaneal morphology and relative snake tail length, respectively. The community of scientists using ecometrics for conservation paleontology will benefit from a discussion of when to use which methods, because less accurate methods will cause misinterpretations when ecometric relationships are applied to the paleontological record. Although the use of ecometrics has increased in recent years, only Fortelius et al. (2016) compare multiple methods (regression and k-nearest neighbor, kNN), also using hypsodonty as the ecometric trait.
In this case, the authors discuss the merits of both, including that regression is easier to interpret because it produces an equation and that kNN is more sensitive to variation because it is nonlinear. An analysis of additional estimation methods will enable better comparisons and address potential weaknesses of paleoenvironmental interpretations.

| Herbivore hypsodonty

Hypsodonty is the ratio of the tooth crown height to root height of the molars, and hypsodonty and annual precipitation are highly correlated in large and small mammals (Eronen, Polly, et al., 2010; Eronen et al., 2010a; Lawing et al., 2017). Hypsodonty is functionally related to the durability of teeth in herbivores and provides biomechanical advantages, including more restricted areas of stress and increased occlusal pressure, to support more efficient mastication of grass and other tough, poor-quality vegetation (Demiguel et al., 2016; Solounias et al., 2019). With increasing aridity, increasing dietary roughage and increasing environmental grit often coincide, so that both diet and habitat play a role in the development of hypsodont dentition (Toljagić et al., 2018; Williams & Kay, 2001). Therefore, as environments have changed, so too have community-level hypsodonty values. Records from the Great Plains and the western United States suggest that North American habitats became more open and grass-dominated in the Miocene (Edwards et al., 2010; Strömberg, 2011). There were approximately 4 million years between the establishment of C3 grasslands and the origination of equid hypsodonty in the Great Plains of North America (Strömberg, 2006); it was approximately another 10 million years until specialized grazing ungulates appeared (Janis, 2008). However, rodents and lagomorphs responded millions of years earlier than the ungulates (Samuels & Hopkins, 2017).
Here, we use the trait-environment relationship between hypsodonty and annual precipitation to compare four methods of ecometric estimation: linear regression, polynomial regression, nearest neighbor, and maximum likelihood. We aim to (a) explore differences in the predictive ability of each method and (b) apply each method to Late Pleistocene fossil localities to demonstrate the potential impact of method selection on paleoenvironmental interpretations. We expect maximum likelihood to produce the most accurate estimates of precipitation from community hypsodonty values because the method estimates precipitation by fitting a model to a localized subset of communities that have similar trait values. For that reason, we also expect maximum likelihood to estimate paleoprecipitation that most closely aligns with global climate models.

| MATERIALS AND METHODS

We used modern communities of herbivores and annual precipitation data to evaluate four estimation methods for ecometric analyses and investigate the capacity of each method to estimate paleoprecipitation for paleontological sites.

| Study area and taxa

We use the extant species of Artiodactyla, Perissodactyla, Rodentia, and Lagomorpha (n = 404) in North America, because they represent the primary herbivores in North American mammalian communities. Jardine et al. (2012) suggested not including fossorial rodents and lagomorphs in studies of precipitation because these taxa are under selective pressures that do not covary with aridity. However, hypsodonty, as well as fossorial behavior, of small mammals increased as habitats became more dry and open (Samuels & Hopkins, 2017; Schap et al., 2020), and the relationship between hypsodonty and precipitation occurs in Dipodidae, which includes fossorial species outside of North America (Ma et al., 2017). Thus, we have included all Glires here to encompass the majority of the herbivorous mammal community.
We recognize that the North American fauna is biased following the Pleistocene mass extinction (Barnosky et al., 2011; Carrasco et al., 2009) and, therefore, the predictive abilities of the estimation methods may be lower. Megaherbivores, though geographically widespread, do contribute to the community-level trait values at Pleistocene sites and are not represented in the modern data. However, because the relationship between hypsodonty and precipitation is well established, it provides a good dataset for relative comparisons of paleoenvironmental reconstruction methods.

| Traits and communities

Hypsodonty data for this paper came from an existing dataset, which has been used to investigate trait composition at the community level in North America. Crown height for each species was assigned a value of 3 (hypsodont, high crown height), 2 (mesodont, moderate crown height), or 1 (brachydont, low crown height; Figure 1). An additional 72 species were assigned hypsodonty values based on literature for a total of 446 species. Some members of Rodentia and Lagomorpha have evolved hypselodont dentition in which the teeth continue to emerge throughout the lifespan; these taxa are classified as hypsodont for the purposes of this study following Fortelius et al. (2003). Community composition was sampled using an equidistant 50-km point system (9,699 sampling points) in North America (Lawing et al., 2012; Polly, 2010) using species range maps (Canada-WILDSPACE; Patterson et al., 2007). Taxonomy associated with hypsodonty data and range maps was reviewed to ensure consistency following Wilson and Reeder (2005). Only sampling points with a species richness of five or more were kept. Although this may exclude certain communities, it allows for more robust estimates of community-level measures in our models and enables more rigorous comparisons across estimation methods. We calculated the mean (Figure 2a) and standard deviation of hypsodonty for every sample point.
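The community-level aggregation described above (ordinal crown-height scores, a richness filter of at least five species, then a mean and standard deviation per sampling point) can be sketched with a small hypothetical table; the point IDs, species names, and scores below are invented for illustration.

```python
# Sketch of the community-level trait aggregation described above.
# Records are hypothetical (point id, species, crown-height score 1-3).
from statistics import mean, stdev

records = [
    (1, "sp0", 3), (1, "sp1", 3), (1, "sp2", 2),
    (1, "sp3", 2), (1, "sp4", 1), (1, "sp5", 3),
    (2, "sp6", 1), (2, "sp7", 1), (2, "sp8", 2),
]

# Group crown-height scores by sampling point.
by_point = {}
for point, _, score in records:
    by_point.setdefault(point, []).append(score)

# Keep only communities with a species richness of five or more,
# then compute the ecometric summaries used in the paper.
communities = {p: {"richness": len(s), "mean": mean(s), "sd": stdev(s)}
               for p, s in by_point.items() if len(s) >= 5}
print(communities)
```

Point 2 is dropped by the richness filter; point 1 yields a mean hypsodonty of about 2.33 with six species, the kind of summary computed at each of the 9,699 sampling points.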
Our dataset on communities includes the presumed presence or absence of species at each sampling location across North America because the ranges are not based only on direct observations. Another measure of community composition could include recording the presumed abundance of species within communities. That would allow us to weight the traits by the most commonly occurring taxa (sensu Faith et al., 2019). Faith et al. (2019) show that using abundance instead of occurrence allows for weighted ecometric means that can produce more robust paleoclimate estimates. Despite these benefits, we chose to use occurrences rather than abundance to (a) use range maps in place of observational data for the modern communities, ensuring larger coverage, (b) mirror available data at fossil sites that lack abundance descriptions, (c) overcome potential sampling bias that occurs in a dataset that includes both small and large mammals, and (d) replicate methods most commonly used in ecometric studies. In addition, gathering abundance data from the fossil record is highly susceptible to taphonomy and collection practices (Crees et al., 2019; Hernández Fernández & Vrba, 2006).

FIGURE 1 Three levels of hypsodonty examined here. Left: hypsodont, or high tooth crown-root ratio, as represented by Equus caballus. Middle: mesodont, or moderate tooth crown-root ratio, as represented by Cervus canadensis. Right: brachydont, or low tooth crown-root ratio, as represented by Tapirus terrestris.

| Environmental data

Annual precipitation data were downloaded from the WorldClim database at the 2.5-degree grid scale (Hijmans et al., 2005) and extracted at each sampling point across North America (Figure 2b). The natural log of annual precipitation was used to transform the data for normality.
| Ecometric analyses

Four inference methods were selected for comparison of mean community hypsodonty to annual precipitation: linear regression, polynomial regression, nearest neighbor, and maximum likelihood. Linear regression and polynomial regression produce estimates using the formula of a line of best fit that is either linear or nonlinear, respectively. Nearest neighbor estimates precipitation by using training data and the k closest communities of hypsodonty values. Maps of estimated annual precipitation were produced using the community hypsodonty data and each of the inference methods. This estimation step allows for precipitation estimates to be evaluated through comparisons with the observed precipitation dataset. Estimated values were subtracted from the observed values, and the differences were mapped to generate anomaly maps (Polly & Sarwar, 2014); smaller differences between estimated and observed values indicate a less biased prediction. Estimates were used to test the Pearson correlation of each method with observed precipitation and with the other three methods. An ANOVA test was used to compare the group means across the methods. Fossil sites were categorized as interglacial or glacial using literature sources that primarily reported relative dating, with many of the site descriptions including either Sangamonian (i.e., interglacial) or Wisconsinan (i.e., glacial) terminology. This final requirement excluded a number of well-known Pleistocene sites, including Rancho La Brea, American Falls, and Natural Trap Cave, because they could not be easily categorized as either glacial or interglacial.

| Fossil sites application

Global climate models (GCM) were downloaded for the last glacial maximum at 2.5 min resolution (Fick & Hijmans, 2017; Otto-Bliesner et al., 2006). Precipitation values were extracted from the GCM models at each site, and an average value was used for the two glacial GCMs, CCSM4 and MIROC-ESM.
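The evaluation step described above, anomalies computed as observed minus estimated precipitation plus a Pearson correlation between the two, can be sketched as follows; the observed values and the nearly unbiased "method" are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed log-precipitation at sampling points, and one
# method's estimates; the anomaly is observed minus estimated, so
# positive values mean the method underestimates rainfall.
observed = rng.normal(6.5, 0.5, 200)
estimated = observed + rng.normal(0.0, 0.2, 200)  # a nearly unbiased method

anomaly = observed - estimated
r = np.corrcoef(observed, estimated)[0, 1]        # Pearson correlation

print(f"mean anomaly {anomaly.mean():+.3f}, Pearson r = {r:.3f}")
```

Mapping `anomaly` over the sampling points would reproduce the anomaly maps of Polly and Sarwar (2014): a good method shows anomalies scattered around zero with no strong spatial structure.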
These models provided additional precipitation estimates to evaluate the accuracy of the ecometric estimates. For each fossil community, hypsodonty mean and standard deviation were calculated using only one occurrence of each species to prevent duplicating the trait value of any repeated taxa.

| Paleoenvironment of fossil sites

Most of the paleontological case study sites are glacial (72%).

| DISCUSSION

Trait-environment relationships can be used for understanding past environmental changes and corresponding biotic responses (Eronen, Polly, et al., 2010; Polly et al., 2011). Because there are minimal differences between the estimation methods (Figures 3, 4), we expect that, when a strong ecometric relationship exists, any of the investigated estimation methods will capture the relationship between hypsodonty and precipitation. Therefore, any of the methods can be used to estimate the environment from the distribution of trait values within a community. Hypsodonty and annual precipitation have a well-established relationship, but these methods may show more differences with a weaker trait-environment relationship. Each method has constraints that should be considered when selecting an estimation approach (Table 1). Polynomial regression uses a fitted regression curve of best fit for the estimation model. Estimates of precipitation using polynomial regression place a known hypsodonty value along that curve. In this study, precipitation estimates from polynomial regression are more highly correlated with observed precipitation values than those from linear regression or nearest neighbor (Table 1). However, polynomial regression is unable to predict precipitation values under 4.45 log mm because of the sinusoidal shape of the regression curve. Because of this lower limit of the curve, polynomial regression analyses will overestimate precipitation for communities dominated by taxa with hypsodont dentition because the model cannot estimate low precipitation values.
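The polynomial lower-limit problem described above can be sketched: a fitted cubic has a minimum over the calibrated trait range, and no community, however hypsodont, can receive an estimate below that floor. The data here are hypothetical and the particular floor value is an artifact of the simulation, not the study's 4.45 log mm figure.

```python
import numpy as np

# Hypothetical calibration data: a cubic fit to a trait-precipitation cloud.
rng = np.random.default_rng(2)
hyp = rng.uniform(1.0, 3.0, 300)
logp = 8.0 - 1.2 * hyp + rng.normal(0.0, 0.3, 300)

coefs = np.polyfit(hyp, logp, 3)
grid = np.linspace(1.0, 3.0, 201)
floor = np.polyval(coefs, grid).min()  # lowest estimate the curve can return

print(f"minimum predictable log-precipitation: {floor:.2f}")
```

Every community trait value maps onto this curve, so all estimates are bounded below by its minimum; drier conditions than the floor simply cannot be predicted.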
This is particularly relevant for arid regions, such as deserts, that are inhabited by faunal communities with high hypsodonty values. Nearest neighbor uses a subset of data, that is, training data, to construct a model. A training dataset should be large enough to provide a robust sample for model fit; thus, it is more advantageous to use k-nearest neighbor with a large dataset (Bhatia, 2010). In this study, the training data were 20% of the whole dataset. The k value can also be changed to include more or fewer surrounding data points to determine the precipitation value associated with a known reference value. Here, the spatial pattern of overestimation in the arid southwest, tundra, and Rocky Mountains and underestimation in the Pacific Northwest and eastern North America is generally consistent with the other methods (Figure 3c), but precipitation estimates from nearest neighbor have the lowest correlations with the estimates from the other three methods (Table 1). Maximum likelihood cannot predict precipitation for communities with a trait composition outside of the ecometric trait space used to calibrate the likelihood space. The ecometric trait space is constructed from the trait composition of modern communities. Therefore, in the paleontological case studies, two interglacial sites and seven glacial sites (21% of total sites) did not receive a maximum-likelihood estimate of precipitation because their hypsodonty values fall outside of the occupied bins designated based on the modern communities (Figure 6). This limitation of the method should be considered when working with potentially nonanalog communities, either in the past or the future, that occur outside of the ecometric trait space. Despite this limitation, precipitation estimates from maximum likelihood are the most highly correlated with observed precipitation (Table 1). Given the faunal bias that followed the Pleistocene extinctions, we might expect methods calibrated on the modern fauna to underestimate paleoprecipitation.
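The nonanalog-community limitation described above can be sketched as a binned likelihood calibration that simply declines to estimate when a fossil community's trait value falls in a bin no modern community occupies. The bin edges, the modern trait values, and the per-bin precipitation means below are all hypothetical.

```python
import numpy as np

# Ten hypsodonty bins over the trait range; occupancy comes from
# (hypothetical) modern community trait values.
edges = np.linspace(1.0, 3.0, 11)
occupied = np.zeros(10, dtype=bool)
modern = np.array([1.45, 1.51, 1.93, 2.25, 2.31, 2.68])
occupied[np.clip(np.digitize(modern, edges) - 1, 0, 9)] = True

# Calibrated estimate per occupied bin (hypothetical, drier toward higher bins).
bin_mean_precip = np.where(occupied, 7.5 - np.arange(10) * 0.3, np.nan)

def ml_estimate(trait_value):
    """Return the calibrated estimate, or None if the bin is unoccupied."""
    i = np.digitize(trait_value, edges) - 1
    if i < 0 or i > 9 or not occupied[i]:
        return None  # nonanalog community: no estimate possible
    return float(bin_mean_precip[i])

print(ml_estimate(2.25))  # inside the modern trait space
print(ml_estimate(2.95))  # outside it -> None
```

This mirrors the behavior reported for the nine fossil sites that received no maximum-likelihood estimate: the method is silent, rather than extrapolating, outside the calibrated trait space.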
Conversely, because of lags between environmental change and the evolution of hypsodonty (Janis, 2008; Strömberg, 2006), we might expect estimates built on extant taxa to generally overestimate paleoprecipitation. Here, the analyses are on a geologically small temporal scale of approximately 125,000 years, so it is unlikely that this evolutionary pattern affected the trait-environment relationship, and the four methods mostly overestimated or accurately predicted precipitation for the fossil sites when compared to the global climate models (Figure 6). It might also be expected that today's interglacial fauna should more accurately estimate paleoprecipitation at interglacial sites rather than glacial sites. However, the interglacial estimates are consistently offset from the interglacial global climate models, but more closely align with the glacial global climate models (Figure 7). While this could be an effect of the interglacial precipitation model, it may also be that today's interglacial faunal communities are more similar to the glacial communities. If the extant fauna is lagging behind the climate, the fauna may not have fully responded to today's interglacial conditions. On the timescale of interglacial and glacial cycles, changes in trait composition are driven by community reassembly rather than evolutionary adaptation. In the Holocene, community assembly is largely affected by anthropogenic effects that have changed community structure patterns to include more segregated species pairs and restricted the interglacial community reassembly that would have occurred without the human impacts (Lyons et al., 2016).

| Limitations

In this paper, we used community species lists extracted from expert-drawn polygons of species geographic ranges, which typically overestimate species' presence within communities (Cantú-Salazar & Gaston, 2013; Jetz et al., 2008).
This could affect the trait values of communities that occur along distribution margins and weaken the predictive ability of the estimation methods. Furthermore, although species occurrence data are from distribution estimates updated in 2007 (Patterson et al., 2007), precipitation is an average of data from 1970 to 2000 (Fick & Hijmans, 2017). This temporal mismatch may introduce a bias as faunal assemblages are increasingly affected by anthropogenic pressures, such as land use and habitat loss (Hobbs et al., 2018; Lyons et al., 2016). For example, a current species range map may no longer capture the precipitation regime from 1970 to 2000, but may instead reflect distribution constraints, such as habitat loss and competition from invasive or introduced species. We have limited our modern community species lists to only native and reintroduced taxa. Extirpation or extinction of native species and the presence of invasive and non-native species can change the trait values of a community (Žliobaitė et al., 2018), but, with a strong trait-environment relationship, it is unlikely that these taxa would change the trait values enough to notably change the environmental interpretation (Polly & Sarwar, 2014). For instance, it was expected that the Pleistocene megafaunal extinction would create a bias and make the functions unable to estimate precipitation at glacial sites. However, the glacial estimates more closely aligned with the global climate models (Figure 6). Fossil sites were designated as interglacial or glacial using relative dating. The literature often described the fossil sites as having a Sangamonian (interglacial) or Wisconsinan (glacial) fauna, which made it difficult to use a finer temporal resolution. Because of the consistent estimates within interglacial sites and glacial sites (Figure 6), it is unlikely that this caused a misinterpretation of results.
It would be beneficial to further evaluate the pattern of overestimating interglacial precipitation across sites using only fossil sites with absolute dating. In general, more studies on fossil communities are needed to increase the applicability of trait-based models to the past and the future.

| Implications

Evaluating ecological and evolutionary processes from data archived in the fossil record provides critical information about biodiversity to researchers, conservationists, and managers by facilitating a better understanding of anticipated biological responses to expected environmental changes (Barnosky et al., 2017; Dietl & Flessa, 2011; Dietl et al., 2015). Paleobiological records provide a broader and deeper perspective that allows us to forecast how impending climate change will affect species and communities (Burney & Burney, 2007; Lawing et al., 2016). Therefore, researchers are increasingly considering conservation implications in their paleontological work and, as such, it is important that we consider the methods used to define the trait-environment relationship. Here, we show that the hypsodonty-precipitation relationship is identifiable with four different estimation methods (Figure 3), although maximum likelihood produces a better fit to observed data and more neutral anomalies than the other methods (Figure 4). In this study, paleoprecipitation estimates of interglacial fossil communities were more closely aligned with glacial global climate models (Figures 6, 7).

FIGURE 7 Four estimates of precipitation for interglacial fossil sites compared to glacial global climate model estimates. (a) Estimates for interglacial sites; and (b) density plot of anomalies for interglacial estimates of precipitation and glacial global climate models. Logged precipitation values are provided in Table S1.
This pattern may be due to anthropogenic constraints on community reassembly in the Holocene (Lyons et al., 2016). For a more complete understanding of community responses to environmental change through time, it is imperative that we further explore trait-environment relationships in the paleontological record that can be used in conjunction with other proxies and models, such as global climate models. By using multiple proxies either in parallel or in merged multiproxy models, we can provide a more complete interpretation of past communities, which will be needed to anticipate faunal responses to ongoing environmental changes.

ACKNOWLEDGMENTS

We are grateful to Laura Emmert for providing Figure 1. We also thank Jim Mead and Perry Barboza for providing comments on an earlier version of this paper. This research was supported by funding.

CONFLICT OF INTEREST

The authors report no conflict of interest.

DATA AVAILABILITY STATEMENT

All data and code are available on the Dryad Digital Repository.
Retrospective study and immunohistochemical analysis of canine mammary sarcomas

Background: Canine mammary sarcomas (CMSs) are rarely diagnosed in female dogs, which explains the scarcity of immunohistochemical findings concerning those tumors. This paper presents the results of a retrospective study into CMSs and discusses the clinical features of the analyzed tumors, the expression of the intermediate filaments CK, Vim, Des and α-SMA, and the expression of p63, Ki67, ERα, PR and p53 protein.

Results: Four percent of all canine mammary tumors (CMTs) were classified as CMSs, and they represented 5.1% of malignant CMTs. The mean age at diagnosis was 11.1 ± 2.8 years. Large-breed dogs were more frequently affected (38.7%). The majority of observed CMSs were fibrosarcomas (2.1%). All CMSs expressed vimentin, and higher levels of vimentin expression were noted in fibrosarcomas and osteosarcomas. Ki67 expression was significantly correlated with the grade of CMS.

Conclusions: Our results revealed that CMSs form a heterogeneous group; therefore, immunohistochemical examinations could support differential and final diagnosis. Although this study analyzed a limited number of samples, the reported results can expand our knowledge about CMSs. Further work is required in this field.

Background

Canine mammary tumors (CMTs) are the most common neoplasms, accounting for nearly one-half of all tumors diagnosed in female dogs. Approximately 41% to 53% of CMTs are malignant [1,2]. Most CMTs are epithelial in origin, and they have been extensively researched [3,4]. However, little is known about canine mammary sarcomas (CMSs), which are malignant tumors originating in the mesenchymal tissue of the mammary gland, including osteosarcoma, chondrosarcoma, fibrosarcoma and hemangiosarcoma [5,6]. They are considered to have a very poor prognosis and, therefore, pose a great challenge in veterinary practice [1,6,7].
Mammary sarcomas are more often diagnosed in dogs than in humans, where breast sarcomas constitute less than 1% of malignant breast tumors [8]. The discussed tumors are very rare, they remain poorly investigated, and the majority of published studies into CMSs involve case reports or experiments performed on a small number of samples [7,9-12].

Tumor samples and histopathological examination

In this study, all cases of CMTs described in the archives (1996-2012) of the Division of Animal Pathomorphology at the Department of Pathology and Veterinary Diagnostics were analyzed. The histological type of the tumor was assessed based on the World Health Organization (WHO) Histological Classification of the Mammary Tumors of the Dog and Cat; the histological grade was based on the assessment of tubule formation, the degree of differentiation and the mitotic index [13,14]. For the needs of this study, CMTs were classified as benign or malignant, and were further subdivided into carcinomas and sarcomas. The age at diagnosis, the affected animal breed and the morphological features of the tumor, including size, location and histological type, were recorded. Eighteen paraffin-embedded specimens of canine mammary sarcomas were analyzed. The final diagnosis was based on IHC results and tumor grade. The histological type of CMS was determined based on the proposed criteria [6] and the World Health Organization classification [14]. The mitotic index (MI) was determined as the mean number of cells in mitosis evaluated in 10 high-power fields (HPF) under a 40x objective lens (field area 0.239 mm²) [15]. A specific grading system for CMSs has not yet been established; therefore, the method used in this study was based on the degree of cell differentiation (pleomorphism), the mitotic index and the presence of necrosis.
CMSs were classified into two groups: low-grade malignancy (well-differentiated and moderately differentiated, I and II) and high-grade malignancy (poorly differentiated, III) [16]. Where an agreement on a diagnosis could not be achieved, a round-table discussion was staged using a multi-headed microscope.

Scoring of immunohistochemical data

Immunohistochemical analyses involved at least 10 images of sarcomatous areas acquired at x40 HPF (Olympus microscope BX41). Positive immunostaining for Vim, CK, Des and α-SMA was observed as a brown cytoplasmic precipitate, and for p63 as a brown nuclear precipitate. The number of immunoreactive cells was classified as: - = none; ± = slight (positive cells constituted less than 10%); + = moderate (10-50% of cells were positive); and ++ = intense (more than 50% of cells were positive) [20]. The colorimetric intensity of IHC-stained antigen spots was determined in a computer-assisted image analyzer (Olympus Microimage Image Analysis version 4.0 for Windows, USA), and antigen spot color intensity was expressed as mean pixel optical density on a 1-256 scale. Positive immunostaining for Ki67, ERα, PR and p53 was defined as a nuclear pattern (brown precipitate). Antigen density was determined by counting at least 1000 cells in 10 HPF. The number of positive cells was expressed as the percentage of positively stained cells in the total number of cells [21,27]. Areas with necrosis were omitted.

Statistical analysis

Data were processed in Prism 5.00 software (GraphPad Software, California, USA) using one-way ANOVA, Tukey's HSD (Honestly Significant Difference) post-hoc test, and Spearman's and Pearson's correlation. P-values <0.05 (*) were regarded as significant, whereas p-values <0.01 and <0.001 were considered highly significant.

Retrospective data for CMSs

A total of 841 CMT cases were found in the archives (1996-2012) of the Division of Animal Pathomorphology.
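The categorical immunostaining scale described in the scoring section (none, slight, moderate, intense, based on the percentage of immunoreactive cells) can be sketched as a small lookup function. The thresholds come from the text; how exact boundary values (precisely 10% or 50%) are binned is an assumption.

```python
def ihc_score(percent_positive):
    """Map percent immunoreactive cells to the categorical score used here:
    '-' none, '+/-' slight (<10%), '+' moderate (10-50%), '++' intense (>50%).
    Boundary handling at exactly 10% and 50% is an assumption."""
    if percent_positive == 0:
        return "-"
    if percent_positive < 10:
        return "+/-"
    if percent_positive <= 50:
        return "+"
    return "++"

# Hypothetical counts of positive cells out of cells counted in 10 HPF.
for positive, total in ((0, 1000), (42, 1000), (250, 1000), (800, 1000)):
    pct = 100 * positive / total
    print(f"{positive}/{total} positive -> {pct:.1f}% -> {ihc_score(pct)}")
```

This makes the scoring reproducible from raw counts: for example, 250 positive cells out of 1000 counted gives 25%, a moderate (+) score.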
CMSs constituted only 4% (34/841) of all CMTs and 5.1% (34/666) of malignant CMTs. The ratio of sarcomas to carcinomas was 1:18 (34/616). Malignant mixed mammary tumors (carcinosarcomas) were excluded from these groups. The mean age at diagnosis in female dogs was 11.1 ± 2.8 years, within a range of 5 to 17 years. The mean tumor size, defined as the largest diameter, was 8.8 ± 6.1 cm (range of 2-20 cm). The fourth left mammary gland was most commonly affected. In the group of purebred dogs, the percentage of CMSs was higher in large breeds (38.7%, n = 12), i.e. German Shepherds (n = 4) and Rottweilers (n = 2), and single CMS cases were noted in the following breeds: St. Bernard, Tosa Inu, Doberman Pinscher, Flat Coated Retriever, Labrador Retriever and German Shorthaired Pointer. The analyzed specimens included 16 high-grade and 2 low-grade CMSs. All osteosarcomas, liposarcomas and other sarcomas were high-grade tumors; 66.7% (n = 4) of fibrosarcomas were high-grade, whereas the remaining fibrosarcomas were classified as low-grade. No significant differences were observed between patients' age or breed, tumor size, sarcoma type or degree of malignancy. The staining pattern was evaluated in sarcomatous areas of the tumor (Figures 1, 2, 3 and 4). All 18 cases were positive for Vim. The mean optical density related to Vim expression in liposarcomas and other sarcomas was significantly lower than in osteosarcomas and fibrosarcomas (p < 0.05) (Figure 5A, 5B). Focal moderate (+) expression of α-SMA was found in osteosarcomas (n = 3) and fibrosarcomas (n = 3) (Figures 1 and 2). Moderate cytoplasmic expression of α-SMA and Des was observed in one fibrosarcoma (Figure 2). The expression of CK and p63 was not observed in neoplastic cells. Ki67 was expressed by all CMSs, p53 by 50% of CMSs (n = 9), PR by 27.8% (n = 5), and ERα expression was reported in only one sample (Figures 1, 2, 3 and 4). Detailed data are given in Tables 1 and 2.
Spearman's analysis revealed that Ki67 expression (p = 0.034) was significantly correlated with the CMS grade (Figure 6). No significant correlation was found between proliferation markers (Ki67 expression and MI) and the expression levels of the hormone receptors ERα and PR.

Figure 1 Histological and immunohistochemical images of canine mammary osteosarcoma. The histological sample was stained with the standard hematoxylin and eosin (H-E) method. Immunopositivity (nuclear or cytoplasmic) is shown as brown precipitate in neoplastic cells. The EnVision+ System-HRP detection system was used, and the signal was visualized with the chromogen 3,3'-diaminobenzidine (DAB). Arrows indicate positive nuclear staining of cells. An asterisk (*) indicates a negatively stained sarcomatous area. Images were obtained under the Olympus BX41 microscope. Original magnification: H-E (x400), Vim (x400), α-SMA (x400), Ki67 (x400), ERα (x200, x400), PR (x400), p53 (x400).

Discussion

Canine mammary sarcomas (CMSs) are a rare type of tumor, and little is known about their biology. Our results provide new insights into clinical data regarding CMSs. Researchers are divided over the frequency of CMS occurrence. Previous research demonstrated that CMSs accounted for a small percentage (0.45-16.7%) of CMTs [5,28,29], and similar results were reported in our study. According to some authors, CMSs constitute 40% of all mammary malignancies [29,30]. It should be noted, however, that in selected analyses carcinosarcomas were included in the group of sarcomas [31]. In this study, CMSs were observed in older female dogs, which is consistent with reports describing the mean age at CMT diagnosis [3,32]. One study found that younger dogs (mean age of 9 years) were more likely to be affected by CMSs than by canine mammary carcinomas [5], but other morphological features were similar to those reported by Misdorp et al. [5].
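The Spearman correlation reported above between Ki67 expression and CMS grade is a rank correlation, which can be sketched without any statistics package; the Ki67 labeling indices and grade assignments below are hypothetical, not the study's measurements.

```python
import numpy as np

def spearman(x, y):
    """Spearman's rho as the Pearson correlation of ranks (average ranks for ties)."""
    def rank(v):
        v = np.asarray(v, float)
        order = np.argsort(v, kind="stable")
        r = np.empty(len(v), float)
        r[order] = np.arange(1, len(v) + 1)
        for val in np.unique(v):          # average ranks within tied groups
            mask = v == val
            r[mask] = r[mask].mean()
        return r
    rx, ry = rank(x), rank(y)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical Ki67 labeling indices (%) for low-grade (1) and high-grade (2) CMSs.
grade = [1, 1, 1, 2, 2, 2, 2, 2]
ki67 = [8, 12, 15, 22, 30, 35, 41, 55]
print(f"Spearman rho = {spearman(ki67, grade):.2f}")
```

Because grade is ordinal with many ties, the tie-averaged ranking matters; a positive rho of this kind is what the reported p = 0.034 association would look like on raw data.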
Interestingly, we noted that CMSs were more likely to affect the left rather than the right mammary gland. However, considering the small number of samples and the scarcity of published data, our results could be purely coincidental. In view of previous reports indicating that the location of the tumor had no effect on the outcome, the above information has no clinical significance [2,32]. Selected studies indicated that purebred dogs were more predisposed to mammary tumors than mongrels [33]. In a survey of 101 CMTs, one case of sarcoma was observed in a small-breed dog and another in a different dog breed [34]. In our study, the majority of CMSs were diagnosed in large-breed dogs. To date, only one study has demonstrated that CMSs were common in medium- and large-breed dogs [11]. In our opinion, the fact that CMSs are most frequently observed in large-breed dogs could be attributed to the high popularity of those breeds in Poland. Due to the limited number of cases, breed predilection has not been established for CMSs alone, but it has been reported for all canine mammary tumors [1,28,33]. In the retrospective analysis, fibrosarcoma was the most frequent type of sarcoma in the group of malignant CMTs. Our results corroborate the findings of other authors [5,35]. Gómez et al. [35] described fibrosarcoma as the second most common malignant CMT after carcinoma. Fibrosarcoma affected 1.3-5.56% of the populations surveyed in different studies [36,37]. The fact that sarcomas are malignant tumors is commonly accepted. Several studies described malignant progression from complex carcinomas to simple carcinomas and sarcomas [1,32]. In our study, the majority (88.9%) of the examined CMSs were high-grade tumors, but no significant correlation was observed between the histological type and the grade of CMSs. The above could be attributed to the small number of samples and the low statistical power to detect differences.
The origin of mesenchymal tumors and mesenchymal elements (in particular cartilage and bone) in the mammary gland has been under debate for many years. Some authors suggested that mesenchymal components originated from myoepithelial cells [20] or pluripotent stem cells [38]. Studies of CMT histogenesis examined the expression of microfilaments [1,6,9,17]. Analyses of intermediate filament expression support the differentiation of canine mammary sarcomas (in particular fibrosarcomas) from spindle cell carcinomas, malignant myoepitheliomas and hemangiopericytomas. All CMSs examined in our study showed expression of Vim (at various levels) regardless of their histological type, which could indicate that they originated from mesenchymal stem cells [14,39]. Various expression levels of Vim could be related to differences in CMS structure. Terra et al. [40] recently reported significantly higher expression of vimentin in malignant canine mammary tumors than in benign lesions. To the best of our knowledge, there are no comparable published reports regarding CMSs. In this study, the expression of α-SMA was observed in three osteosarcomas and three fibrosarcomas, and Des expression was noted in one case of fibrosarcoma. Those observations could point to focal myofibroblastic differentiation, which was previously described in feline mammary and breast sarcomas [41,42]. Some authors suggested a myoepithelial origin of myofibroblasts [43], but the absence of CK and p63 immunoreactivity combined with strong Vim expression in this study points to the mesenchymal nature of CMSs. Myofibroblastic differentiation was observed in breast malignancies (malignant fibrous histiocytoma, low-grade myofibroblastic sarcoma) [42,44], but it has not been noted in mammary tumors in animals. In this study, CK expression was not observed in any of the examined CMSs. Our findings are consistent with previously published reports where no direct transitions between carcinoma and sarcoma were noted [13,14].
In two reports [9,17], CK expression was observed in canine mammary osteosarcomas, and it was limited to epithelial-like cells. In the present study, high levels of Ki67 expression were noted in the examined CMSs. In other studies, cell proliferation activity varied significantly across different histologic grades [15,45]. In our study, the expression of Ki67 was correlated with the sarcoma grade. According to many authors, proliferative activity can predict the biological behavior of canine mammary carcinomas: metastases, disease-free survival (DFS) and overall survival (OS) [15,21,45,46]. Canine mammary sarcomas were characterized by higher levels of Ki67 expression than carcinomas [21]. To the best of our knowledge, the expression of p53 in CMSs has been investigated by very few authors [47,48]. Similar expression levels of p53 in various types of CMSs (mostly high-grade sarcomas) were demonstrated in this study. This is a relatively new observation, and to date, p53 expression has been demonstrated mainly in osteosarcomas [47,49]. Our findings could suggest that p53 plays a role in the malignant progression of the tumor [27,50]. Overexpression of p53 has been described in 50% of breast sarcomas [50]. Those results largely corroborate our findings. Interestingly, other authors have suggested a possible prognostic role of p53 expression in breast sarcomas [50]. Similar results were noted in this study, but the utility of p53 as a prognostic marker should be examined on a larger number of samples and in view of follow-up data. Most of the examined CMSs showed no expression of ERα or PR. In the literature, high levels of ERα and PR have been observed in well-differentiated tumors with a low proliferation rate [21,22]. Hormonal therapy is thought to be beneficial in those types of tumors [22,51]. As expected, the majority of CMSs were hormone-independent, and ERα and PR expression was not correlated with tumor type or grade.
These findings support our observation that hormonal treatment, which is often recommended in mammary carcinomas [24], could be ineffective in sarcomas due to the absence of hormone receptors [52]. The differences between the hormonal status of carcinomas and sarcomas could be attributed to variations in their histogenetic origin. Our study demonstrated an absence of correlation between ERα and PR expression and proliferation status. The hormone receptors did not show the antiproliferative activity that is observed in mammary cancers [46,51].
Efficacy of Neuroendoscopic Treatment for Septated Chronic Subdural Hematoma Objective: Neuroendoscopic treatment is an alternative therapeutic strategy for septated chronic subdural hematoma (sCSDH). However, the safety and efficacy of this strategy remain controversial. We compared the clinical outcomes of neuroendoscopic treatment with those of standard (large bone flap) craniotomy for sCSDH in our center, and evaluated the safety and efficacy of the neuroendoscopic treatment procedure for sCSDH. Methods: We retrospectively collected the clinical data of 43 patients (37 men and six women) with sCSDH who underwent either neuroendoscopic treatment or standard (large bone flap) craniotomy, including sex, age, smoking, drinking, medical history, use of antiplatelet drugs, postoperative complications, sCSDH recurrence, length of hospital stay, and postoperative hospital stay. We recorded the surgical procedures and neurological function recovery prior to surgery and 6 months following surgical treatment. Results: The enrolled patients were categorized into neuroendoscopic treatment (n = 23) and standard (large bone flap) craniotomy (n = 20) groups. There were no differences in sex, age, smoking, drinking, medical history, antiplatelet drug use, postoperative complications, or sCSDH recurrence between the two groups (p > 0.05). However, the patients in the neuroendoscopic treatment group had a shorter total hospital stay and postoperative hospital stay than those in the standard craniotomy group (total hospital stay: 5.26 ± 1.89 vs. 8.15 ± 1.04 days, p < 0.001; postoperative hospital stay: 4.47 ± 1.95 vs. 7.96 ± 0.97 days, p < 0.001). The imaging and Modified Rankin Scale findings at the 6-month follow-up were satisfactory, and no sCSDH recurrence was reported in either group.
Conclusions: The findings of this study indicate that neuroendoscopic treatment is safe and effective for sCSDH; it is minimally invasive and could be clinically utilized. INTRODUCTION Chronic subdural hematoma (CSDH) refers to subdural hemorrhage, which usually occurs 3 weeks after traumatic brain injury (1). However, the pathogenesis of CSDH has not been fully elucidated. If the hematoma is surrounded by an envelope, repeated bleeding can occur. A pseudomembrane or fibrous septum in the hematoma cavity can form septations in the hematoma, leading to the formation of a septated CSDH (sCSDH) with septate cavities. Four methods are commonly used to treat CSDH: conservative (medical) treatment, twist-drill craniostomy, burr hole evacuation, and large bone flap craniotomy (2,3). For CSDH, treatment with atorvastatin combined with low-dose dexamethasone has achieved favorable clinical results (2). However, Hutchinson et al. reported that dexamethasone was not beneficial in the conservative treatment of CSDH because it caused more adverse events than placebo (4). For non-septated CSDH, twist-drill craniostomy and burr hole evacuation may be the optimal treatment options (5, 6); however, treatment of sCSDH remains a therapeutic challenge. Treatment with twist-drill craniostomy or burr hole evacuation, even with saline irrigation during and after surgery, can fail to achieve satisfactory clinical results because the fibrous septum compartmentalizes the hematoma and prevents outflow of the hematoma fluid. Therefore, unsatisfactory sCSDH outcomes are often associated with hematoma recurrence or residual hematoma and can require a further large bone flap craniotomy (7). Although patients with sCSDH treated with large bone flap craniotomy are reported to have relatively favorable outcomes (7), it is a highly traumatic neurosurgical procedure that results in large wounds, high infection rates, and a high incidence of epilepsy (8).
Advances in microscopy and neuroendoscopy technology allow sCSDHs to be removed effectively by neuroendoscopic treatment (9). Currently, neuroendoscopic treatment is considered a conventional treatment for CSDH. However, the advantages and disadvantages of neuroendoscopic treatment of sCSDH, as compared with large bone flap craniotomy, are rarely reported. Therefore, this study retrospectively collected the clinical records of 43 patients with sCSDH treated with either neuroendoscopic treatment or standard (large bone flap) craniotomy in our center, and the clinical data and characteristics were compared and analyzed. Ethics Statement The Ethics Committee of the First Affiliated Hospital of Fujian Medical University, Fujian, China, approved the study and waived the requirement for written informed consent (number/ID of the ethics approval: MRCTA, ECFAH of FMU [2019] 123). General Information This study retrospectively collected the clinical data of 43 patients with sCSDH who underwent neuroendoscopic treatment or standard (large bone flap) craniotomy, including sex, age, smoking, drinking, medical history, use of antiplatelet drugs, postoperative complications, sCSDH recurrence, length of hospital stay, and postoperative hospital stay. The surgical procedures and the recovery of neurological function after surgery and 6 months following surgical treatment were recorded. The diagnoses of sCSDH were based on imaging using head CT or MRI, clinical manifestations, and the medical history of the patient. The thickness of the subdural hematoma needed to be >2 cm for patients with sCSDH to receive neuroendoscopic treatment. Surgical Procedure Following the diagnosis of sCSDH, if there were no obvious surgical contraindications, the CSDH was evacuated under general anesthesia. In a standard (large bone flap) craniotomy, a question mark-shaped skin incision was made, and a fronto-temporoparietal bone flap procedure was performed.
The subdural hematoma was evacuated, and an outer membranectomy was performed following the incision of the dura mater. However, an inner membranectomy was not performed, to avoid injury to the cortical brain surface. For a more detailed craniotomy protocol, please refer to the surgical procedure for refractory CSDH by Matsumoto et al. (10). For neuroendoscopic treatment, following general anesthesia, the head was tilted to one side at a 30-45 degree angle and a conventional disinfection drape was applied. A C-shaped incision in the frontal region was made based on the location of the hematoma indicated on preoperative imaging. The length of the surgical incision was ∼9 cm; the musculocutaneous flap was turned back, and the bone flap was located on the coronal suture (Figures 4B, 6B). A single hole was drilled, and a free bone flap was created. Notably, the inner table of the bone flap needed to be milled obliquely to create a flask-shaped bone window with a small outer diameter and a large inner diameter (Figures 1, 6H). If necessary, the bone window required further special treatment to meet the special requirements of the neuroendoscope. Because the inner table of the skull needed to be milled obliquely, a section of it was removed using a drill (Figures 1, 2, 6H). The dura mater was sutured to the periosteum to prevent epidural hematoma, and a cross incision was made (Figure 6D). In cases where the hematoma was not visible, the capsules were cut further until the hematoma and its cavity were revealed. Next, the hematoma under the bone window (Figure 4C) was removed; this facilitated the introduction of the neuroendoscope into the subdural space. Suction was applied using an S-shaped curved aspirator prior to the subdural hematoma evacuation (Figure 3). The neuroendoscope was held with one hand, and the S-shaped aspirator was operated with the other hand at various positions in the hematoma cavity (Figures 4D-F).
During the operation, the arc of the S-shaped aspirator was used to push the brain tissue away to reveal and remove the hematoma located distant from the bone window (Figures 4F, 6E). Effective hemostasis was necessary during surgery; therefore, bipolar cauterization was required. If the angle of bipolar insertion was not convenient and the location of the bleeding could not be reached safely, the tip of the bipolar forceps was bent backward with a rongeur (Figure 5). The intraoperative fibrous septum was cut using sharp scissors (Figure 6F). The neuroendoscope and aspirator were inserted into the corresponding position for the evacuation of residual hematoma in the temporal or forehead region. If necessary, another monitor was placed in the direction of the lower limbs or the forehead of the patient (Figure 7) to facilitate hematoma evacuation in the temporal or forehead region. In addition, the bipolar tip was shaped to facilitate intraoperative hemostasis. Following hematoma removal, the visceral layer of the subdural hematoma was left intact without any treatment. After the neuroendoscope was withdrawn at the end of the surgery, the subdural space was filled with saline. After the dura was closed tightly, a small syringe and needle were used to fill the intracranial cavity with normal saline to avoid gas accumulation, and a drainage tube was placed under the dura mater to prevent intracranial hematoma formation. Once the dura mater was closely sutured, the bone flap was repositioned and fixed, and the drainage tube was fixed beside the incision. Finally, the surgical incision was closed layer by layer. Postoperative Treatment Following the surgery, the patient was placed in the supine position, the head was inclined to the operative side, and fluid replacement was increased to promote brain re-expansion. Antiepileptic prophylaxis and hemostatic treatment were administered.
Perioperative antibiotics were used within 24 h to prevent infection in elderly patients. A head CT scan was performed within 24 h following the operation to determine whether the subdural hematoma had been cleared and to check for the presence of an intracranial hematoma. The patient's consciousness, pupils, vital signs, and the amount and properties of the drainage fluid were closely observed. If the patient was conscious, their vital signs were stable on the first day following the operation, and a CT scan indicated that the subdural hematoma had been cleared with no new intracranial hematoma, the drainage tube was removed. If a CT scan indicated an intracranial hematoma, drainage was continued; if necessary, urokinase was instilled into the drainage tube to dissolve intracranial blood clots, and the drainage tube was removed within 72 h post-surgery. Neurological function assessment, brain CT re-examination, and MRI examination were performed 6 months after discharge. Statistical Analysis Statistical analyses were performed using SPSS for Windows (version 26.0; IBM Corp., Armonk, NY, USA). The Kolmogorov-Smirnov test was used to determine whether the data had a normal distribution. Student's t-test or one-way ANOVA was used to determine significant differences in continuous data. The chi-squared test (χ² test) or Fisher's exact test was used to determine significant differences in qualitative data. RESULTS A total of 43 patients with sCSDH were enrolled in our study (37 men and six women). They were categorized into neuroendoscopic treatment and standard (large bone flap) craniotomy groups, with 23 and 20 patients, respectively. The ages of the patients were 40-86 years (median: 65 years) in the standard (large bone flap) craniotomy group, and 39-84 years (median: 66 years) in the neuroendoscopic treatment group.
There were no differences in sex, age, smoking, drinking, medical history, antiplatelet drug use, postoperative complications, or sCSDH recurrence between the two groups (p > 0.05). However, the patients in the neuroendoscopic treatment group had a shorter total hospital stay and postoperative hospital stay (total hospital stay: 5.26 ± 1.89 vs. 8.15 ± 1.04 days, p < 0.001; postoperative hospital stay: 4.47 ± 1.95 vs. 7.96 ± 0.97 days, p < 0.001). The imaging and Modified Rankin Scale (mRS) findings at the 6-month follow-up were satisfactory, and no sCSDH recurrence was reported in either group. No deaths were reported in the present study. The clinical symptoms of all our patients gradually improved within the first week after surgery, and neurological function returned to normal within 6 months. The subdural hematomas were cleared in 42 patients; however, there was a single case of a new subdural hematoma in the surgical cavity in each group following neuroendoscopic treatment and standard craniotomy (4.35 vs. 5.00%, p = 0.919). These patients were discharged after active drainage using urokinase, which was used to dissolve intracranial blood clots. There was one case (4.35%) of intracranial infection after neuroendoscopic treatment and three (15.0%) after standard craniotomy (p = 0.230). There were no wound infections in the neuroendoscopic treatment group and two (10.0%) in the standard craniotomy group (p = 0.120). While there were no cases of postoperative epilepsy in the neuroendoscopic treatment group, there were two (10.0%) such cases in the standard craniotomy group (p = 0.120). A single patient (4.35%) developed pulmonary infection in the neuroendoscopic treatment group, whereas there were three (15.0%) such patients in the standard craniotomy group (p = 0.230). All patients with postoperative complications were discharged following treatment without residual complications (Table 1).
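As an illustrative aside, the group comparisons reported here (a t-test on hospital stay, Fisher's exact test on sparse complication counts, and the Kolmogorov-Smirnov normality check from the Methods) can be sketched with SciPy. The data below are synthetic placeholders, not the patient records, so the printed p-values will not reproduce the paper's; note also that SPSS applies a Lilliefors correction to the KS test, which plain `kstest` does not.

```python
# Illustrative sketch (not the authors' SPSS analysis): the three tests
# described in the Methods, applied to synthetic placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical hospital-stay samples matching the reported means/SDs (days).
endo = rng.normal(5.26, 1.89, 23)   # neuroendoscopic group, n = 23
open_ = rng.normal(8.15, 1.04, 20)  # standard craniotomy group, n = 20

# 1) Kolmogorov-Smirnov test against a normal fitted to the sample.
ks_stat, ks_p = stats.kstest(endo, 'norm',
                             args=(endo.mean(), endo.std(ddof=1)))

# 2) Student's t-test for the continuous outcome.
t_stat, t_p = stats.ttest_ind(endo, open_)

# 3) Fisher's exact test for a sparse 2x2 complication table
#    (e.g. 1/23 vs. 3/20 intracranial infections, as in the Results).
odds, fisher_p = stats.fisher_exact([[1, 22], [3, 17]])

print(f"KS p={ks_p:.3f}, t-test p={t_p:.4f}, Fisher p={fisher_p:.3f}")
```

With a mean difference of nearly three days against SDs of 1-2 days, the t-test is expected to be highly significant, mirroring the reported p < 0.001; the sparse infection table, by contrast, is far from significance.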
DISCUSSION Neuroendoscopic treatment is a potential therapeutic strategy for sCSDH, but its safety and efficacy remain controversial. For symptomatic patients with CSDH, burr hole evacuation is often the first-line treatment choice (5). However, treatment of sCSDH using burr hole evacuation is reported to be associated with a high rate (7-10%) of hematoma recurrence (11,12) or residual hematoma and, therefore, requires further surgical intervention (7). While standard (large bone flap) craniotomy is an efficacious treatment for sCSDH, it requires a longer incision and a longer operating time, has high rates of postoperative intracranial infection, wound infection, and epilepsy, and requires a relatively longer hospital stay (2,(13)(14)(15); these drawbacks are not consistent with the current trend toward minimally invasive surgery. Our study showed that there were no differences in sex, age, smoking, drinking, medical history, antiplatelet drug use, postoperative complications, or sCSDH recurrence between the neuroendoscopic treatment and standard craniotomy groups (p > 0.05). However, patients in the neuroendoscopic treatment group had a shorter total hospital stay and postoperative hospital stay than those in the standard craniotomy group. Furthermore, the imaging and mRS findings at the 6-month follow-up were satisfactory, with no reports of sCSDH recurrence. Therefore, sCSDH can be safely and effectively treated using neuroendoscopic treatment. With advancements in microscopy and neuroendoscopic techniques, the treatment of patients with sCSDH has gradually become minimally invasive. None of the 23 patients in the neuroendoscopic treatment group developed wound infections or epilepsy. Our study showed that there were no differences in postoperative complications or sCSDH recurrence between neuroendoscopic treatment and standard craniotomy.
Patients with limb weakness in the neuroendoscopic treatment group were able to begin early bedside rehabilitation, which helped to ease wound pain and prevent complications. As a result, the patients in the neuroendoscopic treatment group had a shorter total hospital stay and postoperative hospital stay (p < 0.001). Considering its efficacy, we believe that the neuroendoscopic treatment technique is worth mastering at other clinical centers. Here, we summarized, analyzed, and compared the clinical data of 43 patients with sCSDH treated with neuroendoscopic treatment or standard (large bone flap) craniotomy, and the surgical procedures for neuroendoscopic treatment are described in detail. First, we focused on making the incision as small and minimally invasive as possible. The frontal C-shaped incision was moved backward, creating more space for the endoscope to be closer to the skull, so that the subdural tip of the neuroendoscope could be operated away from the brain tissue, thereby facilitating operation of the neuroendoscope from the forehead to the back. Building on the minimally invasive incision, we improved the formation of the bone window, milling and cutting the skull into a flask-shaped bone flap with a smaller outside surface and a larger inside surface, which facilitates entry and exit for the neuroendoscope. In addition, we bent the suction device into an S shape, allowing the upturned tip to be positioned away from the cerebral cortex and used to push the brain tissue away to facilitate removal of the remnant hematoma. It was also necessary to shape the tip of the bipolar forceps for hemostasis of the dura mater and fibrous septum. Our study had some limitations. It was a retrospective study, it was limited by the number of cases, and the groups we analyzed did not include a multiple burr hole group or an orally medicated atorvastatin (Lipitor) group.
These limitations should be addressed in future clinical research on the endoscopic treatment of patients with sCSDH. In conclusion, our study showed that neuroendoscopic evacuation is an effective, safe, and minimally invasive treatment for sCSDH, and it should be utilized clinically. However, it should be noted that there were only 23 such cases in this study, and the results should be verified in clinical trials that include a larger number of patients. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee of the First Affiliated Hospital of Fujian Medical University (no.: MRCTA, ECFAH of FMU [2019] 123). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Smart Technologies, Artificial Intelligence, Robotics, and Algorithms (STARA) Competencies During COVID-19: A Confirmatory Factor Analysis using SEM Approach Public sector organisations shifted to online offices and services when the COVID-19 outbreak struck. This transformation changed not only the paradigm but also the working methods of the public sector, moving them into an online system that requires the capacity to use smart technologies, artificial intelligence, robotics, and algorithms (STARA). Nevertheless, research on this issue is still rare. This paper bridges the gap through a confirmatory factor analysis of the STARA competencies of public employees during the COVID-19 pandemic in Indonesia. We tested twelve items underlying STARA competencies. The research used a survey of 305 public servants in the Province of the Special Region of Jakarta, and structural equation modelling (SEM) was utilised to assess the data. The SEM analysis suggests that the twelve indicators are valid and reliable in predicting the constructs of STARA competencies. Our findings may be used by subsequent researchers in examining STARA competencies. Introduction In late 2019, the world faced one of the most severe pandemics in human history, SARS-Coronavirus-2 (COVID-19). Starting in Wuhan, China, the virus rapidly spread around the globe, and Indonesia was officially affected by COVID-19 in March 2020. The outbreak caused numerous infections and deaths, and after the appearance of new variants of COVID-19, the number of cases rose sharply. According to the Indonesian COVID-19 task force data, there were 3,033,339 individuals positively infected by COVID-19, 2,392,923 recoveries, and 79,032 deaths (07/22/2021) [1]. COVID-19 affects not only human health but also other dimensions of human life, such as the economic and social dimensions. Economically, many people lost their jobs and income because of COVID-19 [2].
In social life, many individuals found it challenging to stay in contact with each other because of the various physical and social distancing measures during the COVID-19 crisis. Governments in many countries introduced a variety of policies to prevent the spread of the virus, one of which was working and studying from home during the COVID-19 outbreak [3]. Indonesia implemented a work-from-home policy for central and local government public employees several months after COVID-19 struck. To work from home effectively and productively, public employees need to use information technology devices to deliver public services. One of the capabilities required of a public employee is competence in smart technologies, artificial intelligence, robotics, and algorithms, recognised as STARA competencies [4]. Smart technologies relate to the use of intelligent tools for monitoring, analysing, and reporting utilities. Artificial intelligence refers to the use of various features of artificial intelligence, such as machine learning, deep learning, big data, and data mining, to support work tasks. Robotics refers to the ability of the employee to apply mechanical devices in completing their job. Algorithm is defined as the skill of designing and implementing algorithms in daily assignments. The concept of STARA competencies is still evolving in the academic literature. Brougham and Haar were the two scholars who initially introduced the idea of STARA competencies into the literature on human resources management (HRM) [5]. Their seminal work, published in 2018, attracted considerable attention, and several studies have since analysed STARA competencies in the workplace. Ogbeibu et al. examined the effects of leader STARA competence and environmental dynamism on green creativity skills among manufacturing leaders in Nigeria, finding that leader STARA competence has a significantly greater impact on leader green creativity skills than environmental dynamism does [4]. Ogbeibu et al.
tested the influence of leader STARA competence on employee turnover intention and also investigated the moderating role of leader STARA competence in the relationship between green talent management and turnover intention. They reported three main findings. First, leader STARA competence had a negative impact on subordinate turnover intention. Second, the positive impact of green hard talent management on turnover intention was reduced by leader STARA competence. Lastly, the negative effect of green soft talent management on turnover intention was strengthened by leader STARA competence [6]. Even though several prior works have investigated STARA competencies, several gaps remain. First, we still lack information about the validity of the STARA constructs: although numerous studies have examined the effect of STARA competencies on several HRM dimensions, we still lack knowledge about the variables and dimensions constructing STARA competencies and how precise they are. Second, most research has shed light on STARA competencies among private sector employees; studies investigating STARA competencies in the context of public sector organisations are rare. Third, previous studies have focused mainly on leaders' STARA competencies and overlooked STARA competencies from the perspective of the employee. Finally, studies of STARA competencies have been conducted under normal circumstances, and we have insufficient knowledge about employee STARA competencies during a crisis. Therefore, a confirmatory factor analysis (CFA) is required to identify which factors and indicators predict STARA competencies among public employees in the time of COVID-19 and how valid and reliable they are. Based on the research gaps identified above, the current study makes three novel contributions.
First, this is the first study analysing STARA competencies in the context of the employee. Because several investigations highlight leader STARA competencies, we chose to focus on the STARA competencies of workers. Second, we assess employees' STARA competencies during the COVID-19 pandemic. COVID-19 has transformed the working culture of government employees into online service [7]; thus, it is critical to analyse how this transformation influences the STARA competencies of public servants. Lastly, the present study examines STARA competencies in the setting of Indonesian public sector organisations. CFA is a method for determining how well a smaller number of constructs is represented by measurable variables [8]. Unlike exploratory factor analysis (EFA), which a few studies have utilised, the CFA technique requires scholars to specify the latent factor of each indicator before computing, whereas EFA does not. More crucially, CFA can capture the interconnections between the STARA competency aspects as latent variables; by combining component analysis, this latent variable modelling extends and deepens factor analysis. Furthermore, CFA enables us to measure the degree of independence across variables, which is exceptionally relevant in studies of STARA competencies. The central aim of this study is to assess the constructs of public employees' STARA competencies during the COVID-19 crisis. This study focuses on analysing the validity and reliability of four subscales of STARA competencies, namely smart technologies, artificial intelligence, robotics, and algorithm, each established from several indicators (Table 1 and Figure 1). Methods This study applied a cross-sectional survey approach in the Provincial Government of the Special Region of Jakarta (DKI Jakarta). DKI Jakarta was chosen as the study site because it is one of the provincial governments with the largest workforce and among the most innovative local governments in Indonesia.
The questionnaire was converted into an online form using Google Docs before being distributed to the respondents, because of the COVID-19 pandemic and the accompanying lock-down and work-from-home policies in DKI Jakarta. The research had been approved in advance by the heads of departments in DKI Jakarta. To help distribute the online questionnaire, we coordinated with the HRM manager in each department. The questionnaires were distributed through the WhatsApp Group (WAG) of each department and the institutional email of the employees. To encourage participation, we provided small e-wallet credits (Shopee, Gojek, and Ovo) for a few selected respondents. The STARA questionnaire, adopted and developed from Ogbeibu et al. [4], [6], was used in the current research. It included four latent variables and twelve items, as summarised in Table 1. The latent variables were smart technologies (ST), artificial intelligence (AI), robotics (RO), and algorithm (AL), and the items were numbered according to their respective variable. The questions and statements were adapted to the context of Indonesian public sector organisations. To prevent misunderstanding of any item, we briefly explained the meaning of the constructs and items in the questionnaire's background section. All items were measured using a five-point Likert scale: 1 (strongly disagree), 2 (disagree), 3 (neutral), 4 (agree), and 5 (strongly agree). The data were collected from all departments in DKI Jakarta. A total of 305 public employees responded to the survey. The participants came from various backgrounds. More than half of the respondents were male (61.97%). Most respondents were young, aged 20-30 years (26.89%) or 31-40 years (32.79%). The three most common lengths of working experience were less than 5 years (28.20%), 5-10 years (26.89%), and 11-15 years (21.97%).
Most of the respondents were permanent employees (68.85%) holding an undergraduate degree (61.97%). The data were analysed with covariance-based structural equation modelling (SEM) using Analysis of Moment Structures (AMOS) 24.0 [9]. The SEM approach was utilised to assess whether the data were characterised by the models defined. The statistical analyses included confirmatory factor analyses and reliability analyses to determine the internal consistency of the inventory and its four STARA competencies variables: smart technologies, artificial intelligence, robotics, and algorithm (Table 1). Maximum likelihood estimation was employed for the confirmatory factor analyses. Although there is still no consensus on the fit indices of a model, we measured goodness-of-fit through a series of parameters, including absolute fit measures and incremental fit measures. These involved the adjusted goodness of fit index (AGFI), root mean square error of approximation (RMSEA), goodness of fit index (GFI), Tucker-Lewis index (TLI), normed fit index (NFI), incremental fit index (IFI), and comparative fit index (CFI) (Table 3). If the initial model did not pass the goodness-of-fit evaluation, we modified the model based on the suggestions of the AMOS programme. To present the results of this study, we followed the guidance of Schreiber et al. [10]. First, we assessed the data distribution to determine whether the data were normal. The normality evaluation was performed by calculating the critical ratio (CR) of the multivariate statistics using AMOS. If the data were not normal and contained many outliers, we changed the analytical procedure to a bootstrapping method with 5,000 resamples [11]. We then assessed the model fit using several goodness-of-fit parameters recommended for SEM, followed by computing the mean, standard deviation (SD), internal reliability, and intercorrelations among the variables.
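The bootstrapping fallback mentioned above (5,000 resamples when the data are non-normal) follows the generic percentile-bootstrap recipe. A minimal sketch on synthetic data is shown below, using a plain Pearson correlation as a stand-in for the SEM parameter estimates; this is our own illustration of the idea, not the AMOS procedure itself.

```python
# Generic percentile-bootstrap sketch: re-estimate a statistic on data
# resampled with replacement, then take percentile confidence limits.
# Synthetic data; Pearson r stands in for the SEM estimates.
import numpy as np

rng = np.random.default_rng(42)
n = 305                                    # same n as the survey
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.6, size=n)  # two correlated indicators

def pearson_r(a, b):
    return np.corrcoef(a, b)[0, 1]

boot = np.empty(5000)
for i in range(5000):                      # 5,000 resamples, as in the text
    idx = rng.integers(0, n, n)            # sample rows with replacement
    boot[i] = pearson_r(x[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile CI
print(f"r = {pearson_r(x, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The appeal of the bootstrap here is exactly the one the text invokes: the confidence limits come from the empirical resampling distribution, so they do not rest on the multivariate normality assumption that the data failed.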
In the final step, we measured the validity and reliability of the constructs. Validity was assessed using convergent and discriminant validity [12], [13]. Table 2 indicates the skewness and kurtosis of the twelve parameters of STARA competencies. The data show a non-normal distribution, failing the normality assumption, because the critical ratio (CR) of the multivariate statistics is 5.22; normality can be assumed only if the CR is less than 3 [8]. There were two possible solutions to this issue: deleting the outliers or switching the estimation method to a bootstrap approach. After analysing the data, we found that the first option would require deleting more than 100 outlying cases, so we decided to use bootstrap estimation to make maximal use of the collected data. Evaluation of Model Fit Goodness-of-fit criteria were applied to evaluate model fit [14]. Two series of fit measurements were performed in this study to evaluate the goodness of the model, namely absolute fit and incremental fit calculations. Absolute fit was measured by analysing the GFI (goodness of fit index), AGFI (adjusted goodness of fit index), and RMSEA (root mean square error of approximation). Incremental fit was measured by evaluating the NFI (normed fit index), IFI (incremental fit index), TLI (Tucker-Lewis index), and CFI (comparative fit index). Furthermore, we also evaluated the Chi-square (χ2), degrees of freedom (df), and Chi-square/degrees of freedom ratio (χ2/df). Similar to the coefficient of determination (R2) in multiple linear regression, most validity measures should be as high as possible. According to the rule of thumb, χ2 should have a p-value greater than 0.05 (p > 0.05) and χ2/df should be < 3 [15]. In terms of absolute fit indices, the required cut-off values were GFI (0.90), AGFI (0.90), and RMSEA (0.07) [8].
For the incremental fit indices, the measures should be greater than 0.90 for NFI and greater than 0.95 for CFI, TLI, and IFI [16]. Table 3 shows that the initial model yielded χ2 (p < 0.05), χ2/df (3.88), GFI (0.85), AGFI (0.79), RMSEA (0.10), NFI (0.87), CFI (0.90), TLI (0.88), and IFI (0.90). This implied that the initial model did not meet the goodness-of-fit criteria. To address the problem, we modified the model by adding a few correlations among the items, as recommended by AMOS. A total of five correlations were added, specifically e6 to e7, e5 to e16, e10 to e11, e8 to e9, and e12 to e13 (Figure 1). After modification, the model fit improved, yielding χ2 (p > 0.05), χ2/df (2.
Note: χ2, Chi-square; df, degrees of freedom; NFI, normed fit index; CFI, comparative fit index; TLI, Tucker-Lewis index; RMSEA, root mean square error of approximation.
We also report the mean, standard deviation (SD), reliability, and intercorrelations among the studied variables. As shown in Table 4, smart technologies (ST) has a greater mean than the other variables, whereas algorithm (AL) has the lowest mean. The means of all variables range from three to four, indicating that public employee STARA competencies during COVID-19 are high. The data are fairly dispersed, as the standard deviations of the variables are above 0.8, which is high. The Cronbach's alpha (α) of each variable is greater than 0.6, indicating that the variables are reliable [17]. Table 4 also presents the correlation coefficients among the variables. Smart technologies correlates strongly with artificial intelligence (r = 0.952, p < 0.001), robotics (r = 0.857, p < 0.001), and algorithm (r = 0.596, p < 0.001). Artificial intelligence correlates strongly with robotics (r = 1.104, p < 0.001) and algorithm (r = 0.792, p < 0.001). Finally, robotics correlates positively and strongly with algorithm (r = 0.924, p < 0.001). 
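Cronbach's alpha, used above as the reliability criterion, is straightforward to compute from an item-score matrix. A self-contained sketch with hypothetical Likert responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy 5-point Likert responses for a 4-item subscale (hypothetical data)
scores = np.array([[4, 5, 4, 4],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 4]])
print(round(cronbach_alpha(scores), 2))  # prints 0.92, above the 0.6 threshold cited
```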
Table 5 and Figure 1 summarize the validity and reliability of the constructs. Convergent validity was assessed from the factor loadings of the items [18]. All items were valid because the factor loadings exceed 0.5, as recommended by Hair et al. [8]. Discriminant validity was evaluated from the average variance extracted (AVE); AVE values above 0.5 indicate that the data meet discriminant validity. In our results, only smart technologies and algorithm met this criterion, whereas artificial intelligence and robotics did not. Even though discriminant validity is problematic, the data were convergently valid. The data were also reliable, because Cronbach's alpha was greater than 0.7 and composite reliability was higher than 0.7 [19]. Figure 1 depicts the four variables and twelve items of STARA competencies. We note that smart technologies 1 (ST1), "I understand how to use smart technologies in my duty," is the item with the highest factor loading, while artificial intelligence 1 (AI1) has the lowest. This indicates that all statements or items in this study can represent and measure public employee STARA competencies. The findings of this study differ from Ogbeibu et al.'s research examining STARA competencies in the private sector [6]. The factor loadings of the STARA competencies subscales found in the current study are somewhat lower than in that previous research. However, Ogbeibu et al. [6] used only one item for each STARA competencies variable, while our research has four items. The differences between the studies are shown in Table 6.
Note: Factor loadings of our study are extracted from the average of each dimension.
The work presented here makes several main research contributions, not only to the body of literature but also to practice. In theoretical terms, our study enriches the literature on STARA competencies by offering validated constructs of STARA competencies. 
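The AVE and composite-reliability figures discussed above derive from the standardized factor loadings. A small sketch with hypothetical loadings rather than the study's Table 5 values:

```python
def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability from standardized loadings (Fornell-Larcker form)."""
    s = sum(loadings)
    error = sum(1 - l * l for l in loadings)   # residual variance of each item
    return s * s / (s * s + error)

# Hypothetical loadings for a four-item construct
lam = [0.82, 0.78, 0.75, 0.80]
print(round(ave(lam), 3), round(composite_reliability(lam), 3))  # prints 0.621 0.867
```

Both values clear the 0.5 (AVE) and 0.7 (composite reliability) cut-offs used in the text.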
In addition, this study contributes to the body of knowledge because it highlights the items and variables of public servants' STARA competencies at the time of the COVID-19 outbreak. Thus, the constructs and items validated in this research can be further used and tested by scholars. The findings of our study also have several implications for the practice of HRM in the public sector. First, the government should offer continuous training to improve the knowledge of public employees in applying smart technologies, artificial intelligence, robotics, and algorithms. Second, the government should provide sufficient technologies and facilities, including hardware and software supporting public service delivery and
Clostridium difficile exosporium cysteine-rich proteins are essential for the morphogenesis of the exosporium layer, spore resistance, and affect C. difficile pathogenesis Clostridium difficile is a Gram-positive, spore-forming bacterium and the leading cause of nosocomial antibiotic-associated diarrhea that can culminate in fatal colitis. During the infection, C. difficile produces metabolically dormant spores, which persist in the host and can cause recurrence of the infection. The surface of C. difficile spores seems to be key in spore-host interactions and persistence. The proteome of the outermost exosporium layer of C. difficile spores has been determined, identifying two cysteine-rich exosporium proteins, CdeC and CdeM. In this work, we explore the contribution of both cysteine-rich proteins to exosporium integrity, spore biology and pathogenesis. Using targeted mutagenesis coupled with transmission electron microscopy, we demonstrate that both cysteine-rich proteins, CdeC and CdeM, are morphogenetic factors of the exosporium layer of C. difficile spores. Notably, cdeC, but not cdeM, spores exhibited a defective spore coat and were more sensitive to ethanol, heat and phagocytic cells. In a healthy colonic mucosa (mouse ileal loop assay), cdeC and cdeM spore adherence was lower than that of wild-type spores, while in a mouse model of recurrence of the disease, the cdeC mutant exhibited increased infection and persistence during recurrence. In a competitive infection mouse model, the cdeC mutant had increased fitness over wild-type. Through complementation analysis with FLAG fusions of known exosporium and coat proteins, we demonstrate that CdeC and CdeM are required for the recruitment of several exosporium proteins to the surface of C. difficile spores. CdeC appears to be conserved exclusively in related Peptostreptococcaceae family members, while CdeM is unique to C. difficile. Our results shed light on how CdeC and CdeM affect the biology of C. 
difficile spores and the assembly of the exosporium layer, and demonstrate that CdeC affects C. difficile pathogenesis.

Introduction

Clostridium difficile [1], first reclassified as Peptoclostridium difficile [1] and more recently as Clostridioides difficile [2], is a Gram-positive, sporogenic anaerobic bacterium that is the most common cause of antibiotic-associated diarrhea within healthcare systems of the developed world [3,4]. The clinical manifestation of the infection is diarrhea, which in severe cases can progress to pseudomembranous colitis, toxic megacolon and death [5]. Mortality of C. difficile infections (CDI) may reach up to 5% of CDI cases, but in several outbreaks it has increased up to 20% [3]. Conventional metronidazole and/or vancomycin treatment (depending on the severity of the symptoms), although it resolves single episodes of CDI, exhibits high rates of recurrence of the infection after a first episode. The rates of recurrence of CDI after a first, second and third episode may reach up to 20%, 40% and 60%, respectively [6,7]. During the infection, C. difficile colonization leads to secretion of large toxins (TcdA and TcdB) that glycosylate intestinal epithelial cell proteins and induce massive inflammation of the gut epithelium, causing disease symptoms ranging from mild diarrhea to pseudomembranous colitis, toxic megacolon and even death [8]. However, before C. difficile can colonize a susceptible host, the highly resistant and metabolically dormant spore must germinate in response to primary bile salts present at high levels in the gastrointestinal tract of an antibiotic-treated host [9,10]. In addition to toxin production during C. difficile colonization of the host, a subset of C. difficile vegetative cells initiates a sporulation program that culminates in the formation of metabolically dormant spores [11,12]. 
These spores have intrinsic resistance properties enabling their survival of enzymatic degradation [13,14], phagocytic cells [15] and chemicals normally found in the host's gastrointestinal (GI) environment [16], enabling their persistence in the host's GI tract. To persist in the host, C. difficile spores must interact with the host's colonic mucosa through specific interactions mediated by spore ligand(s) and host cellular receptor(s) [17]. In this context, as demonstrated in other spore-forming species [18], the surface of C. difficile spores is likely to be the primary site of spore-host interactions that contributes to spore persistence. Consequently, there is keen interest in understanding fundamental aspects of the outermost exosporium layer of C. difficile spores [19]. Notably, the exosporium layer of C. difficile spores differs from previously described outermost layers [19][20][21]; for example, it contrasts with the exosporium layer of spores of the Bacillus cereus group, where an interspace separates the exosporium from the spore coat [23]. Functional analysis of CdeC in the epidemic R20291 strain demonstrated that this protein is required for the correct assembly of the exosporium layer of R20291 spores [13]. However, the higher levels of CdeC observed in 630erm spores suggest that CdeC might have a more predominant role in the assembly of the exosporium layer in 630erm spores, while the role of CdeM remains unclear. Both proteins are encoded by monocistronic genes whose promoters are controlled by the late, mother cell-specific sigma factor, σK (Fig 1A) [28,29]. cdeC in 630erm is flanked by genes encoding uncharacterized proteins transcribed from σE-regulated promoters; by contrast, cdeM, located 570,775 bp downstream of cdeC, is flanked by genes encoding enzymes involved in amino acid biosynthesis (Fig 1A). 
The 1218-bp cdeC gene encodes a 405-amino acid protein with a predicted molecular weight of 44.7 kDa and a high content of cysteine residues (9% of the amino acid content), suggesting that it might be prone to disulfide bridge formation and therefore play a role in the crosslinking of other exosporium proteins [26,27]. Analysis of the amino acid sequence revealed no conserved domains, but several noteworthy sequence repeats conserved in all sequenced genomes of C. difficile: i) two motifs of unknown function in the N-terminal domain (NTD) (i.e., KKNKRR and three consecutive histidine residues); ii) a 3x histidine repeat near the NTD; iii) six NPC repeats followed by two CCRQGKGK repeats in the central region; and iv) a cysteine-rich sequence, CNECC, at the C-terminal domain (CTD) of CdeC (Fig 1A). The 483-bp cdeM gene encodes a 161-amino acid protein with a predicted molecular weight of 19.1 kDa and a high content of cysteine residues (8.7% of the amino acid sequence), suggesting that CdeM, similarly to CdeC, might also be prone to disulfide bridge formation, contributing to the crosslinking of exosporium proteins. Analysis of the primary sequence of CdeM revealed no conserved domains, but some interesting features: i) three RREA repeats near the NTD of CdeM; ii) two NGNNGGNNNNC and three CHK repeats in the central region; and iii) two CNCCNCCRK repeats at the CTD (Fig 1A).

CdeC is highly conserved in Peptostreptococcaceae family members, while CdeM is unique to C. difficile

Since we observed unique sequences in these two proteins, we wondered how conserved CdeC and CdeM are among other C. difficile strains and related Peptostreptococcaceae family members, given the recent reclassification of C. difficile into the Clostridioides genus, a member of the Peptostreptococcaceae family rather than the Clostridiaceae family [1]. 
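Sequence features of this kind (cysteine content, short repeats) are easy to tally programmatically. The sketch below computes the cysteine fraction and locates a repeat motif in a toy fragment; the fragment is invented for illustration and is not the real CdeC sequence.

```python
def cys_fraction(seq: str) -> float:
    """Fraction of cysteine residues in a protein sequence."""
    return seq.count("C") / len(seq)

def find_motif(seq: str, motif: str):
    """Return 0-based start positions of every occurrence of a motif."""
    return [i for i in range(len(seq) - len(motif) + 1) if seq.startswith(motif, i)]

# Toy fragment (not the real CdeC sequence) carrying the CCRQGKGK repeat
frag = "MKKNKRRHHHCCRQGKGKNPCNPCCCRQGKGKCNECC"
print(round(cys_fraction(frag), 2), find_motif(frag, "CCRQGKGK"))  # prints 0.24 [10, 24]
```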
To assess the conservation of the cysteine-rich proteins CdeC and CdeM in other clostridial organisms, we searched for protein homologues of C. difficile CdeC and CdeM in a blastp search (Fig 1B). This analysis was performed in a chosen subset of strains covering a wide variety of ribotypes and C. difficile genome groups (S2 Table). When the blastp search against C. difficile CdeC and CdeM was expanded to include additional members of the Peptostreptococcaceae, we observed that CdeC was conserved in all 8 Peptostreptococcaceae family members analyzed (Figs 1B and S2; S2, S3 and S4 Tables). By contrast, CdeM was unique to C. difficile (Figs 1B, S2, S3 and S4). Notably, despite the absence of CdeM, the genomes of Clostridioides mangenotii, Paraclostridium bifermentans and the other Paraclostridium species analysed still encode CdeC homologues. These results collectively suggest that, while CdeM is specific to C. difficile, CdeC is a conserved exosporium protein in members of the Peptostreptococcaceae family. We applied a similar analysis to a subset of Clostridiaceae and Lachnospiraceae family members to evaluate whether C. difficile CdeC and CdeM were present (S4 and S5 Tables). Strikingly, only CdeC, but not CdeM, was found in members of the Clostridiaceae family, specifically in Clostridium dakarense and 5 Clostridium sp. (Fig 1B; S4 and S5 Tables). Despite the phylogenetic divergence (S5 Fig), the cysteine residues in the conserved motifs of CdeC are highly conserved in members of the Peptostreptococcaceae and Clostridiaceae families (S6 and S7 Figs). CdeC and CdeM were not present in members of the Lachnospiraceae family. Collectively, these results indicate that although CdeC is present in only a few members of the Clostridiaceae family, its amino acid sequence is highly conserved among them.

Construction of cdeC and cdeM mutant strains in a 630erm background

To evaluate the functional role of CdeC and CdeM in C. 
difficile 630erm strain, we used ClosTron mutagenesis, redirecting the group II L1.ltrB intron into the antisense strands of the N-terminal coding regions of both genes, at positions 30 and 123, to inactivate cdeC and cdeM, respectively (S8A, S8B and S8C Fig). After many attempts to inactivate each individual gene, we obtained several independent mutant clones of cdeC and cdeM, as shown by PCR screening for insertions (S8A and S8B Fig) [33]. Mutants were confirmed by PCR using flanking primers and sequencing of the PCR amplicons (S8A and S8B Fig). Clones C2, C4 and C8 were obtained for the cdeC mutant strain, and clones C2, C3 and C4 for the cdeM mutant strain; these clones were used for further phenotypic characterization.

CdeC and CdeM cysteine-rich proteins are essential for the morphogenesis of the exosporium layer of C. difficile spores

Unlike the exosporium layer of most epidemic strains, 630erm spores have an exosporium layer that does not exhibit bumps or the typical hair-like extensions [19,20], and also have higher levels of CdeC in the spore surface layers than R20291 spores [23]. Given these differences, we hypothesized that CdeC would have a greater impact on exosporium and spore coat assembly than previously observed in the epidemic R20291 strain [13]. Insertional inactivation of cdeC led to the formation of cdeC spores with an outermost exosporium layer (i.e., 29.6 nm) that was ~50% thinner than that of wild-type spores (i.e., 55 nm) (Fig 2B). We observed that inactivation of cdeC in 630erm spores affected the thickness of the spore coats (Fig 2B) to a greater extent than in our previous observations in the C. difficile R20291 epidemic strain [13].

(Fig 1 legend: (A) cdeC lies downstream of CD1066, encoding a putative protein of unknown function, and upstream of CD1068, an ORF encoded on the antisense strand for a putative protein involved in polysaccharide biosynthesis and expressed during sporulation; all three are monocistronic genes. Notably, a putative σK-regulated promoter, whose position was mapped by RNA-Seq [28], is located immediately upstream of cdeC (shown in the scheme). By contrast, CD1066 and CD1068 have putative σE-regulated promoters immediately upstream of their ORFs [28,30]. cdeM is found downstream of CD1580, which encodes a putative homoserine dehydrogenase (Hom2), and upstream of CD1582, encoding a putative histidinol dehydrogenase (HisD). Transcription of cdeM is predicted (by RNA-Seq) [28] to be under the control of a σK-regulated promoter immediately upstream of cdeM; however, transcription of hom2 and hisD is not dependent on sporulation [28,30]. The main repeats in CdeC and CdeM are shown in color in the magnification of the predicted protein primary sequence and described in the text. Blue spirals and arrows indicate the predicted beta sheets and alpha helices [31,32]. (B) Gene neighborhoods of predicted coding regions whose products have homology to C. difficile 630erm CdeC and CdeM by blastp search. The diagram is abridged to show only the first neighborhood in each genome for the cdeC locus and cdeM locus. Predicted proteins with homology to C. difficile CdeC and CdeM were clustered by sequence identity (S2 and S3 Figs). The percentage identity of homologues in other species with C. difficile 630erm CdeC and CdeM is shown in green.)

A significant decrease of 32% (wild-type, 32.8 nm; cdeC, 22.1 nm) in the thickness of the external spore coat was evident in cdeC spores compared to wild-type spores, while an increase of ~35% in the thickness of the inner spore coat was observed (i.e., wild-type, 22.5 nm; cdeC, 30.6 nm) (Fig 2B). Despite these differences, the overall thickness of the spore coat (i.e., inner coat plus outer coat) remained similar between wild-type (i.e., 55.3 nm) and cdeC (i.e., 52.7 nm) spores (Fig 2B). 
Collectively, these observations indicate that: i) CdeC affects exosporium assembly and the thickness of the inner and external spore coat of 630erm spores; and ii) the impact of insertional inactivation of cdeC on the thickness of the inner and external spore coat is greater in 630erm spores than in epidemic R20291 spores [13]. To explore the impact of insertional inactivation of cdeM on the assembly of the exosporium layer of C. difficile spores, cdeM spores were also analyzed by transmission electron microscopy. Strikingly, analysis of more than 50 individual cdeM spores revealed that inactivation of cdeM yielded spores with an almost complete absence of the exosporium layer (Fig 2A). Upon comparison of the thickness of the exosporium layer of wild-type and cdeM spores (Fig 2A), we observed a striking decrease of 85% in the thickness of the exosporium layer of cdeM spores (i.e., 8.1 nm) compared to that of wild-type spores (i.e., 55 nm) (Fig 2B). In contrast to the effect of inactivation of cdeC on the spore coat, inactivation of cdeM led to a slight but significant increase in the thickness of the external spore coat layer, from 32.8 nm (wild-type spores) to 36.4 nm (cdeM spores) (Fig 2B). Conversely, a significant decrease in the thickness of the inner spore coat, from 22.5 nm (wild-type spores) to 16.5 nm (cdeM spores), was observed (Fig 2B). Despite these differences, the overall thickness of the spore coat varied only slightly, from 55.3 nm in wild-type spores to 52.9 nm in cdeM spores. Collectively, these observations clearly indicate that CdeM is essential for the morphogenesis of the exosporium layer and affects to some degree the assembly of the spore coat layer of 630erm spores. The morphological defects described above suggest that CdeC and CdeM are surface proteins. Indeed, previous work has demonstrated that CdeC and CdeM are located mainly in the exosporium layer [23]. 
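The layer-thickness comparisons above are simple percent changes relative to wild-type. As a worked check of the reported figures (the function and variable names are ours):

```python
def percent_change(mutant_nm: float, wildtype_nm: float) -> float:
    """Percent change in layer thickness relative to wild-type (negative = thinner)."""
    return (mutant_nm - wildtype_nm) / wildtype_nm * 100

# Values taken from the text (nm)
print(round(percent_change(8.1, 55.0)))   # cdeM exosporium: prints -85, i.e. ~85% thinner
print(round(percent_change(30.6, 22.5)))  # cdeC inner coat: prints 36, close to the ~35% reported
```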
To evaluate whether CdeC is surface-located, immunofluorescence of wild-type and cdeC spores was performed; a significant immunofluorescence signal was detectable in wild-type spores, while no detectable fluorescence signal was evident in cdeC mutant spores (Fig 2C). Similarly, an immunofluorescence assay with anti-CdeM detected an immunofluorescence signal in wild-type but not in cdeM spores (Fig 2D). These results indicate that both cysteine-rich proteins are accessible to antibodies.

Effect of CdeC and CdeM on the abundance of the major protein species in the outer layers of C. difficile 630erm spores

The fact that cdeC and cdeM spores had defective exosporium layers suggested that the protein profile of cdeC and cdeM spores might differ from that of wild-type spores. Reasoning that the protein profile would differ due to the mutations, we standardized the amounts of spores loaded by optical density, ensuring that the same number of spores was loaded in each lane. Our first observation from the SDS-PAGE analysis of the Laemmli buffer-extracted spore coat and exosporium proteins from wild-type spores was that the protein profile of 630erm spores differed from that previously reported for the R20291 strain [13]. Analysis of the spore coat and exosporium extracts of cdeC spores revealed six major protein species, with molecular weights estimated at 150, 58, 53, 50, 18 and 16 kDa, whose levels decreased to 34, 12, 16, 34, 63 and 28% relative to wild-type levels (Fig 3A and 3B). Strikingly, complementation of the cdeC mutation had no effect on the levels of the 18- and 16-kDa protein species, and increased the levels of the 150- and 50-kDa proteins, though not to wild-type levels (Fig 3B). A similar protein profile was observed in Laemmli extracts of the spore coat and exosporium (remnants) of cdeM spores; the levels of the protein species of 150, 58, 53, 50, 18 and 16 kDa decreased to 78, 88, 77, 66, 56 and 6% relative to levels in wild-type spores (Fig 3A). Complementation of the cdeM mutation increased the levels of most of the dominant protein species to levels near or higher than those in wild-type spores (Fig 3B). These results indicate that the absence of either cysteine-rich protein, CdeC or CdeM, affects the relative abundance of the major protein species in the spore coat and exosporium extracts.

Effect of CdeC and CdeM on the presence of immunodominant proteins of the exosporium layer of C. difficile 630erm spores

Previous work, using an anti-630erm spore goat antiserum [13,34], demonstrated that the immunodominant proteins are located in both the spore coat and exosporium layers [13,34]. Therefore, since inactivation of cdeC and cdeM affected the assembly of the exosporium layer of 630erm spores, we evaluated how their inactivation affects the presence of immunodominant proteins in the spore coat and exosporium extracts, analyzed by western blots with anti-spore goat serum. Several loading controls for C. difficile spores have been applied recently to normalize immunoreactive intensities. Given the defects observed in the spore coat and exosporium of cdeC and cdeM spores, we first evaluated whether mutations in cdeC and cdeM would affect the abundance of a loading control protein, SpoIVA, which has been used as a loading control in several studies [35,36]. Notably, inactivation of cdeC caused a ~7-fold increase in the levels of SpoIVA, and complementation of cdeC with wild-type cdeC restored SpoIVA levels to near wild-type levels (Fig 3C). By contrast, inactivation of cdeM had no effect on SpoIVA levels, and complementation of cdeM with wild-type cdeM did not affect SpoIVA levels (Fig 3C). Therefore, to analyze the relative amounts of immunoreactive proteins, we loaded similar amounts of spores based on optical density measurements. Analysis of the spore coat/exosporium extracts of cdeC spores revealed that the levels of the 180- and 107-kDa immunoreactive protein species significantly decreased by 35 and 50% relative to those of wild-type spores, respectively (Fig 3D). Levels of the 103-kDa immunoreactive protein species increased by ~2-fold relative to wild-type spores (Fig 3D). Complementation of cdeC with wild-type cdeC had no effect on the levels of the immunoreactive proteins of 180 and 107 kDa; however, the levels of the 103-kDa immunoreactive protein species were restored to wild-type levels (Fig 3D). Analysis of the spore coat/exosporium extracts of cdeM spores revealed that the levels of the 180- and 107-kDa, but not the 103-kDa, immunoreactive protein species significantly increased, by 9- and 2.5-fold relative to wild-type levels (Fig 3D). Complementation of cdeM led to spores with wild-type levels of all three immunoreactive protein species (Fig 3D). Collectively, these results indicate that: i) CdeC is required for normal levels of immunoreactive protein species of the outer layers of C. difficile spores; and ii) the absence of CdeM leads to spores with increased levels of immunoreactive proteins.

(Fig 2 legend: The thickness of the exosporium and outer and inner coat layers of C. difficile wild-type (white bars), cdeC (gray bars) and cdeM (black bars) strains was analyzed by transmission electron microscopy of at least 10 individual spores with an apparent thick-exosporium morphotype. Error bars denote standard errors of the means. Asterisks (*) denote statistical difference at P < 0.05 and (**) at P < 0.001 with respect to wild-type. Scale bars are shown in each figure: bars in the upper panels represent 1 nm, in the middle panels 100 nm, and in the lower panels 200 nm. (C) The surface accessibility of CdeC on C. difficile 630erm wild-type and cdeC mutant spores was analyzed by immunofluorescence with rat anti-CdeC serum as described in the Methods section. (D) The surface accessibility of CdeM on C. difficile 630erm wild-type and cdeM mutant spores was analyzed by immunofluorescence with rabbit anti-CdeM serum as described in the Methods section. https://doi.org/10.1371/journal.ppat.1007199.g002)

(Fig 3 legend: Western blot analysis of spore coat/exosporium fractions of wild-type, cdeC and cdeM spores blotted with goat antiserum raised against C. difficile 630erm spores. Black arrows highlight the immunoreactive bands. Densitometry of the major immunoreactive bands was determined with ImageJ, and the results are expressed relative to those determined in wild-type spores. The SDS-PAGE and Western blots show a representative experiment. Densitometric data represent the mean of three representative experiments, and error bars are standard errors of the mean. Asterisks denote statistical difference at (*) P < 0.01, (**) P < 0.05 and (***) P < 0.001 with respect to wild-type. https://doi.org/10.1371/journal.ppat.1007199.g003)

Absence of CdeC, but not CdeM, affects C. difficile spore coat permeability to lysozyme

The spore coat of C. difficile spores acts as an impermeable barrier to enzymes with molecular masses higher than 14 kDa, such as lysozyme, proteinase K and trypsin [14]. The impact of insertional inactivation of cdeC and cdeM on the protein profile of spore coat/exosporium extracts raised the question of whether the absence of CdeC and/or CdeM would affect the permeability of the spore coat to lysozyme-triggered germination. To answer this question, we performed a lysozyme permeability assay on cdeC and cdeM mutant spores. After treatment of wild-type spores with 1 mg/mL lysozyme for 5 h at 37˚C, only a small fraction of spores (1%) changed to phase dark (Fig 4A and 4B). 
In contrast, under similar treatment conditions, ~90% of cdeC spores changed to phase dark (Fig 4A and 4B), whereas less than 1% of cdeM spores changed to phase dark upon lysozyme treatment (Fig 4A and 4B). cdeC complementation partially restored the resistance of the spore coat to lysozyme, with only 34% of the spores becoming phase dark (Fig 4A and 4B). Despite the negligible effect of the cdeM mutation on lysozyme resistance, complementation of the cdeM strain with wild-type cdeM caused 38% of the spores to become phase dark after lysozyme incubation (Fig 4B). Altogether, these results indicate that, despite the impact of both cysteine-rich proteins (i.e., CdeC and CdeM) on the spore coat and exosporium proteins, only the absence of CdeC compromises the permeability barrier of the spore coat to lysozyme, which is consistent with the results previously reported for insertional inactivation of cdeC in epidemic R20291 spores [13].

CdeC, but not CdeM, is required for ethanol, heat and macrophage resistance of C. difficile spores

Previous work in spores of the epidemic strain R20291 demonstrated that inactivation of cdeC led to spores with increased sensitivity to ethanol and heat [13]. First, we evaluated whether the absence of CdeC and/or CdeM affected the ethanol resistance of C. difficile 630erm spores. When wild-type spores were treated with ethanol for 1 h at 37˚C, spore viability decreased by 0.2 log (Fig 5A). When cdeC spores were treated with ethanol under similar conditions, a significant decrease of 2 log cycles was observed (Fig 5A). By contrast, no significant difference in loss of spore viability was observed between wild-type and cdeM spores after ethanol treatment (Fig 5A). These results indicate that the absence of CdeC increases ethanol killing, presumably via an increase in the permeability of the spore inner membrane. To gain more insight into the effects of CdeC and CdeM on the resistance of C. 
difficile spores, the heat resistance of wild-type, cdeC and cdeM spores at 75˚C was assessed. Heat treatment (75˚C) of wild-type spores progressively decreased spore viability (Fig 5B); after 60 min of treatment, only 4.5% of wild-type spores remained viable (Fig 5B). Upon heat treatment of cdeC spores, higher levels of inactivation became evident as early as 5 min after treatment (Fig 5B); after 60 min at 75˚C, only 0.06% of cdeC spores remained viable (Fig 5B). When cdeM spores were subjected to similar heat treatment conditions, a significantly greater extent of inactivation than wild-type was observed after 5 min at 75˚C (Fig 5B). After 60 min at 75˚C, only 0.5% of cdeM spores remained viable, an amount that was 10-fold lower than wild-type spores but 10-fold higher than cdeC spores (Fig 5B). To address whether the decreased heat resistance of cdeC and cdeM spores could be attributed to the levels of dipicolinic acid (DPA), spores of all strains were assayed for spore-core DPA content, yet no significant difference in spore-core DPA content was observed between the strains (Fig 5C). These results indicate that the absence of either exosporium morphogenetic protein affects the resistance of C. difficile spores to heat. C. difficile spores are resistant to phagocytic cells and are capable of surviving for more than 48 h without significant macrophage-mediated inactivation [15]. Therefore, we assessed whether the inactivation of cdeC and cdeM affected the viability of C. difficile spores during infection of Raw 264.7 macrophage-like cells. As expected, infection of Raw 264.7 cells with wild-type spores led to no significant spore inactivation after 24 h of infection. Notably, a slight but significant increase in spore colony formation was observed after 48 h of infection (Fig 5D), suggesting that macrophage factors activated C. 
difficile spores to germinate on BHIS plates supplemented with taurocholate. Strikingly, while no significant inactivation of cdeC spores was observed after 24 h of infection of Raw 264.7 murine macrophage-like cells, a 1-log reduction in spore viability was observed after 48 h of infection (Fig 5D). By contrast, no inactivation of cdeM spores was evident upon infection of Raw 264.7 macrophage-like cells after 48 h of infection (Fig 5D). Collectively, these results indicate that the absence of CdeC, but not CdeM, renders C. difficile spores susceptible to macrophage killing.

(Fig 5 legend: (B) Heat resistance of C. difficile wild-type (white bars), cdeC (gray bars), and cdeM (black bars) spores was measured by heat treating aliquots at 75˚C for various times, and survivors were enumerated as described in the Materials and Methods section. (C) Equal amounts of spores derived from C. difficile strains 630erm (wt), cdeC, cdeM, cdeC/cdeC and cdeM/cdeM were boiled for 60 min, and the amount of DPA was quantified based on Tb3+. The data shown represent the average results from three independent experiments, and the error bars represent standard errors of the means. n.s., no significant difference relative to wild-type. (D) Resistance to Raw 264.7 macrophages was determined by infecting at an MOI of 10 with C. difficile 630erm wild-type, cdeC and cdeM spores, after 0.5, 24 and 48 h of incubation at 37˚C. Asterisks (*) denote statistical difference at P < 0.01 with respect to wild-type. https://doi.org/10.1371/journal.ppat.1007199.g005)

Effect of CdeC and CdeM on the adherence of C. 
difficile spore to the colonic mucosa Previous work demonstrated that inactivation of cdeC in the epidemic strain R20291 led to increased adherence to components of the intestinal mucosa (i.e., mucin, fibronectin and intestinal epithelial Caco-2 cells) [17], suggesting that CdeC contributes to decreasing the persistence of C. difficile spores in the intestinal tract. To begin addressing this question, we used a colonic loop mouse model to evaluate the impact of insertional inactivation of cdeC and cdeM on C. difficile spore adherence to healthy intestinal mucosa by confocal fluorescence microscopy (S9 Fig). In contrast to our expectations, the data shown in Fig 6 demonstrate that cdeC mutant spores have significantly reduced adherence in comparison to wild-type spores (Kruskal-Wallis test, P = 0.036) (Fig 6A, 6B and 6D), while cdeM mutant spores tended to adhere less than wild-type to the colonic mucosa (Kruskal-Wallis test, P = 0.101) (Fig 6A, 6C and 6D). These data indicate that, in a healthy colonic mucosa, CdeC, and perhaps CdeM, contribute to reducing the adherence of C. difficile spores to the colonic mucosa, contrasting with observations from in vitro studies [17]. Role of CdeC and CdeM in the initiation and recurrence of the disease As mentioned, the absence of a correctly assembled exosporium layer affects spore adherence to healthy colonic mucosa. Therefore, to investigate the implications of CdeC and CdeM in an infectious context, we used a mouse model of infection and recurrent infection of C. difficile. Antibiotic-treated mice were infected with C. difficile spores of wild-type (n = 6), cdeC (n = 6), and cdeM (n = 5), and at day 3 of infection, mice were treated with vancomycin for 5 days and subsequently monitored to evaluate recurrence of the infection (Fig 7A). Mice infected with wild-type and cdeC spores developed significantly higher diarrhea scores than those infected with cdeM spores (Fig 7B).
Mice infected with cdeC spores also had greater weight loss than those infected with wild-type and cdeM spores (S12A Fig). Recurrence was observed after vancomycin treatment as described in Fig 7A. Diarrhea became evident at day 4 after vancomycin treatment, and 6 of 6 (100%) of the mice infected with cdeC developed recurrent diarrhea, whereas only 3 of 6 (50%) and 3 of 5 (60%) of the mice infected with wild-type and cdeM spores, respectively, developed recurrent diarrhea (Fig 7C). Mice infected with cdeC spores also had higher diarrhea scores than those infected with wild-type and cdeM spores (Fig 7C). The higher recurrence rate in mice infected with cdeC spores correlated with higher toxin titers (Fig 7D) and CFU (Fig 7E) recovered post-mortem from cecum contents. To further evaluate whether the increased colonization of cdeC spores could be attributed to differences in spore germination, we evaluated whether inactivation of cdeC and cdeM affected spore germination. A reduced extent of germination in cdeC spores versus wild-type spores was evident in the presence of taurocholate after 60 min of incubation (S10A Fig). By contrast, no significant germination defect was evident in cdeM spores, which germinated similarly to wild-type spores (S10B Fig). It is also noteworthy that the colony formation efficiency of cdeC and cdeM spores in BHI agar plates with taurocholate was 25±5% and 50±5%, respectively, relative to that of wild-type spores. Note that cytotoxicity assays of culture supernatants on Vero cells showed no difference between strains (S11 Fig); therefore, these parameters were not responsible for the differences observed in in vivo severity and cytotoxicity between strains. We also found no differences in the levels of fecal C. difficile spore loads or in anti-vegetative and anti-spore antibodies raised during the infection (S12 Fig).
Taken together, these data indicate that during infection, insertional inactivation of cdeC, but not cdeM, leads to increased colonization and recurrence of diarrhea after vancomycin treatment. Role of CdeC and CdeM in the fitness of C. difficile in a mouse model To gain more insight into how the absence of CdeC affected C. difficile colonization, we performed a competitive assay in which healthy C57BL/6 mice (n = 10 per group) were orally infected after antibiotic cocktail treatment with an equal number of viable wild-type and cdeC or wild-type and cdeM spores (1 x 10 7 spores of each strain), and the numbers of fecal-shed spores were monitored for 8 days after the challenge. cdeC spores were detected at significantly higher levels than wild-type spores at days 1, 2 and 4 post-challenge (Fig 8A and 8C). Interestingly, the persistence dynamics of the cdeM strain differed from that of the cdeC strain; cdeM spores were present at significantly lower levels than 630erm spores in fecal samples only at day 4 post-infection (Fig 8B and 8D). These results suggest that the absence of CdeC, but not CdeM, increases the fitness of C. difficile during infection. Effect of inactivation of cdeC and cdeM on the presence of spore coat and exosporium proteins To gain a better understanding of how these cysteine-rich proteins affect the assembly of the exosporium layer, we sought to evaluate the abundance of known proteins of the exosporium layer (i.e., BclA1, BclA2, BclA3, CdeA, CdeB, and CdeM) and of the spore coat (i.e., CotA and CotB) [23], using wild-type and cdeC mutant spores containing plasmids expressing FLAG fusion proteins (S1 Table). First, we evaluated whether the absence of CdeC and/or CdeM affected the abundance of the collagen-like BclA glycoproteins.
All three BclA proteins were detectable in wild-type spores; BclA1 and BclA3 were detected forming a high molecular mass complex of 110-kDa as well as a low molecular mass species of 48-kDa, while BclA2 was detectable as a 48-kDa species (S13A, S13B and S13C Fig and S14A, S14B and S14C Fig). In the absence of CdeC or CdeM, a significant reduction in the high molecular mass complexes of both BclA1 and BclA3 was evident (Table 1, S13A and S13C Fig, S14A and S14C Fig). By contrast, the absence of CdeC led to an increase in the low molecular mass species of all three BclA orthologues, whereas the absence of CdeM led to a decrease in the low molecular mass species of all three BclA proteins (Table 1, S13A, S13B and S13C Fig, S14A, S14B and S14C Fig). Note that further dilution of the anti-FLAG antibody gave similar results in the case of BclA1-FLAG (S15 Fig). These results demonstrate that CdeC is essential for the presence of the high, but not low, molecular mass complexes of all three BclA proteins, while CdeM is essential for the presence of both the high and low molecular mass complexes of all three BclA proteins. As previously described [23], the cysteine-rich protein CdeA was found on the spore surface as 19- and 47-kDa immunoreactive species (Table 1, S13A, S13B, S13C Fig and S14A, S14B and S14C Fig). The absence of CdeC or CdeM led to a significant increase of the 47-kDa CdeA species and a significant decrease of the 19-kDa CdeA species (Table 1, S13D Fig and S14D Fig). Another previously identified exosporium protein is CdeB, which was present in wild-type spores as a 48-kDa immunoreactive species, as previously described [23]. Notably, while the absence of CdeC led to a significant increase of CdeB, the absence of CdeM led to lower levels of CdeB compared to wild-type spores (Table 1, S13E Fig and S14E Fig). Note that further dilution of the anti-FLAG antibody gave similar results in the case of CdeA-FLAG (S15 Fig).
These data indicate that the levels of CdeA and CdeB are affected by CdeC and CdeM. The aforementioned results suggest that the levels of CdeC depend on the presence of CdeM, or vice versa. To explore this hypothesis, the levels of CdeC in cdeM spores and the levels of CdeM in cdeC spores were assessed relative to wild-type spores. A significant increase of CdeM in cdeC spores was observed relative to wild-type spores (Table 1, S13F Fig). By contrast, a significant decrease in the high (120-kDa) and low (44-kDa) molecular mass CdeC species was evident in cdeM spores relative to wild-type spores (Table 1, S14F Fig). The altered coat thickness of cdeC spores evident in transmission electron micrographs suggests that the absence of CdeC might affect the levels of spore coat proteins. To address this question, we evaluated the levels of two spore coat proteins (i.e., CotA and CotB) [37]. CotA and CotB were present in wild-type spores as 47-kDa immunoreactive species, as reported previously [23]. CotA was found at similar levels in cdeC spores relative to wild-type, but significantly lower levels of CotB were observed in cdeC spores compared to wild-type spores (Table 1, S13G and S13H Fig). Next, we addressed whether the absence of CdeM affected CotA and CotB levels. As shown in Table 1 (S14G and S14H Fig), cdeM spores had significantly lower levels of both CotA and CotB than wild-type spores. These results indicate that only CdeM affects CotA, but that both CdeC and CdeM affect CotB. Discussion C. difficile spores exhibit an outermost exosporium layer that provides the first site of interaction with the host. Recent studies on the outermost exosporium layer of C. difficile spores have uncovered the ultrastructural variability, composition and functional properties of this layer [14,20,23,38-40].
Extensive studies have demonstrated that cysteine-rich proteins are involved in the assembly of the exosporium layer of spores of members of the B. cereus group and in the outer crust layer of B. subtilis spores [18,24-26]. In C. difficile, three cysteine-rich proteins have been identified in the exosporium layer of spores: CdeC, CdeM and CdeA [23]. Previously, we demonstrated that CdeC is an exosporium morphogenetic protein in the epidemic C. difficile strain R20291 by performing functional analysis of a cdeC mutant strain [13]. In this work, we have used the laboratory strain 630erm, owing to its genetic tractability, to investigate how two exosporium cysteine-rich proteins, CdeC and CdeM, contribute differentially to the spore biology and pathogenesis of C. difficile: CdeC and CdeM are both required for the correct formation of the exosporium layer. Whereas the cdeC mutant exhibited defective spore coat assembly (Fig 2A and 2B), permeability to lysozyme (Fig 4A and 4B), and increased susceptibility to ethanol, heat and macrophage inactivation (Fig 5A, 5B and 5C), cdeM spores behaved like wild-type spores. Notably, CdeC is specific to C. difficile and related Peptostreptococcaceae family members, while CdeM is specific to C. difficile (Fig 1). In a healthy colonic mucosa, adherence of cdeC and cdeM spores was lower than that of wild-type spores (Fig 6), while during infection the cdeC mutant, but not the cdeM mutant, exhibited higher diarrhea scores and greater persistence during recurrence of infection (Fig 7). In concordance, the cdeC mutant, but not the cdeM mutant, exhibited increased fitness in a competitive infection mouse model. Thus, this work contributes to our understanding of the mechanisms underlying exosporium assembly, and of how this impacts C. difficile spore biology and pathogenesis. It was surprising to observe that, despite the fact that both CdeC and CdeM are cysteine-rich proteins, they cause differential impacts on the integrity of the exosporium layer and spore coat.
Altogether, the results provided in Table 1 and S13 and S14 Figs allow the elaboration of a first interaction map and exosporium model (Fig 9A). Because we observed that the presence of CdeC was CdeM-dependent and not vice versa (Table 1, S13F Fig and S14F Fig), CdeC-dependent proteins were defined as those with reduced levels in a cdeC genetic background; consequently, CdeM-dependent proteins were defined as those whose abundance was reduced in a cdeM but not a cdeC genetic background. In this context, suggested CdeC-dependent proteins include CdeA, CotB and the high molecular mass complexes of BclA1 and BclA3 (Fig 9A). By contrast, CdeM-dependent proteins include CotA, CdeB, and the low molecular mass species formed by BclA1, BclA2, BclA3 and CdeB (Fig 9A, S13 Fig and S14 Fig). It is noteworthy that the high molecular mass, and to some extent the low molecular mass, complexes formed by CdeC are themselves CdeM-dependent (Fig 9, S14F Fig and Table 1). Coupling these findings with previous localization studies [23], we propose putative locations of these proteins on the spore outer surface (Fig 9B). CotA and CotB were previously shown to be located in the spore coat layers [23], while the BclA and Cde proteins are located in the exosporium; however, the fact that the absence of CdeC affects the abundance of CotB and causes a permeable spore coat suggests that monomeric CdeC might be located at the interface of the spore coat and exosporium layers, while the high molecular mass complexes formed by CdeC might be more exosporium-oriented; CdeM, by contrast, seems to be located uniquely in the exosporium layer. The recruitment of CotA might be related to additional unidentified proteins. Since these experiments were performed with plasmid-based complementation, we were unable to evaluate how restoring the wild-type gene in the mutant strain affected the relative abundance of the FLAG-tagged proteins.
A major difference between CdeC and CdeM was that CdeC had profound implications for the assembly and permeability of the spore coat and for spore resistance. It was somewhat surprising that cdeM spores had a spore coat impermeable to lysozyme, while the majority of cdeC spores germinated in the presence of lysozyme (Fig 4). A plausible explanation could be the lower levels of CotB or of additional key spore-coat constituents, or the absence of CdeC itself, in cdeC spores. It is possible that the monomeric CdeC present in cdeM spores is sufficient to confer spore coat resistance to lysozyme, or that it recruits additional constituents. However, a major question that remains unanswered is how CdeC, but not CdeM, is implicated in spore resistance. The increased permeability of the spore coat to enzymes and of the spore inner membrane to chemicals is consistent with the elevated killing of C. difficile spores by Raw 264.7 cells, in which spores are subjected to low pH and a variety of stressors (i.e., release of hydrogen peroxide, lysozyme and proteases) [41], suggesting that CdeC is essential for the ability of C. difficile spores to survive the host's phagocytic cells. Dipicolinic acid is a known factor that contributes to the heat resistance of C. difficile spores; thus, it was interesting to find that the levels of this molecule in the spore core were unaffected by the inactivation of cdeC and cdeM (Fig 5). Another major question raised by this work is how the absence of CdeC, but not CdeM, can contribute to decreased spore adherence to healthy intestinal mucosa yet, during infection, to increased colonization, fitness, severity of the infection and recurrence.
Our finding that cdeC spores adhere at lower levels than wild-type spores to healthy colonic mucosa in the colonic loop mouse model (Fig 6) suggests that CdeC, and/or additional exosporium proteins present at reduced levels in cdeC spores, play a role in spore adherence to the colonic mucosa during health. By contrast, we observed an increased severity of the infection in mice infected with cdeC spores, increased recurrence of the infection (Fig 7C and 7E), and increased fitness (Fig 8A and 8C). A possible explanation for these contrasting observations could be the differences between a healthy and a damaged colonic mucosa. For example, during infection experiments (Fig 7 and Fig 8), the C. difficile toxins TcdA and TcdB cause significant remodeling of the colonic environment, including disruption of tight junctions, mucosal ulcerations and epithelial erosion [8]. This toxin-mediated epithelial damage will, in turn, expose new spore-binding sites rich in extracellular matrix components to which C. difficile spores have already been shown to bind, including vitronectin and fibronectin [17]. Indeed, cdeC mutant spores in the C. difficile R20291 genetic background have higher affinity for components of the intestinal mucosa, including intestinal epithelial cells, fibronectin and vitronectin [17]. It is therefore conceivable that the absence of CdeC, and/or reduced levels of additional exosporium proteins, contributes to a greater persistence of C. difficile in the host during infection, indicating that CdeC negatively contributes to C. difficile pathogenesis. In this context, the fact that 630erm spores have ~100-fold higher levels of CdeC on the spore surface than R20291 spores [23] might explain why strain R20291 is able to cause more episodes of recurrent infection than strain 630erm in a mouse model [42].
An increased amount of low molecular mass immunoreactive species of BclA1, BclA2 and BclA3 was observed in cdeC spores (Table 1, S13A, S13B and S13C Fig), which might also contribute to disease. Further studies addressing how CdeC and/or the BclA proteins contribute to the interactions of C. difficile spores with components of the colonic mucosa could identify the mechanisms through which CdeC and/or BclA proteins modulate C. difficile spore-host interactions, and may also provide insight into the mechanisms underlying the reduced adherence to healthy colonic mucosa (Fig 6), the increased severity of infection and recurrence (Fig 7), and the increased fitness during infection (Fig 8). In summary, by identifying two cysteine-rich proteins, one unique to C. difficile (i.e., CdeM) and the other conserved in other Peptostreptococcaceae family members (i.e., CdeC), our study provides insight into the mechanism of assembly of the exosporium layer of C. difficile spores and into the implications of these proteins during C. difficile infection. While many unanswered questions remain, the correct assembly of the exosporium layer depends on CdeC and CdeM, with CdeC playing a pleiotropic role in the assembly of C. difficile spores and contributing to spore resistance and persistence as well. By contrast, given its limited conservation in other spore-forming organisms, CdeM can be considered a potential target for spore-targeted therapies. Ethics statement All experiments using mice were conducted in agreement with ethical standards and according to the local animal protection law. All experimental protocols were conducted under the formal approval of the Institutional Animal Ethics Committee of the Universidad Andrés Bello (Protocol numbers 020/2010 and 026/2018), in strict accordance with Chilean national Law 20.380.
Once the experiments were finalized, animals were euthanized by intraperitoneal administration of 4 times the anesthetic dose of a ketamine/xylazine combination. The name of the Universidad Andrés Bello Institutional Animal Care and Use Committee is: "Comité de Bioética de la Vicerrectoría de Investigación y Doctorados". The "Comité de Bioética" provided ethical approval in Acta # 014/2015. Bioinformatic analysis Genome assemblies for selected strains (shown in Fig 1) were obtained via ftp from NCBI Assembly, which included genomes of 336 Peptostreptococcaceae (taxid:186804), 214 Lachnospiraceae (taxid:186803) and 338 Clostridiaceae (taxid:31979). Many of these genomes were incomplete and not completely annotated; therefore, they were reannotated using Prodigal v2.6.3 [43]. A database of predicted proteins was created with the makeblastdb tool from the BLAST+ 2.3.0 package [44] and searched locally using the C. difficile 630erm CdeC and CdeM proteins as queries (UniProt ids: Q18AS2 and Q186D6, respectively). Since CdeC and CdeM have no protein motifs, in order to reduce the number of false-positive hits, we used blastp instead of delta- and psi-blast. Matching proteins below a threshold of 50 bits were discarded [45]. Multiple sequence alignment was performed using the localpair flag of MAFFT v7.294b [46]. Phylogenetic trees were inferred using the distance-based UPGMA model of Seqotron [47]. The logo was created using Seq2Logo V2.0 [48] with a minimum stack width of 0.1 and the probability-weighted Kullback-Leibler logo type. Construction of both C. difficile cdeC and cdeM mutants and complemented strains Two derivatives of C. difficile strain 630erm with an intron inserted in the cdeC or cdeM gene, respectively, were constructed as follows. To target the L1.ltrB intron to cdeC or cdeM, we used plasmids pDP306 and pDP370, respectively (S1 Table).
Three short sequence elements from the intron RNA involved in base pairing with the DNA target sites were modified by PCR, using cdeC-specific primers P68, P69 and P70 and the universal primer IBS described elsewhere [13], and cdeM-specific primers P85 (5'-AAAAAAGCTTATAATTATCCTTACAGTTCGAACCTGTGCGCCCAGATAGGGTG-3'), P86 (5'-CAGATTGTACAAATGTGGTGATAACAGATAAGTCGAACCTCTTAACTTACCTTTCTTTGT-3') and P87 (5'-TGAACGCAAGTTTCTAATTTCGGTTAACTGTCGATAGAGGAAAGTGTCT-3'). The ClosTron plasmids pDP306 or pDP370 were transformed into E. coli HB101 (pRK24) and subsequently transferred through conjugation to C. difficile strain 630erm. Thiamphenicol-resistant clones were selected and re-grown on BHIS plates containing thiamphenicol and FeSO 4 to induce expression of the TargeTron system. Erythromycin-resistant clones were selected and then isolation-streaked on BHIS plates supplemented with erythromycin (5 μg/mL). Positive clones were screened by colony PCR for a 2. To evaluate whether the observed cdeC and cdeM phenotypes were attributable to inactivation of cdeC and cdeM, these strains were complemented with cdeC- and cdeM-FLAG fusions using plasmids pDP345 and pDP360 (S1 Table). Briefly, the C. difficile 630erm cdeC and cdeM mutants were complemented by conjugation with E. coli HB101 containing plasmids pDP345, pDP360, pPCR3 and pPCR4, respectively (S1 Table). Transconjugants were selected on BHIS agar plates containing erythromycin and thiamphenicol. Spore purification Spore suspensions were prepared by plating a 1:100 dilution of an overnight culture onto 70:30 medium (63 g Bacto peptone (BD Difco), 3.5 g proteose peptone (BD Difco), 0.7 g ammonium sulfate (NH 4 ) 2 SO 4 , 1.06 g Tris base, 11.1 g brain heart infusion extract (BD Difco) and 1.5 g yeast extract (BD Difco) per 1 L) and incubating for 7 days at 37˚C under anaerobic conditions. After incubation, plates were removed from the chamber and the surface was scraped with ice-cold sterile water.
Next, the spores were washed gently five times with ice-cold sterile water in a microcentrifuge at 14,000 rpm for 5 min. Spores were loaded onto a 50% Nycodenz solution and centrifuged (14,000 rpm, 40 min). After centrifugation, the spore pellet was washed five times (14,000 rpm, 5 min) with ice-cold sterile water to remove Nycodenz remnants. The spores were counted in a Neubauer chamber and the volume adjusted to 5x10 9 spores per mL. Transmission electron microscopy To analyze the ultrastructure of spores, C. difficile 630erm wild-type, cdeC and cdeM mutant spores (~2x10 8 ) were fixed with 3% glutaraldehyde in 0.1 M cacodylate buffer (pH 7.2) overnight at 4˚C and stained for 30 min with 1% tannic acid. Samples were further processed and embedded in Spurr's resin as previously described [38]. Thin sections obtained with a microtome were placed on glow-discharged carbon-coated grids and double stained with 2% uranyl acetate and lead citrate. Grids were analyzed with a Philips Tecnai 12 BioTwin at the Electron Microscopy facility of the Pontificia Universidad Católica de Chile. Immunofluorescence of C. difficile spores C. difficile wild-type, cdeC and cdeM mutant spores were fixed with 3% paraformaldehyde (pH 7.4) for 20 min on poly-L-lysine-coated glass coverslips. Fixed spores were rinsed three times with PBS, blocked with 1% bovine serum albumin (BSA) for 30 min, and further incubated for 2 h at room temperature with primary antibodies: a 1:50 dilution of rat antiserum raised against CdeC [13] or a 1:100 dilution of rabbit antiserum raised against CdeM (kindly provided by Dr. Adriano Henriques, Universidade Nova de Lisboa). Next, coverslips containing fixed spores were incubated for 1 h at room temperature with 1:500 anti-rat IgG-Alexa488 conjugate (Thermo Fisher) or 1:500 anti-rabbit IgG-Alexa488 conjugate (Thermo Fisher) in PBS-1% BSA, then washed three times with PBS and once with distilled water.
Dried samples (30 min at room temperature) were mounted with Dako fluorescence mounting medium (Dako North America) and sealed with nail polish. Samples were analyzed with a BX53 Olympus fluorescence microscope. Western blot analysis Samples (10 μl) of coat and exosporium extracts of 5x10 7 spores of C. difficile 630erm wild-type and cdeC or cdeM mutant strains were treated twice at 100˚C for 5 min in SDS-PAGE loading buffer and run on SDS-PAGE gels (12% acrylamide). Proteins were transferred to a nitrocellulose membrane (Bio-Rad) and blocked overnight at 4˚C with 2% bovine serum albumin (BSA) in TBS (pH 7.4). These western blots were probed with a 1:1,000 dilution of anti-FLAG antibody for 1 h at room temperature and then with a 1:10,000 dilution of anti-mouse horseradish peroxidase (HRP) conjugate (Promega) for 1 h at room temperature in PBS 1X with 1% BSA and 0.05% Tween 20. For western blots with goat antiserum raised against C. difficile 630erm spores [30] and anti-SpoIVA (kindly provided by Dr. Aimee Shen, Tufts University, U.S.A.), after transfer the nitrocellulose membrane was blocked for 1 h at room temperature with 10% milk in Tris-buffered saline (TBS) (pH 7.4). These western blots were probed with a 1:500 dilution of goat antiserum raised against spores of C. difficile 630erm or a 1:2,500 dilution of rabbit antiserum raised against SpoIVA [32] for 1 h, and then with a 1:10,000 dilution of anti-goat or anti-rabbit horseradish peroxidase (HRP) conjugate (Promega) for 1 h at room temperature in PBS with 1% BSA and 0.1% Tween 20. In both cases, HRP activity was detected with a chemiluminescence detection system (Fotodyne Imaging system) using the PicoMax sensitive chemiluminescent HRP substrate (Rockland Immunochemicals). Each western blot also included 1 μl of PageRuler Plus prestained Protein Ladder (Fermentas). Each western blot was repeated at least 3 independent times and analyzed by densitometry to quantify the relative amounts of protein with ImageJ, as previously described [13].
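As a rough illustration of the densitometric comparison, the fold change of a FLAG-tagged protein in mutant versus wild-type spores reduces to a ratio of background-subtracted mean band intensities across replicate blots. The helper name, background handling and numbers below are ours for illustration only, not the authors' ImageJ workflow:

```python
from statistics import mean

def fold_change(mutant_intensities, wt_intensities, background=0.0):
    """Mean background-subtracted band intensity in mutant spores,
    relative to wild-type spores loaded at equal spore numbers."""
    m = mean(v - background for v in mutant_intensities)
    w = mean(v - background for v in wt_intensities)
    return m / w

# Illustrative densitometry readings from three blots (arbitrary units):
print(round(fold_change([120, 130, 125], [60, 65, 55], background=5.0), 2))  # 2.18
```

A value above 1 would correspond to the increases reported for, e.g., the 47-kDa CdeA species in the mutant backgrounds, and a value below 1 to the reductions seen for CotB.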
Antibodies against SpoIVA were a gift from Dr. Aimee Shen [36]. Spore colony forming efficiency To quantify the effect of the cdeC and cdeM mutations on spore-forming efficiency, aliquots of C. difficile 630erm wild-type, cdeC and cdeM spores (1x10 7 spores/mL) were plated with or without heat activation (65˚C, 20 min) onto BHIS-ST agar plates and incubated anaerobically for 36 h at 37˚C. Spore viability was calculated using the following formula: [(c.f.u. mL -1 ) / (spore particles mL -1 )] x 100, and expressed relative to the wild-type strain. Spore resistance treatments Ethanol resistance of C. difficile 630erm wild-type, cdeC and cdeM spores was measured by resuspending 3x10 6 spores in 30 μl of 50% ethanol in PBS 1X. Spores were incubated with ethanol for 30 min at 37˚C with shaking (200 rpm). Aliquots were plated onto BHIS-ST agar plates and incubated anaerobically for 36 h at 37˚C. Heat resistance of C. difficile spores was determined as previously described [13]. Briefly, 3x10 6 spores of C. difficile 630erm wild-type, cdeC and cdeM were resuspended in 30 μl of PBS 1X (pH 7.4) and heat treated at 75˚C for up to 60 min. Aliquots at appropriate dilutions were plated onto BHIS-ST agar plates and incubated anaerobically for 36 h at 37˚C. As a control of non-heat-treated spores, an aliquot was plated onto a BHIS-ST agar plate prior to the experiment and colonies counted as described above. Lysozyme resistance of C. difficile spores was measured by resuspending 3x10 6 spores in 30 μl of PBS 1X with 1 mg/mL lysozyme and incubating for up to 5 h at 37˚C with shaking (200 rpm). Germinated spores were analyzed by phase contrast microscopy. Spore viability was measured by plating aliquots onto BHIS-ST agar plates, which were incubated anaerobically at 37˚C for 36 h before colonies were counted. In some experiments, lysozyme-treated C.
difficile 630erm wild-type, cdeC and cdeM spores were subsequently treated with 50% ethanol for 30 min at 37˚C with shaking (200 rpm), and aliquots were plated onto BHIS-ST agar plates; colonies were counted after 36 h of incubation under anaerobic conditions. DPA assay To quantify spore-core DPA content, 200 μl of 5x10 9 spores/mL were boiled for 60 min, cooled on ice for 2 min, and centrifuged at 14,000 rpm for 5 min; 190 μl of the supernatant was then mixed with 10 μl of 800 μM TbCl 3 in a 96-well plate, and DPA release was monitored with excitation at 270 nm and emission at 545 nm in a Synergy H1 Hybrid Multi-Mode Reader (BioTek), as described [49,50]. Infection of Raw 264.7 macrophages To measure the adherence of C. difficile 630erm wild-type, cdeC and cdeM mutant spores to Raw 264.7 cells (ATCC, U.S.A.), a 96-well plate was seeded (5x10 5 cells per well) and incubated at 37˚C in a 5% CO 2 atmosphere. Confluent Raw 264.7 monolayers were infected with 40 μl of RPMI containing C. difficile 630erm wild-type, cdeC or cdeM spores at an MOI of 10. After 30 min of incubation at 37˚C, macrophages were washed three times with PBS 1X to rinse out unbound spores. Infected macrophages were lysed with 0.01% Triton X-100, and adhered spores were counted by plating appropriate aliquots onto BHIS-ST agar plates and incubating for 36 h anaerobically at 37˚C. Colonies were counted and expressed as c.f.u. mL -1 ; no additional colonies appeared upon further incubation. Total spores were counted by lysing the infected macrophages prior to rinsing off the unbound spores and plating appropriate dilutions onto BHIS-ST agar plates; colonies were counted after 36 h of incubation at 37˚C under anaerobic conditions. To evaluate C.
difficile spore survival during infection of macrophages, monolayers of Raw 264.7 cells were washed three times with PBS, infected at an MOI of 10 as described above, rinsed of unbound spores with three PBS washes, and resuspended in 80 μl of RPMI with 1% FBS (to avoid macrophage replication). Viability of C. difficile spores was determined at 0.5, 24, 48 and 72 h after infection by lysing infected macrophages with 0.01% Triton X-100 and plating serial dilutions onto BHIS-ST agar plates. Germination assay Purified spores were heat activated for 30 min at 60˚C and then diluted in BHIS alone or BHIS supplemented with 10 mM sodium taurocholate (Sigma-Aldrich). Heat-activated spores in BHIS alone were used as a control. The OD 600 was monitored immediately (time zero) and at various times over 1 h at 37˚C. Cytotoxicity of C. difficile To determine the cytotoxicity of C. difficile strains, an aliquot of a C. difficile culture was inoculated into BHIS broth and incubated for 24 h at 37˚C under anaerobic conditions. Next, 1 mL of the 24-h BHIS culture was centrifuged, filtered and diluted 1:100 in Dulbecco's Modified Eagle Medium (Lonza, USA) supplemented with 10% filtered fetal bovine serum, and 100 μL was added to each well of a 96-well plate containing Vero cells. The cells were incubated for 24 h under 5% CO 2 . The rounding of the cells was recorded (more than 50% of the cells rounded). Cytotoxicity was calculated with the following formula: Log 10 ((percentage of rounded cells) x 100). Animals 6- to 8-week-old C57BL/6 mice (male or female) were obtained from a breeding colony at the Facultad de Odontología de la Universidad de Chile (Santiago, Chile) that was originally established using animals purchased from Jackson Laboratories.
All mice used in the experiments were housed in individual cages and were acclimated for 1 week at the Animal Infection Facility of the Microbiota-Host Interactions and Clostridia Research Group at the Universidad Andrés Bello before the experiment. Water, bedding and cages were autoclaved, and mice were kept on a 12-hour light/dark cycle. Competitive colonization assays The C. difficile murine model of infection was used to perform competitive index (CI) experiments. For each competitive assay, wild-type C57BL/6 mice (n=5) were challenged with 10^7 spores via gavage in 0.2 mL PBS. Equal amounts of spores (5x10^6) of the parental wild-type 630erm and the cdeC or cdeM mutant were used. Fecal samples were collected and enumerated by plating on TCCFA agar, with and without erythromycin, and incubated for 48 h. Agar supplemented with erythromycin selected for the knockout containing the ermB cassette. The CI was determined using the following ratio: [(630 cdeC or cdeM/630 wild-type) output] / [(630 cdeC or cdeM/630 wild-type) input]. Statistical testing was performed using the Mann-Whitney test applied to Log10 values of the CI ratios. Mouse model of recurrent infection To induce C. difficile susceptibility in mice, prior to infection mice were administered a broad-spectrum antibiotic, cefoperazone (0.5 mg/mL) (Sigma), in drinking water for 5 days, followed by 2 days of normal water, as previously described [51,52]. Animals were then orogastrically infected with 3x10^7 C. difficile spores of strain 630erm (n = 6), cdeC (n = 6) or cdeM (n = 5). All procedures and mouse handling were performed aseptically in a biosafety cabinet to contain spore-mediated transmission. To evaluate recurrence of CDI, from days 3 to 9 all groups of mice were orogastrically administered 100 μl of PBS containing vancomycin (50 mg/kg; Sigma-Aldrich). Throughout the experiment, mice were monitored daily for weight loss, diarrhea score and C. difficile spore shedding.
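The competitive-index ratio described above can be sketched as follows; the fecal CFU counts are hypothetical values chosen for illustration, and this helper only reproduces the ratio, not the authors' analysis script.

```python
import math

def competitive_index(mutant_out, wt_out, mutant_in, wt_in):
    # CI = [(mutant / wild-type) output] / [(mutant / wild-type) input]
    return (mutant_out / wt_out) / (mutant_in / wt_in)

# Hypothetical fecal CFU counts for five mice; the input inoculum was 1:1
# (5e6 spores of each strain), so the input ratio is 1.
fecal_counts = [(2e5, 8e5), (1e5, 9e5), (3e5, 7e5), (2e5, 6e5), (1e5, 5e5)]
log10_ci = [math.log10(competitive_index(m, w, 5e6, 5e6))
            for m, w in fecal_counts]
# log10(CI) < 0 indicates the mutant is outcompeted by the wild type;
# the text applies a Mann-Whitney test to these log10 values.
```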
Sickness behaviors were monitored and fecal samples collected daily; at the end of the assay, animals were sacrificed with a lethal dose of ketamine/xylazine, and cecum content and colonic tissue were collected. The clinical condition of mice was monitored daily with a CDI scoring system. The presence of diarrhea was classified according to severity as follows: (i) normal stool (score = 0); (ii) color change/consistency (score = 1); (iii) presence of wet tail or mucosa (score = 2); (iv) liquid stools (score = 3). A score higher than 0 was considered diarrhea [52]. Quantification of spores from feces and colon Collected fecal samples were stored at -20˚C until spore quantification. Feces were hydrated with 500 μL of sterile MilliQ water overnight at 4˚C; 500 μL of absolute ethanol (Merck) was then added, and samples were incubated at room temperature for 60 min. Serial dilutions of each sample were plated onto selective medium supplemented with taurocholate (0.1% w/v), cefoxitin (16 μg/mL) and L-cycloserine (250 μg/mL) (TCCFA plates). The plates were incubated anaerobically at 37˚C for 48 h, colonies were counted, and results were expressed as Log10[CFU/g of feces] [52]. Colonic tissue was collected from mice and washed three times with PBS using a syringe. The spore load was determined in proximal, medium and distal colonic tissue and in cecum tissue. Colonic tissue was collected in three sections (proximal, medium, distal), and the first cm of each section (counting from the cecum) was obtained. For cecum tissue, 1 cm from the base was obtained. Tissue was then weighed, PBS:absolute ethanol (1:1) was added (10 μl/mg of tissue), and samples were homogenized and incubated for 1 hour. Spores were quantified by plating the tissue homogenates onto TCCFA plates. The plates were incubated anaerobically at 37˚C for 48 h. Finally, the colony count was expressed as Log10[CFU/gram of colon].
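The Log10[CFU/g] readout used above can be back-calculated from plate counts as in this sketch; the dilution, plated volume and sample mass are hypothetical example values, not the authors' exact numbers.

```python
import math

def log10_cfu_per_gram(colonies, dilution_factor, plated_volume_ml,
                       sample_mass_g, resuspension_volume_ml):
    # Back-calculate the CFU in the whole resuspended sample,
    # then normalize per gram of feces or tissue.
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    total_cfu = cfu_per_ml * resuspension_volume_ml
    return math.log10(total_cfu / sample_mass_g)

# Hypothetical: 50 colonies counted from a 10^-3 dilution, 0.1 mL plated,
# 0.05 g of feces resuspended in 1 mL total volume -> 10^7 CFU/g.
value = log10_cfu_per_gram(50, 1e3, 0.1, 0.05, 1.0)
```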
Cecum content cytotoxicity assay The Vero cell cytotoxicity assay was performed as described previously [51]. Briefly, 96-well flat-bottom microtiter plates were seeded with Vero cells at a density of 10^5 cells/well. Mouse cecum contents were suspended in PBS at a ratio of 1:10 (10 μL of PBS per mg of cecum content), vortexed and centrifuged (14,000 rpm, 5 min). Filter-sterilized supernatant was serially diluted in DMEM supplemented with 10% FBS and 1% penicillin-streptomycin; 100 μL of each dilution was added to wells containing Vero cells. Plates were screened for cell rounding after 16 h of incubation at 37˚C. The cytotoxic titer was defined as the reciprocal of the highest dilution that produced rounding in at least 80% of Vero cells per gram of luminal sample under X200 magnification. Detection of C. difficile spores and vegetative cells by serum from challenged mice Serum from infected animals was tested against 630erm spores or vegetative cells by ELISA. Per well, 1.6x10^7 spores or 3.0x10^6 vegetative cells prefixed in 4% PFA were incubated in 96-well plates for 16 h at 4˚C. Plates were washed three times with PBS-0.05% Tween 20 and blocked with 2% BSA for 1 h at 37˚C. After 3 washes, wells were incubated with 1:200 serum dilutions for 2 h at 37˚C. After 5 washes, secondary anti-mouse HRP antibody was added at 1:10,000 and incubated at 30˚C for 1 hour, followed by 5 final washes. The colorimetric reaction was initiated upon addition of 50 μL of reaction buffer (0.05 M citric acid, 0.1 M disodium hydrogen phosphate) containing 2 mg/mL o-phenylenediamine (Sigma-Aldrich, U.S.A.) and 0.015% H2O2 (Merck, Germany). The reaction was stopped after 20 min with 25 μL of 4.5 N H2SO4, and absorbance was measured at 492 nm. Background reactivity was assessed using IgY from eggs obtained prior to immunization.
Intestinal loop assay Before surgery, mice were deeply anesthetized using a small-animal anesthesia machine (RWD): mice were induced in a chamber with 5% isoflurane and then maintained on 1.5% isoflurane in air during surgery. Briefly, after a midline laparotomy, 1.5-cm ileal and proximal colonic loops were ligated and injected with 3.3x10^8 spores/cm in 0.1 mL of PBS (pH 7.2) (n = 6 for wild-type and cdeC; n = 5 for cdeM). The abdomen was closed with superglue, and the animals were allowed to regain consciousness. Mice were kept for 5 h, at which time they were euthanized; the ligated loops were removed, washed gently in PBS, fixed in 4% paraformaldehyde/30% sucrose for 16 h, washed and subjected to indirect immunofluorescence. Tissues were permeabilized by incubation with 0.2% Triton X-100 in 1X PBS and blocked with 3% BSA in PBS for 3 h. The same buffer was used for subsequent incubations with antibodies. Intestine fragments were incubated with a primary polyclonal IgY anti-C. difficile spore antibody and fluorescently labeled phalloidin (Alexa Fluor 568) for 12-16 h at 4˚C. Following PBS washes, samples were reacted with goat anti-chicken IgY secondary antibodies (Alexa Fluor 488) and Hoechst. For mounting, a drop of DAKO fluorescent mounting medium was applied onto the tissue segment and a cover glass was mounted over it, sandwiching the tissue section. The ends of the cover glass were fixed to the glass slide with vinyl tape to hold the tissue sections in place. Confocal microscopy and imaging analysis Images were acquired with a Leica TCS LSI microscope using a 5X objective (optical zoom 20X), numerical aperture 0.5.
For confocal imaging, 405 nm, 488 nm and 532 nm excitation wavelengths were used for nuclear staining (Hoechst), Alexa Fluor 488-labeled bacteria and Alexa Fluor 568-labeled phalloidin, and signals were detected with an ultra-high-dynamic PMT spectral detector (430-750 nm). Emitted fluorescence was split with four dichroic mirrors (QD 405 nm, 488 nm, 561 nm and 635 nm). Images (1024x1024) acquired with a 0.7-μm Z step were smoothed by median filtering at a kernel size of 3x3 pixels. Z projections of the intestinal epithelium were performed using ImageJ software (NIH). Villi and crypts were visualized by Hoechst and phalloidin signals. For quantification of tissue-associated bacterial signals, Z stacks were smoothed by median filtering at a kernel size of 3x3 pixels. The number of positive spots/1,000 μm^2 in ileal and proximal colonic tissue and the area occupied by individual spots were analyzed. Data were not normally distributed and were analyzed by non-parametric tests. Statistical analysis Student's t-test was used for pairwise comparisons in most experiments. Where stated, nonparametric tests were used. shows the general progression for this preparation. (TIF) S10 Fig. Germination of C. difficile wild-type, cdeC and cdeM. (A,B) Germination of C. difficile spores of wild-type, cdeC and cdeM mutant strains and their respective strains complemented with wild-type cdeC and cdeM genes, respectively, was assessed with 10 mM sodium taurocholate. For clarity, panel A shows spore germination of wild-type, cdeC and cdeC/cdeC spores. The same data for wild-type spores are presented in panels A and B for representative purposes. (C,D) Germination of C. difficile spores of wild-type, cdeC and cdeM mutant strains and their respective complemented strains was assessed with phosphate-buffered saline. For clarity, panel C shows spore germination of wild-type, cdeC and cdeC/cdeC spores.
The same data for wild-type spores are presented in panels C and D for representative purposes.
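The per-slice 3x3 median filtering and Z projection used in the image analysis above can be sketched with NumPy alone. The synthetic stack and the choice of a maximum-intensity projection are illustrative assumptions; the actual workflow used ImageJ.

```python
import numpy as np

def median3x3(img):
    # 3x3 median filter with edge replication, standing in for ImageJ's
    # median filter at kernel size 3x3.
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    neighborhoods = np.stack([padded[r:r + h, c:c + w]
                              for r in range(3) for c in range(3)])
    return np.median(neighborhoods, axis=0)

def z_max_projection(stack):
    # Maximum-intensity projection along the Z axis (axis 0).
    return stack.max(axis=0)

# Synthetic 4-slice stack: one isolated noise pixel and one 3x3 "spot".
stack = np.zeros((4, 16, 16))
stack[2, 8, 8] = 100.0        # salt noise -> removed by the median filter
stack[1, 4:7, 4:7] = 50.0     # 3x3 spot -> its center survives filtering
filtered = np.stack([median3x3(s) for s in stack])
projection = z_max_projection(filtered)
```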
Association between gonadal hormones and osteoporosis in schizophrenia patients undergoing risperidone monotherapy: a cross-sectional study Objective Patients with schizophrenia are at increased risk of osteoporosis. This study first determined the osteoporosis rate in patients with schizophrenia and then explored the association between serum gonadal hormone levels and osteoporosis among these patients. Methods A total of 250 patients with schizophrenia and 288 healthy controls were recruited. Osteoporosis was defined by decreased bone mineral density (BMD) of the calcaneus. Serum fasting levels of gonadal hormones (prolactin, estradiol, testosterone, progesterone, follicle-stimulating hormone, luteinizing hormone) were determined. The relationship between osteoporosis and hormone levels was analyzed by binary logistic regression. Results Our results showed that patients with schizophrenia had a markedly higher rate of osteoporosis (24.4% vs. 10.1%) than healthy controls (P < 0.001). Patients with osteoporosis were older, had a longer disease course, and had a lower body mass index (BMI) than patients without osteoporosis (all P < 0.05). Regarding gonadal hormones, we found significantly higher prolactin, but lower estradiol, levels in patients with osteoporosis than in those without osteoporosis (both P < 0.05). The regression analysis revealed that PRL (OR = 1.1, 95% CI [1.08–1.15], P < 0.001) and E2 levels (OR = 0.9, 95% CI [0.96–0.99], P = 0.011) were significantly associated with osteoporosis in patients with schizophrenia. Conclusion Our results indicate that patients with schizophrenia who are being treated with risperidone have a high rate of osteoporosis. Increased prolactin and reduced estradiol levels are significantly associated with osteoporosis.
INTRODUCTION Osteoporosis is a degenerative disease that is characterized by a decrease in bone mineral density (BMD) and results in an increased risk of fractures (Liang et al., 2019). Approximately 200 million people worldwide are affected by osteoporosis, which increases their morbidity and mortality (Cui et al., 2018). Numerous studies have been conducted to explore risk factors for osteoporosis, and commonly reported risk factors are old age, female sex, insufficient calcium intake, inadequate physical activity, excessive smoking, excessive drinking and use of antipsychotics (Li et al., 2017;Crews & Howes, 2012). Schizophrenia is a severe, chronic, and debilitating disorder that affects approximately 1% of the global population (Stępnicki, Kondej & Kaczor, 2018). Antipsychotic drugs are considered the primary treatment for schizophrenia. Although these drugs have significant benefits for psychotic symptoms, they can induce health problems such as metabolic syndrome, cardiovascular diseases, sexual dysfunction, and osteoporosis (Andrade, 2016). Previous studies have demonstrated that patients with schizophrenia have a higher risk of osteoporosis than the general population (Cui et al., 2018;Stubbs et al., 2015). The underlying mechanisms of increased osteoporosis risk in patients with schizophrenia are still unclear. However, studies have shown that, in addition to poor nutrition, reduced physical activity, excessive smoking, and drinking, the main reason that patients with schizophrenia develop osteoporosis is the use of antipsychotic drugs (Li et al., 2017;Halbreich et al., 2003). Antipsychotics can elevate the secretion of prolactin (PRL) via the dopamine D2 receptor-blocking effect (Peuskens et al., 2014). In addition, hyperprolactinemia caused by antipsychotics can lead to estrogen and androgen deficiencies, which can accelerate bone loss and increase the risk of osteoporosis (Okita et al., 2014). 
A large body of evidence supports that decreased estrogen levels can increase bone resorption by prolonging the life of osteoclasts (Li et al., 2018;Chopko & Lindsley, 2018) and that androgen deficiency can lead to an imbalance of osteoblast and osteoclast activity, resulting in decreased osteogenesis (Mohamad, Soelaiman & Chin, 2016). Taken together, these findings suggest that abnormal levels of gonadal hormones due to the use of antipsychotic drugs could be associated with osteoporosis in patients with schizophrenia. To further explore this hypothesis, we focused on schizophrenia patients using a single antipsychotic drug, risperidone, which is commonly used in our clinical practice. In clinical practice, risperidone is a widely used second-generation antipsychotic (SGA), as well as a prolactin-elevating compound (Bishop et al., 2012). Early studies of patients treated with risperidone found high PRL levels to be associated with low BMD values (Becker et al., 2003). However, other studies failed to replicate this pattern (Stubbs et al., 2014). The inconsistent results may be related to confounding factors, such as age, gender and varying levels of physical activity. In the present study, we aimed to analyze risk factors associated with osteoporosis in inpatients with schizophrenia receiving risperidone monotherapy. We speculate that abnormal gonadal hormone levels may be associated with the onset of osteoporosis in patients with schizophrenia. We hope that this work can provide clinical data on the osteoporosis rate and its associated risks in patients with schizophrenia, which helps us to take measures to prevent osteoporosis and treat it. Participants A total of 250 inpatients with schizophrenia who were hospitalized in Kangning Hospital Affiliated to Wenzhou Medical University from May 2018 to June 2020 were included in our study. 
All patients met the following criteria: (1) a diagnosis of schizophrenia according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) (American Psychiatric Association, 1994); (2) age 18-75 years old; (3) Han Chinese ethnicity; and (4) undergoing risperidone monotherapy for at least 6 months. The exclusion criteria were as follows: (1) diagnosis of a psychiatric disorder in addition to schizophrenia or a history of substance abuse/dependence disorder; (2) severe cardiovascular, hepatic, or renal diseases that may affect bone metabolism, such as diabetes or hyperthyroidism; (3) pregnancy or breastfeeding; and (4) a history of bone fracture within one year before enrollment. Since admission to the facility, all patients had followed the same diet and activity schedule, which helps to minimize differences in physical exercise and diet between patients. According to the principles of frequency matching, we selected a group of 288 healthy controls from the physical examination center at our hospital. The control group was comparable to the patient group in terms of age and sex. This study was performed in strict accordance with the Declaration of Helsinki and all other relevant national and international regulations. The study protocol was approved by the Medical Ethics Committee of The Affiliated Kangning Hospital of Wenzhou Medical University (approval number: 20180412001). All participants signed informed consent before the formal study. Written informed consent was obtained from all participants prior to their participation in any procedures related to this study. Assessment of participant characteristics Detailed demographic and clinical data were collected via a standardized form that was specifically designed for this study. Weight and height were measured in a standardized manner. Participants were barefoot and stood upright, while height was measured to the nearest millimeter. 
An electronic scale was used to measure weight while participants wore light indoor clothing. Body mass index (BMI) was calculated as weight in kg divided by the square of height in meters. Definition of osteoporosis BMD (g/cm2) of the calcaneus was measured by a trained ultrasound technician, blind to our research hypotheses, in a separate examination center at the hospital using a 3.01 Sahara Clinical Bone Sonometer (Hologic). Quantitative ultrasound (QUS) measurements of the calcaneus were performed at the right heel (or the left heel, if the right heel was inaccessible), recording broadband ultrasound attenuation (BUA; dB/MHz) and speed of sound (SOS; m/s) at least twice on each calcaneus. The QUS T-score refers to the number of standard deviations (SD) away from the mean of a database of normal values compiled from a healthy young adult population and was calculated as (0.67 × BUA + 0.28 × SOS) − 420. According to World Health Organization (WHO) criteria (World Health Organization Study Group, 1994), osteoporosis was defined as a BMD T-score of −2.5 or lower. Statistical analysis Statistical analyses were performed using SPSS software version 26.0 (SPSS, Chicago, IL). First, we used independent-samples t-tests or the Mann-Whitney U test for continuous variables and the chi-square test for categorical variables to compare differences between groups. Second, using osteoporosis as the dependent variable and age, BMI, total disease course, estradiol level, prolactin level, FSH level and risperidone dose as independent variables, we performed a binary logistic regression analysis with the ''enter'' method to identify factors independently associated with osteoporosis in patients with schizophrenia. All tests were two-tailed, and the statistical significance level was set at P ≤ 0.05. Demographic characteristics of patients and controls The demographic data of the participants are presented in Table 1.
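A minimal sketch of the index formula and the WHO cut-offs described above; the BUA/SOS readings are hypothetical, and the −1.0/−2.5 thresholds are the standard WHO values rather than anything device-specific.

```python
def qus_index(bua_db_mhz, sos_m_s):
    # Formula quoted in the text: (0.67 * BUA + 0.28 * SOS) - 420.
    return 0.67 * bua_db_mhz + 0.28 * sos_m_s - 420

def who_category(t_score):
    # WHO classification: osteoporosis at T <= -2.5,
    # osteopenia at -2.5 < T < -1.0, otherwise normal.
    if t_score <= -2.5:
        return "osteoporosis"
    if t_score < -1.0:
        return "osteopenia"
    return "normal"

# Hypothetical calcaneal readings:
value = qus_index(75.0, 1550.0)  # 0.67*75 + 0.28*1550 - 420
```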
For patients with schizophrenia, the average age was 46.3 ± 10.6 years, and the average BMI was 24.5 ± 4.0 kg/m 2 . For healthy controls, the average age was 46.0 ± 10.8 years, and the average BMI was 23.4 ± 3.5 kg/m 2 . There were no significant differences in sex or age between patients and controls (age: P = 0.613; sex: P = 0.101). Patients had a significantly higher BMI than healthy controls (P = 0.007). Among patients with schizophrenia, the average disease duration was 21.0 ± 9.0 years. Rates of osteoporosis in patients and controls The rates of osteoporosis were 24.4% (61/250) for patients with schizophrenia and 10.1% (29/288) for healthy controls. The patient group had a significantly higher rate of osteoporosis than the control group (P <0.001). Factors associated with osteoporosis in patients with schizophrenia The binary logistic regression analysis found that PRL levels were positively associated with osteoporosis in patients (OR = 1.1, 95% CI [1.08-1.15], P <0.001), and E2 levels were negatively associated with osteoporosis in patients (OR = 0.9, 95% CI [0.96-0.99], P = 0.011) (see Table 3), accounting for 52% of the variance of osteoporosis in patients. DISCUSSION In the present study, we found that schizophrenia patients undergoing risperidone monotherapy had a higher rate of osteoporosis than healthy controls. We demonstrated that 24.4% of patients treated with risperidone had osteoporosis, which represents a 2.4-fold increased risk compared to healthy controls. Of note, a recent meta-analysis reported that osteoporosis is over 2.5 times more common in patients with schizophrenia treated with antipsychotics than in age-and sex-matched controls (Stubbs et al., 2014). In previous studies, the recruited patients used different antipsychotics, which may affect the rate of osteoporosis. 
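The group comparison of osteoporosis rates reported above (61/250 patients vs. 29/288 controls) can be reproduced with a plain Pearson chi-square on the 2x2 table. This standalone sketch uses the closed-form p-value for one degree of freedom; it is an illustration, not the SPSS procedure the authors ran.

```python
import math

def chi2_2x2(a, b, c, d):
    # Pearson chi-square (no continuity correction) for the 2x2 table
    # [[a, b], [c, d]]; for df = 1 the survival function has a closed form:
    # P(X > x) = erfc(sqrt(x / 2)).
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Counts from the text: 61/250 patients and 29/288 controls had osteoporosis.
chi2, p = chi2_2x2(61, 250 - 61, 29, 288 - 29)
```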
When we used only risperidone monotherapy, the results were still consistent with the majority of previous studies (Cui et al., 2018; Stubbs et al., 2014; Gomez et al., 2016). To the best of our knowledge, this is the first study exploring the rate of osteoporosis in schizophrenia inpatients receiving risperidone monotherapy. Despite some investigations, the precise role of antipsychotics in osteoporosis risk remains unclear. One potential mechanism relates to alterations in the hypothalamic-pituitary-gonadal axis induced by antipsychotics (Kishimoto et al., 2008). The dopamine D2 receptor-blocking effect of antipsychotics can elevate the secretion of PRL, causing hyperprolactinemia (Meaney et al., 2004). The increased PRL level then lowers the secretion of sex hormones such as E2 and T, which in turn accelerates bone resorption (Cui et al., 2018; Okita et al., 2014). Given that different types of antipsychotics have different effects on PRL, a recent meta-analysis and systematic review demonstrated that patients treated with PRL-raising antipsychotics (typical antipsychotics, risperidone, paliperidone, amisulpride) have a higher risk of BMD loss and osteoporosis than patients receiving PRL-decreasing antipsychotics (Tseng et al., 2015; Lally et al., 2019). Risperidone, which has difficulty penetrating the blood-brain barrier, is expected to have longer-lasting D2 antagonistic effects in the pituitary than in the central nervous system, ultimately leading to prolonged hyperprolactinemia and maximal loss of BMD (Markianos, Hatzimanolis & Lykouras, 2001). Thus, risperidone may be among the antipsychotics most likely to cause hormonal dysfunction and thereby increase osteoporosis risk, and clinicians should be alert when prescribing it. To date, no study has explored the relationship between hypothalamic-pituitary-gonadal axis-related hormones and osteoporosis in schizophrenia patients receiving risperidone monotherapy.
Our study provides new insights into the relationship between the two and further supports the role of gonadal hormones in the risk of osteoporosis in patients receiving risperidone. Specifically, we found that patients with osteoporosis had significantly higher PRL levels but lower E2 levels than patients without osteoporosis, which is similar to the findings of previous studies (Liang et al., 2016). Moreover, logistic regression analysis confirmed that PRL and E2 were independent predictive factors associated with osteoporosis after controlling for other known associated factors. Despite some research demonstrating significant correlations between P, FSH and TH (Doumouchtsis et al., 2008;Prior, 2018), we did not observe such relationships, which is in line with the majority of previous studies (Aydin et al., 2005;Seven et al., 2016). The association of low E2 levels with osteoporosis is consistent with the high incidence of osteoporosis in postmenopausal women (Liang et al., 2016;Seven et al., 2016) and supports the view that estrogen is involved in bone metabolism. In addition, our study showed that schizophrenia patients with osteoporosis were older and had a longer total disease course than those without osteoporosis. These two risk factors have also been reported previously (Cui et al., 2018;Kinon et al., 2013;Jung et al., 2006). It is well known that the aging process increases bone destruction and decreases bone formation (Tseng et al., 2015). Patients with schizophrenia with a longer disease course may have received longer treatment with antipsychotics, thus resulting in more profound effects on gonadal hormones. However, our logistic regression analysis showed that age and disease course did not have significant clinical effects on osteoporosis. Rather, the results suggested that gonadal hormones, namely, PRL and E2, are important to osteoporosis risk. 
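Odds ratios like those reported for PRL and E2 are obtained by exponentiating logistic-regression coefficients. The coefficient and standard error below are hypothetical values chosen only to be roughly consistent with the reported PRL result (OR ≈ 1.1, 95% CI [1.08–1.15]); they are not taken from the authors' model output.

```python
import math

def odds_ratio_with_ci(beta, se, z=1.96):
    # Exponentiate a logistic-regression coefficient and its Wald 95% CI
    # to get an odds ratio per one-unit increase of the predictor.
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

or_prl, lo, hi = odds_ratio_with_ci(0.105, 0.016)  # hypothetical beta, SE
```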
Nevertheless, the relationship between osteoporosis and disease course warrants further verification in prospective and longitudinal studies. Unexpectedly, we found a lower BMI in patients with osteoporosis than in patients without osteoporosis. Previous research has reported a significant positive correlation between BMI and osteoporosis (Liang et al., 2019;Bulut et al., 2016). One possible reason for this relationship is that the higher BMI caused by SGA use could counteract BMD loss effects in schizophrenia (Doknic et al., 2011). Although risperidone showed a slight effect on body weight, in the logistic regression analysis, the difference between groups disappeared. Extensive longitudinal research with larger samples is needed to confirm these relationships. The strength of this study is the relatively large sample of schizophrenia patients receiving risperidone monotherapy. However, there are several limitations worth mentioning. First, the cross-sectional nature of this research provides a limited capacity to identify a causal relationship between gonadal hormones and BMD loss or osteoporosis. Second, since all patients were inpatients recruited from one hospital in Wenzhou, our findings may not be generalizable to other settings and outpatient populations. Third, although all patients received risperidone monotherapy for at least six months, we did not collect detailed information about medication history, which may have effects on gonadal hormones. Fourth, we only recruited patients following a similar diet and physical exercise schedule, which will cause selection bias to a certain extent. Finally, future research with a prospective and longitudinal design is required to evaluate causal relations between gonadal hormones and osteoporosis in first-episode and drug-naïve patients with schizophrenia. In summary, our results provide further evidence of the increased rate of osteoporosis in patients with risperidone monotherapy compared to controls. 
Patients with osteoporosis tended to be older, have a longer disease course, have a lower BMI, have significantly higher PRL levels, and have lower E2 levels than patients without osteoporosis. Our binary logistic regression analysis showed that PRL and E2 levels were independently associated with osteoporosis. These findings suggest that PRL and E2 levels are related to osteoporosis in patients treated with risperidone. Prospective and longitudinal research is warranted to confirm these findings and investigate the underlying mechanisms. BMD Bone
Human ClC-6 Is a Late Endosomal Glycoprotein that Associates with Detergent-Resistant Lipid Domains Background The mammalian CLC protein family comprises nine members (ClC-1 to -7 and ClC-Ka, -Kb) that function either as plasma membrane chloride channels or as intracellular chloride/proton antiporters, and that sustain a broad spectrum of cellular processes, such as membrane excitability, transepithelial transport, endocytosis and lysosomal degradation. In this study we focus on human ClC-6, which is structurally most related to the late endosomal/lysomal ClC-7. Principal Findings Using a polyclonal affinity-purified antibody directed against a unique epitope in the ClC-6 COOH-terminal tail, we show that human ClC-6, when transfected in COS-1 cells, is N-glycosylated in a region that is evolutionary poorly conserved between mammalian CLC proteins and that is located between the predicted helices K and M. Three asparagine residues (N410, N422 and N432) have been defined by mutagenesis as acceptor sites for N-glycosylation, but only two of the three sites seem to be simultaneously N-glycosylated. In a differentiated human neuroblastoma cell line (SH-SY5Y), endogenous ClC-6 colocalizes with LAMP-1, a late endosomal/lysosomal marker, but not with early/recycling endosomal markers such as EEA-1 and transferrin receptor. In contrast, when transiently expressed in COS-1 or HeLa cells, human ClC-6 mainly overlaps with markers for early/recycling endosomes (transferrin receptor, EEA-1, Rab5, Rab4) and not with late endosomal/lysosomal markers (LAMP-1, Rab7). Analogously, overexpression of human ClC-6 in SH-SY5Y cells also leads to an early/recycling endosomal localization of the exogenously expressed ClC-6 protein. Finally, in transiently transfected COS-1 cells, ClC-6 copurifies with detergent-resistant membrane fractions, suggesting its partitioning in lipid rafts. 
Mutating a juxtamembrane string of basic amino acids (amino acids 71–75: KKGRR) disturbs the association with detergent-resistant membrane fractions and also affects the segregation of ClC-6 and ClC-7 when cotransfected in COS-1 cells. Conclusions We conclude that human ClC-6 is an endosomal glycoprotein that partitions in detergent-resistant lipid domains. The differential sorting of endogenous (late endosomal) versus overexpressed (early and recycling endosomal) ClC-6 is reminiscent of that of other late endosomal/lysosomal membrane proteins (e.g. LIMP II), and is consistent with a rate-limiting sorting step for ClC-6 between early endosomes and its final destination in late endosomes. INTRODUCTION CLC proteins form an evolutionarily conserved family of chloride channels and/or transporters that are expressed from bacteria to man [1]. The human genome contains 9 genes (CLCN1-7, CLCNKA, CLCNKB) that encode the pore-forming α-subunits (ClC-1 to -7, ClC-Ka and -Kb). In addition, auxiliary β-subunits that affect the plasma membrane location or expression level of the α-subunit have been described for ClC-Ka and -Kb (barttin) and ClC-7 (Ostm1) [2,3]. More recently it has transpired that α-subunits can differ in terms of subcellular location (plasma membrane versus intracellular organelles) and mode of Cl− transport (Cl− channel versus Cl−/H+ antiporter) [4][5][6][7]. Consequently, the mammalian α-subunits can be classified in two subgroups, one functioning as plasma membrane Cl− channels (ClC-1, -2, -Ka and -Kb) and another as intracellular Cl−/H+ antiporters (ClC-3 to -7). In mammals, antiporter function has only been formally shown for ClC-4 and ClC-5 [5,6], but the presence of a conserved glutamate corresponding to E203 in the E. coli ClC-ec1, which is responsible for H+ coupling of Cl− transport [7], suggests a similar antiporter mode for ClC-3, ClC-6 and ClC-7.
Some of the intracellular CLCs have been localized to specific subcellular organelles: ClC-7 resides in late endosomes, lysosomes and the osteoclast resorption lacuna [8], ClC-5 in endosomes in the proximal tubule of the kidney [9,10] and ClC-3 in (late) endosomes and synaptic vesicles [11]. Intracellular CLCs are thought to facilitate acidification of endosomal and lysosomal compartments by dissipating the lumen-positive membrane potential that arises from the electrogenic H⁺ transport by the V-type H⁺-ATPase [12]. Nevertheless, alternative functions have been proposed for intracellular CLCs, such as fusion of intracellular organelles [5] or trafficking of the endocytic receptor proteins megalin and cubilin [13]. In spite of being cloned more than 10 years ago [14], ClC-6 remains an enigmatic member of the mammalian CLC family. Sequence comparison shows ClC-6 to be most closely related to the late endosomal/lysosomal ClC-7 [14], but little is known about its function. Heterologous expression of ClC-6 either in Xenopus oocytes or in COS cells failed to generate specific membrane currents [14–16]. It should be added that in some instances membrane currents were recorded in ClC-6-expressing Xenopus oocytes, but identical currents were also observed in oocytes expressing the unrelated pICln protein and occasionally in control oocytes, indicating that ClC-6 expression affected the expression of an endogenous anion channel [16,17]. Very recently, it has been shown in a mouse model that loss of ClC-6 function leads to a lysosomal storage disease resembling neuronal ceroid lipofuscinosis [18]. In the present study we developed a specific antibody against human ClC-6, which recognizes the protein both in Western blotting and in immunofluorescence studies.
This made it possible to determine the precise subcellular location of hClC-6, both endogenously in human neuronal SH-SY5Y cells and upon overexpression in COS-1 and HeLa cells, and to study its N-glycosylation pattern and its association with detergent-resistant membranes.

Preparation of antiserum against human ClC-6
Rabbit antisera directed against human ClC-6 were raised by Eurogentec (Seraing, Belgium) against the synthetic peptide RKRSQSMKSYPSSEL (corresponding to residues 672–686 in hClC-6a). The peptide was COOH-terminally conjugated to hemocyanin and two rabbits were injected with the immunogen, which consisted of an emulsion of the conjugate solution and Freund's adjuvant. Booster injections of the same immunogen with incomplete Freund's adjuvant were given 4 times at 4-week intervals. Both antisera were affinity-purified by the manufacturer.

Mutants were made by overlap PCR [20], which involves the amplification of two overlapping mutant fragments (PCR 1 and 2), followed by amplification of the overlap fragment (PCR 3). Reaction conditions for PCR 1 and 2 were as follows: initial denaturation at 94°C for 5 min; 30 cycles of denaturation at 94°C for 30 s, annealing at 60°C for 1 min and extension at 72°C for 10 min; and a final extension at 72°C for 20 min. For PCR 3 the parameters were similar to those of PCR 1 and 2, except that the annealing temperature was gradually increased from 50°C to 68°C over the 30 cycles. The PCR products were visualized by ethidium bromide staining of a 1% agarose gel. The overlap fragment was eluted from the gel with the GenElute™ Gel Extraction Kit (Sigma-Aldrich, St. Louis, MO, USA) and ligated into the pcDNA3.1(−) expression vector using BamHI and HindIII restriction sites. Mutations were verified by dye terminator-based sequencing (DYEnamic ET Terminator Cycle Sequencing Kit, Amersham Biosciences, Piscataway, NJ, USA) on an automated MegaBACE sequencer (Amersham Biosciences, Piscataway, NJ, USA).
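The logic of the overlap-PCR mutagenesis scheme above can be sketched in a few lines of Python. All sequences below are short hypothetical stand-ins, not the actual hClC-6 template or primers:

```python
# Toy sketch of overlap-extension mutagenesis: PCR 1 and 2 yield two
# fragments that share an overlap region carrying the mutation; PCR 3
# anneals the overlap and extends to the full-length mutant product.

def point_mutation(seq: str, pos: int, base: str) -> str:
    """Return seq with the base at 0-based position pos replaced."""
    return seq[:pos] + base + seq[pos + 1:]

def overlap_extension(frag1: str, frag2: str, min_overlap: int = 10) -> str:
    """Join frag1 and frag2 at the longest shared 3'/5' overlap (PCR 3)."""
    for n in range(min(len(frag1), len(frag2)), min_overlap - 1, -1):
        if frag1[-n:] == frag2[:n]:
            return frag1 + frag2[n:]
    raise ValueError("fragments share no overlap of sufficient length")

template = "ATGGCTAACGGTACCTTGAAGGACCGTAGCTGA"  # hypothetical wild-type ORF
mutant = point_mutation(template, 15, "C")     # desired T->C substitution

# PCR 1 and 2: two mutant fragments sharing the 12-bp overlap [12:24)
frag1, frag2 = mutant[:24], mutant[12:]

product = overlap_extension(frag1, frag2)      # PCR 3
assert product == mutant
```

In the wet-lab protocol the mutation is carried by the overlapping primers of PCR 1 and 2; the gradually increasing annealing temperature in PCR 3 (50°C to 68°C) then favors specific annealing and extension of the shared overlap.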
Cell culture and transfection
The human neuroblastoma SH-SY5Y cell line was obtained from the American Type Culture Collection (CRL 1650; Bethesda, MD, USA). Cells were grown in Dulbecco's modified Eagle's medium supplemented with 15% (v/v) fetal calf serum (FCS), 1% GlutaMAX, 1% (v/v) non-essential amino acids, 100 units/ml penicillin and 100 µg/ml streptomycin. Cells were incubated in a humidified incubator at 5% CO₂ and 37°C. From day 1 after seeding, cells were differentiated in the presence of 10 µM all-trans retinoic acid (RA, Sigma-Aldrich) in cell medium containing 1% FCS, in the absence of light. After 6 days, the medium was replaced by cell medium without FCS, containing 2 nM brain-derived neurotrophic factor (BDNF, Sigma-Aldrich). After 48 hours differentiated cells were used for further experiments. Transfections of differentiated SH-SY5Y cells were performed after 6 days of differentiation with retinoic acid (RA) using Lipofectamine™ 2000 transfection reagent (Invitrogen) according to the manufacturer's protocol. After 24 hours, the medium was replaced by cell medium without FCS containing 2 nM BDNF, and cells were used for further experiments after 48 hours. COS-1 SV40 African green monkey kidney cells, and HeLa epithelial cells from an epidermoid carcinoma of the human cervix, were cultured in Dulbecco's modified Eagle's medium supplemented with 10% (v/v) fetal calf serum, 3.8 mM L-glutamine, 0.9% (v/v) non-essential amino acids, 85 units/ml penicillin and 85 µg/ml streptomycin. COS-1 cells were incubated in a humidified incubator at 9% CO₂ and 37°C. HeLa cells were incubated in a humidified incubator at 5% CO₂ and 37°C. COS-1 and HeLa cells were transiently transfected with expression vectors using GeneJuice® transfection reagent (Novagen, Darmstadt, Germany) according to the manufacturer's protocol. Transfections were performed the day after seeding.
Membrane preparation
Microsomes from transfected COS-1 cells were prepared as described by Verbomen et al. [22]. Protein concentrations were determined by the bicinchoninic acid method (Pierce, Rockford, IL, USA).

Preparation of detergent-resistant membrane fractions (DRM)
DRM fractions were prepared from transfected COS-1 cells as described [23]. Cells were washed twice with PBS and lysed for 1 h on ice in an excess (10-fold (w/w) over protein) of ice-cold 1% Triton X-100 buffer containing 25 mM Tris (pH 7.4), 100 mM NaCl, 90 mM mannitol, 1 mM EGTA, 2 mM DTT and protease inhibitor cocktail (Sigma-Aldrich). The lysate was separated by upward flotation on a sucrose gradient as described earlier. Upward flotation of DRMs was verified by Western blotting and immunostaining with a monoclonal anti-caveolin-1 antibody (1/1000; BD Biosciences) and a polyclonal anti-Fyn antibody (FYN3, 500 ng/ml; Santa Cruz; data not shown). All blots were tested by immunostaining with a monoclonal anti-transferrin receptor antibody (1 µg/ml; Invitrogen) as a negative control. Fractions were tested for the distribution of hClC-6 (wild type and AAGAA mutant) by Western blotting and immunostaining with the polyclonal α-hClC-6 antibody (1:1000). DRM fractions were also prepared from GFP-hClC-7-overexpressing COS-1 cells and immunostained for caveolin-1, transferrin receptor and GFP (GFP Monoclonal Antibody, 1:500; Clontech).

Deglycosylation studies
Digestions with Peptide N-glycosidase F (PNGaseF, New England Biolabs, Ipswich, MA, USA), which removes both core (mannose-rich) and complex (trimmed and modified) glycans, or with Endoglycosidase H (EndoH, New England Biolabs), which removes core but not complex glycans, were performed as advised by the supplier on 20 µg of glycoprotein, with a preliminary denaturation step of 24 hours at 37°C.
Tunicamycin (Sigma-Aldrich), which blocks the first step in the N-glycosylation process (transfer of the mannose-rich core glycan from the dolichol carrier to an asparagine acceptor), was added 4 hours after transient transfection of the COS-1 cells with hClC-6a WT or mutants, at a final concentration of either 0.05 or 0.1 µg/ml, for a 36-h period.

SDS-PAGE and Western blot analysis
Microsomes from COS-1 cells transiently transfected with the different constructs were analysed on NuPAGE™ 4–12% (v/v) Bis-Tris SDS-PAGE gels using MOPS buffer (Invitrogen), following the manufacturer's protocol. After electrophoresis, the separated proteins were transferred onto a PVDF membrane (Immobilon-P; Millipore, Bedford, MA, USA) by semi-dry electroblotting. The blots were blocked overnight at 4°C in PBS containing 0.1% (v/v) Tween-20 and 5% (w/v) non-fat dry milk powder. The blots were incubated with the primary antibody and subsequently with the horseradish peroxidase (HRP)-conjugated secondary antibody. The immunoreactive bands were visualized with SuperSignal® West Pico Chemiluminescent Substrate (Pierce) and exposed to HyperFilm. The HyperFilm was developed using a KODAK X-Omat 1000 (KODAK, Rochester, NY, USA).

RESULTS
The polyclonal α-hClC-6 antibody recognizes human ClC-6 (hClC-6) in transiently transfected cells
A short peptide (amino acids 672–686) in the COOH-terminal cytosolic tail of hClC-6 was selected to raise affinity-purified polyclonal antibodies against hClC-6 (Fig. 1). The antibodies were first tested on Western blots using microsomal membrane fractions of COS-1 cells, either wild type or transiently transfected with an hClC-6 expression vector (Fig. 2A). Incubation with the polyclonal α-hClC-6 antibody resulted in a strong band of approximately 100 kDa (the theoretical molecular mass of unglycosylated hClC-6 is 96 kDa, but see further) in hClC-6-expressing COS-1 cells, but not in untransfected wild-type COS-1 cells (Fig. 2A).
A similar result was obtained in transfected HeLa cells (data not shown). The specificity of the α-hClC-6 antibody was confirmed by incubating Western blots with pre-immune serum or with α-hClC-6 antibody pre-adsorbed to the epitope peptide. The pre-immune serum failed to visualize the 100 kDa band in hClC-6-expressing COS-1 cells, whereas pre-adsorption caused a nearly complete disappearance of the 100 kDa band (Fig. 2A). Subsequently the α-hClC-6 antibody was tested in immunofluorescence experiments (Fig. 2B). To this end, COS-1 cells transiently transfected with a bicistronic GFP/hClC-6 expression vector [19] were incubated with pre-immune serum (Fig. 2Ba–b) or the α-hClC-6 antibody (Fig. 2Bc–d). Specific staining was only observed with the α-hClC-6 antibody and was exclusively associated with GFP-expressing cells. Next we tested whether the α-hClC-6 antibody cross-reacted with hClC-7, as ClC-6 is most closely related to ClC-7 [14]. To do so, COS-1 cells were transiently transfected with a GFP-hClC-7 expression vector that encodes human ClC-7 with GFP fused at the NH₂-terminus. Although ClC-7 expression levels were high, as shown by Western blotting with an anti-GFP antibody, no cross-reactivity was found for the α-hClC-6 antibody (Fig. 2C). We therefore conclude that the polyclonal α-hClC-6 antibody specifically recognizes hClC-6 when transiently expressed in COS-1 cells, both in Western blot and in indirect immunofluorescence experiments.

hClC-6 is N-glycosylated at multiple positions
Incubation of transfected COS-1 cells with tunicamycin (0.1 µg/ml) significantly increased the mobility of hClC-6 (65 kDa as compared to 100 kDa), demonstrating that it is N-glycosylated (Fig. 3A). At a lower concentration (0.05 µg/ml) tunicamycin induced the appearance of several intermediate bands between 100 and 65 kDa, indicating multiple glycosylation of hClC-6 (see below). A similar reduction in molecular mass was observed when hClC-6 was treated with PNGaseF (Fig. 3Aa).
In contrast, EndoH did not affect the electrophoretic mobility of hClC-6 (Fig. 3Ab). The tunicamycin and PNGaseF sensitivity, in combination with the EndoH resistance, indicates that hClC-6 carries complex N-glycans that have been processed and modified in the Golgi apparatus. Furthermore, there is a discrepancy between the apparent molecular mass on SDS-PAGE (100 kDa for glycosylated hClC-6 and 65 kDa for non-glycosylated hClC-6) and the predicted molecular mass (97 kDa for the non-glycosylated protein). Proteolytic cleavage of hClC-6 was excluded, since the same band was detected by the α-hClC-6 antibody, which recognizes an epitope in the COOH-terminal tail, and by an anti-Myc antibody directed at a Myc epitope tag inserted at the hClC-6 NH₂-terminus (data not shown). The faster migration on SDS-PAGE most likely reflects anomalous migration, as has also been reported for ClC-5 [24]. We then proceeded to identify the N-glycosylated asparagine residues in hClC-6. Sequence analysis of hClC-6 revealed 7 potential N-glycosylation motifs (N-X-[S,T], with X any amino acid except proline). Modeling of hClC-6 on the crystal structure of a prokaryotic CLC indicated that four asparagines, i.e. N137, N410, N422 and N432, are located in an exoplasmic loop or at the exoplasmic end of a membrane helix and are therefore at the correct topological position for N-glycosylation: N137 is located at the exoplasmic end of helix C, and N410, N422 and N432 are located in an exoplasmic region between helices K and M (Fig. 3B). To find out which asparagines are N-glycosylated, these residues were mutated to alanine, either individually or in groups. The quadruple mutant (AAAA-hClC-6: N137A/N410A/N422A/N432A) and the triple mutant (AAA-hClC-6: N410A/N422A/N432A) migrated on SDS-PAGE with the same mobility as non-glycosylated hClC-6 (tunicamycin treatment; Fig. 3Ca).
Since there was no difference between AAA-hClC-6 and AAAA-hClC-6, it appears that N137 is not glycosylated and that N-glycosylation is limited to the asparagine residues in the region between helices K and M. This was tested by introducing single and double mutations for N410, N422 and N432. All double mutants (NAA-hClC-6: N422A/N432A; ANA-hClC-6: N410A/N432A; AAN-hClC-6: N410A/N422A) were glycosylated, as indicated by their higher apparent molecular mass compared with non-glycosylated hClC-6 and by their PNGaseF sensitivity (Fig. 3Cb and 3Cd). Importantly, wild-type hClC-6 migrated more slowly than the double mutants, which is consistent with hClC-6 carrying more than one glycan moiety. This was confirmed by the analysis of the single mutants (NNA-hClC-6: N432A; NAN-hClC-6: N422A; ANN-hClC-6: N410A; Fig. 3Cc and 3Cd), which contain two potential N-glycosylation sites. These were all PNGaseF-sensitive and migrated more slowly than the (monoglycosylated) double mutants, which is compatible with the addition of a second glycan. However, wild-type hClC-6 and the single mutants migrated at the same position, indicating that in wild-type hClC-6 only 2 of the 3 potential glycosylation sites are effectively used.

Endogenous hClC-6 colocalizes with LAMP-1 in a human neuronal SH-SY5Y cell line
A crucial question with respect to the intracellular CLCs concerns their specific subcellular location. We therefore examined by confocal laser scanning microscopy (CLSM) the subcellular distribution of endogenous human ClC-6 in the differentiated neuronal cell line SH-SY5Y (Fig. 4). ClC-6 displays a punctate pattern that is present both around the nucleus in the cell body and in the neuronal outgrowths. There is no substantial overlap with the early endosomal marker EEA-1 (Fig. 4A) nor with the transferrin receptor (TfR; Fig. 4B), a marker for recycling endosomes [25].
However, endogenous ClC-6 strongly colocalized with LAMP-1 (a marker for late endosomes/lysosomes), both perinuclearly and in the cell periphery (Fig. 4C). This is in agreement with Poët et al. [18], who recently reported that in mouse brain sections ClC-6 mainly colocalizes with LAMP-1 and concluded that ClC-6 resides in a late endosomal compartment.

Immunolocalization of hClC-6 in transiently transfected COS-1 and HeLa cells
Complementary experiments with respect to the subcellular localization of hClC-6 were performed in transiently transfected COS-1 cells (Fig. 5). Typically hClC-6 displayed a perinuclear staining pattern, often residing in relatively large (a few micrometers in diameter) vesicular structures (Fig. 5). The distribution pattern of hClC-6 clearly did not overlap with the endoplasmic reticulum marker BiP (Fig. 5A) nor with the Golgi markers Golgin-97 (Fig. 5B) or GM130 (not shown). A partial overlap with Golgin-97 was observed in a few transfected cells. In contrast to endogenous ClC-6 in SH-SY5Y cells, transiently expressed hClC-6 did not overlap with LAMP-1 in COS-1 cells (Fig. 5C), but it showed substantial colocalization with markers for early endosomes (EEA-1) or the recycling pathway (TfR). The endosomal localization was further dissected via coexpression of hClC-6 with Rab4, Rab5, Rab7 and Rab11, which are established marker proteins for early endosomes (Rab5) [26,27], late endosomes/lysosomes (Rab7) [28] and recycling endosomes (Rab4 and Rab11) [29–32]. Because of the better morphology, these experiments were conducted in HeLa cells coexpressing hClC-6 and a GFP-Rab fusion protein, using CLSM (Fig. 6), but similar data were acquired in COS-1 cells (data not shown). From panels C and D in Fig. 6 it is clear that little or no overlap was found with Rab7 (Fig. 6C) or Rab11 (Fig. 6D). For Rab7 this was not surprising, given the lack of colocalization with LAMP-1 (see above). However, there was partial colocalization with Rab5 (Fig.
6B) and an even better overlap with Rab4 (Fig. 6A). Thus, during transient overexpression in COS or HeLa cells, hClC-6 ends up in an endosomal compartment that is positive for early endosomal markers (EEA-1 and Rab5) and a subset of recycling endosomal markers (TfR and Rab4; see Discussion for a further description of this compartment). In this respect it should be pointed out that not only Rab11-positive but also Rab4-positive endosomes are found in the perinuclear region [21], which would account for the perinuclear signal of overexpressed hClC-6. We also investigated whether N-glycosylation is required for the endosomal location of hClC-6 in transfected HeLa cells. CLSM of the glycosylation-deficient AAAA-hClC-6 (data not shown) and AAA-hClC-6 showed substantial colocalization with Rab4 (Fig. 6E), indicating that glycosylation is not essential for delivery to the endosomal compartment. A similar overlapping pattern with Rab4 was observed for the single and double N-glycosylation mutants (data not shown).

Immunolocalization of overexpressed hClC-6 in the neuronal SH-SY5Y cell line
Since the expression pattern of overexpressed hClC-6 in COS-1 and HeLa cells differed from the endogenous hClC-6 distribution in SH-SY5Y neuronal cells, we investigated whether this discrepancy reflects cell type-specific differences in protein sorting in the endosomal system or, alternatively, whether it is the result of overexpression. Therefore, we transiently transfected differentiated SH-SY5Y cells with an hClC-6 expression vector and determined the distribution of overexpressed hClC-6 by CLSM. The overexpression levels in transfected cells were very high, so that transfected cells could easily be distinguished from non-transfected cells. As in COS-1 and HeLa cells, the overexpressed hClC-6 displayed a perinuclear staining pattern (Fig. 7). This pattern partially overlapped with the early endosomal marker EEA-1 (Fig.
7A) and the recycling endosomal pathway marker TfR (Fig. 7B), but not with the late endosomal marker LAMP-1 (Fig. 7C). Furthermore, after cotransfection of hClC-6 with a Rab4-GFP expression vector we observed a high degree of colocalization, analogous to overexpression in COS-1 and HeLa cells (data not shown). Thus, although SH-SY5Y cells can sort endogenous ClC-6 to a LAMP-1-positive compartment, hClC-6 does not reach this compartment upon overexpression.

hClC-6 resides in detergent-resistant membrane fractions in transiently transfected COS-1 cells
In a final series of experiments we investigated whether hClC-6 associates with detergent-resistant membrane (DRM) fractions. Transiently transfected COS-1 cells were lysed with Triton X-100 at 4°C and DRMs were separated by flotation on a sucrose gradient. Western blot analysis of the gradient fractions (equal volumes) showed that overexpressed hClC-6 codistributed with caveolin-1 in the upper part of the sucrose gradient, corresponding to the DRM fractions (Fig. 8A). In contrast, the transferrin receptor, which does not associate with detergent-resistant membranes [33], did not float upwards in the sucrose gradient (Fig. 8A). It has been shown that the DRM association of CD4, an intrinsic membrane protein, critically depends on a cytosolic, membrane-proximal stretch of positively charged amino acids (RHRRR) [34]. Intriguingly, hClC-6 contains a similar positively charged sequence, KKGRR (amino acids 71–75), immediately upstream and at the cytosolic side of the first transmembrane helix B. Indeed, mutation of KKGRR into AAGAA disrupted the DRM association (Fig. 8Ab). The large majority of AAGAA-hClC-6 failed to float upwards and remained at the bottom of the sucrose gradient together with the transferrin receptor. Strikingly, the KKGRR/AAGAA mutation also affected the segregation of ClC-6 and ClC-7. Although Suzuki et al. reported a significant colocalization between ClC-6 and ClC-7 cotransfected in HEK293 cells [35], we observed a clear separation of both CLC proteins when cotransfected in COS-1 cells. Indeed, wild-type hClC-6 and ClC-7 resided either in different vesicles or in different microdomains of the same vesicle upon cotransfection (Fig. 8Ba–c). However, the distribution of AAGAA-hClC-6 and ClC-7 overlapped substantially, as shown by the yellow pattern in the merged panel (Fig. 8Bd–f), suggesting (partial) colocalization of AAGAA-ClC-6 and ClC-7. DRM analysis showed that ClC-7 floated upwards in a sucrose gradient, but contrary to ClC-6 it did not reach the caveolin-1-positive fractions at the top of the sucrose gradient (data not shown). Finally, N-glycosylation was not important for the DRM association of hClC-6, since the triple mutant (AAA-hClC-6: N410A/N422A/N432A) still floated upwards in the sucrose gradient (data not shown).

Figure 7. Immunolocalization of overexpressed hClC-6 in SH-SY5Y cells. Double immunofluorescence confocal images of SH-SY5Y cells transiently transfected with the pcDNA3.1(−)/hClC-6a expression vector. Overexpression levels were very high, so that transfected cells could easily be distinguished from non-transfected cells. Overexpressed hClC-6 (left column) was detected using the polyclonal α-hClC-6 antibody and visualized with anti-rabbit IgG antibodies conjugated to Alexa Fluor 488 (green signal). Markers for different endosomal compartments (middle column) were (A) mouse anti-EEA-1 (an early endosome marker); (B) mouse anti-transferrin receptor (TfR, an early/recycling endosome marker); (C) mouse anti-LAMP-1 (a late endosomal/lysosomal marker). Primary antibodies were visualized using anti-mouse IgG antibodies conjugated to Alexa Fluor 594 (red signal). In the merged pictures (right column), colocalization is indicated by a yellow signal. Scale bars, 10 µm. doi:10.1371/journal.pone.0000474.g007
DISCUSSION
We have developed a polyclonal antibody against human ClC-6, which has enabled us to study the N-glycosylation profile and the subcellular distribution of hClC-6, both endogenously in the neuronal SH-SY5Y cell line and in transiently transfected COS-1 or HeLa cells. Our data are consistent with hClC-6 being a multiply N-glycosylated membrane protein that is targeted to late endosomes in neuronal cells. In transfected COS-1 cells hClC-6 resides in a lipid raft microenvironment, and the association with detergent-resistant membrane fractions is critically dependent on the positively charged juxtamembrane KKGRR sequence (amino acids 71–75). The glycosidase profile of hClC-6 (PNGaseF-sensitive but EndoH-resistant) indicates that hClC-6 acquires fully processed, mature N-glycans and therefore that hClC-6 traverses the Golgi apparatus during its biosynthesis. This most likely explains the occasional overlap of ClC-6 with Golgi markers such as Golgin-97 (our observation) or GM130 [35]. Three asparagine residues (N410, N422 and N432) were identified as N-glycosylation sites, but only two of the three sites are effectively glycosylated in the wild-type protein. Whether this is due to steric hindrance, e.g. that the short loop between N410 and N432 can sterically accommodate only two glycan moieties, or to other limiting factors such as the sequence context or associated proteins, is currently not known. An aspartate at the X position in the glycosylation sequon N-X-S/T and a serine at the +2 position reduce the efficiency of N-glycosylation [36]. Thus, the specific sequence context of the hClC-6 glycosylation sites (N410-D-S; N422-S-S; N432-D-T) could contribute to their less efficient usage. The amino acid sequence of the predicted exoplasmic region containing N410, N422 and N432 is poorly conserved among the mammalian CLCs. Furthermore, except for ClC-7, all mammalian CLCs contain one or more consensus sites for N-glycosylation in this region (Fig. 3B).
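The sequon rule used in this work (N-X-[S,T], X any residue except proline) and the efficiency-lowering contexts just discussed (Asp at X, Ser at +2) are straightforward to scan for programmatically. The sketch below uses a hypothetical peptide that merely mimics the reported hClC-6 contexts (N-D-S, N-S-S, N-D-T); it is not the real K-M loop sequence:

```python
import re

# N-glycosylation sequon N-X-[S/T] with X != P; an overlapping lookahead
# is used so that adjacent/overlapping motifs are all reported.
SEQUON = re.compile(r"(?=(N[^P][ST]))")

def find_sequons(seq: str):
    """Return (1-based N position, motif, efficiency flags) for each sequon."""
    hits = []
    for m in SEQUON.finditer(seq):
        motif = m.group(1)
        flags = []
        if motif[1] == "D":          # Asp at the X position
            flags.append("Asp at X")
        if motif[2] == "S":          # Ser (rather than Thr) at +2
            flags.append("Ser at +2")
        hits.append((m.start() + 1, motif, flags))
    return hits

# Hypothetical fragment carrying the three hClC-6-like contexts
fragment = "AVLNDSGKQWERTYNSSAILKQFWNDTR"
for pos, motif, flags in find_sequons(fragment):
    print(pos, motif, flags or ["canonical"])
```

Running this on the toy fragment reports three sequons, each flagged with the context features (Asp at X and/or Ser at +2) that reference [36] associates with reduced glycosylation efficiency.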
N-glycosylation has been experimentally confirmed for human ClC-1 (N430) [37], rat ClC-K1/K2 (N364 and/or N373) [38] and Xenopus laevis ClC-5 (N470) [39]. Furthermore, it is very likely that one or more of the predicted sites in human and mouse ClC-3 and human ClC-5 are effectively occupied by an N-glycan, because of the PNGaseF sensitivity of these proteins [40–43]. Thus, based on the glycosylation pattern in the K-M region, non-glycosylated (e.g. ClC-7), monoglycosylated (e.g. ClC-1) and multiply glycosylated (e.g. ClC-6) isoforms can be distinguished among mammalian CLCs. Because of these sequence and glycosylation variations, the exoplasmic K-M region emerges as a highly divergent stretch in mammalian CLCs and is therefore a prime candidate for isoform-specific interactions with extracellular or luminal proteins affecting their function, biosynthesis, degradation and/or sorting. However, few experimental data are available with respect to the functional significance of glycans in CLC proteins. For the plasma membrane channel ClC-1, it has been reported that glycosylation-deficient channels are still functional, with no effect on the electrophysiological characteristics [38]. N-glycosylation enhances plasma membrane expression of Xenopus ClC-5, but it is not required for its endosomal localization [39]. Similarly, non-glycosylated ClC-6 was still able to reach its endosomal location in COS-1 and HeLa cells, but we cannot exclude a role of N-glycosylation in late endosomal sorting. Moreover, the lack of a functional read-out system for ClC-6 precludes drawing definitive conclusions about the functional importance of N-glycosylation in ClC-6. At first sight there seems to be a contradiction between the late endosomal location of endogenous ClC-6 in SH-SY5Y cells and the colocalization with early/recycling endosomal markers when ClC-6 is overexpressed in COS-1, HeLa and SH-SY5Y cells.
However, one can draw an interesting comparison with LIMP II, a bona fide late endosomal/lysosomal membrane protein [44]. When overexpressed in COS cells, LIMP II induces the appearance of large vesicular structures which are positive for EEA-1 and transferrin receptor [45]. Furthermore, LIMP II overexpression impairs membrane traffic out of the early endosomal compartment. Kuronita et al. [45] therefore concluded that LIMP II is sorted to late endosomes/lysosomes via the early endosomal compartment and contributes to the biogenesis of late endosomes/lysosomes by controlling a crucial step in vesicular transport between early and late endosomes. The parallel behaviour of ClC-6 (appearance of enlarged vesicles upon overexpression; endogenous ClC-6 in late endosomes versus overexpressed ClC-6 in early endosomes) prompts several hypotheses with respect to the cell biology of ClC-6. First, it is consistent with the routing of ClC-6 to late endosomes via an early endosomal compartment. In transfected cells, exit of ClC-6 from this early compartment seems to be the rate-limiting step, whereas endogenous ClC-6 leaves the early endosomal compartment and is sorted to late endosomes. Second, the enlarged vesicles upon overexpression in COS-1 cells may point to a role of ClC-6 in vesicular transport out of the early compartment. If so, ClC-6 could contribute to late endosomal/lysosomal biogenesis, which would explain the lysosomal storage disease phenotype in ClC-6 knock-out mice [18]. It is tempting to interpret the sorting and function of ClC-6 in the context of the Tubular Endosomal Network (TEN) model recently proposed by Bonifacino and Rojas [46]. In this model, the endosomal compartment is divided into a vacuolar part and a tubular extension, the TEN. The vacuolar compartment corresponds to the early endosomal compartment and is the entry site for cargo-containing vesicles derived from the plasma membrane or the trans-Golgi network (TGN).
Proteins that are not destined for lysosomal degradation are separated from the degradative cargo and transported from the early endosomal compartment to the TEN, where they are further sorted into specific microdomains from which cargo-loaded vesicles bud off. The microdomains in the TEN form exit sites for recycling endosomes that return to the plasma membrane (e.g. TfR), for retrograde transport vesicles going back to the TGN (e.g. mannose-6-phosphate receptor) and for the lysosomal bypass route via which vesicles containing late endosomal/lysosomal membrane proteins such as LAMPs and LIMPs are sorted to late endosomes [47]. Based on this model, we propose that ClC-6, once it has reached the early endosome, is sorted to the TEN, which it leaves via the lysosomal exit site to finally arrive in late endosomes. In transfected cells (COS-1, HeLa, SH-SY5Y) ClC-6 seems to be correctly sorted to the TEN, as can be deduced from its overlap with early and recycling endosomal markers, but it apparently cannot enter the lysosomal bypass route and therefore does not end up in late endosomes. How would overexpression interfere with the proper sorting of ClC-6? One possibility is that late endosomal delivery of ClC-6 requires an additional factor (a β-subunit as for ClC-Ka/Kb, an adaptor/coat protein required for vesicular transport, …) that is expressed in limiting quantities, sufficient for correct sorting of endogenous ClC-6 but insufficient for abundantly overexpressed ClC-6. Alternatively, ClC-6 could be mechanistically involved in the late endosomal sorting step (see below), so that overexpression of ClC-6 would block this sorting step. Given that ClC-6 resides in the lysosomal bypass route, what function would it fulfill? Initially, intracellular CLCs were thought of as Cl⁻ channels facilitating acidification of the organellar lumen by providing an electrogenic shunt for the lumen-positive membrane potential generated by the V-type H⁺-pump [12].
However, loss of ClC-6 or ClC-7 does not affect the lysosomal pH in ClC-6⁻/⁻ and ClC-7⁻/⁻ mice, respectively [48], indicating that ClC-6 and ClC-7 are not essential for lysosomal acidification. Furthermore, intracellular CLCs most likely function as Cl⁻/H⁺ antiporters [5,6], and they can therefore acidify (and increase the luminal Cl⁻ concentration) or alkalinize (and decrease the luminal Cl⁻ concentration) endosomes, depending on the electrical, pH and Cl⁻ gradients across the endosomal membrane. This raises the possibility that intracellular CLCs exert an effect on endosomal traffic and/or endosome/lysosome biogenesis by changing the endosomal pH and/or the luminal Cl⁻ concentration. An alternative, not necessarily mutually exclusive, mechanism is that endosomal CLCs function as pH or Cl⁻ sensors that couple changes in luminal pH or Cl⁻ to conformational changes in their cytosolic domains, which could trigger the recruitment of cytosolic factors to the endosomal membrane to control specific steps in vesicular transport. Interestingly, such a pH-sensing function has recently been shown for the V-type H⁺-pump in early endosomes [49]. Also, it has very recently been shown that gating of ClC-0 (i.e. the binding of Cl⁻ in the channel pore) causes a conformational change of its carboxy-terminus [50]. Finally, our data show that in transiently transfected COS-1 cells hClC-6 associates with detergent-resistant membrane domains, suggesting that ClC-6 segregates into lipid rafts. DRM association may be a common theme for CLC proteins, since ClC-2 also concentrates in cholesterol-enriched lipid domains, which affects the gating properties of the channel [51]. Moreover, the DRM association critically depends on the positively charged amino acid sequence KKGRR, which according to the CLC topology model is located immediately N-terminal of helix B, the first transmembrane segment.
The positive charge and the cytosolic, membrane-proximal location of this sequence are reminiscent of the RHRRR sequence that functions as a raft localization marker for the CD4 receptor. Surprisingly, mutating the KKGRR sequence also affected the colocalization of ClC-6 with ClC-7. In cotransfection experiments, wild-type ClC-6 and ClC-7 can be spatially resolved on CSLM, whereas AAGAA-ClC-6 and ClC-7 colocalize to a large extent. Whether the differential sorting of wild-type ClC-6 and ClC-7 is a lipid-based mechanism (ClC-6 and ClC-7 seem to associate with different lipid domains), or, alternatively, whether protein interactions involving the KKGRR sequence are the driving component, is not clear. Irrespective of the mechanism, our data identify the KKGRR sequence as an important cis-acting element for the correct sorting and delivery of ClC-6. However, additional experiments are needed to verify the DRM association of endogenous ClC-6 and the specific effects exerted by the KKGRR sequence on the endogenous sorting process. To conclude, we have shown that human ClC-6 is an N-glycosylated protein, and the N-glycosylation sites have been identified. We have also found a positively charged motif in the N-terminus of the protein that affects both DRM association and segregation from ClC-7 in transfected COS-1 and HeLa cells. Furthermore, our data suggest that upon overexpression in COS-1 and HeLa cells, ClC-6 does not reach the late endosomal compartment, but is retained in an early endosomal compartment that may correspond to the tubular endosomal network. We propose that endogenous ClC-6 leaves the tubular endosomal network via the lysosomal exit site to finally reach the late endosomes. This model puts ClC-6 at the heart of the late endosomal/lysosomal biogenesis route, which could explain the lysosomal storage disease phenotype in ClC-6 knock-out mice [18].
2014-10-01T00:00:00.000Z
2007-05-30T00:00:00.000
{ "year": 2007, "sha1": "16748ce893b56b793d555904761b17ed559a6927", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0000474&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "16748ce893b56b793d555904761b17ed559a6927", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
62899523
pes2o/s2orc
v3-fos-license
Enhanced Efficiency of Dye-Sensitized Solar Cells by Trace Amount Ca-Doping in TiO 2 Photoelectrodes Trace amount Ca-doped TiO 2 films were synthesized by the hydrothermal method and applied as photoanodes of dye-sensitized solar cells (DSSCs). To prepare Ca-doped TiO 2 film electrodes, several milliliters of Ca(NO 3 ) 2 solution was added to the TiO 2 solution during the hydrolysis process. The improvements of the DSSCs were confirmed by photocurrent density-voltage (J-V) characteristics and electrochemical impedance spectroscopy (EIS) measurements. Owing to the doping effect of Ca, the Ca-doped TiO 2 thin film shows a power conversion efficiency of 7.45% for the 50 ppm Ca-doped TiO 2 electrode, which is higher than that of the undoped TiO 2 film (6.78%), and the short-circuit photocurrent density (J sc ) increases from 13.68 to 15.42 mA⋅cm −2 . The energy conversion efficiency and short-circuit current density (J sc ) of the DSSCs were increased due to the faster electron transport in the Ca-doped TiO 2 film. When Ca was incorporated into TiO 2 films, the electrons transported faster and the charge collection efficiency η cc was higher than in the undoped TiO 2 films. Introduction Dye-sensitized solar cells (DSSCs) based on mesoporous nanocrystalline TiO 2 film have achieved photoelectric conversion efficiencies (η) of up to 13%. In order to develop high-performance DSSCs and commercialize them successfully, many nanocrystalline semiconductors such as TiO 2 [1], ZnO [2], and SnO 2 [3] have been used as photoanode materials. Among them, TiO 2 has been proven to be the best semiconductor electrode material due to its high chemical stability [4], excellent charge transport capability, and ideal position of the conduction band edge. It is known to be one of the main components of DSSCs and plays a key role in determining the performance of DSSCs.
In recent years, doping has been considered a promising way to improve the properties of the TiO 2 photoanode. TiO 2 films doped with metals and nonmetals have been extensively researched, such as Mg-doping [5], La-doping [6], Nb-doping [7], Ta-doping [8], and N-doping [9], which may increase the photoelectric conversion efficiency. In all the applications mentioned above, the TiO 2 films were doped at very high levels (from 0.1% to 10%). However, few studies have been reported on TiO 2 doped at the parts per million (ppm) level applied as the photoanode of DSSCs. Xie et al. [10] found that a trace amount of Cr-doping in TiO 2 films could improve the efficiency of DSSCs. The improvement was ascribed to Cr additions offering more electrons to TiO 2 and improving the electron transport properties of the DSSCs. Doping semiconductors at the parts per million (ppm) level is the most common approach for raising the Fermi energy level of semiconductors and thereby increasing their conductivity (as in Si, for instance). In this paper, a series of Ca-doped TiO 2 films were synthesized by the hydrothermal method and successfully applied as the photoanode materials in DSSCs, and the short-circuit current densities (J sc ) and photoelectric conversion efficiencies of the DSSCs were found to be increased by trace amount doping in TiO 2 . The change in performance of DSSCs employing Ca-doped TiO 2 films with different concentrations was obvious. We can conclude that the intrinsic increases in the photocurrent and photoelectric conversion efficiency are primarily related to faster electron transport in the Ca-doped TiO 2 film. The effects caused by Ca doping on electron collection, transfer, and recombination in the DSSCs are discussed below.
Experimental Section 2.1. Preparation of Undoped TiO 2 and Ca-Doped TiO 2 Pastes. Titanium isopropoxide (TTIP) was used as the Ti precursor and calcium nitrate (Ca(NO 3 ) 2 ⋅4H 2 O) as the Ca source. Pure TiO 2 and Ca-doped TiO 2 pastes were synthesized by a hydrothermal treatment method. The hydrothermal solutions were synthesized as follows [11]. (i) 2.1 g of acetic acid was added dropwise into 10 mL of TTIP. Subsequently, the mixture was added to 50 mL of deionized water mixed with different amounts of Ca(NO 3 ) 2 ⋅4H 2 O (Ca/TiO 2 molar ratio: undoped, 20 ppm, 50 ppm, 70 ppm, and 100 ppm) with rapid stirring for 1 h. Then, 0.68 mL of nitric acid was added to the obtained mixture solution. After continuous stirring at 80 ∘ C for 2∼3 h, a transparent mixture solution was obtained. (ii) The transparent mixture solution was filtered to remove insoluble impurities and transferred into an autoclave at 220 ∘ C for 12 h. After cooling to room temperature, 0.4 mL of nitric acid was added to the colloid, which was dispersed using an ultrasonicator. Following the procedure described, a total of five pastes with different concentrations of Ca were prepared. 2.2. Fabrication of Photoelectrodes and DSSCs. The FTO glass was used as the substrate after careful cleaning. The pure TiO 2 and Ca-doped TiO 2 pastes were coated onto FTO substrates using a doctor-blade method. Next, the photoelectrodes were sintered at 500 ∘ C for 30 min to obtain the mesoporous TiO 2 film photoelectrodes. The thicknesses of these films were around 8 μm, measured with a TalyForm S4C-3D profilometer. They were dipped into a 0.5 mM N719 dye solution for 24 h at room temperature, and the excess dye was washed away with ethanol, followed by drying at 60 ∘ C.
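As a rough plausibility check on the ppm-level molar ratios above, the sketch below estimates the very small mass of Ca(NO 3 ) 2 ⋅4H 2 O that corresponds to a given Ca/Ti molar ratio for 10 mL of TTIP. The TTIP density and the molar masses are assumed textbook values, not figures taken from the paper.

```python
# Hypothetical back-of-the-envelope calculation (not from the paper):
# assumed TTIP density ~0.96 g/mL, MW(TTIP) = 284.22 g/mol,
# MW(Ca(NO3)2.4H2O) = 236.15 g/mol.

MW_TTIP = 284.22        # g/mol, titanium isopropoxide
MW_CA_NITRATE = 236.15  # g/mol, Ca(NO3)2.4H2O
TTIP_DENSITY = 0.96     # g/mL, assumed

def dopant_mass_mg(ttip_ml: float, ppm: float) -> float:
    """Mass of Ca(NO3)2.4H2O (mg) for a Ca/Ti molar ratio given in ppm."""
    mol_ti = ttip_ml * TTIP_DENSITY / MW_TTIP
    mol_ca = mol_ti * ppm * 1e-6
    return mol_ca * MW_CA_NITRATE * 1000.0

for ppm in (20, 50, 70, 100):
    print(f"{ppm:>3} ppm -> {dopant_mass_mg(10.0, ppm):.3f} mg")
```

Sub-milligram quantities result, which is why such dopant levels are usually prepared by adding milliliters of a dilute Ca(NO 3 ) 2 stock solution rather than weighing the solid directly.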
The platinum-coated FTO was used as the counter electrode. A drop of electrolyte solution was injected into the photoelectrode and then the counter electrode was clamped onto the photoelectrode; the electrolyte solution consisted of 0.05 mM LiI, 0.03 M I 2 , 0.1 M PMII (1-methyl-3-propyl imidazolium iodide), 0.1 M GNCS, and 0.5 M TBP in a mixed solvent of acetonitrile and PC (volume ratio: 1/1). A sandwich-type DSSC configuration was thus fabricated. 2.3. Measurements. Photovoltaic measurements were performed with a CHI660C electrochemical workstation (CH Instruments, Shanghai, China) at room temperature. The irradiated area of each cell was kept at 0.25 cm 2 by using a light-tight metal mask. The electrochemical impedance spectroscopy (EIS) technique [12] was employed to investigate the electron transport in the DSSCs. Results and Discussion 3.1. J-V Characteristics. The photocurrent density-voltage (J-V) characteristics of the DSSCs based on the pure TiO 2 film photoelectrodes and Ca-doped TiO 2 film photoelectrodes are shown in Figure 1. The average performance characteristics obtained from multiple cells with the same Ca content are summarized in Table 1, which shows the correlation between the photovoltaic performance parameters and the Ca content in the TiO 2 . The best photovoltaic performance was obtained from 50 ppm Ca-doped TiO 2 . Obviously, as can be seen from the graph, the energy conversion efficiency (η) went up with the increase of Ca content, which was attributed to the enhancement of the short-circuit current density (J sc ). The J sc of DSSCs based on 50 ppm Ca-doped TiO 2 was 15.42 mA⋅cm −2 , which was 12.7% higher than that of the undoped cells. An energy conversion efficiency of 7.45% was achieved for cells based on the 50 ppm Ca-doped TiO 2 electrode, which is 9.88% higher than that of the undoped cells.
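The relative improvements quoted above follow from a one-line calculation; a minimal sketch:

```python
# Check of the relative gains quoted in the text: Jsc rises from
# 13.68 to 15.42 mA/cm^2, and efficiency from 6.78% to 7.45%.

def pct_gain(doped: float, undoped: float) -> float:
    """Relative improvement of the doped cell over the undoped one, in %."""
    return (doped - undoped) / undoped * 100.0

jsc_gain = pct_gain(15.42, 13.68)   # ~12.7 %
eff_gain = pct_gain(7.45, 6.78)     # ~9.9 %
print(f"Jsc gain: {jsc_gain:.1f}%  efficiency gain: {eff_gain:.2f}%")
```

Both figures reproduce the 12.7% and 9.88% improvements stated in the text.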
The effect on the open-circuit voltage (V oc ) and fill factor of such a small amount of Ca-doping was negligible. The energy conversion efficiency (η) increases gradually with the increase of Ca content and reaches an optimum value at a Ca quantity of 50 ppm. However, higher Ca amounts (>50 ppm) cause electron scattering and trap electrons, which increases the dark current. As a result, the energy conversion efficiency of the DSSCs begins to fall. 3.2. Electrochemical Impedance Spectroscopy Analysis of DSSCs. To investigate the difference in the charge transport properties between pure DSSCs and Ca-doped DSSCs, we performed electrochemical impedance spectroscopy (EIS) analysis. EIS has been widely employed to investigate the electron transport in DSSCs, for example, measuring the respective time constants for charge recombination and for the combined processes of charge collection. From the measured EIS spectra, we can obtain reliable values of the parameters. Figure 2 shows the EIS spectra of pure DSSCs and Ca-doped DSSCs; the impedance spectra of the DSSCs based on the pure TiO 2 and Ca-doped TiO 2 were measured from 0.1 to 10 5 Hz under illumination at an applied bias of V oc . The spectra are composed of two semicircles: the small semicircle in the high frequency range of 10 3 to 10 5 Hz is fitted to a charge transfer resistance (R ct ) at the interfaces of the redox electrolyte/Pt counter electrode and FTO/TiO 2 , and the large semicircle in the frequency range of 1 to 10 3 Hz is fitted to a transport resistance (R t ), which is related to the charge transport resistance of the accumulation/transport [13] of the injected electrons within the TiO 2 film and the charge transport resistance at the TiO 2 /redox electrolyte interfaces. This large semicircle is the major concern here. As shown in Figure 2, the large semicircle gets smaller with the increase of Ca in the TiO 2 films. This change reflects the acceleration of the electron transport process in the TiO 2 photoanode. The modeled internal
resistances of the DSSCs based on the five different electrodes are exhibited in Table 2, in which τ d is the electron transport time, τ n is the electron lifetime, R t is the charge transport resistance, and η cc is the charge collection efficiency of the DSSCs. The apparent value of η cc can be estimated on the basis of the τ d and τ n data from the following [14]: η cc = 1 − τ d /τ n . The electron transport time constants for the Ca-doped TiO 2 films decrease, which indicates that the electrons transport faster in the Ca-doped TiO 2 films than in the undoped TiO 2 films. This enhances the charge collection efficiency η cc and leads to a higher current density (J sc ) of the DSSCs. The electron lifetime constants for the Ca-doped TiO 2 films also slightly decrease. The electron lifetime in DSSCs is determined by the characteristic frequency peak in the low frequency range (f max ) according to the following equation [15]: τ n = 1/(2πf max ). The shorter electron lifetime indicates a faster recombination rate in the Ca-doped TiO 2 films, which could be attributed to impurities from the Ca-doping acting as charge trapping sites for electron-hole recombination. The electron lifetime decreases only slightly [16], however, and the net result favors the electron transport. The improvement in electron transport ability helps to increase the short-circuit current density (J sc ), resulting in higher conversion efficiency.
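The two standard EIS relations referred to above (refs [14] and [15]) can be sketched as follows; the numerical inputs are illustrative, not values from Table 2:

```python
import math

# Standard EIS relations for DSSCs: charge collection efficiency from
# the transport time (tau_d) and lifetime (tau_n), and lifetime from
# the Bode-plot peak frequency f_max. Inputs below are illustrative.

def collection_efficiency(tau_d_ms: float, tau_n_ms: float) -> float:
    """eta_cc = 1 - tau_d / tau_n"""
    return 1.0 - tau_d_ms / tau_n_ms

def electron_lifetime_ms(f_max_hz: float) -> float:
    """tau_n = 1 / (2 * pi * f_max), returned in milliseconds."""
    return 1000.0 / (2.0 * math.pi * f_max_hz)

tau_n = electron_lifetime_ms(10.0)  # a Bode peak at 10 Hz -> ~15.9 ms
print(f"tau_n = {tau_n:.1f} ms, eta_cc = {collection_efficiency(2.0, tau_n):.2f}")
```

Note how a smaller τ d at a nearly unchanged τ n pushes η cc toward 1, which is the mechanism the text invokes for the higher J sc of the doped cells.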
Conclusion In summary, Ca-doped TiO 2 nanoparticles were successfully applied as the photoanode material in DSSCs. Comparing the Ca-doped TiO 2 with undoped TiO 2 , faster electron transport and a shorter electron lifetime were observed for the Ca-doped DSSCs. Moreover, the higher electron transport rate of the Ca-doped TiO 2 photoanode can improve the charge collection efficiency and thus lead to a higher short-circuit photocurrent density of the DSSCs. The best photovoltaic performance was obtained from 50 ppm Ca-doping, with a conversion efficiency of 7.45%. This value was 9.88% higher than that of the undoped device. The short-circuit current density (J sc ) was increased due to the faster electron transport in the Ca-doped TiO 2 film. We can conclude that Ca-doped TiO 2 is a better photoanode material and a more promising alternative for highly efficient DSSCs than pure TiO 2 . Figure 2: EIS of DSSCs based on the undoped and Ca-doped TiO 2 photoanodes measured under illumination at an applied bias of V oc : (a) Nyquist plots and (b) Bode phase plots. Table 1: Performance of DSSCs based on undoped and Ca-doped TiO 2 photoanodes. Table 2: The mean electron lifetime (τ n ), the mean electron transit time (τ d ), and the charge collection efficiency (η cc ) of DSSCs based on undoped and Ca-doped TiO 2 photoanodes.
2018-12-23T05:50:17.958Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "1eccfe319474aa5cbc11bfce872885182b7ad84e", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jnm/2015/974161.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1eccfe319474aa5cbc11bfce872885182b7ad84e", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
28775172
pes2o/s2orc
v3-fos-license
Anteroposterior perception of the trunk position while seated without the feet touching the floor [Purpose] The purpose of this study was to investigate the trunk position perception in the anteroposterior direction in young participants sitting without their feet touching the floor to avoid the influence of the hamstrings tension and the feet pressure on the perception. [Subjects and Methods] Fourteen healthy volunteers were seated on a chair fitted with an original manual goniometer. There were 7 reference positions set at 5° increments, from −15° to 15°, and reproductions of each position were conducted 5 times. Trunk position perception was evaluated by the absolute error between the reproduced trunk angle and the reference position angle. [Results] The results revealed a significant effect of reference position on the absolute error. The absolute error at the −5° reference position was significantly larger than at the −15° and 15° positions, and the absolute error at the 0° position was significantly larger than at the −15°, 10°, and 15° positions. [Conclusion] These results suggest that the perception of extreme forward- and backward-leaning trunk positions while sitting without the feet touching the floor would be higher than in a neutral sitting position. The relationship between the stability of the posture and the perception may be involved in the sitting position. INTRODUCTION Sitting positions can be categorized into two types: the quiet sitting posture and the functional sitting posture. The functional sitting posture is the position used to execute activities of daily living, occupational activities, and other physical activities mainly using the upper extremities. The anticipatory postural control plays an important role in stabilizing the sitting posture while activity is performed using the upper extremities [1][2][3][4] . 
The assumed role of the anticipatory postural control is to counteract the expected mechanical effects of perturbation in a feedforward manner 5) . The existence and characteristics of the anticipatory postural control depend on mechanical factors such as the initial and final position of the body 2) . In the standing position, the anticipatory postural control aspect changes according to the body position in the anteroposterior direction 6) . The anticipatory postural control is also performed based on the perception of the body just before moving the body segments. Hence, the accuracy of the perception during various standing positions in the anteroposterior direction has been investigated 7) . The anticipatory postural control may also differ in accordance with the sitting position just before moving the upper extremities. It has been indicated that investigating the anticipatory postural control while sitting is important 1) . The accuracy of perception of the trunk position in the sitting position may therefore be an important factor influencing the anticipatory postural control. The trunk position just before moving the upper extremities is perceived through the reference frame of the moment 8) . In the targeted muscles, the activation pattern and contraction intensity are determined based on this reference system. Measurement of the accuracy of the perception at various trunk positions while seated is necessary to investigate the anticipatory postural control in accordance with various sitting positions. It has been shown that the perception of the trunk position in stroke patients is lower than that in normal subjects, based on the reproducibility of the trunk position in a sitting posture 9,10) . The reproducibility of the lumbar position in low back pain patients has also been investigated 11,12) .
In these reports, the reproducibility of the reference position at one or two positions was investigated, with the following reproduction method usually adopted: subjects reference a target position from the start position and return to the start position, after which they reproduce the target position from the start position. In this method, the magnitude of the range from the start position to the reference position affects the reproduction error, which is known as the range effect 13) . Therefore, adopting a method that does not set a strict start position and does not require the subject to return to the exact start position each time is more appropriate for evaluating the reproducibility of the reference position. The magnitude of error in the reproduced angle was shown to be affected by the trunk flexion angle 14) . The perception of the trunk position should therefore not be evaluated based on only one or two positions but on a number of different positions. Since the hamstring muscles originate at the ischial tuberosity of the pelvis, the tension in the hamstring muscles has an effect on pelvic posture 15,16) . The forward pelvic tilt accompanying forward trunk leaning may thus increase the tension in the hamstring muscles when seated with a fixed knee angle and the plantar aspect of the foot in contact with the floor. This increased tension in the hamstring muscles may restrict forward pelvic tilt. Because the present study focused on the perception of the trunk position, it was required that the trunk and pelvis move simultaneously as a single segment while in the leaning sitting posture. In addition, we avoided having the feet touch the floor in order to minimize the effect of hamstring tension on the pelvis. Furthermore, as the trunk leans forward with the feet touching the floor in a sitting position, the foot pressure increases, as is observed in the sit-to-stand movement.
These increases in hamstring tension and foot pressure may change the sensory information and help sustain sitting stability. The standing perception of extreme forward- and backward-leaning positions is very high, whereas standing positions close to the quiet standing position show the lowest perception 7) . The perception of the standing position therefore seems to be negatively related to the stability of the standing posture. Such a relationship between the perception and the stability of the posture should also exist in the sitting position, particularly when sitting without the feet touching the floor. In the present study, we investigated the perception of the trunk position in the anteroposterior direction in young participants sitting without the feet touching the floor. We hypothesized that the perception of extreme forward- and backward-leaning trunk positions while sitting without the feet touching the floor would be higher than the perception of the neutral sitting (vertical trunk) position. SUBJECTS AND METHODS Fourteen healthy young adults 21 to 25 years of age (6 females, 8 males) volunteered for this study. Their mean (± standard deviation [SD]) age, height and weight were 22.1 ± 1.0 years, 164.6 ± 9.4 cm and 61.3 ± 10.2 kg, respectively. Participants were free from neurological and orthopedic impairments. All participants gave their informed consent to the experimental protocol, which was approved by the institutional ethics committee of Kanazawa University in accordance with the Declaration of Helsinki (No. 462-2). All measurements were taken with the participants seated on a chair with a hard, 50 cm × 50 cm seat surface. The participants first sat down on the chair, aligning the front edge of the seat surface with the point 60% along the length of the thigh from the greater trochanter, to determine the initial quiet sitting posture.
In this study, the trunk angle was defined as the angle between the vertical line and the longitudinal axis through both the right trochanter and the right acromion. An original manual goniometer attached to an inclinometer with a resolution of 0.1° (BM-801, Ito, Miki, Japan) was used to measure the trunk angle (Fig. 1A). This goniometer was able to move horizontally on a sliding rail set on the right edge of the seat (Fig. 1A). In addition, the axis of the goniometer was also able to move vertically (Fig. 1A). Therefore, the axis of the goniometer was moved manually in these two directions to match precisely with the right trochanter of the participant in the sitting position (Fig. 1B-D). The reference point of the goniometer's movable arm was matched with the right acromion (Fig. 1B, 1D). Measurements were performed with the participants wearing short leggings, sitting barefoot, and with eyes closed. The subjects sat, keeping both arms crossed on the chest, with no support for the trunk or arms. The chair seat height was 1.5 times the subject's lower leg length to allow for free movement of the knee joints and avoid any contact of the feet with the floor. The participant's reproductions of reference positions were measured as follows: Perception of the reference position was evaluated based on the accuracy of its reproduction. There were 7 reference positions, set at 5° increments from −15° to 15°, and reproductions of each position were conducted 5 times. The experiment consisted of seven sets of five random positions with three minutes of rest time between each set. Each reference position was reproduced in accordance with the following procedure ( Fig. 2): (1) The participants maintained the quiet sitting (QS) posture for 3 s. 
(2) They then voluntarily and slowly (within 10 s) adjusted their sitting position by leaning forward or backward with the hips as pivotal axes until the experimenter gave the verbal instruction "OK" (reference position angle) and then maintained and perceived the position for 3 s. (3) Without returning to the QS posture, they stood up for 3 s. (4) They then sat down again, maintained the QS posture for 3 s, and (5) were asked to reproduce the reference position (reproduced trunk angle). They were instructed to say "yes" when they judged themselves to be sitting in the reference position and maintained this position for 3 s. In each case the time elapsed from initially memorizing the reference position to reproducing it was within 20 s, within the limits of short-term memory 17) . The measured reproduction absolute error (absolute error) was calculated using the following formula: Absolute error=|(reproduced trunk angle) − (reference position angle)|. Shapiro-Wilk tests confirmed that all data were normally distributed. The effects of the reference position on absolute error were tested using a one-way repeated measures analysis of variance (ANOVA). A post-hoc multiple comparison analysis using Holm's test was used to assess significant differences found by the ANOVA. The alpha level was set at p<0.05. All statistical analyses were performed using the SPSS 14.0 J software program (SPSS Japan, Tokyo, Japan). DISCUSSION In this study, the perception of seven positions of the trunk in the anteroposterior direction while sitting was investigated. Almost all previous studies have adopted only one or two target positions and a starting position to investigate the trunk position perception [9][10][11][12] . However, our study adopted a method in which participants were instructed to stand up right after referencing the target position instead of returning to the starting position. This method may reduce the range effect. 
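The trunk-angle definition (the angle between the vertical and the trochanter-acromion axis) and the reproduction error measure described above can be sketched as follows; the coordinates and trial angles are made-up illustrative values, not study data:

```python
import math
from statistics import mean

# Sketch of the two quantities defined in the Methods: the trunk angle
# relative to vertical, and the reproduction absolute error.

def trunk_angle_deg(trochanter, acromion):
    dx = acromion[0] - trochanter[0]   # forward displacement (cm)
    dy = acromion[1] - trochanter[1]   # upward displacement (cm)
    return math.degrees(math.atan2(dx, dy))  # 0 deg = vertical trunk

def absolute_error(reproduced_deg, reference_deg):
    """|reproduced trunk angle - reference position angle|"""
    return abs(reproduced_deg - reference_deg)

# Five hypothetical reproductions of the -5 deg reference position:
trials = [-3.2, -6.1, -4.0, -7.5, -2.8]
errors = [absolute_error(t, -5.0) for t in trials]
print(f"mean absolute error: {mean(errors):.2f} deg")
```

Using atan2 of the forward over the upward displacement makes forward lean positive and backward lean negative, matching the study's sign convention for the −15° to 15° reference positions.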
Furthermore, the goniometer used in this investigation was specially developed to support our method. A unique property of this goniometer is that the axis was able to move in both the horizontal and vertical directions. This study adopted a method in which the subjects stood up right after memorizing a reference position and then sat down to reproduce the reference position. For this reason, the sitting posture, particularly the pelvic angle, and the buttocks position changed in every sitting trial. These changes caused displacement of the trochanter major in both the horizontal and vertical directions. The axis of the goniometer was made to be movable in order to match the trochanter major precisely in every trial. Therefore, in this study, the measurements were conducted with greater precision and accuracy than in other studies due to the mobility of the goniometer axis in dual directions. Ryerson et al. used a magnetic sensor placed on the skin over the spinous process of the first thoracic vertebra to measure the trunk position 10) . The reproduction error in the sagittal plane of the control subjects was 3.2 ± 1.8°, which was similar to our data. In this study, the absolute errors at the −5° and 0° reference positions were larger than those at the −15°, 10°, and 15° positions. The sitting position at −5° and 0° may be located close to the QS posture and may be normally adopted with high frequency. Standing positions located close to the quiet standing position show the lowest perception with a high frequency and stability 7) . Therefore, the trunk position perception in the sitting position may also be lower at positions located close to the QS posture, similar to the reduced standing position perception at positions located close to the quiet standing posture. At least two factors may be involved in posture perception: the stability of the posture and the muscle activity to maintain the posture. 
In the standing position, the stability was relatively low in high-perception standing positions 7) . While sitting without the feet touching the floor, the high-perception positions were those leaning far forward and backward from the QS posture, and the stability of these positions may contribute to the perception. Therefore, the relationship between the stability of the posture and the perception may be involved in the sitting position perception, similarly to the standing perception. On the other hand, because the trunk stability and the nature of the sensory information while sitting with the feet touching the floor may be different from those while sitting without the feet touching the floor, the relationship between the stability of the posture and the perception may also differ between these sitting postures. In terms of the relationship between the trunk position and the magnitude of muscle activity, trunk muscle activity may be lower at sitting positions in which the trunk is located close to the direction of gravitational force. In contrast, trunk muscle activity may be higher at trunk positions leaning far forward and backward from the QS posture. The trunk muscle sensation may play an important role in perceiving the trunk sitting position. This study investigated the perception of the trunk position in the anteroposterior direction at sitting reference positions from −15° to 15°. If this range of reference positions were extended, the perception of more extreme forward- and backward-leaning trunk positions might differ more clearly from that of positions located close to the QS position. Both the relationship between the trunk position and its stability, and the relationship between the trunk position and the muscle activity, must be investigated in future studies.
In addition, the accuracy of perception of the trunk position while sitting with the feet touching the floor also needs to be investigated. Comparing the accuracy of trunk position perception while sitting both with and without the feet touching the floor may then reveal the importance, in physical therapy, of the feet touching the floor while sitting. The trunk perception in elderly people and in subjects with low back pain or stroke may also differ from the present results.
2018-04-03T02:37:29.324Z
2017-11-01T00:00:00.000
{ "year": 2017, "sha1": "91eb29bc1cd9981a1c61c7d2a9b24fc54ee8a2af", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/jpts/29/11/29_jpts-2017-386/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "91eb29bc1cd9981a1c61c7d2a9b24fc54ee8a2af", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232258598
pes2o/s2orc
v3-fos-license
A Novel STAT3 Mutation in a Patient with Hyper-IgE Syndrome Diagnosed with a Severe Necrotizing Pulmonary Infection Purpose Autosomal dominant hyper-IgE syndrome (HIES) is a rare primary immune deficiency syndrome caused mainly by mutations in the signal transducer and activator of transcription 3 (STAT3) gene. More information on STAT3 mutations is still needed, and further investigation is warranted. A girl with HIES carrying a novel STAT3 mutation who had no obvious symptoms but presented with a severe necrotizing pulmonary infection is described here. We analysed dynamic changes in blood cells and a series of inflammatory factors in the bronchoalveolar lavage fluid (BALF) before and after each bronchoscopic lavage performed to relieve her severe pulmonary abscess. Patients and Methods Whole-exome sequencing and Sanger sequencing were used to identify novel STAT3 mutations. Flow cytometry was used for immune analysis of Th17 cells and inflammatory cytokines. Results A novel de novo mutation in STAT3 (c.1552C>T, p.Arg518*) was identified in this patient. The number of eosinophils decreased after each bronchoscopy procedure. Elevated interleukin (IL)-8 and IL-1β levels were detected in her right lung BALF in the acute phase, but they were reduced after four bronchoscopic lavage procedures and the administration of antimicrobial medicine. Conclusion More information on STAT3 mutations is needed to investigate the relationship between the genotype and HIES phenotype. Bronchoscopic lavages are recommended instead of surgery to relieve acute severe pulmonary abscesses and necrotizing pulmonary infections in paediatric patients with HIES. Introduction Hyper-IgE syndrome (HIES) is a rare primary immune deficiency syndrome. Patients exhibit obviously elevated serum IgE levels, eczematous dermatitis, and recurrent skin and pulmonary infections, along with several nonimmune features.
Most cases are sporadic, but autosomal recessive (AR) and autosomal dominant (AD) cases have been described. 1 Mutations in signal transducer and activator of transcription 3 (STAT3) can result in autosomal dominant HIES. STAT3 is a transcription factor involved in transducing the signals from numerous cytokines, most notably interleukin (IL)-6, IL-10, IL-11, IL-21, and leptin. 1,2 Thus, STAT3 mutations result in the deregulation of TGF-β, IL-6, or IL-21 signalling, which are required for naive CD4+ T cells to switch to Th17 cells. [3][4][5] As a result, the differentiation of Th17 cells is impaired, and the level of IgE is increased. 6 To date, approximately one hundred individual STAT3 mutations have been reported. 7,8 STAT3 influences other downstream signalling pathways, 9,10 which leads to other features of HIES involving the joints, cranial synostosis and other symptoms. HIES is difficult to diagnose. The incidence of HIES is less than 0.001%, and not all patients show obvious features. As a result, clinicians may confuse HIES with common pulmonary infections. HIES is also difficult to confirm. 11 Pathology and genotype information is still incomplete for these patients, 12 and the corresponding relationships between genotypes and phenotypes are not entirely clear. Currently, a scoring system devised by the NIH that combines STAT3 mutations and decreased Th17 numbers has been used to help diagnose AD-HIES. The treatment of this syndrome is complex. Patients with HIES experience recurrent infections with pathogenic microorganisms and even have a risk of tumorigenesis, 13 but no permanent cure or targeted treatment is currently available for them. 2 Therefore, further studies of HIES are still necessary. Case Report The patient was a 10-year-old girl who suffered from upper respiratory tract infections 4 to 5 times a year. Over her lifetime, she was diagnosed with pneumonia three times.
This time, she was admitted to our hospital due to severe pneumonia accompanied by abscesses and pulmonary cysts. The blood analysis showed a high level of C-reactive protein at 53 mg/L (reference range: <5 mg/L) and an obviously increased eosinophil count of 4.91x10^9 cells/L (reference range: 0.05-0.5x10^9 cells/L). The immunological assessment showed an elevated IgE level of 11,300 IU/mL (reference range: <200 IU/mL), an increased B cell count (CD19+) of 35.010% (reference range: 14.35-22.65%), and a decreased Th17 cell count of 3.08% (reference range: 7.13-24.53%). The expression of the cytokines IL-5, IL-6, IL-2, IL-1β, IL-8 and TNF-α was obviously increased (Table 1). According to the results of the CT scan, segmental consolidation with cavity formation was observed in the upper and lower lobes of the right lung, while bronchiectasis was observed in the left lung (Figure 1). Tracheal microscopy showed a large amount of thick white secretions and phlegm plugs in the bronchioles, and she had bronchiolitis obliterans with bronchiectasis (Figure 2A and B show the first tracheal microscopy images). Scoring with the clinical scoring system (NIH HIES) developed to screen members of known AD-HIES kindreds gave this patient 30 points (Table 2), raising suspicion of HIES. The scoring system was established by the NIH group who recognized STAT3-HIES, 14 and is currently commonly used to evaluate sporadic cases. Following the guidelines developed by Woellner et al, 11 we performed whole-exome sequencing; the results revealed a de novo nonsense mutation in STAT3 which, combined with the decreased number of Th17 cells, confirmed that the patient had AD-HIES. During her hospitalization, linezolid and itraconazole were used as anti-infection treatments, thymalfasin was used for immune regulation, and acetylcysteine was used for sputum reduction.
Four tracheal endoscopic lavage procedures were conducted to help relieve the severe pulmonary abscess. We provide systematic information on blood cell percentages and immune cytokine concentrations in the bronchoalveolar lavage fluid (BALF) before and after lavage in this study (Tables 3 and 4). At the time of discharge, chest CT imaging showed an improvement of her pulmonary symptoms, and most blood cell counts in this patient had returned to normal; however, the percentage of eosinophils was still elevated, and the cytokines in the BALF of the right lung were still high but had decreased to normal 15 days later at follow-up. We informed the parents that the treatment of this child would be long-term and ongoing, and genetic counselling was suggested. HIES Scores A score of thirty was obtained using the NIH scoring system (Table 2). This patient showed no obvious changes in appearance, such as skin abscesses, retained primary teeth, scoliosis, a characteristic face, hyperextensibility or increased nasal width, which made the diagnosis difficult. The IgE level in this patient was extremely high, reaching 11,300 IU/mL (10 points). The eosinophil count reached 4.91x10^9 cells/L (6 points). During her 10 years of life, she was diagnosed with pneumonia 3 times (6 points). Upper respiratory infections occurred 4-5 times per year (2 points), and her lungs showed typical bronchiectasis (6 points). All these findings raised the suspicion of HIES. In this situation, we performed genetic analysis using whole-exome sequencing and a flow cytometric analysis of Th17 cell counts. Identification of a Novel STAT3 Mutation Sequencing of STAT3 revealed a novel, de novo, heterozygous c.1552C>T mutation (Figure 3A). This mutation is a nonsense variant (p.Arg518*) that might result in premature truncation of the peptide in the DNA binding domain. Both parents had wild-type genotypes (Figure 3B).
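As a rough illustration of how the scored findings combine, the five items quoted for this case can be tallied directly; the point values below are the ones reported above, and this is only a sketch — the full NIH scoring system contains many additional items not listed here.

```python
# Tally of the NIH HIES score items reported for this patient.
# Only the five findings quoted in the case report are included;
# the full NIH scoring system covers many more clinical features.
reported_items = {
    "serum IgE 11,300 IU/mL": 10,
    "eosinophilia (4.91x10^9 cells/L)": 6,
    "pneumonia episodes (3 lifetime)": 6,
    "upper respiratory infections (4-5/year)": 2,
    "bronchiectasis on imaging": 6,
}

total = sum(reported_items.values())
print(total)  # 30, matching the score reported for this patient
```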
Approximately one hundred individual STAT3 mutations spanning most domains of this protein, except the coiled coil domain, with supporting clinical phenotypes, have been reported to lead to AD-HIES. 8 STAT3 is a transcription factor involved in transduction of signals from IL-6, an inflammatory cytokine that is crucial for Th17 differentiation. 15 Th17 Cell Detection We conducted a flow cytometric analysis of Th1, Th2, and Th17 cells. CD45/SSC gating was used to select all leukocytes, CD3+/CD4+ gating was used to choose T helper cells (data not shown), and CD183/CD196 gating was used for the analysis of Th1, Th2, and Th17 cells (Figure 3C). CD183+CD196- represents Th1 cells, CD196+CD183- represents Th17 cells, and CD196-CD183- represents Th2 cells. Both the number and ratio (the absolute number of Th17 cells divided by CD4+ T cells) of Th17 cells decreased, and the ratios (the absolute numbers of Th1 and Th2 cells divided by CD4+ T cells) of Th1 cells and Th2 cells decreased, according to the reference range (Table 5). Th2 cells increase the number of B cells and subsequently raise the IgE level. The aberrant mutation of STAT3 resulted in abnormal IL-6 levels (Table 1) and a reduction in the number of Th17 cells. Bronchoscopic Lavage A large area of lesions was observed in both the upper and lower lobes of the right lung; bronchiectasis was indicated in the left lung. We conducted bronchoscopic lavage 4 times to help relieve her symptoms. We provide the dynamic changes in the levels of a series of inflammatory factors in BALF and blood cells before and after bronchoscopy. BALF samples were collected from the patient when she was undergoing diagnostic bronchoscopy. BAL was performed using flexible bronchofibrescopy after anaesthesia with lidocaine; sterile saline was instilled into the right and left lobes of the lung, and BALF samples were collected in sterile containers.
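The quadrant logic of the CD183/CD196 gate can be sketched as a small classification function. This follows the conventional CXCR3 (CD183) / CCR6 (CD196) scheme for CD4+ T helper subsets and is only an illustrative sketch, not the clinical gating strategy used in this study; the example cells are made up.

```python
# Classify CD4+ T helper cells by chemokine-receptor staining, using the
# conventional CXCR3 (CD183) / CCR6 (CD196) quadrant scheme. Illustrative
# sketch only; the cell list below is invented, not patient data.
def th_subset(cd183_pos: bool, cd196_pos: bool) -> str:
    if cd183_pos and not cd196_pos:
        return "Th1"
    if cd196_pos and not cd183_pos:
        return "Th17"
    if not cd183_pos and not cd196_pos:
        return "Th2"
    return "Th1/Th17 (double positive)"

cells = [(True, False), (False, True), (False, False), (False, True)]
counts = {}
for cd183, cd196 in cells:
    subset = th_subset(cd183, cd196)
    counts[subset] = counts.get(subset, 0) + 1
print(counts)  # {'Th1': 1, 'Th17': 2, 'Th2': 1}
```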
After 4 bronchoscopic lavages combined with antimicrobial intake, most blood cell counts returned to normal. Although the neutrophil percentage was still lower than normal and the eosinophil percentage was still elevated, the eosinophil count decreased immediately after each bronchoscopic lavage (Table 3). Because the patient had a large and severe lesion in her right lung, the levels of the inflammatory factors IL-8 and IL-1β were higher in the right lung BALF, while the levels in the left lung BALF were mostly normal. The patient was re-examined half a month later with another tracheal lavage (Figure 2C and D show this bronchoscopy), and the indicators in lavage fluid from the left and right lungs were mostly normal. We provide a tracheal image of the right lung (Figure 2) and a CT scan of the lung, showing no further expansion of the lung lesions and alleviation of the pulmonary abscess (Figure 1C and D). Discussion In this patient, the apparent symptoms of HIES, such as bone and primary teeth abnormalities and the characteristic face, were not obvious, but severe pulmonary inflammation symptoms, such as pulmonary abscess, bronchiolitis obliterans, pulmonary consolidation, bronchiectasis, and methicillin-resistant Staphylococcus aureus infection, were present. Thus, her characteristics might be confused with those of patients with common pneumonia. The methicillin-resistant S. aureus infection alerted us, because the recurrence of infections with antibiotic-resistant strains is frequently observed in patients with an immune deficiency. Based on her high concentration of IgE and increased number of eosinophils, we performed a genetic analysis and FACS analysis of her T cells. A novel mutation in STAT3 and decreased numbers of Th17 cells were detected. This patient had a score of 30 according to the HIES NIH scoring system. All symptoms confirmed a diagnosis of HIES.
1 Patients with AD-HIES are clinically characterized by an increased susceptibility to infection with S. aureus, 16 and the colonizing S. aureus strains are mainly antibiotic-resistant strains that harbour key virulence factors. 17 Therefore, when a patient has severe recurrent pneumonia more than 3 times, an infection with antibiotic-resistant S. aureus, an IgE level higher than 2000 IU/mL and elevated numbers of eosinophilic cells greater than 800/µL, HIES should be considered, and scoring and genetic and Th cell analyses are recommended. Her CRP level was elevated in combination with a pulmonary infection, which is uncommon for patients with AD-HIES. A novel nonsense mutation in the DNA binding domain of STAT3 was identified in this patient, resulting in protein dysfunction. This nonsense mutation causes premature truncation of the peptide chain during synthesis. To date, approximately 100 genetic mutations in STAT3 have been reported. In mouse keratinocytes, STAT3 deletion results in eczema-like inflammation and a marked elevation in IgE levels. 18 Unfortunately, the correlation between genotype and phenotype has not been completely established in patients with AD-HIES. 11,19 Nonimmune features are enriched in patients with SH2 domain mutations, 20 and patients with malignancies also express a mutant in this domain. 21 Our patient showed no obvious apparent symptoms. The type and location of the STAT3 mutation in this patient could not be predicted from her clinical presentation, because the genotype-phenotype correlations in patients with AD-HIES are poor and remain to be discovered. More gene mutations and phenotypes should be analysed to determine the corresponding relationships. Thus, more genetic data and analyses are still needed for the diagnosis of this disease. This patient had no classic physical stigmata, but she presented with severe pulmonary symptoms.
A large area of lesions was observed in both the upper and lower lobes of the right lung; bronchiectasis was indicated in the left lung. Surgical removal was not advised for this child when considering her future quality of life; however, medicine intake alone did not alleviate her poor condition. Thus, we conducted bronchoscopic lavage 4 times to help relieve her symptoms. As shown in our CT and tracheal images, after physical removal of phlegm combined with anti-inflammatory medicine treatment, the pulmonary symptoms were relieved. We show the dynamic changes in the levels of a series of inflammatory factors in BALF and blood cells before and after bronchoscopy. The BALF information might present the pulmonary condition more directly. IL-8 levels, which have been reported to play a significant role in IgE-mediated lung inflammation, 22 were high in the right-lung BALF, and serum IL-6 levels were also high. Currently, a radical treatment is unavailable for this disease, and only symptomatic treatments are used; the effect of gene therapy and omalizumab treatment is still unclear. Given the severity of the patient's illness, she will need to be closely monitored for the best clinical outcomes. Moreover, the child's mental health should also be considered, and psychological counselling was suggested. This disease may also cause tumours in later life stages and has a dominant inheritance pattern; therefore, genetic counselling should be provided. 13 Conclusion To summarize, we report a patient with AD-HIES who carries a novel heterozygous STAT3 mutation. Our patient had no obvious apparent symptoms, except high IgE levels, elevated eosinophil counts, and repeated pneumonia episodes. The diagnosis and treatment of HIES are still complex and difficult, identifying HIES-related genetic mutations is still necessary, and further analyses of the correlations between different mutations and different phenotypes should be performed.
Bronchoscopic lavage is suggested instead of surgical removal of lesions for relieving acute pulmonary abscess and inflammation in paediatric patients, and psychological and genetic counselling should be suggested according to the patient's condition. Consent for Publication We have obtained consent from the patient's parents (the patient is less than 18 years old) for the publication of this case report. The study was approved by the Institutional Review Board of Shanghai Children's Hospital and was conducted in accordance with the Declaration of Helsinki.
2021-03-18T05:14:38.411Z
2021-03-12T00:00:00.000
{ "year": 2021, "sha1": "0f34aede106130c5ad2bf63c33691ed92a9ac328", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=67594", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0f34aede106130c5ad2bf63c33691ed92a9ac328", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267959009
pes2o/s2orc
v3-fos-license
Light-Exposed Metabolic Responses of Cordyceps militaris through Transcriptome-Integrated Genome-Scale Modeling Simple Summary Cordyceps militaris is an entomopathogenic fungus with potential health benefits. These benefits have made this fungus marketable. Employing the transcriptome-integrated genome-scale metabolic model approach using gene inactivity moderated by metabolism and expression framework (iPS1474-tiGSMM), this work reveals metabolic fluxes in correlation with the expressed genes involved in the cordycepin and carotenoid biosynthetic pathways of C. militaris under light exposure. Additionally, an analysis of reporter metabolites emphasizes central carbon, purine, and fatty acid metabolisms, uncovering crucial processes for C. militaris adaptation to light exposure. Abstract The genome-scale metabolic model (GSMM) of Cordyceps militaris provides a comprehensive basis of carbon assimilation for cell growth and metabolite production. However, the model with a simple mass balance concept shows limited capability to probe the metabolic responses of C. militaris under light exposure. This study, therefore, employed the transcriptome-integrated GSMM approach to extend the investigation of C. militaris’s metabolism under light conditions. Through the gene inactivity moderated by metabolism and expression (GIMME) framework, the iPS1474-tiGSMM model was furnished with the transcriptome data, thus providing a simulation that described reasonably well the metabolic responses underlying the phenotypic observation of C. militaris under the particular light conditions. The iPS1474-tiGSMM obviously showed an improved prediction of metabolic fluxes in correlation with the expressed genes involved in the cordycepin and carotenoid biosynthetic pathways under the sucrose culturing conditions. 
Further analysis of reporter metabolites suggested that the central carbon, purine, and fatty acid metabolisms towards carotenoid biosynthesis were the predominant metabolic processes responsible in light conditions. This finding highlights the key responsive processes enabling the acclimatization of C. militaris metabolism in varying light conditions. This study provides a valuable perspective on manipulating metabolic genes and fluxes towards the target metabolite production of C. militaris. These health benefits have made this fungus marketable. Thus, the optimal cultivation process for improving the cell growth and production yield of the target metabolites of C. militaris has become an active area of biotechnological research and development [3]. A previous study demonstrated that the contents of bioactive compounds in C. militaris, especially cordycepin, were altered in response to light conditions [11]. Short-wavelength light significantly increased the total carotenoid content in fruiting bodies of C. militaris [12]. The recent study of Thananusak et al. (2020) [13] also showed that light conditions affect the physiology of C. militaris and proposed a response mechanism. The metabolism of C. militaris has previously been explored using genome-scale metabolic models (GSMM), e.g., iNR1329 [14] and iPC1469 [15]. They revealed sugar utilization for growth and lipid biosynthetic capability under dark conditions, respectively. However, FBA models with minimal data requirements and simple assumptions often predict metabolic fluxes weakly. Additionally, the context-specific metabolic behavior underlying C. militaris growth and metabolite production under specific light exposure remains largely unknown, and investigation of the metabolic responses of C. militaris under light exposure has been limited.
To address this challenge, transcriptome data were integrated into GSMMs to represent cellular functionalities by extracting a subset of reactions in relation to a particular context, e.g., a specific condition. Several integrative approaches have been developed based on different rationales, including GIMME [16], iMAT [17], and RegrEx [18]. Among these, GIMME stands out as the simplest method with minimal information requirements. The GIMME (Gene Inactivity Moderated by Metabolism and Expression) approach minimizes flux through reactions associated with lowly expressed genes via the binarization of enzyme abundance levels to "ON" or "OFF" states after thresholding the associated gene expression level [16]. The approach has been effectively utilized across various organisms, including microorganisms like E. coli [16] and Methanothermobacter [19], as well as plants such as Arabidopsis [20] and cassava [21]. This study, therefore, aimed to employ the transcriptome-integrated GSMM approach to further investigate the metabolic responses of C. militaris to light conditions. Initially, the GSMM of C. militaris was retrofitted for growth under light conditions. Next, transcriptome data under light exposure were analyzed and further used to reconstruct the GSMM with transcriptome-based constraints using the GIMME algorithm [16]. To uncover the key metabolites underlying the identified potential metabolic routes, the iPS1474-tiGSMM of C. militaris was also used. This study offers an effective approach to investigating cellular metabolism, particularly the key metabolic processes and the involved components acting to achieve the desirable condition. Retrofitting GSMM of C. militaris for Growth under Light Conditions The GSMMs of iNR1329 [14] and iPC1469 [15] were initially used as scaffolds to retrofit the GSMM of C.
militaris, then called iPS1474. A combination of various information types was essential to carry out the retrofitting of the metabolic network. Essential information was collected from the enhanced annotation data of C. militaris protein sequences, biochemical pathways, and chemical identifiers underlying the KEGG [22] and MetaCyc [23] databases and the RAVEN toolbox 2.0 [24], publications on specific enzymes, protein databases, and the literature [13]. In addition, where there was physiological evidence for the presence of a reaction or a pathway in C. militaris, e.g., the biosynthetic pathway of carotenoids, it was also added. The biomass composition reaction of the GSMM was also retrofitted for carotenoid biosynthesis [13]. The stoichiometry of cofactors, as well as information on the reversibility or irreversibility of each reaction, was also added to the network. The identification of sub-cellular localization was additionally considered. All scripts for retrofitting the GSMM of C. militaris, as well as the generated model files (.mat, .xml, .xlsx, .txt), were deposited into a public repository on GitHub (https://github.com/sysbiomics/Cordyceps_militaris-tiGSMM) on 26 January 2024. Validation of the Retrofitted GSMM of C. militaris The constraint-based flux simulation of iPS1474 was performed using flux balance analysis (FBA) in RAVEN 2.0 [24], MATLAB (R2020b) [25], and the Gurobi optimizer as the linear programming solver. The objective function was set to maximize cell growth under carbon uptake rates based on experimental data. The uptake rate for glucose or sucrose under light exposure conditions was set to 0.1593 or 0.0845 mmol gDW−1 h−1, respectively [13]. The exchange fluxes of ammonium, phosphate, sulfate, H2O, and H+ were unconstrained to provide the basic nutrients for cell growth.
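The FBA setup just described — maximize growth subject to steady-state mass balance and a measured substrate uptake bound — reduces to a linear program. The following is a minimal sketch of that formulation on a made-up two-reaction toy network, not the iPS1474 model itself; only the 0.0845 mmol gDW−1 h−1 sucrose uptake bound is taken from the text.

```python
# Minimal FBA sketch: maximize growth subject to steady-state mass balance
# (S·v = 0) and a measured substrate uptake bound. Toy network, not iPS1474:
#   uptake:  -> A        (bounded by the sucrose uptake rate, 0.0845)
#   growth:  A ->        (objective to maximize)
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0]])        # row: metabolite A; cols: uptake, growth
c = np.array([0.0, -1.0])          # linprog minimizes, so negate growth
bounds = [(0.0, 0.0845),           # uptake limited to the measured rate
          (0.0, None)]             # growth unbounded above
res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds)
print(res.x[1])  # predicted growth flux equals the uptake bound (0.0845)
```

With only one route from substrate to biomass, the optimum is forced to the uptake bound; in a genome-scale model the same LP distributes flux over thousands of reactions.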
The iPS1474 model was validated with in vitro data [13] for growth and biomass under different carbon sources, i.e., glucose or sucrose, upon light exposure. Additionally, constraints on maximizing cordycepin and carotenoid production were imposed to investigate metabolic capability throughout this study. The metabolic conversion of C. militaris was iteratively tuned with the gene expression data of C. militaris under light exposure. Transcriptome-Integrated Constraint-Based Metabolic Model of C. militaris Using GIMME Initially, the RNA-seq datasets of C. militaris under light conditions were retrieved from a previous study [13]. The transcriptome data, including the expression of 8747 genes, were normalized based on fragments per kilobase of transcript per million mapped reads (FPKM) (Supplementary File S1; Table S1). Of these, 1412 expressed genes associated with metabolic reactions were integrated into the iPS1474 model through the gene inactivity moderated by metabolism and expression (GIMME) algorithm, resulting in the transcriptome-integrated model, hereafter called iPS1474-tiGSMM. Briefly, GIMME identifies and eliminates inactive reactions associated with genes expressed below a defined threshold and reintegrates reactions essential for achieving the objective function, even when their expression falls below that threshold [16]. First, the algorithm determined the maximum achievable flux through the objective function, i.e., the growth rate, and employed this flux value as a boundary for the reactions. Then, the algorithm identified active reactions by minimizing the influence of inactive reactions based on gene expression-weighted coefficients via linear programming according to Equation (1).
Minimize: Σ_i c_i |v_i|, subject to Σ_i S_ij v_i = 0 for every metabolite j, lb_i ≤ v_i ≤ ub_i, and c_i = x_threshold − x_i when x_i < x_threshold (c_i = 0 otherwise). (1) It is noted that S_ij is the stoichiometric coefficient linking metabolite j and reaction i; v_i is the flux through reaction i; lb_i and ub_i are the lower and upper bounds of the flux through reaction i; x_i is the normalized gene expression value mapped to reaction i; x_threshold is the gene expression level threshold; and c_i is the penalty score. The gene expression level threshold was set at the 50th percentile of all gene expression values in the iPS1474-tiGSMM of C. militaris. The simulation was conducted in COBRA Toolbox version 3.0 with the glpk solver on MATLAB (R2020b). Analysis of Active Flux Distribution and Metabolic Reactions under Light Exposure The concordance of the flux predictions with the transcriptome data was assessed for the metabolic flux distribution under light exposure. The iPS1474-tiGSMM simulations of C. militaris under different carbon sources, i.e., glucose or sucrose, were carried out. The predicted metabolic fluxes were compared with the metabolic gene expression. Active light-exposed metabolic reactions (non-zero fluxes) and inactive (zero-flux) reactions were related to the presence and absence of gene expression, respectively.
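The two-step GIMME procedure described around Equation (1) — compute the maximum growth by FBA, then minimize penalized flux through lowly expressed reactions while holding growth at a fraction of that optimum — can be sketched as a pair of linear programs. The toy network and expression values below are invented for illustration; this is not the iPS1474-tiGSMM itself.

```python
# GIMME sketch on a toy network with two alternative routes from A to B:
#   uptake: -> A ; P1: A -> B ; P2: A -> B ; growth: B ->
# Reactions whose (assumed) expression falls below the 50th-percentile
# threshold receive a penalty c_i = x_threshold - x_i; penalized flux is
# then minimized while growth is held at 90% of its FBA optimum.
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0, -1.0,  0.0],   # metabolite A
              [0.0,  1.0,  1.0, -1.0]])  # metabolite B

# Step 1: FBA maximum growth with uptake bounded at 1.0.
fba = linprog([0.0, 0.0, 0.0, -1.0], A_eq=S, b_eq=[0.0, 0.0],
              bounds=[(0, 1.0), (0, None), (0, None), (0, None)])
v_max = -fba.fun  # = 1.0 for this toy network

# Step 2: penalties from made-up FPKM-like expression values.
x = np.array([200.0, 150.0, 10.0, 180.0])   # uptake, P1, P2, growth
x_thr = np.percentile(x, 50)                # 50th-percentile threshold
cpen = np.maximum(0.0, x_thr - x)           # only lowly expressed reactions pay

gimme = linprog(cpen, A_eq=S, b_eq=[0.0, 0.0],
                bounds=[(0, 1.0), (0, None), (0, None), (0.9 * v_max, None)])
print(gimme.x)  # flux is routed through the better-expressed route P1; P2 ~ 0
```

Because P2 carries by far the largest penalty, the second LP drives its flux to zero while the growth constraint keeps the objective function satisfied — the same trade-off GIMME makes at genome scale.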
Using the Retrofitted GSMM for Identifying Reporter Metabolites and Key Sub-Networks To identify reporter metabolites and key sub-networks in response to light exposure, the retrofitted GSMM was used. We applied the reporter metabolite algorithm to identify reporter metabolites and search for highly correlated metabolic sub-networks in a pairwise light/dark comparison. This analysis used gene-level statistics (e.g., p-values), a set of genes, and their associated metabolites as the inputs. If a metabolite had a distinct-directional p-value below 0.05 (pDistinctDirUp or pDistinctDirDn in the R package Piano) [26], it was identified as a significant reporter metabolite. In a similar manner, a set of genes and their corresponding reactions associated with reporter metabolites was considered a significant metabolic sub-network. The Retrofitted iPS1474-GSMM of C. militaris in Comparison with Earlier Model Characteristics Functional assignment and enhanced network reconstruction using bioinformatics resulted in a retrofitted GSMM (iPS1474 model) of C.
militaris, as shown in Table 1. This contained 1474 genes and 1916 metabolic reactions that governed 1245 metabolites amongst four compartments, i.e., cytosol, mitochondria, peroxisome, and extracellular space. Compared with the earlier model, iNR1329, created by Raethong et al., 2020 [14], this result indicates that 145 metabolic genes were uniquely identified in iPS1474, mainly in primary metabolic processes, including amino acid metabolism, carbohydrate metabolism, energy metabolism, lipid metabolism, and glycan biosynthesis and metabolism (Figure 1A). Moreover, 96 unique metabolic reactions with 12 unique EC numbers were mostly involved in lipid metabolism and the metabolism of terpenoids and polyketides, as shown in Figure 1B. Overall, the metabolic reactions of the iPS1474 model were particularly enriched in the lipid, carbohydrate, and amino acid metabolisms. Note: Data were taken from Raethong et al., 2020 [14]. Moreover, the stoichiometry of the metabolic precursors involved in the biomass composition was quantitatively estimated based on the macromolecular contents of carbohydrates, proteins, lipids, nucleotides, and vitamins of C. militaris obtained from published data (Figure 2). Validation and Performance of iPS1474-GSMM Model The retrofitted GSMM (iPS1474 model) was validated based on the study of Thananusak et al.
(2020) [13]. The model's simulated fungal cell growth in glucose and sucrose cultures under light exposure was compared against measured laboratory data [13]. By constraining the uptake rates of individual carbon sources, iPS1474 predicted the growth of C. militaris on varied carbon substrates well (error percentage ≤ 1.64, Figure 3). This result indicates the predictive performance of the iPS1474 model for simulating fungal cell growth on these carbon sources. Transcriptome-Integrated GSMM (iPS1474-tiGSMM) Model of C. militaris under Light Conditions The iPS1474 model simulated the growth rate of C. militaris well, in accordance with the experimental data. However, as fluxes dynamically change in response to varying environments, integrating the model with transcriptome data under light conditions became essential to capture the activity of a specific subset of reactions. The expression of enzyme-encoding genes is a basic clue indicating the activity of biochemical reactions under a particular condition. Assuming all enzymes function independently, a metabolic reaction was considered active if at least one related gene was expressed. According to Thananusak et al. (2020) [13], the 8747 genes of C.
militaris were expressed in glucose or sucrose cultures under light conditions, allowing iPS1474 to initially activate reactions related to 1412 expressed metabolic genes. In detail, 93.62% and 94.17% of the 8747 genes were expressed in the light-glucose (LG) and light-sucrose (LS) conditions, respectively. The expression levels of all genes and of the metabolic genes were classified into three categories. Most genes had FPKM values in the range 10 ≤ FPKM < 100, followed by FPKM ≥ 100 and 1 ≤ FPKM < 10. The total numbers of expressed metabolic genes in each category are shown in Figure 4.
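The three-category classification of expression levels used above is a simple binning step, which can be sketched as follows; the FPKM values in the example are made-up placeholders, not the actual C. militaris data.

```python
# Bin normalized expression values into the three FPKM categories used
# in the text: 1 <= FPKM < 10, 10 <= FPKM < 100, and FPKM >= 100.
# The values below are illustrative placeholders, not the real dataset.
import numpy as np

fpkm = np.array([2.4, 57.0, 310.0, 8.1, 99.9, 1500.0, 12.5])
edges = [1.0, 10.0, 100.0, np.inf]
counts, _ = np.histogram(fpkm, bins=edges)
labels = ["1<=FPKM<10", "10<=FPKM<100", "FPKM>=100"]
print(dict(zip(labels, counts.tolist())))
# {'1<=FPKM<10': 2, '10<=FPKM<100': 3, 'FPKM>=100': 2}
```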
Here, GIMME utilized the 1412 expressed metabolic genes to inform the set of active reactions in the iPS1474 model, assuming that a reaction is active when the expression of its associated genes exceeds a certain threshold (the 50th percentile, P50), and minimizing flux through inactive reactions. As a result, the GSMM with transcriptome-integrated constraints (tiGSMM) of C. militaris, namely iPS1474-tiGSMM-P50, was obtained using the GIMME approach. The simulation of iPS1474-tiGSMM-P50 under sucrose upon light exposure was assessed for its capability to predict a flux distribution consistent with the transcriptome data of C. militaris (Supplementary File S2; Table S2). The iPS1474-tiGSMM-P50 showed an improved predicted flux through carotenoid biosynthesis via the mevalonate (MVA) pathway, as shown in Figure 5. It is suggested that hydroxymethylglutaryl-CoA synthase (EC: 2.3.3.10), catalyzing 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA) synthesis, hydroxymethylglutaryl-CoA reductase (EC: 1.1.1.34), catalyzing mevalonate (MVA) synthesis, and mevalonate kinase (EC: 2.7.1.36), catalyzing 5-phosphomevalonate (MVA-5-P) synthesis, played crucial roles in carotenoid biosynthesis, as supported by Thananusak et al. (2020) [13]. Meanwhile, the iPS1474 model could not capture the MVA pathway (Figure 5). This finding clearly indicates that iPS1474-tiGSMM-P50 could improve the predicted metabolic flux distribution consistent with the light-exposed gene expression of C. militaris.
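A minimal sketch of the GIMME thresholding idea described above, assuming hypothetical gene names and FPKM values. The full method additionally solves a linear program that minimizes the flux carried by below-threshold reactions; that LP step is only indicated in a comment here:

```python
from statistics import median

# Illustrative FPKM values for a handful of expressed metabolic genes
# (hypothetical numbers, not taken from the paper's dataset).
expression = {"g1": 250.0, "g2": 45.0, "g3": 12.0, "g4": 3.0, "g5": 1.5, "g6": 80.0}

# Gene-reaction associations (isozymes combined with OR logic).
gpr = {"r1": ["g1"], "r2": ["g3", "g4"], "r3": ["g5"], "r4": ["g2", "g6"]}

# P50 cutoff: the 50th percentile (median) of the expressed genes' levels.
threshold = median(expression.values())

def reaction_state(genes):
    """A reaction stays active if any associated gene reaches the threshold;
    otherwise GIMME's LP step would minimize the flux it is allowed to carry."""
    return "active" if any(expression[g] >= threshold for g in genes) else "inactive"

states = {rxn: reaction_state(genes) for rxn, genes in gpr.items()}
# r1 and r4 remain active (high expression); r2 and r3 are flagged for
# flux minimization because all of their genes fall below the P50 cutoff.
```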
Active Light-Exposed Metabolic Reactions Predicted by iPS1474-tiGSMM-P50

To investigate the light-exposed metabolic responses, iPS1474-tiGSMM-P50 simulations under glucose and sucrose upon light exposure were performed. The numbers of active light-exposed metabolic reactions in glucose (376 reactions) and sucrose (380 reactions) are shown in Figure 6A. Combining all the active reactions, 406 active light-exposed metabolic reactions were identified (Supplementary File S3; Table S3). These metabolic reactions were classified into 10 sub-categories involved in lipid metabolism (133 reactions), amino acid metabolism (103 reactions), carbohydrate metabolism (66 reactions), nucleotide metabolism (55 reactions), energy metabolism (19 reactions), the metabolism of cofactors and vitamins (12 reactions), the metabolism of terpenoids and polyketides (11 reactions), the metabolism of other amino acids (3 reactions), glycan biosynthesis and metabolism (2 reactions), and membrane transport (2 reactions), as shown in Figure 6B.

The characterization of the active metabolic reactions of C. militaris under light-exposure-induced responses should be further elucidated to dissect the impact of light on its metabolism and to identify potential targets for enhancing the production of bioactive compounds. It has been reported that C. militaris can use a wide range of carbon sources [27]. Focusing on the core metabolic reactions (350 reactions) (Figure 6A), interestingly, important reactions of the central metabolic pathways were active, such as glycolysis, the pentose phosphate pathway, and the tricarboxylic acid (TCA) cycle. In glycolysis, r0005, associated with CCM_08316 encoding glucokinase (EC: 2.7.1.2), facilitates the phosphorylation of glucose to glucose-6-phosphate. It is involved in the first step of glycolysis, which is often regulated in response to cellular energy demands.
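The partition of active reactions in Figure 6A (core, glucose-unique, sucrose-unique) is plain set algebra over the two active-reaction sets. A toy sketch using a few reaction IDs from the text; the sets are illustrative stand-ins, not the model's actual output:

```python
# Toy reaction identifiers standing in for the model's active sets. The paper
# reports 376 active reactions under light-glucose (LG) and 380 under
# light-sucrose (LS), of which 350 are shared; the set algebra below is the
# same at any scale.
active_LG = {"r0005", "r0336", "R01056_c", "r0795"}
active_LS = {"r0005", "r0336", "R01056_c", "r0192", "r1283"}

core     = active_LG & active_LS  # active under both carbon sources (350 in the paper)
only_LG  = active_LG - active_LS  # unique to glucose (26 in the paper)
only_LS  = active_LS - active_LG  # unique to sucrose (30 in the paper)
combined = active_LG | active_LS  # all light-exposed active reactions (406 in the paper)

# The Venn regions always partition the union: 350 + 26 + 30 = 406.
assert len(combined) == len(core) + len(only_LG) + len(only_LS)
```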
The reactions of the pentose phosphate pathway were also active. For example, R01056_c, associated with CCM_00462, CCM_01641, and CCM_08256 (EC: 5.3.1.6), encoding ribose-5-phosphate isomerase, generates ribose 5-phosphate, an intermediate metabolite for the synthesis of PRPP (5-phospho-alpha-ribose-1-diphosphate) by ribose-phosphate diphosphokinase (EC: 2.7.6.1; r0336), encoded by CCM_02500, CCM_00434, and CCM_04665. PRPP is an important precursor molecule involved in the biosynthesis of cordycepin [28].

To gain deeper insights into how different carbon sources influence the metabolic responses of C.
militaris upon light exposure, 30 metabolic reactions were found to be uniquely associated with the sucrose culture. For example, r0192 is associated with the CCM_00448 gene encoding sucrase (EC: 3.2.1.26). This enzyme is responsible for the hydrolysis of sucrose into its constituent monosaccharides, glucose and fructose. Importantly, it is a key enzyme in carbohydrate metabolism, playing essential roles in energy production and metabolic regulation. Moreover, we found r1283 to be involved in fatty acid metabolism, including long-chain-fatty-acid-CoA ligase (EC: 6.2.1.3; CCM_00448). This could potentially mitigate the toxicity resulting from the accumulation of excess glyoxylate generated during fatty acid metabolism [29]. Under glucose culture upon light exposure, there were 26 unique active metabolic reactions. For example, r0795 (EC: 2.3.2.2; EC: 3.4.19.13), associated with CCM_02065, CCM_02473, CCM_05577, CCM_05722, and CCM_09583, encodes gamma-glutamyl transpeptidase, which generates glutamate. Glutamate might be regulated and involved in the biosynthesis of cordycepin, serving as a precursor in the process [14,30].

Identified Reporter Metabolites of C. militaris

To identify the reporter metabolites of C. militaris cultures grown in glucose or sucrose upon light exposure, an integrative analysis was performed using the transcriptome data obtained from Thananusak et al.
(2020) [13], and the retrofitted model was employed as a scaffold. As a result, 75 and 46 significant reporter metabolites were identified in the light-sucrose and light-glucose conditions, respectively (Supplementary File S4; Table S4). Interestingly, iPS1474 identified the top 20 reporter metabolites in response to light, which were involved in N-glycan biosynthesis, aminoacyl-tRNA biosynthesis, cysteine and methionine metabolism, oxidative phosphorylation, phenylalanine, tyrosine and tryptophan biosynthesis, pyrimidine metabolism, lipid metabolism, and carotenoid biosynthesis, as shown in Table 2. The reporter metabolites participating in carotenoid biosynthesis, i.e., geranylgeranyl diphosphate (GGPP), geranyl diphosphate (GPP), torulene, and 2-trans,6-trans-farnesyl diphosphate (FPP), were identified, as seen in Table 2 and Supplementary File S4 (Table S4). Among the carotenoid-biosynthesis-associated genes, CCM_06728, encoding torulene dioxygenase (EC: 1.13.11.59), and CCM_03203, encoding farnesyl diphosphate synthase (EC: 2.5.1.10), were identified. These enzymes play a role in the degradation or conversion of torulene, which is indeed associated with carotenoid production. In a previous study, light was shown to affect the synthesis of carotenoids in C. militaris [31]. Carotenoid synthesis is often regulated by light-dependent gene expression [13]: in the presence of light, genes responsible for carotenoid biosynthesis are activated, leading to an increase in carotenoid production. In addition, significant reporter metabolites involved in lipid metabolism were also found (e.g., butyryl-CoA and 3-hydroxyoctadecanoyl-CoA), as seen in Figure 7. The genes associated with these metabolites participate in the lipid biosynthetic pathway, for example, the genes encoding secretory lipases, e.g., CCM_03046, CCM_04970, and CCM_09597. These enzymes facilitate the hydrolysis of neutral lipids (triacylglycerols) into free fatty acids.
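Reporter-metabolite scoring of this kind aggregates gene-level significance around each metabolite in the network scaffold (a Z-score scheme in the spirit of Patil and Nielsen). A minimal sketch with made-up differential-expression p-values, using two carotenoid genes named in the text plus one hypothetical neighbour gene:

```python
from math import sqrt
from statistics import NormalDist

norm = NormalDist()

# Hypothetical differential-expression p-values (light vs. dark) per gene;
# the first two gene IDs appear in the text, but the numbers are invented.
gene_p = {"CCM_06728": 0.001, "CCM_03203": 0.004, "CCM_01111": 0.40}

# Metabolite -> neighbouring genes, i.e. enzymes that produce or consume it
# in the metabolic network used as the scoring scaffold.
neighbours = {"torulene": ["CCM_06728"],
              "FPP": ["CCM_03203", "CCM_01111"]}

def reporter_z(genes):
    """Aggregate gene Z-scores around a metabolite: Z_met = sum(Z_g) / sqrt(k)."""
    z = [norm.inv_cdf(1 - gene_p[g]) for g in genes]
    return sum(z) / sqrt(len(z))

scores = {met: reporter_z(genes) for met, genes in neighbours.items()}
# Convert back to p-values for ranking (the background correction against
# randomly sampled gene sets is omitted in this sketch).
pvals = {met: 1 - norm.cdf(z) for met, z in scores.items()}
```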
Figure 1. (A) Number of metabolic genes and (B) number of metabolic reactions in comparison between iNR1329 and iPS1474 of C. militaris.

Figure 2. The iPS1474 metabolic features in the context of biomass composition.

Figure 3. Validation of iPS1474 by comparison of growth rate (h−1) between in silico and in vitro data across different carbon sources, i.e., glucose and sucrose. Note: Data were taken from Thananusak et al., 2020 [26].

Figure 4. Bar plot showing the distribution of gene expression levels across different FPKM values for all expressed genes and metabolic genes in the light-glucose (LG) and light-sucrose (LS) conditions.

Figure 6. The number of active metabolic reactions identified by the iPS1474-tiGSMM-P50 model prediction: (A) Venn diagram of the number of active reactions under glucose and sucrose upon light exposure, i.e., tiGSMM-LG-P50 and tiGSMM-LS-P50, respectively, and (B) KEGG-based pathway classification of the combination of all active reactions under light exposure.

Table 1. Comparative metabolic characteristics of the GSMMs of C. militaris.
Embryogenesis in Polianthes tuberosa L. var. Simple: from megasporogenesis to early embryo development

The genus Polianthes belongs to the subfamily Agavoideae of the family Asparagaceae, formerly known as Agavaceae. The genus is endemic to México and comprises about 15 species, among them Polianthes tuberosa L. The aim of this work was to study and characterize the embryo sac and early embryo development of this species in order to generate basic knowledge for its use in taxonomy, in vitro fertilization and the production of haploid plants, and to complement studies already performed in other genera and species belonging to the Agavoideae subfamily. It was found that the normal development of the P. tuberosa var. Simple embryo sac follows a monosporic pattern of the Polygonum type and starts its development from the chalazal megaspore. At maturity, the embryo sac has a pyriform shape with a chalazal haustorial tube where the antipodals are located, just below the hypostase, which connects the embryo sac with the nucellar tissue of the ovule. The central cell nucleus shows a high polarity, being located at the chalazal extreme of the embryo sac. The position of cells inside the P. tuberosa embryo sac may be useful for in-depth studies of double fertilization. Furthermore, it was possible to make a chronological description of the events from fertilization and early embryo development to the initial development of the endosperm, which was classified as of the helobial type.

Background

The genus Polianthes belongs to the subfamily Agavoideae of the Asparagaceae family, formerly known as Agavaceae (APG III 2009). The genus is endemic to México and comprises about 15 species, among them Polianthes tuberosa L. (Solano and Feria 2007; García-Mendoza and Galván 1995).
It is an economically important plant because of its use as an ornamental and because its essential oils are highly appreciated for the manufacture of perfumes and other essences (Benschop 1993; Sangavai and Chellapandi 2008; Hodges 2010; Barba-Gonzalez et al. 2012). The species is commercially propagated by asexual methods, which have reduced its genetic variability, thus reducing flower forms, sizes and colors (Shillo 1992), as well as increasing its vulnerability to biotic and abiotic stress (Hernández-Mendoza et al. 2015). Embryological studies comprising the formation of male and female gametes, double fertilization, and embryo and endosperm development (Maheshwari 1950) allow the understanding of factors that control the processes of embryonic development in order to manipulate them for practical applications (Bhojwani and Bhatnagar 1983). In this regard, the female gametophyte plays a critical role in every stage of the reproductive process, such as the direction of pollen tube growth towards the egg (Higashiyama 2002) and the transport of sperm nuclei of the embryo sac through the central and egg cells in the process of double fertilization (Lord and Russell 2002; Russell 1993; Huang and Russell 1992; Ye et al. 2002; Weterings and Russell 2004); once fertilization is completed, genes expressed in the maternal tissue are involved in embryo and endosperm development (Ray 1997; Chaudhury and Berger 2001). Most studies on the embryogenesis of Asparagaceae describe embryo sac development as of the monosporic Polygonum type, as in Yucca rupicola (Watkins 1937), Yucca aloifolia (Wolf 1940), Agave lechuguilla (Grove 1941), Agave virginica (Regen 1941), Hesperocallis undulata and Leucocrinum montanum (Cave 1948), Comospermum yedoense (Rudall 1999), Agave tequilana (Escobar-Guzmán et al. 2008; González-Gutiérrez et al.
2014), Yucca elephantipes (Cruz-Cruz 2013) and Yucca filamentosa (Reed 1903), the exceptions being Agave fourcroydes and Agave angustifolia (Piven et al. 2001), in which the embryo sac was reported as bisporic of the Allium type. Knowledge about embryo and endosperm development in the subfamily Agavoideae is limited. In 1941, Regen described the endosperm of A. virginica as of the nuclear type, while González-Gutiérrez et al. (2014) reported the endosperm of A. tequilana as of the helobial type. However, reports about female gametophyte development, fertilization and embryo development in the genus Polianthes, specifically in the species P. tuberosa, are not available. The aim of this work was to study and characterize such processes in order to generate basic knowledge for its use in taxonomy, in vitro fertilization and the production of haploid plants, among other uses, and furthermore to complement studies already performed in other genera and species belonging to the Agavoideae subfamily.

Plant material

The plant material used in this work consisted of bulbs of P. tuberosa var. Simple from Tantoyuca, Veracruz, México. These bulbs were cultured in substrate (3 peat moss : 2 sand : 1 vermiculite) under a shade house at CIATEJ (Guadalajara, Jalisco, México) in the spring of 2013 and 2014.

Controlled pollination

In order to find the various developmental stages of the megagametophyte, ten non-pollinated flower buds of different sizes from 50 inflorescences of randomly selected plants were collected and fixed. The rest of the buds remained attached to the inflorescence so that they continued their growth. At the time of anthesis, the flowers were emasculated and covered with glassine paper to prevent uncontrolled pollination. Once the stigmas were receptive, two non-pollinated flowers per inflorescence were selected and fixed. The remaining flowers were emasculated and hand pollinated with pollen from P. tuberosa var.
Double, and unripe fruits at different days after pollination (1 DAP to 19 DAP) were collected in order to study the processes from fertilization to embryo development.

Fixation

Ovules and immature seeds were extracted from the ovary and fixed in FAA (10:5:50:35 formaldehyde : acetic acid : ethanol : distilled water) for 24 h. After fixation, ovules were transferred to a 70 % ethanol solution and stored at 5 °C for later staining.

Histological observation

Mayer's hemalum methyl salicylate staining was used as a bulk method for the analysis of large numbers of ovules and immature seeds, and the Feulgen staining method was used for confocal microscopy using 3D projection series taken in "z", only for those developmental stages where cells and tissues were positioned in different focal planes.

Mayer's hemalum-methyl salicylate stain-clearing (Stelly et al. 1984)

Specimens previously fixed and stored were stained with Mayer's hematoxylin solution for 3 h at room temperature and then treated with heat for 30 min in a water bath at 40 °C; afterwards, the specimens were treated with 2 % acetic acid for 40 min at 40 °C and then with 0.5 % acetic acid overnight in order to eliminate excess stain. Thereafter, specimens were washed with 0.1 % sodium bicarbonate until the solution was clear, whereupon the solution was renewed and allowed to stand for 24 h. Finally, the specimens were subjected to an ethanol dehydration series: 25, 50, 70, 85 and 95 % for 15 min each, and then 100 % ethanol for 2 h. Clearing of the tissue was performed through a series of methyl salicylate:ethanol solutions of 3:1, 1:1 and 1:3, for 1 h each (the specimens were stored in the last solution for 6 or more months at 5 °C). For observation, the ovules were mounted in 100 % methyl salicylate and the preparations were analyzed under a Leica® DMR microscope (Wetzlar, Germany) coupled to an EvolutionQEi® camera (Media-Cybernetics, Bethesda, USA).
The images were managed with the Image-Pro software (Media-Cybernetics, Bethesda, USA).

Feulgen staining (Barrell and Grossniklaus 2005)

Ovules were treated with 1 M HCl for 1 h 30 min, then 5.8 M HCl for 2 h and again 1 M HCl for 1 h at room temperature. Thereafter, ovules were washed three times with distilled water and stained with Schiff solution for 3 h at room temperature, protected from light. After this standing time, the ovules were dehydrated in 30, 50, 70, 90 and 95 % ethanol for 30 min each and twice in 100 % ethanol. Finally, the ovules were allowed to stand overnight in a solution composed of 50 % ethanol and 50 % Leica immersion oil type F (Leica Cat. No. 11513859). Thereafter, ovules were mounted in 100 % Leica immersion oil type F for microscopic observation. Megagametophyte analysis was performed on a Leica TCS SPE RGBV confocal microscope, using 532 nm laser excitation and a detection window between 555 and 700 nm. Images were captured and managed with the LAS X® software (Leica Microsystems) at either 512 × 512 or 1024 × 1024 pixels. Images were processed with Adobe Photoshop version CS6; all Photoshop operations were applied to the entire image.

Megasporogenesis

Megasporogenesis starts with the differentiation of an archesporial cell that becomes the megaspore mother cell (MMC), which is distinguished from all other cells of the ovule primordium, since it is larger than the surrounding cells; its shape is circular to semicircular, with an average diameter of 18.69 ± 2.33 μm, and its nucleus is dense and well defined (Fig. 1a); sometimes it is possible to observe the unorganized chromatin in the nucleus in the form of filaments (Fig. 1b). The MMC begins to increase in size and shifts toward the micropylar end of the ovule. At this stage of development the integuments start to differentiate (Fig. 1b).
The diploid MMC divides by meiosis, generating in meiosis I a dyad of haploid cells of similar size, or with the chalazal cell slightly larger than the micropylar cell (Fig. 1c); meiosis II results in a tetrad of cells commonly arranged in a linear manner parallel to the chalazal-micropylar axis (Fig. 1d). The average size of the tetrad is 52.95 ± 3.89 μm long and 19.07 ± 1.45 μm wide. Both integuments continued to grow, surrounding the embryo sac and getting closer to the micropylar region. Of the total number of observations at this stage, 73.68 % of tetrads possessed a linear arrangement (Fig. 1d); however, other forms of arrangement were also observed. In 21.05 % of the remaining observations, tetrads in a "T" arrangement could be observed, where the two micropylar megaspores were found one beside the other, or in an intermediate form in which the two megaspores closest to the micropyle are separated by an oblique division instead of a fully transverse division (Fig. 1f). The linear arrangement of the tetrad has been reported as a common pattern in several species of the order Asparagales, as is the case for A. fourcroydes, A. angustifolia (Piven et al. 2001) and A. tequilana (Escobar-Guzmán et al. 2008; González-Gutiérrez et al. 2014). Nevertheless, some authors have reported the formation of both linear and "T" tetrads, as in the case of A. virginica (Regen 1941), A. lechuguilla (Grove 1941) and Y. aloifolia (Wolf 1940). Watkins (1937) reported the frequent formation of an intermediate configuration between the linear and "T" arrangements for Yucca rupicola, similar to that reported in the present study for P. tuberosa var. Simple. Moreover, in some isolated observations (5.26 %) the formation of triads instead of tetrads was observed (Fig. 1e), as reported by Regen (1941) in A. virginica, who interpreted the presence of triads as the possible failure of one of the megaspores of the dyad to divide in meiosis II.
Gomez-Rodríguez et al. (2012) observed the formation of triads in the pollen microsporogenesis of A. tequilana and A. angustifolia, which are formed by a mechanism in which a failure in meiosis II in one cell of the dyad prevents the formation of a cell wall between the daughter nuclei, which are finally restored, leading to the formation of one 2n microspore and two n microspores (unreduced gametes).

Megagametogenesis

In the normal development of the tetrad, three of the megaspores, those closest to the micropylar end, degenerated while the chalazal cell remained intact (Fig. 2a), becoming the functional megaspore (FM) (monosporic pattern), similar to what was found in Y. aloifolia (Wolf 1940), A. virginica (Regen 1941) and A. tequilana (Escobar-Guzmán et al. 2008; González-Gutiérrez et al. 2014); whereas in the species Y. filamentosa, Reed (1903) reported that it is the second megaspore in the chalazal-micropylar direction that remains while the other three megaspores degrade. Meanwhile, Piven et al. (2001) mentioned that in the species A. fourcroydes and A. angustifolia the development of the embryo sac proceeds from two of the megaspores closest to the chalazal end; this developmental pattern is called bisporic Allium type, similar to that present in other representative species of the Asparagaceae family such as Scilla persica (Svoma and Greilhuber 1987). The FM possesses a large and well-defined nucleus, which is usually located in the first or second third of the developing embryo sac (in the chalazal-micropylar direction) (Fig. 2b). The FM underwent a first mitotic division forming a binucleate sac (62.42 ± 6.90 μm long and 39.11 ± 5.40 μm wide); the newly formed nuclei migrated, one towards the chalazal end and the other towards the micropylar end of the sac, both being separated by a large central vacuole (Fig. 2c).
Once at the ends, a second mitotic division generated a sac with four nuclei (two at each extreme), which were located very close to the walls of the embryo sac and remained separated by the central vacuole, which, like the sac, showed a continuous increase in size, measuring 77.31 ± 5.96 μm long and 54.62 ± 5.98 μm wide (Fig. 2d). The mitotic division of the nuclei occurred synchronously at both poles of the sac, in the same way as occurs in A. tequilana (González-Gutiérrez et al. 2014) and contrary to what was reported by Grove (1941) for A. lechuguilla, where mitotic division first occurs in the nuclei located at the micropylar end of the embryo sac. Moreover, at this stage the formation of the hypostase became evident, a tissue that forms immediately above the FM and is apparently composed of a group of cells with thickened cell walls that are easily observed under the optical and confocal microscope because they stain more intensely than the rest of the cells; this cell formation seems to be connected to the vascular bundles of the ovule through the nucellar tissue. According to Tilton (1980), the formation of the hypostase occurs during the meiotic-mitotic interface of the FM. In this regard, Tilton proposed that the main function of such a structure is nutrient translocation into the embryo sac before and after fertilization.

Fig. 1 legend (P. tuberosa var. Simple): a Cross-section of an ovule showing the condensed nucleus of a megaspore mother cell. b Megaspore mother cell located at the micropylar extreme of the ovule. c Dyad. d Linear tetrad. e Triad. Dark or black spaces are vacuoles. f "T"-shaped tetrad. mmcn megaspore mother cell nucleus, mmcc megaspore mother cell chromatin, mmc megaspore mother cell, in integuments, arrowheads in (a) sub-epidermic cells of the ovule, c chalaza, m micropyle, d1 and d2 dyad cells, white arrows triad cells, arrowheads in (d) and (f) cells of a tetrad, dotted lines in (f) "T" formation of a tetrad. Bars 10 µm.
Hypostase formation is reported as a frequent character in the Agavaceae (now subfamily Agavoideae) (Tilton and Mogensen 1980) and in other families in the monocot group (Rudall 1997). A third mitotic division resulted in an embryo sac with eight nuclei, four at each end of the sac, and, as in the second mitotic division, it happened synchronously. At this time the chalazal haustorial tube became more apparent at the end of the sac, and it is where three of the four newly formed nuclei were located. The remaining nucleus was placed immediately beneath them, while at the opposite end all four micropylar nuclei were aligned along the embryo sac wall (Fig. 2e, f). Finally, one of the four nuclei located at the micropylar end became the micropylar polar nucleus and started to migrate through the central vacuole toward the chalazal end of the sac to meet the single nucleus observed outside the haustorial tube; this nucleus became the chalazal polar nucleus. When a 3D reconstruction was performed, thin filaments were observed connecting the two polar nuclei (Fig. 2e, f). According to Tilton and Lersten (1981), these filaments are formed of cytoplasm and provide the vehicle by which the polar nuclei can join. Ikeda (1902) supported the hypothesis that these cytoplasmic connections found between different cell types in the embryo sac provide the means by which the antipodals, the central cell and the egg apparatus remain in communication. Up to this stage of development, the embryo sac exhibited an ovoid to pyriform shape, being narrower at the chalazal end and wider toward its micropylar end (Fig. 2e, f). At the chalazal extreme, the development of a narrow tube called the haustorial tube was observed (Fig. 4b), similar to that reported by Tilton (1978) in Ornithogalum caudatum, where the hypostase surrounds the haustorial tube.
The haustorial tube was observed as an invagination into the nucellar tissue of the ovule, and accordingly several authors attribute to it functions of nutrition of the embryo sac (Reed 1903; Watkins 1937; Wolf 1940; Rudall 1997).

Characterization of the mature embryo sac

With the migration of one of the micropylar nuclei toward the chalazal end, the embryo sac soon acquired its final shape and its nuclei became cellularized: the three cells in the haustorial tube became the antipodals, the two nuclei located below the haustorial tube became the polar nuclei contained in the central cell, and the three cells located at the micropylar end became the egg apparatus, so that the normal development of the embryo sac of P. tuberosa var. Simple was typified as monosporic Polygonum type (Fig. 3a), as described by Maheshwari (1937, 1948). Of the total samples analyzed at this stage of development, 81.66 % corresponded to this pattern, and the average size of the mature sac was 152.02 ± 5.54 μm long by 129.74 ± 5.41 μm wide; in the rest of the samples, defects and/or abnormalities in the development of the embryo sac were detected. These abnormalities were classified into three main groups:

a. Embryo sacs in retarded stages of development, i.e. embryo sacs that did not correspond to the stage of development found in the remaining ovules from a single ovary (Fig. 3b); these corresponded to 1.66 % of the analyzed samples.

b. Collapsed embryo sacs, where degradation of the embryo sac and the nucellar tissue was observed, in 10 % of the samples (Fig. 3c).

c. Embryo sacs where the formation of the egg apparatus was not observed, due to an abnormal thickening of the nucellar cell layer lining the ovule at its micropylar end. These embryo sacs usually lost their pyriform shape, showing a "boomerang" shape (Fig. 3d). This malformation group corresponded to 6.66 % of the specimens analyzed.

Regen (1941) described the presence of a large number of "unproductive" ovules in A.
virginica where reproductive cells were not formed due to degeneration of nucellar and sporogenous tissues. Meanwhile, Cappelletti (1927) reported the presence of a brief hypertrophy of the nucellar nuclei followed by cell degeneration of this tissue, ending with the collapse of the embryo sac. (Fig. 2 legend: cm chalazal megaspore; fm functional megaspore; dm degenerating megaspores; v vacuole; a antipodal cells; ea egg apparatus; mpn micropylar polar nucleus; cpn chalazal polar nucleus; n1, n2 and n3 cells that will form the egg apparatus; c chalaza; m micropyle; arrowheads in (a) megaspores being degraded; arrowheads in (c) primary chalazal and micropylar nuclei; arrowhead in (d) nuclei produced by the second meiotic division of the embryo sac; arrowheads in (e) and (f) cytoplasmic filaments. Bars 10 µm.) Antipodal cells The antipodals were observed as three cells smaller than the rest of the mature embryo sac cells (7.68 ± 0.34 μm long by 6.46 ± 0.43 μm wide). These were located inside the haustorial tube and showed a triangular morphology, usually with their nuclei polarized towards the chalazal end of the sac (Figs. 3a, 4a). Sometimes it was not possible to detect the presence of the antipodal cells, so their behavior inside the sac appears to be variable: they may disintegrate before karyogamy of the polar nuclei (Fig. 4b) or remain intact even after the moment of double fertilization. According to Tilton (1978), the antipodals are unique cells that vary in their behavior within the mature female gametophyte; the only trait they share is their location at the chalazal end of the sac. These cells can be ephemeral and disintegrate shortly after their formation, as in the case of A. virginica (Regen 1941).
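As a sanity check on the abnormality frequencies reported above (81.66 % normal, 1.66 % retarded, 10 % collapsed, 6.66 % malformed), the percentages are mutually consistent with a hypothetical sample of 60 ovules when truncated (not rounded) to two decimals; the counts and the sample size of 60 are assumptions for illustration only, not figures stated in the paper. A minimal sketch in Python:

```python
def pct_truncated(count, total, decimals=2):
    """Percentage truncated (not rounded) to `decimals` places,
    matching the two-decimal style used in the text."""
    scale = 10 ** decimals
    return int(count / total * 100 * scale) / scale

# Hypothetical counts out of an assumed 60 ovules (not given in the paper):
counts = {"normal": 49, "retarded": 1, "collapsed": 6, "malformed": 4}
total = sum(counts.values())  # 60
percentages = {k: pct_truncated(v, total) for k, v in counts.items()}
print(percentages)
# -> {'normal': 81.66, 'retarded': 1.66, 'collapsed': 10.0, 'malformed': 6.66}
```

The truncated values reproduce the reported figures exactly, which is why the four percentages sum to 99.98 % rather than 100 %.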
In Tofieldia glutinosa, the antipodals even proliferate in the maturation stage of the embryo sac, with up to eight antipodal nuclei (Holloway and Friedman 2008); another example is seen in most members of the Poaceae family, where the number of antipodals varies between six and 300 (Anton and Cocucci 1984). Misinterpretations in studies of the female gametophyte of some species have been due to the difficulty of visualizing the antipodals under an optical microscope, mainly because of their chalazal position in the embryo sac (Maheswari 1948, 1950). Recently, Song et al. (2014) confirmed the persistence of the three antipodals after double fertilization in Arabidopsis by expression of fluorescence reporter genes. Central cell (fusion of polar nuclei) The polar nuclei were very similar to each other; they had a spherical to semispherical shape and a size of approximately 10.17 ± 1.48 μm in diameter. According to Tilton (1980), both nuclei have such similar size and morphology that it is difficult to distinguish them from each other (Fig. 4b); however, Maheshwari (1941) considered that the original nucleus of the micropylar end may become larger than the polar nucleus from the chalazal end. The distance between the polar nuclei decreased until they were beside each other, and their membranes came into contact and fused; sometimes a single nucleus with two nucleoli inside could be observed (Fig. 4b). Finally, as a result of the polar nuclei karyogamy, the nucleus of the central cell was generated (Fig. 4a). The nucleus of the central cell was of a semicircular or ovoid shape with an average size of 15.85 ± 1.11 μm long by 16.41 ± 1.21 μm wide. The nucleus of the central cell, as well as the polar nuclei, retained its polarity to the chalazal end of the sac (Fig. 4a). In the Agavaceae, this polarity was similarly observed in A. fourcroydes and A. angustifolia (Piven et al. 2001), A. lechuguilla (Grove 1941) and A.
tequilana (González-Gutiérrez et al. 2014) and Y. rupicola (Watkins 1937). However, according to Tilton (1978, 1980), the polar nuclei of the central cell of most angiosperms migrate toward the center of the sac, as in the case of maize (Huang and Sheridan 1994) and Arabidopsis thaliana (Olsen 2004). According to Maheswari (1950), the position of the nucleus of the central cell towards the chalazal end of the embryo sac is an indication that a helobial type of endosperm will develop once double fertilization has taken place. Egg apparatus The egg apparatus is located at the micropylar end of the embryo sac and is composed of three cells, two synergids and the egg cell (Fig. 4c). The synergids have very similar shapes; their nuclei are highly polarized towards the micropyle and a large vacuole is observed towards the chalazal end (Fig. 4d). Their average size is 16.76 ± 0.30 μm long and 12.92 ± 0.47 μm wide. One of their walls is in contact with the edge of the embryo sac; however, they are separated from each other by a small space (Fig. 4e), and it was possible to observe the filiform apparatus at the base of both synergids (Fig. 4e). The egg cell was highly polarized, with a dense nucleus at the chalazal end and the vacuole at the micropylar end (Fig. 4c). This polarity is found rather frequently in angiosperms, as in the case of Nicotiana tabacum (Mogensen and Suthar 1979; Tian et al. 2005); however, sometimes the nucleus can be located in the second third of the egg cell with a large number of small vacuoles distributed around it (Russell 1993). The dimensions of the egg cell were on average 25.96 ± 1.60 μm long and 22.89 ± 1.59 μm wide. Double fertilization The initiation of the process of double fertilization was observed in ovules collected at 6 DAP. The pollen tube, which remained attached to the integuments, could be observed at the micropyle.
The pollen tube made its way through the outer and inner integuments and then reached the micropylar end of the embryo sac through the cells of the uniseriate nucellar tissue to make contact with the cells of the egg apparatus within the embryo sac (Fig. 5). Further studies on the double fertilization process will be published elsewhere. Zygote formation and embryo development As a result of fertilization of the egg cell by one of the sperm nuclei, at 7 DAP it was possible to observe the formation of the zygote (Fig. 6a). The zygote showed a semi-spherical shape and an increase in size relative to that shown by the egg cell prior to fertilization. Zygote dimensions were 44.33 ± 1.28 μm long by 38.48 ± 1.33 μm wide. The zygote nucleus relocated to the chalazal end of the cell, as it was in the egg cell; however, prior to reaching its final position, the nucleus of the egg cell moves toward the center of the cell, putatively at the moment of its fertilization (to be published elsewhere). This polarity shown by the zygote of P. tuberosa is similar to that described in studies on zygotic embryogenesis performed in model plants such as Capsella bursa-pastoris (Schulz and Jensen 1968), Nicotiana tabacum (Mogensen and Suthar 1979) and A. thaliana (Mansfield and Briarty 1991; Mansfield et al. 1991). At 7 DAP it was also possible to observe how the zygote began to elongate and changed from hemispherical to oval shape, so that its dimensions along the longitudinal axis increased to 55.80 ± 1.15 μm (Fig. 6b) while the width of the zygote remained constant (38.85 ± 0.82 μm). The polarity of its nucleus remained oriented to the chalazal end of the cell. This elongation of the zygote is a common feature in angiosperms, which prepares the embryo for the first cell division. In the case of A.
thaliana (Mansfield and Briarty 1991) the zygote showed an elongation of approximately three times its size (in the apical-basal direction) before the first division, and in the case of A. tequilana this increase is one third of the original size of the zygote (González-Gutiérrez et al. 2014). Once the cell forming the zygote had elongated, it divided transversely to the chalazal-micropylar axis, resulting in two cells, the apical cell and the basal cell (8 DAP) (Fig. 6c). The transversal division of the zygote observed in P. tuberosa var. Simple occurred as in the vast majority of angiosperms (Rodríguez-Garay et al. 2000; Lau et al. 2012); however, this division can also occur longitudinally (Johri and Rao 1984) or obliquely, as in the case of wheat (Batygina 1978). The first division of the zygote generated an apical cell, which showed a large and highly condensed nucleus, and a basal cell with a large vacuole that covered virtually the entire space of the cell (Fig. 6c). This first division occurred asymmetrically, so the apical cell was usually smaller (19.81 ± 0.65 μm long by 23.79 ± 1.80 μm wide) than the basal cell (38.94 ± 2.11 μm long by 29.83 ± 2.85 μm wide), as in A. tequilana (González-Gutiérrez et al. 2014) and A. thaliana (Mansfield and Briarty 1991), where the development of the cell plate generates a smaller apical cell compared to the basal cell. In the samples analyzed at 8 DAP, the first division of the apical cell, which forms the embryonic head, was observed (Fig. 6d); this division, like the first division of the zygote, occurred transversely, contrary to what was reported for A. thaliana, where this division occurs longitudinally (Mansfield and Briarty 1991; Capron et al. 2009). The apical cell continued to divide, generating a four-celled embryo by a longitudinal division.
On the other hand, the basal cell, through a series of transversal divisions following the chalazal-micropylar axis, formed the embryonal suspensor, which at this stage of development was able to form the hypophysis from the first division of the basal cell (9 and 10 DAP) (Fig. 6e). By the 10th and 11th DAP the studied samples showed the formation of eight-celled embryos (Fig. 6f), similar to those described by Batygina (1978), where not only the first division of the zygote but all subsequent divisions of the embryo occurred obliquely to the chalazal-micropylar axis. The divisions of the embryo continued until the embryo reached the early globular stage (12 and 13 DAP) with approximately 16 cells (Fig. 6g), and finally the formation of globular embryos of probably 64 cells or more, a stage from which the protoderm differentiation can be observed (16 DAP) (Fig. 6h). Endosperm development Along with the changes presented in the zygote, as a consequence of fertilization of the central cell by the second sperm nucleus, the endosperm mother cell was generated, which underwent a first division transverse to the chalazal-micropylar axis, forming two cells: one cell confined to the area of the chalazal haustorium and a second cell located throughout the last two thirds of the embryo sac (Fig. 7a). The chalazal cell then followed a series of divisions, first of the nuclear type and then of the cellular type, forming a small chalazal chamber; meanwhile the micropylar nucleus generated a second, micropylar chamber, which is of a larger size and where endosperm divisions occur in a nuclear fashion, with most of the endosperm nuclei located at the periphery of the embryo sac (7 DAP) (Fig. 7b).
The formation of both chambers, the endosperm development of the nuclear type in the micropylar chamber (Bhojwani and Bhatnagar 1983; Floyd and Friedman 2000), as well as the aforementioned chalazal position of the central cell nucleus in the embryo sac, are typical features of the helobial type of endosperm (Maheshwari 1950); thus P. tuberosa var. Simple developed an endosperm that was classified as of the helobial type, like those reported in the species Hesperocallis undulata (Cave 1948) and A. tequilana (González-Gutiérrez et al. 2014). Endosperm development was observed at an early zygote stage, so it was possible to observe several endosperm divisions before the first division of the zygote took place (Fig. 7b). The endosperm development in P. tuberosa was similar to that of Amaranthus hypocondriacus (Coimbra and Salerma 1999), T. glutinosa (Holloway and Friedman 2008) and A. tequilana (González-Gutiérrez et al. 2014). The general shape of the embryo sac started to change: the walls of the sac moved toward the nucellar tissue and the sac lost its pyriform appearance, taking an ovoid shape slightly narrower at the chalazal end where the chalazal haustorial tube was originally placed (Fig. 7b). At 8 DAP important changes began to be observed. The embryo sac continuously changed its shape as a result of the increase in volume and divisions of the cells of the endosperm; the chalazal walls of the embryo sac pushed the chalazal nucellar tissue, generating two new haustoria which were divided by the postament, a tissue containing a set of thickened cells that formed the hypostase in the unfertilized ovule. Then a third haustorium was formed in the micropylar area of the embryo sac, where the embryo develops (Fig. 7c). The development of both chalazal and micropylar haustoria after fertilization is commonly observed in several species of the order Asparagales (Rudall 1997).
As stated before, the differentiation of the embryo protoderm could be observed at 16 DAP; however, even though endosperm cellularization was not observed, it might occur after this stage. Conclusions The normal development of the P. tuberosa var. Simple embryo sac follows a monosporic pattern of the Polygonum type and starts its development from the chalazal megaspore. At maturity, the embryo sac is of a pyriform shape with a chalazal haustorial tube where the antipodals are located, just below the hypostase, which connects the embryo sac with the nucellar tissue of the ovule. The central cell nucleus shows a high polarity, being located at the chalazal extreme of the embryo sac. Due to this particular characteristic, the second sperm nucleus has to travel a long distance in order to fertilize that nucleus. The position of cells inside the P. tuberosa embryo sac may be useful for in-depth studies of double fertilization. Furthermore, it was possible to make a chronological description of the events that happen from fertilization and early embryo development to the initial development of the endosperm, which was classified as of the helobial type. Authors' contributions AGGG carried out the microscope analyses, the acquisition of data, the analysis and interpretation of data and drafted the manuscript. BRG conceived and coordinated the study, carried out analysis and interpretation of data and drafted the manuscript. Both authors read and approved the final manuscript.
2018-04-03T00:22:37.001Z
2016-10-18T00:00:00.000
{ "year": 2016, "sha1": "b390c3d9a6026f0649f1ef43ad629ade95ecbadf", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s40064-016-3528-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b390c3d9a6026f0649f1ef43ad629ade95ecbadf", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
49241616
pes2o/s2orc
v3-fos-license
Enantioselective fluorination of α-branched aldehydes and subsequent conversion to α-hydroxyacetals via stereospecific C-F bond cleavage The highly enantioselective fluorination of α-branched aldehydes was achieved using a newly developed chiral primary amine catalyst. The solution was refluxed for 15 h under an argon atmosphere. The resulting mixture was poured into saturated aq. NH4Cl, and the whole mixture was filtered to remove the catalyst, then extracted with ethyl acetate. The organic extracts were dried over Na2SO4 and concentrated. The crude mixture was purified by silica gel column chromatography (hexane : CH2Cl2 = 5 : 1) to give a 75% yield of (R)-15 (white solid). To a solution of (R)-15 (1.46 mmol) and NiCl2(PPh3)2 (95.5 mg, 0.146 mmol, 10 mol%) in TBME (14.6 mL) was added a 3 M ethereal solution of MeMgI (2.92 mL, 8.76 mmol, 6 equiv) at 0 ºC. The solution was refluxed for 16 h under an argon atmosphere. This mixture was poured into ice-cooled 1 M HCl, and the whole mixture was filtered to remove the catalyst. The filtrate was poured into saturated aq. NaHCO3 and extracted with dichloromethane. The organic extracts were dried over Na2SO4 and concentrated. The crude mixture was purified by silica gel column chromatography (hexane : CH2Cl2 = 10 : 1) to give a 91% yield of (R)-16 (white solid). To a suspension of (R)-2a3 or (R)-2b (5.25 mmol), tetrabutylammonium hydrogen sulfate (356.5 mg, 1.05 mmol, 20 mol%) and K2CO3 (7.26 g, 52.5 mmol, 10 equiv) in CH3CN (105 mL) was added ethyl isocyanoacetate (688 μL, 6.30 mmol, 1.2 equiv) at 0 ºC. The solution was refluxed for 16 h under an argon atmosphere. The resulting mixture was filtered and the filtrate was concentrated. The residue was purified by column chromatography on silica gel to afford (R)-17. The solution was stirred at room temperature for 1 h under an argon atmosphere. The resulting mixture was poured into ice-cooled saturated aq. NaHCO3 and extracted with CH2Cl2.
The organic extracts were dried over Na2SO4 and concentrated. The residue was purified by column chromatography on silica gel to afford (R)-1. General procedure: Enantioselective fluorination of 3 was carried out according to the procedure described on page S6. After completion of the reaction, MeOH (2.64 mL)/NaOMe (1.32 mmol, 5 equiv) or ethylene glycol (2.64 mL)/NaH (1.32 mmol, 5 equiv) was added at 0 °C. The mixture was stirred at room temperature, then diluted by adding sat. NaHCO3 aq., and extracted with Et2O. The organic layer was dried over Na2SO4 and concentrated under reduced pressure. The crude product was purified by silica gel chromatography to give α-hydroxyacetals 10-12. MeI (0.408 mmol, 2 equiv) was added to the mixture, which was stirred for 60 min at 0 °C. The reaction was quenched by adding sat. NH4Cl aq. and extracted with Et2O. The organic layer was dried over Na2SO4 and concentrated under reduced pressure. The crude product was purified by silica gel chromatography to afford 19. The crude mixture was purified by silica gel column chromatography (hexane : ethyl acetate = 4 : 1) to give a 55% yield of 10k (pale yellow oil). 1H NMR measurement of the hemiacetal derived from 4: After fluorination of 3a, NaHCO3 aq. was added to the mixture, and the product was extracted with Et2O. The organic layer was dried over Na2CO3 and concentrated to give the crude mixture of 4a. 1H NMR measurement of 4a in CD3OD clearly showed the generation of the hemiacetal as a diastereomeric mixture (dr = 6 : 4).
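The reagent quantities in procedures like the Grignard step above can be cross-checked from molarity, volume and equivalents alone, using only the figures quoted in the text (no molecular weights assumed). A small sketch:

```python
# Cross-check of the Grignard step: (R)-15 (1.46 mmol) + 3 M MeMgI (2.92 mL, 6 equiv)
substrate_mmol = 1.46
memgi_molarity = 3.0          # mol/L
memgi_volume_ml = 2.92        # mL
memgi_mmol = memgi_molarity * memgi_volume_ml  # (mol/L) * mL = mmol
equivalents = memgi_mmol / substrate_mmol

print(f"MeMgI: {memgi_mmol:.2f} mmol = {equivalents:.1f} equiv")  # 8.76 mmol = 6.0 equiv

# Catalyst loading: 0.146 mmol NiCl2(PPh3)2 on 1.46 mmol substrate
catalyst_mol_percent = 0.146 / substrate_mmol * 100
print(f"Ni catalyst: {catalyst_mol_percent:.0f} mol%")  # 10 mol%
```

Both figures reproduce the values stated in the procedure (8.76 mmol, 6 equiv, 10 mol%).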
2018-06-17T01:00:38.877Z
2015-11-16T00:00:00.000
{ "year": 2015, "sha1": "1c4370ec167f933150a3c5432b1ad35250403baf", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2016/sc/c5sc03486h", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1c4370ec167f933150a3c5432b1ad35250403baf", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
1527183
pes2o/s2orc
v3-fos-license
Photoemission study of the electronic structure and charge density waves of Na2Ti2Sb2O The electronic structure of Na2Ti2Sb2O single crystal is studied by photon energy and polarization dependent angle-resolved photoemission spectroscopy (ARPES). The obtained band structure and Fermi surface agree well with the band structure calculation of Na2Ti2Sb2O in the non-magnetic state, which indicates that there is no magnetic order in Na2Ti2Sb2O and the electronic correlation is weak. Polarization dependent ARPES results suggest the multi-band and multi-orbital nature of Na2Ti2Sb2O. Photon energy dependent ARPES results suggest that the electronic structure of Na2Ti2Sb2O is rather two-dimensional. Moreover, we find a density wave energy gap forms below the transition temperature and reaches 65 meV at 7 K, indicating that Na2Ti2Sb2O is likely a weakly correlated CDW material in the strong electron-phonon interaction regime. Layered compounds of transition-metal elements often show interesting and novel electric and magnetic properties and have been studied extensively.
The discovery of basic superconducting layers, such as the CuO2 plane1 in cuprates and Fe2An2 (An = P, As, S, Se, Te) layers2 in iron based superconductors, has opened new fields in the physics and chemistry of layered superconductors. Recently another class of layered compounds, built from the alternate stacking of conducting octahedral Ti2Pn2O (Pn = Sb, As) layers and various charge reservoir layers [e.g., Na2, Ba, (SrF)2, (SmO)2], has attracted much attention3-17. Most notably, these compounds exhibit competing phases just as in cuprates and iron based superconductors. Both experiments and band calculations show that the ground states of Na2Ti2Sb2O (Refs. 6, 9, 18) and BaTi2Sb2O (Refs. 12, 13, 20, 21) are possible spin-density wave (SDW) or charge-density wave (CDW) phases, and the Na+ substitution of Ba2+ in NaxBa1-xTi2Sb2O suppresses the CDW/SDW and leads to superconductivity, whose critical temperature (Tc) can be as high as 5.5 K for x = 0.15 (Ref. 13). These layered compounds provide a new platform to study unconventional superconductivity. Na2Ti2Sb2O, a sister compound of BaTi2Sb2O, shows a phase transition at Ts ~ 115 K, characterized by a sharp jump in resistivity and a drop in spin susceptibility3. The microscopic mechanism of this phase transition has not been determined, but it has been suggested to arise from an SDW or CDW instability driven by the strongly nested electron and hole Fermi surfaces (Refs. 18-23). However, the nature of the phase transition and its correlation with the superconductivity are still unknown. A recent DFT calculation23 predicted possible SDW instabilities in Na2Ti2Pn2O (Pn = As, Sb), and more specifically that the ground states of Na2Ti2Sb2O and Na2Ti2As2O are a bi-collinear antiferromagnetic semimetal and a novel blocked checkerboard antiferromagnetic semiconductor, respectively.
An optical study24 reveals a significant spectral change across the phase transition and the formation of a density-wave-like energy gap. However, one cannot distinguish whether the ordered state is CDW or SDW, since both states have the same coherence factor. To date, the experimental electronic structure of Na2Ti2Sb2O has not been reported, which is critical for understanding the nature of the density waves in these compounds. In this article, we investigate the electronic structure of Na2Ti2Sb2O with angle-resolved photoemission spectroscopy (ARPES). Our polarization and photon energy dependent studies reveal the multi-orbital and weakly three-dimensional nature of this material. The obtained band structure and Fermi surface agree well with the band structure calculation of Na2Ti2Sb2O in the non-magnetic state, which indicates that there is no magnetic order in Na2Ti2Sb2O and the electronic correlation is weak. Temperature dependent ARPES results reveal that a density wave energy gap forms below the transition temperature and reaches 65 meV at 7 K, indicating that Na2Ti2Sb2O is likely a weakly correlated CDW material in the strong electron-phonon interaction regime. Results Band Structure. The electronic structure of Na2Ti2Sb2O at 15 K is presented in Fig. 1. Photoemission intensity maps are integrated over a [EF - 10 meV, EF + 10 meV] window around the Fermi energy (EF), as shown in Figs. 1(a) and 1(b). The azimuth angle of the sample in Fig. 1(b) was rotated by 45° compared with that in Fig. 1(a); there are subtle spectral weight differences between the two Fermi surface maps due to matrix element effects. The observed Fermi surface consists of four square-shaped hole pockets (a) centered at X and four similar electron pockets (c) centered at M. The electronic structure around Γ is more complicated, consisting mainly of a diamond-shaped electron pocket (b) and a four-leaf-clover-like electron pocket (b′).
The extracted Fermi surface from the photoemission intensity map and the theoretically predicted Fermi surface are shown in Figs. 1(c) and 1(d), and agree well with each other. The calculated Fermi surface of Na2Ti2Sb2O in the non-magnetic state was taken from Ref. 23. The Fermi pockets centered at X and M show multiple parallel sections, providing a possible Fermi surface nesting condition for density wave instabilities, as suggested in previous first-principles calculations23 (Figs. 1(e2) and 1(f2)). Taking two distinct bands d and g as examples, the renormalization factors are very close to 1 for both bands, suggesting the weakly correlated character of Na2Ti2Sb2O. Figs. 1(g) and (h) show the low energy electronic structure along the Γ-M and Γ-X directions together with their second derivative spectra. The band structure indicated by the dashed curves in Figs. 1(g2) and (h2) is resolved by tracking the local minimum locus in the second derivative of the ARPES intensity plot with respect to energy. A weak but dispersive electron band can be resolved around the M point, with its band bottom located at the top of a hole-like band d. Two nearly coincident electron-like bands (b and b′) can be resolved around the Γ point at certain photon energies along the Γ-M direction, while there is only one electron-like band b crossing EF near Γ along the Γ-X direction. A hole-like band a crosses EF and forms the square-shaped pockets around X. The overall measured electronic structure of Na2Ti2Sb2O agrees well with the calculations, and the near-unity renormalization factor suggests that the ground state of Na2Ti2Sb2O is non-magnetic and the correlation is weak. Polarization Dependence. The electronic structure of Na2Ti2Sb2O near EF is mainly contributed by Ti 3d orbitals, which is similar to the case of iron based superconductors.
We conducted polarization dependent photoemission spectroscopy measurements to resolve the possible multi-orbital nature of Na2Ti2Sb2O. The experimental setup for polarization-dependent ARPES is shown in Fig. 2(a). The incident beam and the sample surface normal define a mirror plane. For the s (or p) experimental geometry, the electric field of the incident photons is out of (or in) the mirror plane. The matrix element for the photoemission process could be described as M ∝ |⟨Ψf^k|ê·r|Ψi^k⟩|². Since the final state Ψf^k of photoelectrons could be approximated by a plane wave with its wave vector in the mirror plane, Ψf^k is always even with respect to the mirror plane in our experimental geometry. In the s (or p) geometry, ê·r is odd (or even) with respect to the mirror plane. Thus, considering the spatial symmetry of the Ti 3d orbitals, when the analyzer slit is along the high-symmetry directions, the photoemission intensity of a specific even (or odd) component of a band is only detectable with p (or s) polarized light. For example, with respect to the mirror plane (the xz plane), the even orbitals (d_xz, d_z², and d_x²−y²) and the odd orbitals (d_xy and d_yz) can only be observed in the p and s geometries, respectively. The photoemission intensity plots of Na2Ti2Sb2O along the Γ-M and Γ-X high symmetry directions are shown in Fig. 2. The incident C+ light is a mixture of both the p and s polarizations, so all the bands of any specific orbital can be seen with the C+ incident light. The b band at Γ is absent in the s geometry along the Γ-M direction and visible in both geometries along the Γ-X direction, which may be attributed to the Ti d_xz orbital. The electron band c only shows up in the s geometry at the M point, exhibiting its odd nature with respect to the mirror plane, which may be attributed to the d_yz and/or d_xy orbital.
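The selection-rule bookkeeping described above can be sketched mechanically: with the xz mirror plane, reflection sends y to −y, so an orbital's parity follows from the power of y in its angular part. The function and dictionary names below are illustrative, not from the paper:

```python
# Parity of Ti 3d orbitals under reflection through the xz mirror plane (y -> -y),
# following the selection-rule argument above: even orbitals appear in the p
# geometry, odd orbitals in the s geometry.
ORBITAL_Y_POWER = {          # power of y in the orbital's angular part
    "d_xz":    0,            # x*z        -> even
    "d_z2":    0,            # 3z^2 - r^2 -> even (r^2 is even in y)
    "d_x2-y2": 2,            # x^2 - y^2  -> even
    "d_xy":    1,            # x*y        -> odd
    "d_yz":    1,            # y*z        -> odd
}

def detecting_geometry(orbital):
    """Return which light polarization geometry detects the orbital
    when the analyzer slit lies in the xz mirror plane."""
    even = ORBITAL_Y_POWER[orbital] % 2 == 0
    return "p (even)" if even else "s (odd)"

for orb in ORBITAL_Y_POWER:
    print(f"{orb:8s} -> {detecting_geometry(orb)}")
```

This reproduces the grouping stated in the text: d_xz, d_z² and d_x²−y² are even (p geometry), while d_xy and d_yz are odd (s geometry).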
The hole-like band at X point is not as pure: it is visible in the p geometry along the Γ-X direction and hardly seen in the s geometry, so it may be a mixture of different Ti 3d orbitals. In general, Na2Ti2Sb2O exhibits obvious polarization dependence, resembling the multi-band and multi-orbital band structure of iron pnictide superconductors25. Kz Dependence. The calculated electronic structure of Na2Ti2Sb2O shows a typical two-dimensional character in the nearly kz-independent Fermi surface sheets around the X and M points, while the electronic structure exhibits significant kz dispersion at the Γ point22,23. To study the three-dimensional character of the electronic structure of Na2Ti2Sb2O, we conducted photon energy dependent experiments with circularly polarized photons. The measured band structures along the two high-symmetry directions (Γ-M and Γ-X) at different photon energies are presented in Fig. 3, with a typical periodicity of about 14 eV in photon energy. The Fermi momentum of b reaches its minimum at 90 eV photon energy, then increases with increasing photon energy, and reaches its maximum at 104 eV. On the contrary, the Fermi momentum of the a band reaches its maximum and minimum at 90 eV and 104 eV, respectively. Consistent with the measured Fermi surface, there is only one electron band near Γ along the Γ-X direction (labeled b), while we can clearly observe two electron bands along the Γ-M direction (labeled b and b′). The Fermi crossings of b and b′ show negligible photon energy dependence along Γ-M, while the relative intensity of b and b′ changes with photon energy. For instance, at 104 eV the b′ intensity is high while the b intensity is low. With increasing photon energy, the intensity of b′ decreases while that of b increases, reaching their minimum and maximum at 118 eV, respectively. The relative intensity, rather than the Fermi crossing, shows distinct photon energy dependence for b and b′.
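The conversion from photon energy to kz behind such measurements is conventionally done with a free-electron final-state model, kz = sqrt(2m(Ekin + V0))/ħ at normal emission. The paper does not quote its inner potential or work function, so the values of V0 and the work function below are illustrative assumptions only; the sketch just shows how the photon energies highlighted in the text map to kz:

```python
import math

def kz_normal_emission(hv_eV, work_function_eV=4.5, binding_eV=0.0, V0_eV=12.0):
    """Free-electron final-state estimate of kz (in 1/Angstrom) at normal emission:
    kz = 0.5123 * sqrt(Ekin + V0), energies in eV, 0.5123 ~ sqrt(2*m_e)/hbar.
    The work function and inner potential V0 are illustrative assumptions,
    not values determined in this work."""
    ekin = hv_eV - work_function_eV - binding_eV
    return 0.5123 * math.sqrt(ekin + V0_eV)

# Photon energies highlighted in the text
for hv in (90, 104, 118):
    print(f"hv = {hv:3d} eV -> kz ~ {kz_normal_emission(hv):.2f} 1/A")
```

With any reasonable V0, the 90-118 eV range sweeps kz by well under 1 Å⁻¹, which is why poor kz resolution in the vacuum ultraviolet range can smear a fast-dispersing band, as discussed below for the Γ point.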
For the c band near the M point, the Fermi momentum shows weak kz dispersion, with its minimum and maximum at 104 eV and 118 eV, respectively. The theoretically predicted Fermi surface of Na2Ti2Sb2O shows cylindrical Fermi sheets near M and X and a strongly kz-dependent Fermi sheet near Γ22,23; our photoemission data confirm the two-dimensional character of the electronic structure at X and M. The weak photon energy dependence of the electronic structure at Γ is not consistent with the calculation, and this discrepancy may be due to the poor kz resolution of our ARPES experiment in the vacuum ultraviolet photon energy range. It is known that poor kz resolution would largely smear out the dispersive information along kz for a fast-dispersing band, as likely observed here. Formation of the Density Wave Energy Gap. In the conventional picture of a density wave transition, the formation of electron-hole pairs with a nesting wave vector connecting different regions of the Fermi surface leads to the opening of an energy gap. In charge-density wave systems such as 2H-TaS2, strong electron-phonon interactions can cause an incoherent polaronic spectral lineshape, and large Fermi patches instead of a clear-cut Fermi surface26. Anomalous temperature dependent spectral weight redistribution and a broad lineshape with incoherent character were reported in BaTi2As2O (Ref. 27), an iso-structural compound of Na2Ti2Sb2O. It was found that a partial energy gap opening at the Fermi patches, instead of Fermi surface nesting, is responsible for the CDW in BaTi2As2O. The detailed temperature dependence of the low energy electronic structure of Na2Ti2Sb2O is presented in Fig. 4. The Fermi surface topologies of Na2Ti2Sb2O at 150 K and 7 K are rather similar, but a dramatic spectral weight change can be observed around the X point.
At 150 K, above the phase transition temperature of 115 K, the spectral weight around X is quite strong compared with that around the Γ point. At 7 K, well below the transition, the spectral weight near X is obviously suppressed, while it is slightly enhanced near Γ. Fig. 4(c) shows the symmetrized spectra along Γ-X. The band dispersions look much alike at both temperatures, but an energy gap opens at the X point upon entering the CDW/SDW state at 7 K. To track the gap opening behavior more precisely, we followed the EDCs at the Fermi crossing of the α band. The density of states near E_F is obviously suppressed with decreasing temperature [Fig. 4(d)]; an energy gap opens at 113 K, just below the 115 K phase transition temperature of Na2Ti2Sb2O [Fig. 4(e)]. The gap size increases with decreasing temperature, following the typical BCS form [Fig. 4(f)]. It saturates at low temperature, with a maximum of about 65 meV at 7 K, giving a large ratio of 2Δ/k_B·T_s ≈ 13. An optical study [24] found 2Δ/k_B·T_s ≈ 14, consistent with our result. Such a large ratio indicates that this density wave system is in the strong electron-phonon coupling regime [27]. Intriguingly, the photoemission spectrum of the electron band β around Γ shows a broad lineshape without a sharp quasiparticle peak near E_F, and its spectral weight increases slightly with decreasing temperature [Fig. 4(g)]. Furthermore, the peak position moves slightly toward E_F with decreasing temperature. The spectral weight enhancement of the β band evolves gradually with decreasing temperature, indicating that it is not related to the density wave transition around 115 K. Compared with the obvious gap opening at X, it is safe to conclude that no gap opens near Γ.
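The quoted coupling ratio and the BCS-like temperature dependence can be checked in a few lines. The tanh expression below is the standard interpolation to the BCS gap equation used for such comparisons, not a fit performed in this work.

```python
import math

K_B = 0.08617  # Boltzmann constant in meV/K

def coupling_ratio(delta0_meV, ts_K):
    """Dimensionless gap ratio 2Δ0 / (k_B T_s)."""
    return 2.0 * delta0_meV / (K_B * ts_K)

def bcs_gap(t_K, ts_K, delta0_meV):
    """Standard tanh interpolation to the BCS gap equation:
    Δ(T) ≈ Δ0 tanh(1.74 sqrt(T_s/T - 1)) for T < T_s, else 0."""
    if t_K >= ts_K:
        return 0.0
    return delta0_meV * math.tanh(1.74 * math.sqrt(ts_K / t_K - 1.0))

print(f"2Δ/kBTs ≈ {coupling_ratio(65.0, 115.0):.1f}")   # ~13, cf. BCS value 3.53
print(f"Δ(7 K) ≈ {bcs_gap(7.0, 115.0, 65.0):.1f} meV")  # saturated near Δ0
```

With Δ0 = 65 meV and T_s = 115 K the ratio evaluates to about 13, far above the weak-coupling BCS value of 3.53, and Δ(T) from the interpolation is essentially fully saturated by 7 K.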
Considering the theoretical prediction that the Fermi sheets at X and M contain multiple parallel sections, it is natural to deduce that Fermi surface nesting may occur between these parallel sections. Owing to matrix element effects, the spectral weight near M is extremely weak for data taken with 21.2 eV photons, so we could not follow the temperature dependence there. In the sibling compound BaTi2As2O [27], spectral weight transfer over a large energy scale, with broad lineshapes, was reported: with decreasing temperature, some parts of the bands are suppressed through the CDW transition while other parts are enhanced. A similar large-scale spectral weight redistribution was previously observed in Sr2CuO2Cl2 [29] and explained by multiple initial/final states induced by strong coupling between electrons and bosons. In Na2Ti2Sb2O, the electron band β around Γ [Fig. 4(g)] and the hole band α around X [Fig. 4(d)] both show broad lineshapes without a sharp quasiparticle peak near E_F, with a typical full width at half maximum (FWHM) of about 100-150 meV. Such behavior is a typical signature of polaronic systems, for example La1.2Sr1.8Mn2O7 [30] and K0.3MoO3 [31], where the weight of the quasiparticle peak is vanishingly small and its dispersion is renormalized to the vicinity of the Fermi surface. Similar to BaTi2As2O, some bands in Na2Ti2Sb2O (the α band) are significantly suppressed with decreasing temperature while others (the β band) are slightly enhanced, through which the total electronic energy is lowered across the CDW transition. Unlike BaTi2As2O, however, we observe a clear CDW gap and a possible Fermi surface nesting condition in Na2Ti2Sb2O, which favors the traditional Fermi-surface-nesting mechanism.
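As a reminder of what nesting between parallel Fermi-surface sections means, the textbook example is a half-filled nearest-neighbour square-lattice band, whose entire Fermi surface maps onto itself under a single wave vector Q = (π, π). This is a generic illustration only, not the actual Na2Ti2Sb2O dispersion.

```python
import math

def eps(kx, ky, t=1.0):
    """Half-filled nearest-neighbour square-lattice band (illustrative)."""
    return -2.0 * t * (math.cos(kx) + math.cos(ky))

Q = (math.pi, math.pi)  # candidate nesting vector

# perfect nesting: eps(k + Q) = -eps(k) for every k, so all points with
# eps(k) = 0 (the Fermi surface) are connected pairwise by the same Q,
# which maximizes the electron-hole susceptibility at that wave vector
for kx, ky in [(0.3, 1.1), (2.0, -0.7), (1.234, 0.5)]:
    assert abs(eps(kx + Q[0], ky + Q[1]) + eps(kx, ky)) < 1e-12
print("eps(k+Q) = -eps(k) for all sampled k: perfect nesting at Q = (π, π)")
```

For the quasi-2D sheets at X and M the nesting is only partial, but the same condition, large stretches of Fermi surface connected by one Q, is what drives the gap opening in the conventional picture.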
Na2Ti2Sb2O is thus an intriguing combination of a traditional CDW material and a polaronic material, in which Fermi surface nesting and polaronic behavior are present at the same time.

Discussion

It is crucial to understand the nature of the phase transition in the parent compounds of the newly discovered titanium-based oxypnictide superconductors, an essential step towards a thorough understanding of their superconducting mechanism. An SDW origin of the instability would favor unconventional superconductivity with a possibly sign-changing s-wave pairing, while a CDW origin would suggest more conventional superconductivity with a simple s-wave pairing. Previous experimental and theoretical studies have generated much controversy over the nature of the possible density wave transition. Our photoemission results are consistent with a density wave origin of the phase transition in Na2Ti2Sb2O. Moreover, considering the qualitative agreement between the experimental results and the electronic structure calculated for the nonmagnetic state [23], it is reasonable to deduce that the transition in Na2Ti2Sb2O is possibly a conventional CDW transition. Although further low-temperature ARPES or STM experiments are certainly needed to reveal the exact nature of the superconducting samples, one can speculate that the superconductivity in NaxBa1-xTi2Sb2O [13] is likely due to electron-phonon interactions, as in NbSe2 [28]. In summary, our experimental band structure agrees qualitatively well with the calculation for the nonmagnetic state [23], excluding possible magnetic order in Na2Ti2Sb2O. Na2Ti2Sb2O shows an obvious multi-band, multi-orbital character resembling the iron-based superconductors. The electron band at M and the hole band at X show weak k_z dispersion, consistent with the layered crystal structure.
We observe a large density wave gap of 65 meV forming near the X point at 7 K, indicating that Na2Ti2Sb2O is likely a CDW material. The weak renormalization of the overall band structure indicates weak electron-electron correlation, while the broad lineshape, the large energy gap, and the spectral weight transfer suggest that the system is in the strong electron-phonon interaction regime.

Methods

Sample synthesis. Single crystals of Na2Ti2Sb2O were synthesized by the self-flux method. A mixture of Na, Sb, Ti and Ti2O3 with a molar ratio of 18:18:1:4 was prepared and placed into an aluminum oxide crucible sealed inside a Ta tube. The mixture was gradually heated to 800 °C and quenched to room temperature. Afterwards the mixture was heated at 1100 °C for 2 hours and cooled to 500 °C at 5 °C/hour before being quenched to room temperature.

ARPES measurement. The polarization- and photon-energy-dependent ARPES data were taken at the surface and interface spectroscopy beamline of the Swiss Light Source (SLS). The temperature-dependent ARPES data were taken with an in-house setup at Fudan University. All data were collected with Scienta R4000 electron analyzers. The overall energy resolution was 15 meV or better, and the typical angular resolution was 0.3°. The samples were cleaved in situ and measured under ultrahigh vacuum better than 3 × 10⁻¹¹ mbar.
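The quoted angular resolution translates into an in-plane momentum resolution via Δk∥ ≈ 0.5123·sqrt(E_kin)·Δθ (with Δθ in radians). A quick sketch of that arithmetic follows; the 4.5 eV work function is an assumed typical value, not one reported here.

```python
import math

def dk_parallel(e_kin_eV, dtheta_deg):
    """In-plane momentum resolution (Å⁻¹) from angular resolution:
    k∥ = 0.5123 sqrt(E_kin) sin(θ), so near normal emission
    Δk∥ ≈ 0.5123 sqrt(E_kin) Δθ with Δθ in radians."""
    return 0.5123 * math.sqrt(e_kin_eV) * math.radians(dtheta_deg)

# He-Iα photons (21.2 eV) with an assumed 4.5 eV work function
e_kin = 21.2 - 4.5
print(f"Δk ≈ {dk_parallel(e_kin, 0.3):.4f} 1/Å")
```

At 21.2 eV this gives on the order of 0.01 Å⁻¹, a small fraction of a typical Brillouin-zone dimension, so the in-plane resolution is not the limiting factor discussed in the text; the k_z resolution is.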
Suspension syndrome: a scoping review and recommendations from the International Commission for Mountain Emergency Medicine (ICAR MEDCOM)

Background Suspension syndrome describes a multifactorial cardio-circulatory collapse during passive hanging on a rope or in a harness system in a vertical or near-vertical position. The pathophysiology is still debated controversially. Aims The International Commission for Mountain Emergency Medicine (ICAR MedCom) performed a scoping review to identify all articles with original epidemiological and medical data to understand the pathophysiology of suspension syndrome and develop updated recommendations for the definition, prevention, and management of suspension syndrome. Methods A literature search was performed in PubMed, Embase, Web of Science and the Cochrane Library. The bibliographies of the articles eligible for this review were additionally screened. Results The online literature search yielded 210 articles; scanning of the references yielded another 30 articles. Finally, 23 articles were included in this work. Conclusions Suspension syndrome is a rare entity. A neurocardiogenic reflex may lead to bradycardia, arterial hypotension, loss of consciousness and cardiac arrest. Concomitant causes, such as pain from being suspended, traumatic injuries and accidental hypothermia, may contribute to the development of suspension syndrome. Preventive factors include using a well-fitting sit harness, which does not cause discomfort while being suspended, and activating the muscle pump of the legs. Expediting help to extricate the suspended person is key. In a peri-arrest situation, the person should be positioned supine and standard advanced life support should be initiated immediately. Reversible causes of cardiac arrest caused or aggravated by suspension syndrome, e.g., hyperkalaemia, pulmonary embolism, hypoxia, and hypothermia, should be considered.
In the hospital, blood tests and further examinations should assess organ injuries caused by suspension syndrome.

Background

Suspension syndrome describes a multifactorial cardiocirculatory collapse during passive hanging on a rope or in a harness system in a vertical or near-vertical position [1-3]. Although numerous cases have been reported, the incidence of suspension syndrome is not known [1,4-7]. Since the first presentation of a case series in 1972 [2], its pathophysiology has been debated controversially [3,8]. A widespread hypothesis assumes that blood pooling in the lower limbs leads to a reduction in cardiac preload, a consecutive decrease in cardiac output, tissue hypoperfusion and lastly loss of consciousness and cardiac arrest [2,8,9]. However, no study has ever proven this hypothesis, and recent studies suggest a neurocardiogenic mechanism [10,11]. The best measures for immediate aid by first responders are still debated, and some recommendations advise against placing a casualty in a supine position after rescue from suspension, hypothesising an acute right ventricular volume overload due to blood returning from the legs [2,4,12-14]. However, this hypothesis has also never been proven and is based on 'expert opinion' only [15]. The aim of the International Commission for Mountain Emergency Medicine (ICAR MedCom) was to perform a scoping review to identify all articles with original epidemiological and medical data, to understand the pathophysiology, and to develop updated recommendations for the definition, prevention, and management of suspension syndrome.
Methods

This work comprises two distinct components: the first is a scoping review, and the second entails recommendations that were formulated through discussions within ICAR MedCom. We conducted a literature search in PubMed, Embase, Web of Science, and the Cochrane Library. We incorporated all articles available in the databases up to September 17, 2023 found with the keywords "suspension syndrome", "suspension trauma", "harness hang syncope", "harness hang syndrome", "rescue death", "harness suspension" and "harness syndrome" using the conjunction OR (RL), in order to identify all articles with epidemiological or original pathophysiologic data (e.g., vital signs, laboratory parameters, objective signs and symptoms). We also searched the bibliographies of articles relevant to this review and articles from the authors' personal databases (all). We excluded duplicates, articles on different scientific topics (not eligible), articles that could not be retrieved in full text (not available), case reports without original data, reviews (no original data), articles in languages other than the authors' own (i.e., English, German, Italian), trial registrations, conference abstracts, letters, editorials, and short communications (SR, RL). Data were extracted independently by two authors (SR, RL); any disagreements were resolved by discussion among the authors. Data were critically appraised using the National Heart, Lung and Blood Institute (NHLBI) Study Quality Assessment Tools (SR, RL) [16]. The results pertinent to this work are synthesized in Table 1. The contents of the manuscript were developed by the author group and discussed within ICAR MedCom. Secondly, based on prior recommendations [17-22], the author group developed recommendations, presented them to all four commissions (i.e., air rescue, avalanche rescue, medical and terrestrial rescue) of the ICAR, and discussed them at the 2019 ICAR meeting. The revised form was
circulated in the ICAR MedCom list server for final review. Finally, the manuscript was approved for submission to a peer-reviewed journal. Based on our findings, we developed recommendations on the prevention, diagnosis and treatment of suspension syndrome and graded them according to the Grading System of the American College of Chest Physicians [23].

Literature search

Of the 121 studies returned from the search, 70 met inclusion criteria after screening of titles and abstracts. On full-text review, 47 studies were excluded (Fig. 1); finally, 23 articles were included (Table 1).

Description of studies

The 23 included studies were divided into two themes based on their characteristics: epidemiology and pathophysiology. Two studies reported epidemiological data from surveys, while 21 studies were on pathophysiology. Studies on pathophysiology ranged from autopsy studies [1] to manikin [1] and small interventional/observational [14] human studies, to case series or reports including pathophysiological data [5]. Some studies on pathophysiology also assessed the effect of different harness types.
Epidemiology

Exact data on the epidemiology of suspension syndrome are lacking. Since the introduction of sit harnesses, death from suspension syndrome has become very rare. In an industrial setting, no fatalities or syncopal events have been published; this was confirmed by a negative enquiry to a large trade body in the UK by Seddon in 2002, which found no suspension syndrome within 5.8 million on-rope hours by rope access technicians qualified by IRATA (International Rope Access Trade Association) [4]. A small survey among height rescue organizations identified 3 cases of suspension syndrome, but without stating observational periods or describing symptoms or severity [32]. It is postulated that awareness, health and safety regulations and training in extrication techniques have mitigated against serious adverse events.

Pathophysiology

Until recently, a widespread hypothesis assumed that blood pooling in the lower limbs prompts a reduction in cardiac preload and subsequently a decrease in cardiac output and tissue perfusion, eventually leading to loss of consciousness, liver and kidney injury [15] and cardiac arrest [2]. However, an unequivocal causal relationship between blood pooling and cardiac arrest has never been proven, and more recent studies support the hypothesis that a neurocardiogenic mechanism is responsible for the sudden reduction in cerebral perfusion and loss of consciousness in suspension syndrome. Experimental studies in healthy participants hanging in a harness system showed venous pooling starting from bigger veins and progressively involving small venous vessels in the lower extremities [11,38]. However, despite this sequestration of blood in the lower limbs due to the force of gravity and the absence of muscle activity, no relevant effects on systemic hemodynamic parameters were found (i.e., heart rate, blood pressure, stroke volume), apart from occasional mild tachycardia and hypertension attributed to discomfort and sympathetic activation [10,11,15,24,27,28,37]. This persisted until a sudden drop in heart rate, blood pressure and stroke
volume, similar to a neurocardiogenic syncope with a vasodilatory and cardio-inhibitory response, which led to decreased cerebral oxygenation [10,11,26-28,36-38]. Simultaneously, symptoms of pre-syncope such as dizziness, light-headedness, pale skin, sweating, blurred vision and nausea occurred [11,25,27,38]. The absence of the distinct compensatory tachycardia and reduced stroke volume that are usually seen when cardiac preload is significantly reduced suggests that the traditional pathophysiological hypothesis of suspension syndrome described above is incorrect. However, the exact mechanism leading to the neurocardiogenic reflex, and the role of orthostatic stress (i.e., blood pooling) in this context, are unknown [40]. No change in baroreceptor sensitivity, a possible mechanism leading to a neurocardiogenic syncope, was found in the minutes before the pre-syncope [11]. The Bezold-Jarisch reflex, a cardioinhibitory reflex causing bradycardia, vasodilation, and hypotension, could be implicated in suspension syndrome. However, the Bezold-Jarisch reflex originates from inhibitory mechanoreceptors in the left ventricle stimulated by poor filling, which was not found in experimental studies [11]. Altered sensation in the lower limbs, pain and the inability to move the legs, likely caused by compression of the sciatic nerve by the harness straps [11,26], could significantly contribute to the development of the neurocardiogenic syncope [41]. During syncope, cerebral perfusion is quickly restored after a fall to the horizontal position. In contrast, hanging motionless on a rope precludes a horizontal position after loss of consciousness, whether the attachment point is at the level of the hip, as for a sit harness, or at a higher level, as with a chest harness (Fig.
2). The resulting positions can lead to a further decrease of organ perfusion that can be followed by death. Moreover, in the scenario of an unconscious patient suspended on a rope, airway obstruction caused by hyperflexion or hyperextension of the neck can further contribute to a potentially fatal outcome [42]. Studies showed an inter-individual variability in the time to occurrence of the first symptoms of pre-syncope: this can be as short as a few minutes but can be up to several hours [2,10,25,33]. In addition to the unpredictable pathophysiological course, the symptoms of pre-syncope can also occur suddenly, without a prodromal stage [11]. The level of pain might be an important factor contributing to the (time to the) vagal event [11,26]. Gender does not seem to have any effect, whereas increased body weight seems to decrease the time suspension is tolerated [31,33]. Exercise before suspension could increase susceptibility and decrease the time until suspension syndrome occurs [11]. Prolonged suspension can have additional effects. Tissue hypoperfusion and hypoxia may lead to cell lysis, particularly in muscle tissue [7,35,36]. Rhabdomyolysis may cause hyperkalaemia, which may lead to ventricular arrhythmia as well as acute kidney injury. Accidental hypothermia can be associated with prolonged hanging and can further affect the course of suspension syndrome. Thromboembolic events have been associated with suspension syndrome due to venous stasis but were not associated with any of the deaths [30]. Paraesthesia and other neurologic lesions have been reported [2,25].
Harness types

The type of harness has an influence on hanging. When hanging in a belt (without leg loops) or in a chest harness alone, very severe pain, pressure paralysis of the brachial nerves and a marked decrease in cardiopulmonary parameters due to restricted chest and diaphragm motion can occur rapidly; this was associated with a higher incidence of suspension syndrome than seen today [24-26,29]. These types of attachment are therefore considered obsolete and must be distinguished from suspension syndrome in modern sit harnesses, which became increasingly widespread from the early 1970s onwards.

Fig. 2 Hanging motionless on a rope precludes a horizontal position after loss of consciousness, whether the attachment point is at the level of the chest (left) or hip (right)

Suspension in harnesses with a dorsal attachment, as often used in industrial climbing, is also associated with lower suspension tolerance in some, but not all, studies [36]. The hanging position is more vertical, and femoral vein compression, which may reduce venous return, is possible [39]. In addition, self-rescue is more difficult. However, the dorsal attachment is fundamentally different from the harness systems used in speleology and mountain sports. Anthropometric data, suspension angle and suspension tolerance from such harness systems can therefore not be easily transferred to sit harness systems with ventral attachment points [33]. An improperly fitted and sized harness can increase the risk of developing suspension syndrome, for example by causing a pain-triggered vagal syncope due to nerve compression [34].
Discussion

To the best of our knowledge, this is the first scoping review on suspension syndrome. The literature search identified surveys on the epidemiology of suspension syndrome and studies delving into its pathophysiology. Although precise and up-to-date epidemiological data remain elusive, we uncovered several studies of fair quality shedding light on the condition's pathophysiological aspects. In essence, suspension is shown to cause venous pooling, yet this does not appear to trigger significant hemodynamic repercussions; the loss of consciousness can be attributed to a neurocardiogenic mechanism. Notably, comprehensive and credible data are exclusively available from experimental human studies, as randomized controlled interventional studies are absent. These experimental studies predominantly center on the underlying pathogenetic mechanisms during the early stages of suspension syndrome. Ethical considerations constrain the evaluation of prolonged exposure beyond the onset of pre-syncopal symptoms and signs, leaving the outcomes of later stages, including cardiac arrest, shrouded in uncertainty. Our scoping review did not yield experimental studies on the prevention, diagnosis, and treatment of suspension syndrome. Nevertheless, given the clinical significance of these aspects, we next provide discussions informed by the pathophysiological data gleaned from the scoping review and by treatment recommendations for other conditions, e.g., cardiac arrest. Furthermore, it is worth noting that the terminology, definition, and classification of suspension syndrome in the literature display significant heterogeneity. As a result, we propose a framework to address these aspects.
Definitions and classification

All terms containing "harness", such as "harness hang syncope" or "harness syndrome", should be avoided, as being in a harness is not a prerequisite for developing suspension syndrome. Likewise, terms containing "trauma", such as "suspension trauma", are inappropriate, because often there is no, or only minimal, trauma. The term suspension syndrome includes (near) syncope, death on the rope, death after rescue and cases of subacute renal failure from rhabdomyolysis after rescue (Table 1). We propose a classification of suspension syndrome into acute and subacute forms, based on the severity of the symptoms, the differing pathophysiology, and the time course. None of the signs and symptoms must be attributable to any other medical condition (e.g., trauma, hypothermia, hypoglycemia).

Diagnosis and treatment

As suspension syndrome usually occurs in terrain where there is a risk of falling, it is important to ensure rescuer safety. The overall treatment principles follow the normal management of vaso-vagal symptoms and, if critical, standard advanced life support (ALS) protocols [43,44]. Treatment of pre-syncope and syncope should be initiated immediately on extrication of the patient from the rope. The horizontal supine position, to improve cerebral blood flow, is recommended. Previous recommendations suggested not laying a patient down abruptly, but there is no evidence to support this [1,4,9,45,46]. If victims cannot be immediately extricated from the rope, preventive measures, as described below, should be performed. For an early diagnosis and prompt intervention, it is important to know and pay attention to the typical pre-syncopal signs and symptoms: light-headedness, dizziness, confusion, pale skin, cold sweating, warmth/hot flashes, blurred vision or nausea, and bradycardia [11,31,36,47]. Orthostatic tolerance varies individually, and symptoms of pre-syncope may occur at any time during passive hanging on a rope.
First diagnostic measures on site after recovery from hanging include monitoring of pulse and respiration, blood pressure, peripheral oxygen saturation (SpO2) and electrocardiogram (ECG). The ECG should be analysed for signs of hyperkalaemia (peaked T waves, flat/absent P waves, broad QRS, sine waves) and arrhythmias (bradycardia, ventricular fibrillation) [44]. Portable ultrasound could be useful for the detection of deep venous thrombosis or right ventricular dilation from pulmonary embolism, and for extended focused assessment with sonography for trauma (eFAST) [48-50]. Continuous full monitoring, including body core temperature, after recovery from hanging and during transport is recommended. Upon hospital admission, blood tests including liver and kidney function tests, creatine kinase (CK) and myoglobin, along with blood gas analysis, are recommended to detect possible electrolyte and acid-base disturbances and rhabdomyolysis [7,42,51]. Monitoring of urinary output is also recommended. Consider a CT scan to detect or exclude accompanying injuries.
Cardiac arrest is the end stage of suspension syndrome; it may occur when blood pressure and heart rate drop below critical levels of organ perfusion in an incapacitated patient suspended in a near-vertical position. Emergency treatment is immediate extrication to flat ground and initiation of standard ALS protocols. Rhabdomyolysis from lower extremity ischaemia should be suspected in cases where there has been more than 2 h of passive suspension [2,3,7]. Hyperkalaemia and kidney failure can develop in both the conscious and the unconscious patient. There are a few cases of death occurring directly after rescue; it is speculated that this may be an effect of potassium returning from the legs [3]. Diagnosis and treatment of suspected rhabdomyolysis should follow standard protocols. In the case of cardiac arrest following suspension, hyperkalaemia and pulmonary embolism should be considered and treated empirically, particularly after prolonged suspension [3,7]. Drugs that increase serum potassium, e.g., succinylcholine, should be avoided. Other likely reversible causes of death in cardiac arrest are hypoxia from airway obstruction during hanging and hypothermia [38]. These reversible causes of cardiac arrest should be treated according to current guidelines [44]. Symptoms of nerve compression from the harness do occur [11]. No specific therapy is available, and recovery from nerve compression requires time.

Prevention

Expert opinion suggests that anyone hanging passively in a harness, or indeed held in an upright position without a harness, is at risk of developing suspension syndrome [3].
The most important preventive measure for the climber is having proper equipment and the knowledge of how to use it, especially personal rope skills [3]. Climbers and rescuers should be aware that loss of consciousness and cardiac arrest can occur at any time when a person is hanging motionless on a rope. Once a person is stuck on a rope, they must be brought down immediately. Therefore, rope work should never be conducted alone. The climber's or worker's team members must have a plan for recovering a person quickly if they cannot escape the situation themselves. If the person cannot be recovered immediately, several measures may help. Suspension syndrome is a consequence of passive, immobile suspension, not just of being suspended on a rope. Active movement of the legs maintains and restores venous return to the heart [38]. A rock, crevice or house wall, or a clipped step sling, can be used as an abutment to activate the muscle pump. A backpack should be taken off the back and hung, for example, at the attachment point to make hanging more comfortable. To reduce the muscle work needed to sit comfortably over time and to increase comfort, the upper body can be stabilised by a sling passed under the armpits and clipped into the rope. Care must be taken to ensure that the sling does not cut into the body painfully. Suspended persons should be encouraged to continue moving their legs. Alternatively, a strap or sling under the knees may be used to raise the legs closer to heart level [27,29,31], or the legs may be lifted by a rescuer. This may be especially important if the suspended person feels at risk of losing consciousness for any reason and would not be able to continue moving their legs. An added benefit of supporting the legs is that it takes some pressure off the harness itself, which may reduce the effects of nerve compression [4].
A person stuck on a rope is an emergency. It is appropriate to activate an emergency response early. Rescue teams should be properly equipped and trained to recover a person stuck on a rope in the shortest possible time. A hanging test with the equipment should be performed before first use of the harness, to best adjust the harness and find the least painful position while suspended. Well-fitting leg loops on a modern harness do not restrict blood flow in the femoral vessels [3]. The decision whether to remove a harness during the rescue and treatment of the patient should therefore be based purely on the technical aspects of rescue and not on medical aspects.

Research implications

The incidence and relevance of suspension syndrome should be better described, e.g., through an international suspension syndrome registry. The pathophysiology should also be better assessed, e.g., whether a grading of suspension syndrome based on pathophysiological parameters can be achieved.

Limitations

Suspension syndrome has a very low incidence; severe cases are especially rare. There are only a few case reports, some of which could not be thoroughly analysed because of incomplete data. Certain literature from the era of the initial description of suspension syndrome is no longer accessible; consequently, the pathophysiological insights presented therein could not be integrated into the current discussion. No interrater reliability was calculated for the assessment of the results of the literature search. Comprehensive data are available from experimental human studies and focus on the underlying pathogenetic mechanisms during the early stages of suspension syndrome. Randomized controlled interventional studies on suspension syndrome are absent. Regarding the quality assessment based on the NHLBI grading, the studies often could not be clearly assigned to the categories specified by the NHLBI and usually had only small sample sizes.
Conclusions

Suspension syndrome is a rare entity. It is caused by hanging suspended in a harness. A neurocardiogenic reflex may lead to bradycardia, arterial hypotension, and cardiac arrest. Concomitant causes, such as pain from being suspended, traumatic injuries and accidental hypothermia, may contribute to the development of suspension syndrome. Preventive factors include using a well-fitting sit harness, which does not cause discomfort while being suspended, and activating the muscle pump of the legs. Expediting help to extricate the suspended person is key. In a peri-arrest situation, the person should be positioned supine and standard ALS should be initiated immediately. Reversible causes of cardiac arrest caused by suspension syndrome, e.g., hyperkalaemia and pulmonary embolism, should be considered. In the hospital, blood tests and further examinations should assess organ injuries caused by suspension syndrome. An international registry on suspension syndrome is warranted to assess its incidence, pathophysiology, and outcome.

Recommendations

Based on our scoping review and discussion within ICAR MedCom, and to give guidance on the most important questions regarding suspension syndrome, the following recommendations on prevention, diagnosis and treatment have been developed and graded according to the Grading System of the American College of Chest Physicians [23]:

1. Rope work should be done only with proper equipment and knowledge on how to use it correctly. Rope work should never be conducted alone [1,3,11,42] (Grade 1C).
2. Persons suspended in a harness should be rescued as soon as possible, even if the casualty is asymptomatic, as time to near or actual syncope and potentially cardiac arrest is variable and unpredictable [11] (Grade 1B).
3. While awaiting rescue, persons suspended freely on a rope should move their legs to reduce venous pooling [11,38] (Grade 2B).
4. If no adjoining structures are in reach, foot loops should be used to step in and increase the activation of the muscle pump [1,3,11,42].
5. If the casualty is no longer able to act and it is safe to do so, the first rescuer reaching the casualty should raise the victim's legs to create a more horizontal position while measures are taken to lower the patient to the ground [11,42] (Grade 2C).
6. Once the casualty is on the ground, the casualty should be positioned supine. Assessment and treatment should follow standard advanced life support algorithms. Reversible causes of cardiac arrest, including hyperkalaemia and pulmonary embolism, should be considered and managed appropriately [1,3,11,42,43,45,46] (Grade 1A).
7. After prolonged hanging (> 2 h), patients are at risk of developing hyperkalaemia and acute kidney injury and should therefore be transported to a hospital with the capability of performing emergent renal replacement therapy [2,3,7] (Grade 2C).

Fig. 1 PRISMA flow diagram depicting the study selection process

Classification of suspension syndrome:
1. Acute suspension syndrome
   a) Near suspension syncope (characterized by light-headedness, dizziness, confusion, pale skin, cold sweating, warmth/hot flashes, blurred vision or nausea, bradycardia)
   b) Suspension syncope
   c) Suspension cardiac arrest
   d) Post-suspension cardiac arrest within 60 min after rescue
2. Subacute suspension syndrome
   a) Sensory or motoric deficit in the legs persisting for > 24 h after rescue
   b) End organ dysfunction, in particular rhabdomyolysis-associated acute kidney injury
   c) Cardiac arrest > 60 min after rescue

Table 1 Articles included for analysis
Generic first-order phase transitions between isotropic and orientational phases with polyhedral symmetries

Polyhedral nematics are examples of exotic orientational phases that possess a complex internal symmetry, representing highly non-trivial ways of rotational symmetry breaking, and are subject to current experimental pursuits in colloidal and molecular systems. The classification of these phases has been known for a long time; however, their transitions to the disordered isotropic liquid phase remain largely unexplored, except for a few symmetries. In this work, we utilize a recently introduced non-Abelian gauge theory to explore the nature of the underlying nematic-isotropic transition for all three-dimensional polyhedral nematics. The gauge theory can readily be applied to nematic phases with an arbitrary point-group symmetry, including those where traditional Landau methods and the associated lattice models may become too involved to implement owing to a prohibitive order-parameter tensor of high rank or (the absence of) mirror symmetries. By means of exhaustive Monte Carlo simulations, we find that the nematic-isotropic transition is generically first-order for all polyhedral symmetries. Moreover, we show that this universal result is fully consistent with our expectation from a renormalization group approach, as well as with other lattice models for symmetries already studied in the literature. We argue that extreme fine tuning is required to promote those transitions to second order ones. We also comment on the nature of phase transitions breaking the $O(3)$ symmetry in general cases.

I. INTRODUCTION

Nematic liquid crystal phases are states of matter that possess long-range orientational order but are translationally invariant [1]. Historically, they were discovered in systems of rod-like molecules with a D_∞h symmetry and had a revolutionary impact on the display industry.
However, it is generally accepted that the classification of nematic phases coincides with the three-dimensional (3D) point groups. Since the early 1970s there have been steady and tremendous efforts in the search for new nematic phases beyond uniaxial order. Indeed, D_2h biaxial nematics were proposed [2] and their properties were discussed [3,4] shortly after theories of the uniaxial ones were established [1]. There is now strong evidence of their existence [5-7], and they are believed to be promising candidates for the next generation of liquid crystal displays (LCDs) [8]. Another remarkable example of an unconventional nematic phase whose existence has been established is the twist-bend liquid crystal formed from bent-core constituents with C_2v symmetry [5,6]. These phases exhibit intriguing optical [9-12] and elastic [13,14] properties and rich transition sequences [15-18], and remain the subject of ongoing studies.

Nevertheless, in spite of considerable progress, we may have unveiled only a small corner of the rich landscape of the polyhedral phases. There are still many open questions concerning, e.g., their transition sequences and the nature of those phase transitions, the interactions of the associated topological defects, and the influence of those defects on the thermodynamical, optical and mechanical properties of the system.

From a theoretical point of view, the difficulty is closely tied to the complexity of these symmetries and their subgroup structure. They demand tensor order parameters of high rank and lead to rich patterns of phase transitions, where the dynamics of topological defects also plays a crucial role. Traditional Landau schemes and the associated lattice models are explicitly based on order parameters and, hence, may become extremely involved and difficult to handle when dealing with complicated symmetries. They are also not convenient for accessing topological defects.
Furthermore, the full classification of the explicit form of those order-parameter tensors has been attained only recently [45,46].

However, lattice gauge theory, adopted from high-energy physics [47], has opened up new avenues to address these issues. The application of this method to nematic orders dates back to the seminal works of Lammert, Rokhsar, and Toner in the mid-1990s [48-50]. These authors utilized a Z_2 gauge theory to promote Heisenberg vectors to directors, and formulated their model in terms of vectors and Z_2 gauge fields instead of Q_ab tensors. Their model successfully captures the essential statistical physics of uniaxial nematics, especially the first-order nematic-isotropic (NI) transition, and demonstrates the power of lattice gauge theory in controlling the dynamics of topological defects. Variants of the Lammert-Rokhsar-Toner model have also been applied to strongly correlated electron systems, for instance in studies of charge fractionalization in superconductors [51-53] and of spin nematics [54,55].

The works mentioned above focused exclusively on Z_2 symmetries and uniaxial orders. Only recently has the gauge-theoretical description been extended to accommodate general point-group symmetries, in studies of 2+1d quantum melting [56,57] and of 3D thermal nematics [45,46,58] by ourselves and collaborators. Its advantages have been proven both mathematically and practically, especially when dealing with 3D point groups, which are in general non-Abelian. First of all, it has been shown in solid mathematical terms that the gauge model fits all nematic orders into a uniform and efficient framework, regardless of their symmetry [45]. This is in stark contrast to traditional order-parameter methods, which are typically specific to a single symmetry and often suffer from the complexity arising from high-rank ordering tensors.
Moreover, the formulation of the gauge model requires no prior knowledge of the underlying order parameter, which is an essential input for traditional methods. Instead, it acts as a machinery that generates a full classification of nematic ordering tensors which, to our knowledge, had never been obtained before [46], though remarkable results of a narrower scope had been derived previously by other means [33-35]. Furthermore, by virtue of its generality, the gauge-theoretical method provides a global view over symmetries, which allows us to explore universal properties of different nematic orders [45]. These include insight into the relation between thermal fluctuations and symmetries, and the finding of a vestigial chiral phase that is reminiscent of the chiral liquid reported in a recent experiment [59]. Last but not least, the gauge model is also naturally compatible with anisotropic interactions. By allowing anisotropy, it has mathematically predicted, and numerically verified, rich patterns of biaxial-uniaxial transitions and new types of biaxial-biaxial transitions [58], enriching our understanding of biaxial orders.

In earlier works, we focused on building the connection between generalized 3D nematics and non-Abelian gauge theories, and on exploring the topology of their phase diagrams. In the present paper we study the nature of the NI transition for polyhedral orders by means of Monte Carlo simulations and a renormalization group analysis. This is not only important for the physical properties of the system near the phase transition, but also relates to a fundamental question in statistical physics: whether breaking O(3) in different manners can give rise to new universality classes. Moreover, it is worth pointing out that the gauge-theory model allows us to easily exclude irrelevant symmetries and focus on the most important degrees of freedom. Meanwhile, the model remains flexible enough to incorporate competing orders and disorders.
The symmetries we are interested in are the seven polyhedral subgroups of O(3), {T, T_d, T_h, O, O_h, I, I_h}, which require orientational tensors of rank higher than 2. Nematic phases with these symmetries are sometimes dubbed octupolar or tetrahedral (T_d), cubic (O, O_h) or icosahedral (I, I_h) phases in the literature. However, for convenience we will refer to them as generalized nematic phases when discussing general symmetries, and as G-nematics when discussing a specific instance of the symmetry G. This convention was by no means invented by us, but is already used in the textbook of Ref. 1.

These polyhedral nematic phases have not yet been clearly identified in experiments. Nevertheless, this does not mean that they are of purely academic interest. Indeed, modern technologies in nano and colloid science are able to synthesize and manipulate mesoscopic particles with a desired symmetry to a high degree [60-65], hence providing essential building blocks for the realization of polyhedral phases. Moreover, it is also worth noting that these phases may emerge from systems of lower-symmetry constituents with suitable interactions or geometrical constraints, such as the proposed tetrahedral T_d phase from C_2v-shaped molecules [15,42] and the cubic O_h phase from D_∞h components [28,66,67].

This paper is organized as follows. In Section II, we define the necessary degrees of freedom and review the realization of generalized nematics in the language of lattice gauge theories. Section III is devoted to Monte Carlo simulations; we first discuss the results for the chiral tetrahedral T nematics in detail, then present those for the other polyhedral symmetries with a discussion of their general features. We compare our results with those from a renormalization group scenario and from other lattice models in Section IV. Finally, we conclude and provide an outlook in Section V.
A. Degrees of freedom

Instead of directly using physical order parameters, the fundamental degrees of freedom in the gauge-theoretical description are nonphysical matter fields and gauge fields. The matter fields are O(3) rotors describing all possible rotations in 3D real space. They can be parameterized by local orthonormal triads R = (n^1, n^2, n^3)^T = (l, m, n)^T, where n^α = {l, m, n} with α = 1, 2, 3 are the three axes of a local triad. In concrete terms, they are defined by rotations that let R coincide with the fixed "laboratory" axes e^a = {e^1, e^2, e^3}, and are parametrized by three Euler angles with respect to e^a. The three axes of R satisfy the relation l · (m × n) = σ = ±1, which defines the chirality or handedness, denoted by the pseudo-scalar σ, of the triad. For σ = 1, R describes rotations in SO(3) and is usually referred to as a proper rotation. For convenience later on, we also define triads formed by pseudo-vectors ñ^α = {l̃, m̃, ñ}, with l̃ · (m̃ × ñ) ≡ 1, describing rotations of a rigid body. Correspondingly, a rotation in O(3) can be decomposed as R = σ R̃, where R̃ = (l̃, m̃, ñ)^T.

The gauge fields are defined as a connection between two neighboring triads. They are also rotations, but in contrast to the global O(3) rotations, they describe local rotations with respect to the axes of a triad. The introduction of gauge fields makes it possible to compare two triads locally at different locations. The symmetry of the gauge fields is a point group by construction. In the simplest situation, when homogeneous distributions of order parameters are preferred, it coincides with the symmetry of the "mesogens" of the liquid crystal. In the terminology of Ref. 68, this symmetry is chosen to be the symmetry of the effective building blocks of the system, and can in turn represent the symmetry of the state in the fully symmetry-broken phase. In other words, the scheme works at a coarse-grained level by construction.
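The chirality of a triad is simply the signed volume of its axes, which can be evaluated directly. A minimal sketch (NumPy; the helper names are illustrative, not from the paper):

```python
import numpy as np

def triad(l, m, n):
    """Stack three orthonormal axes as the rows of a 3x3 matrix R."""
    return np.array([l, m, n], dtype=float)

def chirality(R):
    """sigma = l . (m x n): +1 for a proper (SO(3)) triad, -1 for an improper one."""
    l, m, n = R
    return float(np.dot(l, np.cross(m, n)))

# A right-handed triad (proper rotation) ...
R_proper = triad([1, 0, 0], [0, 1, 0], [0, 0, 1])
# ... and the same triad with one axis inverted (improper rotation).
R_improper = triad([1, 0, 0], [0, 1, 0], [0, 0, -1])

print(chirality(R_proper))    # 1.0
print(chirality(R_improper))  # -1.0
```

For orthonormal axes this coincides with det R, so the decomposition R = σR̃ amounts to factoring out the sign of the determinant.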
B. The model

Having established the necessary information on the degrees of freedom, we now introduce the gauge-theoretic model. It is defined on an auxiliary cubic lattice, which is permitted by the continuous translational symmetry of nematic liquid crystals, and it applies to both continuous and discrete point groups. The Hamiltonian takes the following form,

H = −Σ_{⟨i,j⟩} Tr[ R_i^T J U_{ij} R_j ].    (5)

Here R_i is the O(3) triad at a lattice site i (with the axes n_i^α as its rows), and U_{ij} is the gauge field of a point-group symmetry G mediating the interaction between two nearest-neighboring sites, living on the link ⟨i, j⟩. J is a coupling matrix and can act as a tuning parameter. It is constrained by the symmetry of the nematic mesogens, i.e., the gauge symmetry G, in such a way that it has to be invariant under the transformations ΛJΛ^T = J, ∀Λ ∈ G. It follows that J is isotropic and takes the form J = J1 for all polyhedral groups, where J is positive for ferromagnetic (alignment) coupling. However, anisotropies of J are possible for nematics with axial symmetries, and are responsible for the generalized biaxial-uniaxial and biaxial-biaxial transitions [58]. This invariance identifies the orientation of a triad defined by R_i with that defined by Λ_i R_i, and thus encodes the symmetry of the mesogens under consideration. The Hamiltonian Eq. (5) is invariant under the gauge transformations

R_i → Λ_i R_i,  U_{ij} → Λ_i U_{ij} Λ_j^T,  Λ_i ∈ G.    (6)

To be more concrete, we define a local triad ñ_j^β = U_{ij}^{βγ} n_j^γ at a site i, and rewrite the Hamiltonian Eq. (5) in the following form,

H = −Σ_{⟨i,j⟩} Σ_{α,β} J^{αβ} n_i^α · ñ_j^β,    (7)

where Greek letters in superscripts are associated with the axes of local triads, and J^{αβ} = Jδ^{αβ} for polyhedral symmetries. By doing so, the triad n_j^γ has been brought to the same local gauge, namely the same body-fixed coordinates, as the site i by parallel transport, so that the orientations of the two triads can be compared. Then, considering the gauge transformations in Eq.
(6) running over G at a site j, while leaving the other sites unchanged, Λ_{i≠j} ≡ 1, this generates a set of triads n_j^β which consists of all the equivalent definitions of the orientation of the underlying mesogen of symmetry G at the site j. Let us take a chiral cube with the symmetry O as an example. The orientation of the cube maps to 24 configurations of a local triad, corresponding to the 24 transformations of the group O. When all these configurations are considered and identified, we are effectively describing the orientation of a cube via that of a set of local triads. The symmetry of the underlying mesogens is thus realized by the gauge symmetry. Note that the choice Λ_{i≠j} ≡ 1 above is made purely to simplify the example; the gauge transformations Eq. (6) can be performed independently at all sites. Consequently, in the low-energy limit, the orientational interaction of physical mesogens (order-parameter fields) is effectively encoded in the gauge model Eq. (5) of nonphysical degrees of freedom, as depicted by Fig. 1(b). This procedure can also be shown in explicit mathematical terms by integrating out the gauge fields in Eq. (5), and we refer to our earlier publications, Refs. 45 and 46, for detailed proofs. However, though perhaps less intuitive, it is advantageous to work with gauge degrees of freedom. As the symmetry of the order-parameter fields is directly implemented by the gauge symmetry, the gauge model applies to all point-group symmetries by simply choosing the desired G.

C. Discussion on the phases

It is well known that gauge symmetries cannot be broken spontaneously [69]. As a consequence, the fully symmetry-broken phase of the gauge model Eq. (5) features a ground-state manifold O(3)/G, which is just the order-parameter manifold of a G-nematic phase.
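The gauge invariance of the Higgs term under local transformations can be verified numerically. A minimal sketch on a short chain, with J = 1 and generic rotation matrices standing in for the fields (the lattice size and helper names are illustrative, not from the paper; for J proportional to the identity, any local rotations leave the energy invariant):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    """Draw a random proper rotation via QR decomposition."""
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

def higgs_energy(R, U, J=1.0):
    """H = -J sum_<ij> Tr[R_i^T U_ij R_j] on a chain with links U[i] = U_{i,i+1}."""
    return -J * sum(np.trace(R[i].T @ U[i] @ R[i + 1]) for i in range(len(U)))

L = 6
R = [random_rotation() for _ in range(L)]
U = [random_rotation() for _ in range(L - 1)]
E0 = higgs_energy(R, U)

# Local gauge transformation: R_i -> Lambda_i R_i, U_ij -> Lambda_i U_ij Lambda_j^T.
Lam = [random_rotation() for _ in range(L)]
R2 = [Lam[i] @ R[i] for i in range(L)]
U2 = [Lam[i] @ U[i] @ Lam[i + 1].T for i in range(L - 1)]
E1 = higgs_energy(R2, U2)

print(abs(E0 - E1) < 1e-9)  # True: the energy is gauge invariant
```

The cancellation is algebraic, R_i^T Λ_i^T Λ_i U_{ij} Λ_j^T Λ_j R_j = R_i^T U_{ij} R_j, so the check holds to machine precision.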
This phase is usually referred to as the Higgs phase in the language of gauge theories, and corresponds to the aligned state of the mesogens in the current context, i.e., a nematic phase of symmetry G. On the other hand, the disordered O(3) liquid phase is realized by the confinement phase of the gauge model Eq. (5).

Three comments should be made on the above statements to avoid confusion. First, the Higgs phase just mentioned corresponds to a situation where O(3) has been completely broken down to the local symmetry G. Aside from this, however, there can also be an intermediate Higgs phase that breaks O(3) to a larger point group G′ satisfying G ⊂ G′ ⊂ O(3), featuring vestigial order. This can happen when fluctuations in some sectors of the degrees of freedom are more pronounced than in others. For instance, in case G is a finite axial group, the fully ordered Higgs phase is a biaxial nematic phase of symmetry G. When fluctuations in the plane perpendicular to the so-called primary axis are sufficiently strong or weak, upon changing temperature the system may pass through an intermediate uniaxial and/or biaxial phase, respectively, before entering the disordered isotropic liquid phase, as discussed in detail in Ref. 58. As another example, an intermediate Higgs phase may also appear as a chiral liquid phase. Possible realizations of this phase are systems formed from mesogens of a chiral polyhedral symmetry, G ∈ {T, O, I}. For these symmetries, fluctuations in the orientations are much more pronounced than those in the chirality. Thus a phase that breaks real-space inversion and mirror symmetries, but is invariant under SO(3) rotations, can emerge between the nematic phase and the O(3) liquid phase. We will encounter this situation again in the next section; a systematic and detailed discussion can be found in Ref. 45.

Second, the distinction between the Higgs phase and the confinement phase is meaningful only when G is a nontrivial subgroup of O(3).
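The counting in the chiral-cube example above, 24 equivalent triad configurations, can be reproduced by generating the group O from two generators, a four-fold rotation about n and a three-fold rotation about l + m + n (cf. Table II). A sketch (the closure routine is a generic helper, not from the paper):

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis, via Rodrigues' formula."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def close_group(generators, tol=1e-8):
    """Multiply generators until closure; return the finite group as a list."""
    elems = [np.eye(3)]
    frontier = list(generators)
    while frontier:
        g = frontier.pop()
        if any(np.allclose(g, h, atol=tol) for h in elems):
            continue
        elems.append(g)
        frontier.extend(g @ h for h in elems)
        frontier.extend(h @ g for h in elems)
    return elems

c4_n = rot([0, 0, 1], np.pi / 2)          # four-fold rotation about n
c3_diag = rot([1, 1, 1], 2 * np.pi / 3)   # three-fold rotation about l + m + n

O_group = close_group([c4_n, c3_diag])
print(len(O_group))  # 24
```

All 24 elements are proper rotations (unit determinant), consistent with O being a chiral group.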
In the limit G = O(3), these two phases are continuously connected and indistinguishable [70], consistent with the fact that there is no symmetry breaking for O(3)/O(3).

Last but not least, as mentioned earlier, we will focus on homogeneous distributions of the order-parameter fields, as realized by the gauge model Eq. (5). However, inhomogeneous distributions may also lead to interesting phenomena. One example is the chiral-T phase with a helical structure of T_d molecules, owing to an explicit chiral elastic term in the Landau free energy, as discussed in Ref. 33. We can also introduce a gauge-invariant chiral term in Eq. (5) to incorporate such helical structures for general symmetries, but we leave this for future study. What is relevant to the current paper is that, as such a chiral term is independent of the additional quartic terms of high-rank tensors, it is unlikely to change the nature of the fluctuation-induced first-order phase transitions for the T and T_d symmetries, which will be discussed in Sec. IV A.

D. Topological defects

Before closing this section, let us briefly comment on the dynamics of the gauge fields in the model Eq. (5) and its relation to the topological defects of liquid crystals. From the point of view of gauge theories, the model Eq. (5) consists of a single Higgs term, in which case the dynamics of the gauge fields U_{ij} arises purely from the interaction with the matter fields R_i. In general, however, the gauge fields can have their own dynamics, which in the simplest case is described by a plaquette term of the following form,

H_YM = −Σ_□ K_{C_μ(U_□)}.    (8)

This generalizes the defect-suppression term of Refs. 48 and 49, and is essentially an analog of Yang-Mills theory, as the U_{ij}'s are in general non-Abelian [47]. However, in contrast to the usual lattice Yang-Mills theories of high-energy physics, here we are interested in discrete symmetries. U_□ ∈ G denotes the oriented product of the gauge fields around a minimal plaquette □ of the lattice, and represents a gauge flux.
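The oriented plaquette product, and the fact that it is trivial for a pure-gauge configuration (no disclination piercing the plaquette), can be illustrated directly. A toy sketch with generic rotation matrices standing in for the links (in the model the links take values in the discrete group G):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation():
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

# Four sites around a square plaquette, each carrying a gauge transformation.
Lam = [random_rotation() for _ in range(4)]

def link(a, b):
    """Pure-gauge link U_ab = Lambda_a Lambda_b^T."""
    return Lam[a] @ Lam[b].T

# Oriented product around the plaquette 0 -> 1 -> 2 -> 3 -> 0.
U_plaq = link(0, 1) @ link(1, 2) @ link(2, 3) @ link(3, 0)

# The intermediate factors telescope: the flux is the identity.
print(np.allclose(U_plaq, np.eye(3)))  # True
```

A nontrivial flux would instead rotate a triad transported around the loop by a fixed group element, exactly the Volterra picture of a disclination described in the text.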
The coupling strength K_{C_μ} depends on the trace of U_□, so it is a function of the conjugation class C_μ of G; gauge fluxes in the same conjugation class are physically equivalent. A gauge flux has the effect that after a triad travels around it in a closed circuit, the triad is rotated by U_□, just as when circling a disclination. Furthermore, the classification of gauge fluxes coincides with the Volterra classification of disclinations [71]. Even though this classification, as well as the Volterra classification, is in general not identical to the homotopy classification of topologically stable defects [36], it includes all the elementary topologically stable defects and can be used to construct the full homotopy classification.

TABLE I. Generators of the 3D polyhedral point groups. The first column specifies the symmetries. The second column shows a set of generators which produce the entire group of the underlying symmetry. The third column gives the order of the symmetries. Note that there are multiple ways to choose the generator set, but they are all equivalent (for more information see Refs. 72 and 73). A representation of the generators listed below is catalogued in Table II. (Columns: Symmetry, Generators, Order.)

As we can easily tune K_{C_μ} to suppress or promote certain types of defects, the interaction in H_YM provides a possible route to study the influence of topological defects on the thermodynamical properties of nematic liquid crystals. As is known from the study of lattice gauge theory [70], this may qualitatively change both the topology of the phase diagrams and the nature of the underlying phase transitions. Refs. 48-50 also showed remarkable examples in this context, where the first-order uniaxial-isotropic transition is split into two continuous ones when the defect suppression is sufficiently large. For general symmetries, we expect rich physics to explore when treating the elastic term (5) and the non-Abelian defect term (8) on an equal footing.
Nevertheless, this is beyond the scope of the current paper and deserves a separate systematic study. Thus, for simplicity, we will set K_{C_μ} ≡ 0 and focus on Eq. (5) in the following. Physically, this means that none of the topological defects is assigned a particular core energy.

III. NUMERICAL RESULTS

As discussed in the last section, the gauge model Eq. (5) realizes a G-nematic by specifying the gauge symmetry G; the generators of the polyhedral groups and the representations used in our simulations are summarized in Tables I and II, while more information can be found in textbooks, e.g., Refs. 72 and 73. Schönflies notation is used throughout the manuscript.

TABLE II. Definitions of the generators. Here we specify the representation of the generators of the 3D polyhedral point groups used in our simulations. c_N(p) denotes an N-fold rotation about a vector p defined by the local axes {l, m, n}. τ = (√5+1)/2 is the golden ratio, which appears for the icosahedral groups I and I_h. σ_h defines a reflection about the (l, m) plane, while σ_d indicates a reflection about the plane (l+m, n). (Columns: Generator, Representation.)

We use the Metropolis algorithm and run simulations on cubic lattices of volume V = 8^3, 16^3 and 24^3. The simulations comprise three steps. In the first step, the transition temperatures are located as precisely as possible by examining the peak positions of the heat capacity and the nematic susceptibility (see below); procedures of cooling random initial states and heating uniform states are compared. In the next step, we perform extensive simulations near the transition, histogramming the distributions of the observables of interest. The typical numbers of independent samples used are of the order of 10^5 to 10^7. In the last step, the histograms are further improved by Ferrenberg-Swendsen reweighting [74,75], and the transition temperatures are estimated from the shape of the histograms [76,77].

In the rest of this section, we first present the results for tetrahedral T nematics and discuss the general features of this phase transition in detail. We then discuss the other symmetries.
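The reweighting used in the last analysis step can be sketched in its single-histogram form: samples taken at inverse temperature β0 are reweighted by exp[−(β − β0)E] to estimate observables at a nearby β. A minimal sketch (the synthetic Gaussian samples stand in for simulation data and are not from the paper):

```python
import numpy as np

def reweight_mean_energy(E, beta0, beta):
    """Ferrenberg-Swendsen single-histogram estimate of <E> at beta from
    samples E drawn at beta0: <E>_beta = sum(E w) / sum(w), w = exp(-(beta-beta0) E)."""
    E = np.asarray(E, dtype=float)
    logw = -(beta - beta0) * E
    logw -= logw.max()            # stabilize the exponentials
    w = np.exp(logw)
    return float(np.sum(E * w) / np.sum(w))

rng = np.random.default_rng(2)
E_samples = rng.normal(loc=-9000.0, scale=50.0, size=100_000)  # synthetic data

beta0 = 1.0
# At the simulated temperature, reweighting reproduces the sample mean exactly.
print(np.isclose(reweight_mean_energy(E_samples, beta0, beta0), E_samples.mean()))  # True
# For beta > beta0 (lower temperature) the estimate shifts toward lower energies.
print(reweight_mean_energy(E_samples, beta0, beta0 + 1e-4) < E_samples.mean())  # True
```

In practice the method is reliable only in a window around β0 where the sampled histogram still overlaps the target distribution, which is why the simulations are first centered on the transition.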
A. SO(3)/T transition

The tetrahedral group T consists of the 12 proper rotations leaving a tetrahedron invariant. Therefore, aside from orientational order, a T-symmetric nematic phase also possesses an intrinsic chiral order: it breaks inversion and all mirror symmetries of real space. Note that the T-nematic phase we discuss here is different from the T-phase discussed by Fel in Ref. 33; in the latter case the T symmetry arises from the helical structure of T_d-symmetric mesogens and is associated with a different order parameter (see below and also Sec. IV A for further discussion).

It turns out that for nematics formed from constituents with a T symmetry and a flexible handedness, as well as from those with an O or I symmetry, fluctuations in the orientation sector are in general much more pronounced than those in the chirality sector [45]. Consequently, the system develops orientational order and chiral order sequentially. Furthermore, by comparing numerical results with a mean-field analysis, it has been shown that the two phase transitions are well separated. This implies that the relevant degrees of freedom associated with the NI transition lie in the SO(3) sector of Eq. (5). In mathematical terms, we can rewrite the gauge model in terms of the proper-rotation degrees of freedom R̃_i ∈ SO(3) and gauge fields in T, giving the Hamiltonian Eq. (10) used in the simulations below.

The orientational order parameter of T-nematic phases is a rank-3 tensor taking the following form (note the difference to the T_d order parameter, see Sec. IV A),

O^T = ⟨ O^T_i ⟩_V,  O^T_i = Σ_cyc l̃_i ⊗ m̃_i ⊗ ñ_i − (1/2) ε,    (11)

where O^T_i denotes the local ordering tensor at a coarse-grained lattice site i, ⟨...⟩_V denotes the average over the volume, Σ_cyc runs over cyclic permutations of the local axes {l̃, m̃, ñ}, and ε is the Levi-Civita tensor. The Levi-Civita term is introduced to make the ordering tensor traceless, so that O^T becomes a zero tensor in the liquid phase. This term is only needed when working with SO(3) triads, where the handedness of each local triad is fixed; if the handedness is allowed to fluctuate (i.e., the case of an O(3) triad), summing over the two kinds of chirality cancels this term.
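The construction of the T ordering tensor can be evaluated numerically. A sketch assuming the local tensor Σ_cyc l ⊗ m ⊗ n − ε/2 (the subtraction constant 1/2 is fixed by requiring the tensor to vanish for an isotropic distribution); up to this normalization, an aligned configuration gives a magnitude √(3/2), while random triads average toward zero:

```python
import numpy as np

# Levi-Civita tensor.
EPS = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    EPS[a, b, c] = 1.0
    EPS[a, c, b] = -1.0

def local_tensor(R):
    """O^T_i = sum_cyc l (x) m (x) n - eps/2, for a proper triad R = (l, m, n)."""
    l, m, n = R
    O = np.zeros((3, 3, 3))
    for u, v, w in [(l, m, n), (m, n, l), (n, l, m)]:
        O += np.einsum('a,b,c->abc', u, v, w)
    return O - 0.5 * EPS

def nematicity(triads):
    """Norm of the volume-averaged ordering tensor."""
    O = np.mean([local_tensor(R) for R in triads], axis=0)
    return float(np.sqrt(np.sum(O * O)))

def random_rotation(rng):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

rng = np.random.default_rng(3)
aligned = [np.eye(3)] * 1000
disordered = [random_rotation(rng) for _ in range(1000)]

print(round(nematicity(aligned), 4))   # 1.2247  (= sqrt(3/2))
print(nematicity(disordered) < 0.2)    # True: the average tends to zero
```

The ε/2 subtraction removes exactly the isotropic part of Σ_cyc l ⊗ m ⊗ n, which is why the disordered average vanishes in the large-sample limit.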
In the case of homogeneous distributions, instead of using the tensor form Eq. (11), we can characterize a nematic order of symmetry G by its magnitude,

q = |O^G|,

where |...| denotes the tensor norm. This quantity is called the nematicity and generalizes the concept of the magnetization [46]. Consequently, we can define the susceptibility of q in the standard way,

χ_q = βV ( ⟨q²⟩ − ⟨q⟩² ),

where β is the inverse temperature, and detect the NI transition by the peak of χ_q. As confirmed in our simulations, the peak of χ_q coincides with that of the heat capacity defined in the standard way, indicating that the nematicity q is indeed a valid scalar order parameter.

We have measured the SO(3)/T NI transition with Eq. (10) by monitoring several quantities, including the energy, the nematicity, histograms of the two, the heat capacity and the susceptibility. As many of them reveal the same information, we present only those which are necessary for the following discussion. In Fig. 2, we show the behavior of the nematicity for a broad range of temperatures, and Figs. 3(a) and 3(b) show the histograms of the energy density and the nematicity at the phase transition, P(E) and P(q), respectively. As shown in the energy histogram, Fig. 3(a), a double-peak structure emerges at sufficiently large lattice sizes and becomes more pronounced when the lattice size increases, indicating the occurrence of two stable, coexisting phases, which is a hallmark of a first-order phase transition. The difference in height between the peaks and the valley of a histogram, which measures the difficulty for the system to tunnel between the two phases, also increases with the lattice size, as expected. The physical meaning of the two peaks is revealed by the nematicity histogram from the same simulations, shown in Fig. 3(b). With increasing lattice size, aside from the peaks becoming more pronounced, the first peak notably moves to the left; we expect it to eventually go to zero in the thermodynamic limit, indicating a disordered liquid phase. The other peak corresponds to the nematic phase.

We find first-order behavior for all these transitions. The O(3)/O_h and O(3)/I_h transitions have very similar behavior to the SO(3)/O and SO(3)/I ones, with slightly higher T_c's. This may be understood from the fact that the Z_2 center in the former two cases can be factorized as a trivial Z_2/Z_2 theory, leading to the same order-parameter manifold as in the SO(3)/O and SO(3)/I cases, respectively. Indeed, O-nematics and O_h-nematics, as well as I- and I_h-nematics, share the same orientational order parameter, distinguished only by a pseudo-scalar chiral order parameter [46]. We also studied the behavior of the nematicity and its histogram for these symmetries. Although the curves corresponding to different symmetries are well separated in Fig. 4 (since the phase transitions occur at different temperature scales), the histograms of the nematicities overlap closely for these symmetries, especially in the disordered region, where all the disordering peaks are located at small nematicity values close to 0. Moreover, they show very similar features to those seen in Figs. 2 and 3 for the SO(3)/T transition, and are therefore not presented.

FIG. 4. The two-peak behavior reveals the first-order nature of all these transitions. The histograms of the nematicity (not shown) exhibit similar features. The same binning size is used for the first three symmetries, while a smaller binning size is used for the SO(3)/I transition. The heights of these histograms are not comparable, however: even though these symmetries are studied via a common framework here, they correspond to different physical models.

One notable feature of Fig. 4 is that the peaks of the histograms shift to lower energy scales as the symmetries increase (the energy density is normalized via E_G = −9V, which is the energy when all mesogens are uniformly aligned), indicating a decrease in the corresponding transition temperatures.
This can be understood as a consequence of the more pronounced orientational fluctuations for higher symmetries, which in turn make it increasingly difficult to stabilize the order. Note that this feature is manifest when using a common metric, the Higgs coupling strength J, for all the symmetries. This metric is not a direct physical measure, in the sense that it describes the interaction strength between the auxiliary gauge fields and matter fields rather than that of the physical order-parameter fields. Although the latter depends in principle on the Higgs coupling, the derivation of the relation is in general nontrivial. Nevertheless, this does not prevent us from obtaining general insights into the nature of the phase transitions. Clearly, Fig. 4 reveals the generic first-order nature of the NI transition for all these symmetries. It is not yet clear how the strength of these first-order transitions depends on the respective symmetries; however, this depends on the microscopic details of a system, and a universal conclusion might not exist.

IV. RELATIONS AND COMPARISONS TO OTHER METHODS

We have numerically reached the conclusion of a generic first-order NI transition within a particular framework. Moreover, this is consistent with existing results from other methods, including a general perspective from mean-field theories, renormalization group (RG) analyses [78] and other lattice models [40,41], as we elaborate below.

A. Mean-field theories and RG

A significant difference between nematic order and spin or vector order is that in general the former requires a tensor order parameter, owing to nontrivial internal symmetries. In the case of the D_∞h uniaxial nematics, the order parameter is a rank-2 tensor, Q_ab, which gives rise to a third-order term, Q_ab Q_bc Q_ca, in the Landau-de Gennes theory and makes the NI transition discontinuous.
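The mechanism behind this cubic-term argument can be recalled explicitly. A textbook-style sketch of the Landau-de Gennes free-energy density, with phenomenological coefficients a, b, c > 0 and T* the supercooling temperature:

```latex
f(Q) \;=\; \frac{a}{2}\,(T - T^{*})\,\operatorname{Tr} Q^{2}
\;-\; \frac{b}{3}\,\operatorname{Tr} Q^{3}
\;+\; \frac{c}{4}\,\bigl(\operatorname{Tr} Q^{2}\bigr)^{2}.
```

Because the cubic term is odd in Q, the free energy is not symmetric under Q → −Q and develops a secondary minimum at finite Q while the quadratic coefficient is still positive; the global minimum therefore jumps discontinuously, giving a first-order transition already at the mean-field level.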
For the polyhedral symmetries {T_h, O, O_h, I, I_h} the nematic order parameters are also even-rank tensors [46]. On the other hand, the tetrahedral T_d order requires a rank-3 order parameter tensor, O_Td. This forbids the appearance of the third-order term in a naive mean-field free energy (see [8]).

B. Lattice models

Lattice models for polyhedral nematics [40,41] are typically constructed from an interaction potential between two rigid molecules or mesogens of a certain symmetry. The orientation of a mesogen is described by M (nonorthogonal) unit vectors, spanned from a common local origin and organized in a way that explicitly realizes the desired symmetry. The potential, V_ij, is then defined in terms of Legendre polynomials of the inner products of those vectors at the lowest nontrivial order, where the coefficient c_l is positive for ferromagnetic coupling, P_l(...) denotes the Legendre polynomial of order l, and the v^(m)_i are the local unit vectors at lattice site i. For the cubic O_h order, the v^(m)_i coincide with our local triads n^α_i, and the Legendre polynomials are trivial for l < 4 [40]. In the case of the tetrahedral T_d order, on the other hand, the v^(m)_i are the 4 three-fold axes of a regular tetrahedron and are nonorthogonal [33], and the Legendre polynomials become nontrivial only from P_3 onwards [41]. Following the same principle and given the expression Eq. (16), one expects that the I_h order requires 15 vectors, the 2-fold axes of a regular icosahedron [33], and sixth-order Legendre polynomials. It is not hard to show that the interaction potential Eq. (19) can be understood as a counterpart of the Lebwohl-Lasher model [79] for general symmetries, and is equivalent to an inner product between two order parameter tensors. In contrast to Eq. (5) (see Ref. [45]), which can readily be applied to all point-group symmetries, the potential Eq.
(19) is symmetry-dependent and involves large numbers of vectors and high-order Legendre polynomials, so considerable complexity in actual use is to be anticipated. For instance, in the case of the I_h order, it involves 225 Legendre polynomials of order 6. However, the potential Eq. (19) has the advantage of a more straightforward connection with microscopic interactions of liquid crystal mesogens, as it is built directly on the physical order parameter fields. Moreover, it would be interesting to see how this method applies to the T and T_h symmetries, where the role of mirrors may be manifest, as well as the relation and difference of the resulting lattice models to those of the T_d and O_h cases, the symmetries of which T and T_h are the respective index-2 subgroups.

V. SUMMARY

Rotational symmetry breaking is ubiquitous and plays an important role in condensed matter physics and statistical physics. One of its intriguing features is that there is a multitude of ways to break a symmetry into its subgroups, leading to a large array of exotic phases. In this work, we have examined the nature of phase transitions breaking the rotational group O(3) to polyhedral point groups. Such phases are prime candidates in the search for unconventional nematic liquid crystals, in particular in the field of nanoscience and colloidal science. We found that the transitions from the nematic phase to the isotropic liquid phase are generically first order for all polyhedral symmetries. Furthermore, the polyhedral NI transitions are robust in the sense that they would require high-precision fine tuning to achieve a second-order phase transition. This feature is inherited from the complexity of the group structure of polyhedral symmetries. Moreover, along the lines of the discussion in Sec. IV A, we anticipate the NI transition of generalized uni- and biaxial nematics, which break O(3) to the axial point groups {C_n, C_nv, C_nh, S_2n, D_n, D_nh, D_nd}, to be generically first order as well. As discussed in detail in Ref.
[46], the order parameter of axial symmetries in general has the structure O_G = {A_G, B_G, σ}, where A_G = A_G[n] defines the order of the primary axis, chosen to be n; B_G = B_G[l, m] or B_G[l, m, n] defines the order in the perpendicular plane and is required for finite axial symmetries; and σ defines the chiral order, as seen in Sec. III A, and is only relevant for the proper axial groups {C_n, D_n}. For the symmetries {C_nh, S_2n, D_n, D_nh, D_nd}, A_G is a rank-two tensor and coincides with the director tensor Q_ab. Hence, following the Landau-de Gennes theory, it is immediately clear that, regardless of the in-plane structure, the NI transition for these symmetries will be generically first order. For the symmetries C_n and C_nv, the primary order parameter A_G is a vector, and continuous phase transitions seem to be preferred. However, for n > 1 but finite, the direct NI transition will also be first order, owing to the existence of an even- and/or high-rank B_G tensor, as in the case of the polyhedral symmetries. Even at n = 1, where both A_G and B_G are vectorial, the order of the phase transition will depend on their coupling. Therefore, even though there are diverse patterns of breaking the O(3) symmetry, second-order transitions and the corresponding universality classes may be quite rare. The familiar Heisenberg universality class, related to the breaking of O(3) to O(2) ≅ C_∞v, is a special case. Our results add new insights to the physics of exotic orientational phases and will hopefully facilitate the understanding of future experiments. Finally, we note that in the present work only a single symmetry is considered in the realization of each polyhedral nematic. Nevertheless, as has been discussed by many authors for the T_d and O_h symmetries [15,28,42,66,67], polyhedral phases may also emerge from systems formed of less-symmetric constituents.
Although it is hard to imagine a second-order NI transition arising in this way, it would be interesting to explore the general pattern of symmetry emergence in liquid crystal systems. Given its compatibility with competing orders and its potential power in controlling topological defects, we expect that the gauge-theory scenario will be suitable for achieving this aim without losing simplicity.
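The generalized Lebwohl-Lasher potential of Eq. (19) can be illustrated concretely for the cubic O_h case, where the mesogen axes are the three orthonormal triad vectors and the lowest nontrivial Legendre polynomial is P_4. The sketch below (with an assumed coupling c4 = 1; the function names are illustrative, not from the paper) verifies that the pair energy is minimal for aligned triads, invariant under a cubic symmetry operation, and higher for a generic relative rotation:

```python
import numpy as np

# Sketch of the generalized Lebwohl-Lasher pair potential for the cubic O_h
# case: V_ij = -c4 * sum_{m,n} P4(v_i^(m) . v_j^(n)), where the v^(m) are the
# three orthonormal axes of each mesogen (rows of a rotation matrix) and P4 is
# the fourth Legendre polynomial (lower orders are trivial for cubic symmetry).

def p4(x):
    # Fourth Legendre polynomial, P4(x) = (35x^4 - 30x^2 + 3)/8.
    return (35 * x**4 - 30 * x**2 + 3) / 8.0

def pair_potential(triad_i, triad_j, c4=1.0):
    dots = triad_i @ triad_j.T          # 3x3 matrix of axis inner products
    return -c4 * p4(dots).sum()

def rot_z(theta):
    # Rotation about z; its rows form a valid (rotated) orthonormal triad.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

identity = np.eye(3)
v_aligned = pair_potential(identity, identity)
v_cubic   = pair_potential(identity, rot_z(np.pi / 2))  # a cubic symmetry operation
v_generic = pair_potential(identity, rot_z(np.pi / 4))  # a generic misalignment
print(v_aligned, v_cubic, v_generic)
```

The 90-degree rotation leaves the energy unchanged, showing that the potential respects cubic symmetry, while the 45-degree rotation costs energy.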
The prognostic significance of apoptosis-associated proteins BCL-2, BAX and BCL-X in clinical nephroblastoma

Apoptotic cell death represents an important mechanism for the precise regulation of cell numbers in normal tissues. Various apoptosis-associated regulatory proteins, such as Bcl-2, Bax and Bcl-X, may contribute to the rate of apoptosis in neoplasia. The present study was performed to evaluate the prognostic value of these molecules in a group of 61 Wilms' tumours of chemotherapeutically pre-treated patients, using an immunohistochemical approach. Generally, Bcl-2, Bax and Bcl-X S/L were expressed in the blastemal and epithelial components of Wilms' tumour. Immunoreactive blastema cells were found in 53%, 41% and 38% of tumours for Bcl-2, Bax and Bcl-X S/L, respectively. An increased expression of Bcl-2 was observed in the blastemal component with increasing pathological stage. In contrast, a gradual decline of Bax expression was observed in the blastemal component of tumours with increasing pathological stage. Blastemal Bcl-X S/L expression also decreased with stage. Univariate analysis showed that blastemal Bcl-2 expression and the Bcl-2/Bax ratio were indicative of clinical progression, whereas epithelial staining was of no prognostic value. Multivariate analysis showed that blastemal Bcl-2 expression is an independent prognostic marker for clinical progression besides stage. These findings demonstrate that alterations of the Bcl-2/Bax balance may influence the clinical outcome of Wilms' tumour patients through deregulation of programmed cell death.

http://www.bjcancer.com © 2001 Cancer Research Campaign

The long splice variant of Bcl-X (Bcl-X L) inhibits apoptosis, whereas the short splice variant (Bcl-X S) promotes cell death. Recent studies demonstrated that Bcl-X L is mainly involved in the mechanism of resistance to chemotherapeutic agents and radiation (Datta et al, 1995).
An abnormal pattern of Bcl-2, Bax and Bcl-X expression has been demonstrated in several human malignancies (McDonnel et al, 1992; Gazzaniga et al, 1996; Krajewska et al, 1996a, b; Xie et al, 1998). Wilms' tumour, or nephroblastoma, is a paediatric malignancy of the kidney and one of the most common solid tumours in children (Beckwith, 1994). Preoperative chemotherapy as treatment for Wilms' tumour has increasingly been used in Europe by the International Society of Paediatric Oncology (SIOP), a protocol that differs from the one used in the USA by the National Wilms' Tumour Study (NWTS) (D'Angio, 1983). Despite the remarkable response to chemotherapy, 5% to 10% of tumours are fatal due to metastases and/or the occurrence of drug resistance (Green et al, 1994). Since it appears that programmed cell death is a crucial event in normal kidney development, one may expect oncogenic events to result from defects in the pathways of renal cell death (Coles et al, 1993). Therefore, the prognostic value of Bcl-2, Bax and Bcl-X S/L was investigated in specimens from nephroblastoma patients who were treated with chemotherapy prior to surgery. Having access to this type of material from a substantial number of patients with a good stage distribution, the aim of our study was to find factors that could predict the clinical outcome of these patients after treatment by chemotherapy and surgery.

Patients

During the period 1987-1999, 61 patients with nephroblastoma were treated with neo-adjuvant chemotherapy and subsequent tumour nephrectomy. Following treatment, patients were followed regularly, and all data concerning diagnosis, treatment and follow-up were stored in a database. Clinical progression was defined as histologically or cytologically proven local recurrence or the appearance of distant metastases.

Sample selection

All nephrectomy specimens were fixed in 10% buffered formalin and embedded in paraffin.
The haematoxylin and eosin-stained slides were reviewed by an experienced paediatric pathologist (JCDH). The tumour stage (pT stage) was assigned according to an adaptation of the SIOP trial protocol established at the SIOP meeting in Stockholm in 1994 (Boccon-Gibod, 1998). Among the paraffin blocks available for individual patients, those containing tissue with the 3 different cell types of Wilms' tumour (blastemal, epithelial and stromal) were selected. In addition, adjacent normal kidney tissue was taken from each patient.

Antibodies

The following primary antibodies were used: mouse monoclonal antibody against Bcl-2 (clone 124) (DAKO, A/S, Glostrup, Denmark) and rabbit anti-human Bax (P-19) and Bcl-X S/L (S-18, against both Bcl-X L and Bcl-X S) polyclonal antibodies from Santa Cruz, California, USA. The specificity and characteristics of these antibodies have been published elsewhere (Oltvai et al, 1993; Datta et al, 1995).

Immunohistochemistry

The PAP (peroxidase-anti-peroxidase) technique was used. Serial sections (5 µm) from all samples were mounted on 3-aminopropyltriethoxysilane (Sigma Co, St Louis, MO, USA) coated glass slides, which were incubated overnight in a 60˚C incubator. After dewaxing in fresh xylene for 10 min and passage through 100% methanol for 10 min, the sections were rinsed in methanol containing 3% hydrogen peroxide for 20 min to block endogenous peroxidase activity. The slides were rinsed with distilled water. In order to enhance antigen exposure, the slides were microwaved at 700 W in 0.1 M citrate buffer at pH 6.0 for 15 min. After cooling and rinsing with PBS, the slides were placed in a Sequenza immunostaining system (Shandon, Runcorn, UK). Sections were incubated with 10% normal rabbit serum (prior to the monoclonal antibody) or 10% normal goat serum (prior to the polyclonal antibodies) (DAKO). Incubation was done in PBS/5% bovine serum albumin (BSA) for 15 min and subsequently overnight with the primary antibody at 4˚C.
The antibodies were diluted in PBS/5% BSA at 1:50 for Bcl-2, 1:150 for Bax and 1:75 for Bcl-X S/L. Subsequently, the slides were rinsed with PBS/Tween (0.1%) and incubated for 30 min with rabbit-anti-mouse antibody (for the monoclonal antibody) or goat-anti-rabbit antibody (for the polyclonal antibodies) (DAKO), followed by rinsing with PBS/Tween (0.1%). The PAP complex (DAKO) was diluted in PBS/5% BSA at 1:300 and incubated for 30 min, followed by rinsing with PBS. The antigen-antibody binding was visualised with diaminobenzidine tetrahydrochloride dihydrate (Fluka, Neu-Ulm, Germany) as chromogen. The sections were lightly counterstained with haematoxylin. Negative controls were included by replacing the primary antibody with PBS/5% BSA. Normal kidney tissue adjacent to the tumour served as positive control.

Immunostaining analysis

The slides were examined at 25× magnification without knowledge of the clinical outcome of the patients. The amount of positive staining was assessed as the estimated percentage of positively staining cells: <10% was considered negative, >10% was considered positive.

Statistical analysis

Statistical analysis was performed using the SPSS 9 software package. The association between Bcl-2, Bax and Bcl-X S/L expression and clinico-pathological features was analysed using the χ2 test. For analysis of survival data, Kaplan-Meier curves were constructed and the log-rank test was performed. Multivariate analysis was performed using Cox's proportional hazards model, with P < 0.05 considered statistically significant.

Protein extraction and Western blot analysis

To further confirm the Bcl-X S/L immunohistochemical data, Western analysis was performed with tissues from Wilms' tumour xenografts. 6 different xenograft tissues were analysed in total. Tissues 1-3 (cf.
Figure 4) originate from transplants of three individual patients, resulting in xenografts WT-7, WT-9 and WT-11, respectively, whereas tissues 4-6 (WT-15, WT-15LN, WT-16) were from one individual patient, being specimens of the primary tumour in the right kidney (WT-15), a lymph node metastasis (WT-15LN), and the primary tumour in the left kidney (WT-16), respectively. Morphologically, all 6 tissues contained the blastemal and stromal components, whereas in tissues 1 and 3 epithelial cells were present as well. The frozen tissues were crushed in a liquid nitrogen-chilled metal cylinder. The tissue homogenates were transferred to a lysis buffer consisting of 10 mM TRIS (pH 7.4), 150 mM NaCl (Sigma), 1% Triton X-100 (Merck, Darmstadt, Germany), 1% deoxycholate (Sigma), 0.1% sodium dodecyl sulfate (SDS, Gibco BRL), 5 mM EDTA (Merck) and protease inhibitors (1 mM phenylmethyl-sulfonyl fluoride, 1 mM aprotinin, 50 mg l-1 leupeptin, 1 mM benzamidine and 1 mg l-1 pepstatin, all from Sigma). The samples were spun at 35 000 g at 4˚C for 10 minutes. The protein content of the supernatant was measured photometrically using the Bio-Rad protein assay (Bio-Rad, München, Germany). The proteins were loaded onto an SDS-polyacrylamide gel and electrophoresis was performed in 10 times diluted tray buffer for 2 hours. The gel was blotted onto a 0.45 µm cellulose nitrate membrane (Schleicher & Schuell, Dassel, Germany). Pre-stained markers were used as size standards (Novex, San Diego, CA). The immunoblot was blocked for 1 hour with 5% dry milk (Sigma) in 0.1% Tween-20 (Sigma). The antibodies were diluted 1:1000 in 5% dry milk and applied overnight at 4˚C. After rinsing with PBS/0.1% Tween, the blot was incubated with horseradish peroxidase-labelled goat-anti-rabbit antibody (1:2000, DAKO) for 1 hour. Subsequently, a one-minute incubation with a 1:1 mixture of luminol and oxidizing reagent (DuPont NEN, Chemiluminescence kit, Boston, MA) was performed.
Excess reagent was removed by placing the blot on a piece of Whatman paper. Finally, the antibodies were visualised by exposure of the blot to an X-ray film for 30 seconds.

Clinico-pathological findings

The tumour-stage distribution was T1 in 21, T2 in 20 and T3 in 20 patients. There were 29 (48%) females and 32 (52%) males. The mean age at surgery was 4.2 years, and the mean overall follow-up period was 5.7 years. Clinical progression occurred in 14 patients (23%), and 8 patients (13%) died from their tumour. At the end of the follow-up period, 53 patients were alive.

BCL-2, BAX and BCL-X S/L expression in normal renal tissues

Immunoreactivity for Bcl-2 was found in the loop of Henle, the collecting ducts and the parietal layer of Bowman's capsule of the normal kidney (Figure 1A). Immunoreactivity for Bax was found in the proximal and distal convoluted tubules and the parietal layer of Bowman's capsule, but not in the collecting ducts (Figure 1B). Expression of Bcl-X S/L was evident mainly in the collecting ducts and to a lesser extent in the proximal and distal convoluted tubules, whereas expression in the glomeruli was not found (Figure 1C).

BCL-2 expression in Wilms' tumour tissues

Bcl-2-positive (i.e., >10% stained) blastemal and epithelial cells were found in 32 (53%) and 33 (54%) of the Wilms' tumours studied, respectively (Table 1). Bcl-2 staining was mainly perinuclear, but some cytoplasmic staining was also found (Figure 1D). Some specimens showed expression in the majority of cancer cells, whereas in others only small areas of the tumour were positive for the Bcl-2 protein. The majority of the infiltrating lymphocytes in the tumour stroma were strongly Bcl-2 positive, serving as an internal control for the staining procedure. Bcl-2 expression gradually increased from T1 and T2 to T3 for both the blastemal and epithelial components. No statistically significant relationship between stage and Bcl-2 expression was found.
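Stage-versus-expression comparisons of this kind rest on a χ2 test of a 2×3 contingency table (marker-negative/positive across stages T1-T3). A minimal sketch of the test statistic; the stage margins (21/20/20) match the study's distribution, but the individual cell counts are hypothetical:

```python
import numpy as np

# Hypothetical 2x3 contingency table: rows = marker-negative/-positive,
# columns = stages T1, T2, T3 (cell counts are illustrative, not the
# study's data; only the column totals 21/20/20 follow the paper).
observed = np.array([
    [12, 10, 7],    # marker-negative
    [9, 10, 13],    # marker-positive
], dtype=float)

# Expected counts under independence: (row total * column total) / grand total.
row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()

# Pearson chi-square statistic and its degrees of freedom.
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)   # = 2
print(chi2, dof)
```

With 2 degrees of freedom the 5% critical value is about 5.99, so a statistic of this size would not be significant, mirroring the negative finding above.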
The nephroblastoma sections showed intense expression in the epithelium, with little expression in the stromal component.

BAX expression in Wilms' tumour tissues

Blastemal and epithelial expression of Bax was found in a similar percentage of Wilms' tumours (41% and 43%, respectively) (Table 1). Bax staining was mainly cytoplasmic (Figure 1E), and the staining pattern in most of the specimens was weak to moderate. Bax expression in both blastema and epithelium gradually decreased from T1 to T3, but the trend was not statistically significant. The epithelial component of the tumour demonstrated diffuse expression of Bax.

BCL-X S/L expression in Wilms' tumour tissues

Blastemal and epithelial expression of Bcl-X S/L was found in 23 (38%) and 35 (57%) of the Wilms' tumours studied, respectively (Table 1). The percentage of tumours with Bcl-X S/L-positive blastema was generally lower than that of tumours staining positive for Bcl-2. The majority of Bcl-X S/L-positive tumours had a very intense cytoplasmic staining pattern with a wider distribution (Figure 1F). There was no statistically significant relationship between stage and Bcl-X S/L expression. Immunoblot analysis of tissue lysates of a panel of human Wilms' tumour xenografts confirmed the specificity of the Bcl-X S/L antibody used for immunohistochemistry (Figure 4). Morphologically, all 6 tissues contained the blastemal and stromal components, whereas in tissues 1 and 3 epithelial cells were present as well. The Bcl-X L protein migrated as a doublet with an apparent molecular mass of 28-32 kDa. Bcl-X L was clearly detectable in all 6 samples analysed, particularly the 32 kDa band. A clear 22 kDa band compatible with Bcl-X S was detected in 3 samples positive for Bcl-X L, whereas faint bands were found in 2 of the remaining samples and one sample was devoid of Bcl-X S (Figure 4).
Prognostic value of BCL-2, BAX and BCL-X S/L

Univariate analysis using the log-rank test showed a prognostic value of blastemal Bcl-2 expression for clinical progression (Table 2, Figure 2). Blastemal expression of both Bax and Bcl-X S/L and epithelial expression of the 3 proteins did not show any prognostic value (Table 2). To test whether Bcl-2, Bax and Bcl-X S/L expression had any independent prognostic impact, a multivariate Cox regression analysis was done including the parameters pT stage and Bcl-2, Bax and Bcl-X S/L expression. pT stage was dichotomized as pT1-2 vs. pT3. In that analysis, only blastemal Bcl-2 expression could be identified as an independent prognostic variable besides stage (data not shown).

Prognostic value of the Bcl-2/Bax ratio

By examining the Bcl-2 to Bax ratio versus outcome, 4 patient groups were identified. The Kaplan-Meier curves of Figure 3 show the prognostic influence of Bcl-2/Bax expression on the time to progression. Clearly, patients with high (>10%) expression of both Bcl-2 and Bax had a statistically significantly poorer prognosis compared to the 2 groups of patients with Bcl-2-negative expression (i.e., Bcl-2 and Bax negative, or Bcl-2 negative/Bax positive) (Table 2, Figure 3). Generally, the data of Figure 3 are consistent with the notion that patients with a low Bcl-2/Bax ratio do better than those with a high ratio. Bcl-2 and Bax were simultaneously expressed in 28% and 25% of the blastemal and epithelial components of tumours, respectively.

DISCUSSION

Over the past few years, Bcl-2 has moved from being a molecule exclusively implicated in the lymphoid t(14;18) translocation to being an important gene involved in key mechanisms in the regulation of cell death, by inhibiting apoptosis. Consequently, Bcl-2 is implicated in the process of multidrug resistance of cancer. For this reason, studying the prognostic value of the Bcl-2 gene family in malignant processes is warranted.
In the present study we have examined the expression of Bcl-2 and some of its related proteins, Bax and Bcl-X S/L, to study their prognostic value in Wilms' tumour. All patients studied received chemotherapy before operation. It is realised that in the majority of patients with a good response to chemotherapy, treatment did affect the cellular compartments of the Wilms' tumour, the blastema in particular. Consequently, this may influence the (Bcl-2, Bax and Bcl-X S/L) immunophenotype of the remaining tumour components. The present study did not allow comparative studies of the effect of treatment upon the expression of the apoptotic markers studied. The expression level of the various markers, however, reflects the status of the tumour after chemotherapy, regardless of the occurrence of drug resistance. Therefore, the final status of the tumour removed may predict the clinical outcome of the patient, e.g. whether metastasis will occur. Expression of Bcl-2, Bax and Bcl-X S/L was found in normal kidney tissues, which were used as internal positive controls for the immunohistochemical reaction (Figure 1A, B, C) as well as for the Western blot (Figure 4). In general, increased Bcl-2 expression was related to advanced disease. The reduction of Bax immunostaining in high-stage tumours is consistent with the more aggressive character of these tumours. Most clinically oriented studies on Bcl-2 expression have found a positive correlation with adverse prognosis in a variety of solid tumours, including non-small-cell lung carcinoma (Pezzella et al, 1993), nasopharyngeal carcinoma (Lu et al, 1993), prostatic carcinoma (McDonnel et al, 1992), and neuroblastoma (Castle et al, 1993). In contrast, Bcl-2 expression was correlated with favourable prognosis in tumours of epithelial origin, as in stage II colon carcinoma (Sinicrope et al, 1995) and node-positive treated breast cancer (Gasparini et al, 1995).
However, in other studies, of patients with renal cell tumours (Paraf et al, 1995), invasive transitional cell carcinoma of the bladder (Glick et al, 1996), and head and neck cancer (Drenning et al, 1998), such a correlation was not found. A recent study failed to demonstrate a prognostic significance of Bcl-2 in Wilms' tumour because of the limited number of cases studied (Re et al, 1999). In another study, Tanaka et al (1999) showed that preoperative chemotherapy did not significantly influence either the occurrence of apoptosis or the expression of Bcl-2 in a group of treated Wilms' tumour patients. In the present study, a prognostic value of blastemal Bcl-2 expression was found for clinical progression. The mechanism by which Bcl-2 contributes to the progression of nephroblastoma is unknown. Based on gene transfer experiments (Vaux et al, 1988) and studies of transgenic mouse models (Strasser et al, 1990), Bcl-2 functions by inhibiting cell death rather than by affecting the rate of cell proliferation. Information regarding the prognostic significance of Bax in human tumours is scarce. It has, however, been shown that reduced Bax expression correlates with poor prognosis in patients treated with chemotherapy for metastatic breast adenocarcinomas (Krajewski et al, 1995) and in glottic squamous cell carcinomas treated with radiotherapy (Xie et al, 1998). High levels of Bax expression correlate with a better outcome for patients with low-grade tumours of the urinary bladder (Gazzaniga et al, 1996). Bax expression alone had no influence on the survival of stage I patients with radically resected non-small-cell lung cancer (Apolinario et al, 1997), and a study of Bax mRNA expression levels in nephroblastoma did not indicate that its expression could have a role in prognosis (Re et al, 1999). This was confirmed by the outcome of the present study, showing an absence of any prognostic value of Bax.
The ratio between the Bcl-2 and Bax proteins has been described as a cellular rheostat determining the cellular response to apoptotic stimuli (Oltvai et al, 1993). Given the potential role of these proteins in malignant tumours, the prognostic significance of combined Bcl-2 and Bax expression has been studied by many authors. Gazzaniga et al (1996) demonstrated that the Bcl-2/Bax expression ratio had prognostic impact for disease progression in low-grade bladder tumours. Interestingly, in nephroblastoma, Bcl-2/Bax expression had prognostic significance for the prediction of clinical progression on univariate analysis. The present data demonstrate that the frequency of Bcl-2 expression exceeds that of Bcl-X S/L, although Bcl-X S/L staining is more extensive and intense compared to Bcl-2. These results differ from those reported for untreated prostatic and gastric cancer, in which Bcl-X S/L immunoreactivity was found in 100% and 85% of the tumours, respectively (Krajewska et al, 1996a, b). Bcl-X S/L was expressed in 100% of untreated gliomas, but was not related to the response to radiotherapy or chemotherapy (Rieger et al, 1998). In the present study, Western blot analysis demonstrated that the antiapoptotic Bcl-X L protein was the dominant isoform of Bcl-X S/L in experimental human nephroblastoma. This is in agreement with the outcome of the studies of Re et al suggesting an oncogenic potential of Bcl-X L overexpression in the development of anaplastic Wilms' tumour (Re et al, 1999). Accordingly, reduction of Bcl-X L expression may influence the outcome of patients treated with radiotherapy and/or chemotherapy (Datta et al, 1995; Krajewska et al, 1996b; Decaudin et al, 1997). On the basis of the present non-quantitative immunohistochemical data on various apoptosis-associated proteins, no firm conclusion can be drawn with respect to the clinical behaviour of Wilms' tumours.
Still, reduction of the expression of the anti-apoptotic Bcl-X L protein may contribute to an enhanced sensitivity to radiotherapy and chemotherapy. Indeed, accumulating in vitro data suggest that Bcl-X L may play an important role in the susceptibility of tumour cells to chemotherapeutic agents and radiation. On the basis of these experimental data, it seems that the immunohistochemical expression of a single marker may provide sufficient information on chemosensitivity or radiosensitivity. In conclusion, Bcl-2, Bax and Bcl-X S/L were expressed in normal renal tissues and in nephroblastoma. Bcl-2 expression and the Bcl-2/Bax ratio showed prognostic value in the specimens of chemotherapeutically treated Wilms' tumour patients.
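The survival comparisons reported in this study rest on Kaplan-Meier product-limit estimates: at each event time, the running survival probability is multiplied by (1 - deaths/at-risk), with censored patients leaving the risk set without an event. A minimal sketch with hypothetical follow-up data (not the study's patients):

```python
# Minimal Kaplan-Meier product-limit estimator.
# Hypothetical follow-up data: (time in years, event) with event = 1 for
# progression and event = 0 for censoring; these values are illustrative.
data = [(1.0, 1), (2.0, 0), (3.0, 1), (3.0, 1), (4.0, 0), (5.0, 1), (6.0, 0)]

def kaplan_meier(data):
    """Return [(time, S(t))] at each event time."""
    at_risk = len(data)
    survival, s = [], 1.0
    for t in sorted(set(ti for ti, _ in data)):
        deaths = sum(1 for ti, ei in data if ti == t and ei == 1)
        if deaths:
            s *= 1.0 - deaths / at_risk          # product-limit step
            survival.append((t, s))
        at_risk -= sum(1 for ti, _ in data if ti == t)  # drop events and censored
    return survival

curve = kaplan_meier(data)
print(curve)
```

Two such curves (e.g. low versus high Bcl-2/Bax ratio) would then be compared with the log-rank test, as done in the study.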
Lessons Learned from 55 (or More) Years of Professional Experience in Urban Planning and Development

Reflecting on the many debates over the years on changing urbanization processes, on the towns and cities of yesterday, today, and tomorrow, the main challenge will be listening to lessons of wisdom from the past and adapting these to our future professional work. When Chief Seattle said that the Earth does not belong to us, we belong to the Earth, he called for more humility and respect, so as to plan for the needs of today and tomorrow, and not for the greed of a few. The doomsday scenarios of overpopulation only make sense if we continue to exploit our planet the way we do today, as if we had an infinite reservoir of resources. Already back in the 1960s, Barbara Ward, John F. C. Turner, and particularly Kenneth Boulding taught me to rethink our whole perception of Spaceship Earth. I have seen many towns and cities grow as if resources were limitless; I myself have seen and worked on efforts to focus on spatial quality, respecting nature whenever possible for a growing number of people, recognizing resources as being precious and scarce, and yet guaranteeing equitable access to a good quality of urban life. Such objectives are not evident when models in education, schools of thought, professional planners, and greedy developers are often geared towards the contrary: the higher the skyscrapers, the better; the more ego-tripping by architects, the more the rich like it; the more people are stimulated to consume, the better the world will be. Such narrow visions will no longer help. At several global urban planning and development events (1976, 1992, 1996, 2016, etc.), new ideas and agendas have been put forward. Whether the present Covid-19 crisis may induce a more rapid change in vision and practice is still too early to confirm, but luckily, several towns and cities, and a few visionary planners and decision makers, are showing some promising examples.
Our Background Is Our Wealth and Our Limitation

Knowing your own context is an essential precondition for being able to communicate meaningfully and professionally with others. Such a background is formed over the span of many years, from childhood to adulthood, through family, friends, school, education, higher learning, travel, volunteer work, and practical experience in design, construction, and built and un-built environmental planning work. Knowing your own context not only strengthens your perception of how we do things in our own context, but also of how we can start to exchange experience with others in other contexts. I was very fortunate to have had parents who let me explore, from a very early age, the city of Brussels, where I was born, as well as its outlying areas. I remember cycling perfectly safely at the age of six on the broad tree-lined boulevards with designated cycling paths (destroyed in the 1960s, now re-planned). I visited the Brussels World Exhibit EXPO 58 frequently when I was only 15 years old, and was already by then inspired by the architectural innovations of the time. Subsequently I studied architecture and further specialized in urban design in Norway, in urban planning in Seattle, WA, and Berkeley, CA, and in environmental planning and policies in Boulder, CO. I did stints of practice and apprenticeship on construction sites and was involved in master planning in Belgium, Norway, and the US. Later, I worked on urban design and planning projects in the Middle East and internationally before joining the University of Leuven. And even while there, I continued to work in practice and to undertake a lot of fieldwork. Without this practice, both at home and internationally, I would not have been able to keep one foot rooted in the practice of planning programs and projects, and one foot rooted in research, capacity building, and international cooperation.
It is important to immerse oneself, either locally or internationally, in contexts where other values, religions, languages, and traditions are practiced. One should force oneself to go beyond one's own comfort zone, leave the cocoon of one's own people and social class, and reach out to others, to the unknown. Mentors are essential to open up new ways of thinking and doing, to give you the freedom to experiment, and above all to share their unconditional wisdom, both as persons and as professionals. Mentors are not there to be imitated; in fact, good mentors would always say: "Do not just do what I say or do, but explore new paths." This is important because architecture and urban design in particular are far too often an imitation of what so-called 'star' architects and designers are doing. Copying from fashionable architectural magazines is the least creative process. In a rapidly globalizing world this is done far too often, and it only weakens the possibility of learning from other local contexts and practices. Many schools of architecture and planning in the South unfortunately had copy-paste programs from the North and thereby taught their people very little about their own context and culture. Although this is now gradually changing, it has evidently been a difficult basis for furthering understanding of local contexts. The 'Rem Koolhaases' of this world, for example, should be critically assessed. We do not want to hail ego-trippers in architecture and urban design who pretend their architecture is 'universal.' In addition, we have to learn over time. Awareness of sustainability requirements is now greater, and also much more urgent. Many Modernist and Post-Modernist architects have yet to come to that awareness and should learn more humbly from tradition, and work more modestly on a sustainable future rather than always wanting to be the unique stars. The slogan 'the higher the better' is a poor motivator for sustainable spatial qualities.
Long-term perspectives are always necessary. Will the 'Dubais' of this world survive, or are they the ruins of tomorrow? In Dar es Salaam, a few years ago, I was shown around a recently built high-rise building, totally inappropriately located within the urban fabric, not enhancing existing street patterns, with no public space or green area around it, obscuring the harbor view for passersby, with just a few gimmicks of colors and architectural details to make it look original: a futile and costly exercise. The field of spatial planning is wider than architecture, but it is also often more theoretical and more recently developed. Patrick Geddes laid some of the foundations of modern planning theories and practices. Of course, throughout history there have always been urban planners. The cities of Ur, Babylon, Rome, Paris, Beijing, Great Zimbabwe, and Cairo, for instance, had all been planned, albeit with differing planning concepts. In the more recent post-war period, Doxiadis and the Ekistics School were quite innovative for their times. They incorporated several disciplines in their 'Science of Ekistics,' dealt with a wide range of scale levels, and applied all this in many Master Plans in a number of countries. Countries like the Netherlands had their first "Spatial Planning Note" as early as the 1960s and have continued with revised notes regularly. Many countries in the world have continued with spatial planning efforts, although National Spatial Plans are fewer than the manifold local spatial development plans. Unfortunately, most spatial plans prepared by professionals only confirmed their subjugation to the neo-liberal systems dominating many parts of the world. They became the servants of the status quo, and were no longer the critical evaluators and innovative searchers for a more sustainable world. Luckily, exceptions are there to prove that other paths are possible.
In a recent publication edited by Louis Albrechts (2019), several decision-makers and professionals speak out on how they can make a positive difference in a changing world. One World and Many People and Places Already during your young life and during your studies you have increasing opportunities to open up to the world. Youngsters nowadays travel abroad with their parents, much more than previous generations did. But in doing so, they should also strengthen their observation of other cultures, and not just be tourists, but young observant professionals. During studies of higher learning the opportunities for exchange have become manifold. The European Erasmus programs, for example, have proven to be very successful, not only in terms of the numbers of students exchanged, but also very cost-efficient, as they have opened up the world to so many young professionals at relatively little cost. In my many years as Program Director of the Master's in Human Settlements, I have had over 450 undergraduate students who did their thesis work, most of them with several months of fieldwork, within the framework of one of our international cooperation projects in over 21 different countries. Many of these students became highly motivated, experienced these periods as being unique in their lives, and for some it was the start of an international professional career. These undergraduate students also learned from the many postgraduate students who came to study in Leuven, over 700 from 39 different countries over the last 25 years. Of course, keep in mind that the world keeps changing: what one may have learned today is not necessarily valid ten years from now. As mentioned, perspectives over time are important. During intensive fieldwork in Tunisia 30 years ago, we observed, in a Muslim rural setting, that men and women live more separately from each other, according to different rules and traditions, and that public space is organized accordingly.
It was not up to us to change this cultural tradition. However, in contemporary urban settings in Tunisia, few elements of such traditional practice remain, and even those that do persist for a time are gradually changing under the influence of growing openness between genders. So, we do not just learn solutions for a specific moment in time; through analysis and discussions on potential change patterns we learn to develop solutions valid for a longer time frame. Anyone, Anywhere Is A Potential Teacher or Co-Learner Going 'international' means having an open mind and a willingness to learn. As I have experienced over the years, one can learn from anyone, anywhere. Having done quite a lot of fieldwork with our team, we learned as much, or more, in the so-called 'slums', which we prefer to call popular or informal housing areas, as in some of the more highbrow formal architectural projects. A few simple masons taught me about building with sundried earth blocks and with rammed earth in Morocco, where I also met Elie Mouyal, a well-known architect there, who built with earth for the rich and the poor; a master builder showed me how to construct a Nubian vault in Egypt, after I had met Hassan Fathy and visited some of his work. In the late 1970s I started to cooperate with the University of Nairobi and met a Kenyan colleague there, Elijah Agevi, who knew so much about spatial planning and local housing, formal and informal, in East Africa, that I have always considered him a mentor and invited him several times as guest lecturer and team member to Leuven. I discovered bamboo architecture in Indonesia in remote rural villages in the mid-1970s, and I saw how they had mastered this ancient practice, and how it can be adapted to present and future architecture even in more urbanized areas. Now a few young professionals are finally continuing on this path, but unfortunately very little of it is taught at schools of civil engineering worldwide.
In the 1980s, while working in the north-eastern region of Esarn, in Thailand, in collaboration with several Thai institutes, we looked into replacing the traditional wood skeletons used in house construction with load-bearing walls made of interlocking sundried stabilized soil blocks. Wood had disappeared because of rampant deforestation, and poor people could no longer afford it. The local lateritic soil in the region was quite suitable for stabilized earth construction. Now, this method of construction is well established and applied widely in the region in the construction of schools, temples, water storage tanks, etc. Together with these innovations, and working with the Department of Agriculture, we initiated reforestation programs to make the region less prone to drought and crop failure. As far as construction goes, we still rely heavily on reinforced concrete, even though it is proven that cement production is not sustainable. In several countries, both in the Global North and the Global South, it has been proven that wood skeletons, even for buildings as high as ten floors, can be quite adequate and appropriate if the wood used is generated from reforestation programs. Bamboo and earth are also among the several age-old materials which have been rediscovered in recent decades. Similarly for public infrastructure, examples from so-called developing countries are very relevant for the North. Developed in the late 1970s, the public bus transport system of Curitiba in Brazil, for instance, has been far more effective than those in Belgium and has inspired several cities in Latin America to prioritize the public transport needs of the wider population. Several cities in China are now developing innovative ecological parks that one can learn from. Efforts to re-plan the Mekong Delta, particularly in Vietnam, a region that needs to become more resilient in the face of rising sea levels, are gaining recognition.
It is vital to break the limited and narrow scope of vision we often have in our own cultural worldview, and in our perception of essential resources such as land, air, water, minerals, etc. In many contexts, land is a common good, as are many natural resources, not to be appropriated as fully private by individuals and exploited for private gain. We are now also increasingly seeing the limits of appropriated rights to 'private' ownership. Many resources are only borrowed from the Earth and from past generations, and we have to take good care of these for future generations. We have to re-establish the value of the 'commons,' managing land and natural resources as a community, taking care of public goods (land, forests, clean air, clean water, and seas) in a respectful and sustainable way. The list of examples is endless, and we can only be thankful to have had the opportunity to learn from so many different people. Those of Us in the Global North Are Not Superior I was brought up in an era when the world of the North called itself 'developed' and the South 'under-developed' (later renamed 'developing'). How erroneous this view of the world was! We in the North might have developed some technological tools that others did not have, but as far as human relations are concerned, we are no better than any other people in the world. Now, even our technological systems often malfunction due to our bureaucracy, our lack of entrepreneurship, and our over-regulated institutions. Certainly, basic rules about fair labor, fair trade, environmental protection, human rights, etc., must be practiced, and we should propagate these, but in a globalized world we are now seeing the limits of the neo-liberal capitalist system, which often exploits natural resources to the benefit of only the rich and increases the gap between them and those who are weak, poor, and voiceless.
In terms of working towards a more sustainable world, none of us are superior; in fact, our ecological footprint in the North is far bigger than that of many other people in the South, making it far more difficult for us to change our patterns of production and consumption. It must be said that it is no longer possible to talk about the rich North and the poor South. Pockets of wealth or poverty exist in all cities, towns, and villages everywhere in the world. We cannot fight poverty without fighting excessive wealth. The Real World Is Much More Exciting than the Small Academic World The academic world is but a very small part of day-to-day reality. Most people do not live in this small world, often seen as an ivory tower. Indeed, that realization should make academics aware, particularly those working in international cooperation projects, of three important 'musts': First, they should explore and learn from this vast day-to-day reality, do fieldwork, and learn from practice. Secondly, they should translate their findings into understandable, user-friendly language, and communicate well with the subjects of their research, i.e., the people and communities they study. Researchers must never forget that the people they study give their personality and information 'on loan' to academics for study. Thirdly, the entire present system of 'publish or perish,' particularly as it is oriented only to peers, is a rather perverse system for evaluating academics. Many just publish without having anything new or meaningful to say, let alone directly involving their research subjects. In our work, we have always tried to promote the local partners we have worked with. Thesis students have been encouraged, often required, to make a presentation for the people who had been their research subjects. We have even encouraged the practice of 'revisiting projects.'
One of our alumni, after having done her thesis work in a project 'Building Together' in Bangkok, returned a few years later, re-evaluated the project, stayed in contact with the dwellers, and now, after many years, is still a good friend of several of them. In terms of academic disciplines, it should be clear that ordinary people, in their day-to-day lives, do not care about 'different disciplines.' They care about work that is done well, with care and attention. Separating disciplines is an academic invention. Cooperating with and transcending several disciplines is essential to work in the real world. Useful as a discipline may be in carrying out in-depth research, it more often becomes a big obstacle to the complex task of planning and building towns and villages. Near Mwanza in Tanzania, for example, over several years, we worked in close cooperation with an anthropologist and local sociologists to better understand the Sukuma's use of space, privacy requirements, and rituals while building their neighborhoods and villages. In Ho Chi Minh City, in Vietnam, in a major urban upgrading project lasting ten years, one of the strongest members of the local team was an experienced sociologist. Finally, and very importantly, we also have to work on different scales at the same time: landscapes, infrastructure patterns, nature zones, water bodies, streams and rivers, open, built and enclosed spaces, buildings, building sites, and technical support are all part of a combined human settlements approach. Separating these while planning does not contribute to a holistic, qualitative outcome. An architect just designing a building on an assigned site, without questioning the assignment itself, without questioning or responding to the building's suitability in terms of its wider spatial impact, and without questioning the use of materials and techniques, is forgoing the essential task of a professional.
Of course, questioning something has to go together with the willingness to propose alternatives. We have to be more willing to think 'outside the box' and not be afraid to work with other disciplines, to cross borders, and to dare to experiment even if there is no institutional or regulatory framework to do so. This also requires the development of a language to communicate with other disciplines and to integrate and confront different perspectives of the same reality. Every Context Is Unique, and Cooperation and Exchange Enhance this Uniqueness Every context is unique; every community one works with is unique. Yet uniqueness is not a barrier to learning, communicating, or exchanging. On the contrary, uniqueness offers the best opportunity not to fall into routine practice, not to rely on copycats, not to rely on fashion trends, but to explore each context and each new assignment as a unique opportunity for cooperation and exchange. There are many types of cooperation and partnership possible, each one with its own strengths and weaknesses. Over the years, we have undertaken many different modalities of cooperation: with international formal institutions (e.g., UN-Habitat, UN Environment Programme, UNICEF, ICLEI-Local Governments for Sustainability), with international NGOs (ACHR, Habitat Coalition, SELAVIP, Protos), with universities or university networks (Asian Institute of Technology, King Mongkut's University, UNPAR, ITB, HCMU, SEPT, NED, UNairobi, Ardhi/UDAR, WITS, UCT, MedCampus, ALFA, UCuenca), with local governments (Nakuru, Vinh City, Essaouira, Bayamo, Missungwi, Tarime), with local NGOs and community-based organizations in different countries, with mixed associations (government, Flemish Interuniversity Council, universities, NGOs, UN partners), and in a few cases, with commercial establishments.
It is essential to keep one's own identity clearly spelled out from the very beginning and to know one's limitations and strengths vis-à-vis the partners. Diplomacy is required, but one does not have to become like the other! If cooperation among various stakeholders and partners is to be lasting and successful, then the role and mandate of each partner should be spelled out very clearly from the outset. Often the most rewarding types of cooperation are the relatively small-scale initiatives based on personalized working relationships. When a group of young professionals designed and built the Women's house in Ouled Merzoug, in the province of Ouarzazate, Morocco, during their Building Beyond Borders program at the University of Hasselt, guided by and in cooperation with local communities and artisans, they forged long-lasting relationships, and three of these young professionals are now continuing to upgrade schools in the region (Block, 2020; see also Studio Nous Nous, n.d., for another school project, built with local crafts and materials, in the same region in Morocco). Increasingly, local-to-local cooperation is gaining strength, particularly since local authorities and local partners are the closest to their own context. In this way, top-down planning is slowly being reduced to its proper proportions to find a better equilibrium with more bottom-up planning. In a major program, "Localizing Agenda 21," our Post Graduate Centre for Human Settlements at KU Leuven, together with UN-Habitat and support from the Belgian Development Cooperation, launched a localized cooperation mechanism for strategic spatial planning for better, more sustainable urban development in several medium-sized cities. Cooperation with UN-Habitat, local authorities, local communities, and experts and academic centers was challenging but successful.
A major publication, "Urban Trialogues" (Loeckx, Shannon, Tuts, & Verschure, 2004), elaborates both the theoretical foundations and the practical applications of this approach. Clarity and Honesty Will Strengthen Long-Term Engagement Building on the previous point, it is important to engage oneself and one's institution fully for the long term, not only in the short term as many travelling consultancy firms do. In my experience, the minimum duration for a period of cooperation was five years, sometimes even lasting more than ten years. This is definitely so for spatial planning programs, which often take time to implement and come to fruition. Even architectural projects need time. One should start with the landscape planning long before a building is built. Trees take a longer time to grow but are just as essential as a building. Only in longer-term cooperation can one learn from one another and establish solid and meaningful relationships. In Nakuru (Kenya), Vinh (Vietnam), Essaouira (Morocco), and Bayamo (Cuba), we had a commitment of a minimum of five years; with UNPAR in Indonesia and with COOPIBO in Tanzania, a commitment of more than ten years; and with ACHR and SELAVIP, we have had over 30 years of ongoing cooperation. Bringing cooperation to an end must also be carefully planned. Three of the main reasons for ending cooperation are the following. First, a gradual misuse of scarce resources: as an academic institution (or any organization for that matter), you have to stress that money is not the key factor of your cooperation and that whatever money there is should be openly accounted for. In Indonesia, after having worked together for more than 15 years, we had to end all cooperation because of widespread misuse of resources.
Secondly, and very importantly, as self-reliance is a key element, one can end an ongoing cooperation because goals and objectives have been met and the local partners continue the work set out jointly in an excellent (but possibly different) manner on their own. This was the case in Cuba, South Africa, Vietnam, and Thailand. Thirdly, one can conclude that, in spite of many years of effort, results are not sufficient and do not warrant continuation, or that there is a divergence in objectives between the different partners. This was partially the case with Ardhi University in Tanzania, and happened often when commercial interests or firms/consultants were involved in the cooperation. To conclude, the wise words from 1983 of our Pakistani colleague Arif Hasan can guide us: I will not do projects that will irreparably damage the ecology and environment of the area in which they are located; I will not do projects that increase poverty, dislocate people, and destroy the tangible and intangible cultural heritage of communities that live in the city; I will not do projects that destroy multiclass public space and violate environment-friendly bylaws and zoning regulations; and I will always object to insensitive projects that do all this, provided I can offer viable alternatives. Work towards Sustainable Development without Too Much Compromise The ultimate goal of all our work should be to come to a better, more qualitative, sustainable built environment. This is a never-ending process, and hence some will be disappointed, but in reality it is a continuous search for the better, in which we can define short-, medium-, and long-term objectives with well-defined steps, actions, programs, and projects. We call it a strategic process, because each project, however small or short-term, should only be undertaken if it contributes towards this search and the defined intermediate steps towards sustainability.
For more than 50 years now, we have considered sustainability as essential, using the recommendations of the 1972 Stockholm Conference, the various UN-Habitat Conferences (e.g., in 1976, 1996, and 2016), and the Millennium Development Goals. These days, the Sustainable Development Goals provide us with even clearer goals, objectives, and action plans. The beauty of working toward sustainability is that no one can claim the ultimate solution or say: "I am there, that's it." No, our search will involve anyone, everywhere in the world, on a continuous basis. And it will definitely not be a search for the 'more' (money or material wealth) but for the 'better' (health, quality of life, freedom of expression, etc.). Planning for sustainability is also planning for resilience, local adaptations, and transitions. Will the present pandemic teach us a lesson to focus more on essentials and less on trivialities? Personally, I find it too early to answer this question in depth. The often heard slogan "This changes everything" is too simplistic at the moment. So far, it is more likely that many decision-makers and the better-off people worldwide consider "returning to business as usual" as their main mantra. Is that wise? No, it is not, as it indicates a reluctance to learn and adapt, but psychologically it could be understandable. In the immediate aftermath of war, it has been observed that reconstructing reality as it was is one way of overcoming the traumas of destruction. However, learning from changing realities seems to be one of the most difficult things for planners, as for our societies worldwide today. The first oil crisis of 1973 was a warning; car-free Sundays were organized; people were encouraged to take public transport. Has this generated new spatial planning and architecture more focused on essential spatial qualities for the great majority of ordinary people? No; on the contrary, greed and megalomania have taken over in many cities.
Freeways and car-oriented spatial planning dominate the landscape. Old bicycle paths, trees, and green spaces have been taken away to give motorized transport full priority. Many urban neighborhoods became 'dormitories to house the productive workforces.' Skyscrapers became the new model, the higher the better, even with 20-storey-high photos of their greedy owners embedded in the facades, such as in Dubai. Pudong, China, is probably a good example. My first memory of the river site opposite Shanghai is of rice fields and a few giant billboards. Now the skyscrapers dominate; some of the older structures are dwarfed and, I would add, ridiculed by these giants. Were these older villages bad? No, not at all; they were made obsolete. Later on, new villages and gated communities were planned on the Pudong side, mostly pastiche copies of Danish, British, or Spanish neighborhoods, luckily offering residential quality far better than the megalomaniac high-rise areas. The limitations of Spaceship Earth have yet to change the behavior of many, particularly in the many rich pockets of our world, the large greedy enterprises, and the wealthy. Never Forget the Past in Planning for the Future Let me conclude with a last, short lesson learned. We are but a short moment on Earth, so we have to remain modest and know that many, many generations before us have planned and built human settlements, using planning and construction techniques and practices that evolved over many generations and were adapted to local cultures and local resources. Recent globalization did away with some of these modes of planning and building, and many new typologies of built environments emerged. In addition, and this remains one of the biggest problems today, the Modernist movement in Architecture and Urbanism wanted to start from a clean slate, as if the past was not there.
Le Corbusier presented a new plan for Paris (luckily never implemented), after Haussmann had already destroyed old neighborhoods. Housing became a 'machine to live in,' and superblocks emerged as prototypes, putting people into industrialized prefab boxes (like 'sardines'). The new modes of Modernist motorized transport, for example, altered the city and its landscape infrastructure and layout. We now see the limits of such a car-oriented approach. Walking and cycling have become more important modes of transport, so we have to rethink and re-plan our streets, public buildings, green and open spaces, housing, and service facilities. Adapting to local culture, climate, and resources was often an afterthought among urbanists, and unfortunately it still is among most of the very greedy project developers, who disregard fundamental concepts of sustainable development. Such developers now dominate by far over powerless (or disinterested/corrupted) public authorities, backed by unscrupulous professionals. This must and will change. So let us conclude with an optimistic look towards the future. The younger generation will have to be called upon. If they have the courage and the vision, the younger generations (can) have the power, (can) have the spirit, and (can) have the awareness. And several of us of the older generations are there to back this new generation: Finally, as a junior urban scholar, it strikes me that the role of geographers and planners has been primarily that of audiences during this outbreak. Various Chinese urban scholars expressed disappointment about their limited ability to make contributions to this war against the coronavirus, while witnessing how other professionals are more actively involved. To me, the epidemic also raises questions about how urban scholars could position ourselves in an epidemic.
Urban planners, who have long been positioned to deal with uncertainty and to mediate between authorities and publics, might be well positioned to work with other stakeholders on an epidemic-response system that builds a collaborative framework among different sectors, smoothens the information flow between experts and people, and helps city governments to deal with uncertain developments of the outbreak. (Hang, 2020)
STREETGEN: IN-BASE PROCEDURAL-BASED ROAD GENERATION: Streets are large, diverse, and used for conflicting transport modalities as well as social and cultural activities. Proper planning is essential and requires data. Manually fabricating data that represent streets (street reconstruction) is error-prone and time consuming. Automatising street reconstruction is a challenge because of the diversity, size, and scale of the details (∼cm for cornerstones) required. The state of the art focuses on roads and is strongly oriented by each application (simulation, visualisation, planning). We propose a unified framework that works on real Geographic Information System (GIS) data and uses a strong, yet simple, hypothesis when possible to produce coherent street modelling at the city scale or street scale. Because it is updated only locally in subsequent computing, the result can be improved by adapting the input data and the parameters of the model. We reconstruct the entire street network of Paris in a few minutes and show how the results can be edited simultaneously by several concurrent users.
INTRODUCTION Streets are complex and serve many types of purposes, including practical (walking, shopping, etc.), social (meeting, etc.), and cultural (art, public events, etc.). Managing existing streets and planning new ones necessitates data, as planning typically occurs on an entire neighbourhood scale. These data can be fabricated manually (cadastral data, for instance, usually are). Unfortunately, doing so requires immense resources in time and people. Indeed, a medium-sized city may have hundreds of kilometres of streets. Streets are not only spatially wide, they are also very plastic and change frequently. Furthermore, street data must be precise, because some of the structuring elements, like cornerstones (which separate sidewalks from roadways), are only a few centimetres in height. Curved streets are also not adapted to the Manhattan hypothesis, which states that cities are organised along three dominant orthogonal directions (Coughlan and Yuille, 1999). The number and diversity of objects in streets are also particularly challenging. Because street data may be used for very different purposes (planning, public works, and transport design), they should be accessible and extensible.
Traditionally, street reconstruction solutions are more road reconstruction and are also largely oriented by the subsequent use of the reconstructed data. For instance, when the use is traffic simulation (Wilkie et al., 2012; Nguyen et al., 2014; Yeh et al., 2015), the focus is on reconstructing the road axis (sometimes lanes), not necessarily the roadway surface. In this application, it is also essential that the reconstructed data form a network (with topological properties), because traffic simulation tools rely on it. However, the focus is clearly to reconstruct roads and not streets. Streets are much more complex objects than roads, as they express the complexity of a city, and contain urban objects, places, and temporary structures (like a marketplace). The precision of the reconstruction is, at best, around a metre in terms of accuracy. Another application is road construction for virtual worlds or driving simulations. In this case, we may simply want to create realistic-looking roads. For this, it is possible to use real-life civil engineering rules, for instance using a clothoid as the main curve in highways (McCrae and Singh, 2009a; Wang et al., 2014). When trying to produce a virtual world, the constructed road must blend well into its environment. For instance, the road should pass over a bridge when surrounded by water. We can also imitate real-world road-building constraints, and choose a path for the road that will minimise costs (Galin et al., 2010). Roads can even be created to form a hierarchical network (Galin et al., 2011). Reconstructed roads are nice looking and blend well into the terrain, but they do not match reality. That is, they only exist in the virtual world.
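The clothoid mentioned above is the Euler spiral, whose curvature grows linearly with arc length, which is why civil engineers use it as a transition curve between straight segments and circular arcs. As an illustration only (not code from any of the cited works), its points can be sampled by numerically integrating the defining Fresnel-style integrals:

```python
import math

def clothoid_points(a, s_max, n=1000):
    """Sample points along a clothoid (Euler spiral) with scale parameter a.

    The curve is defined by
        x(s) = integral_0^s cos(t^2 / (2 a^2)) dt
        y(s) = integral_0^s sin(t^2 / (2 a^2)) dt
    so the curvature k(s) = s / a^2 grows linearly with arc length s.
    Integration uses the midpoint rule with n steps.
    """
    pts = [(0.0, 0.0)]
    x = y = 0.0
    ds = s_max / n
    for i in range(n):
        t = (i + 0.5) * ds                # midpoint of the i-th step
        theta = t * t / (2.0 * a * a)     # tangent heading at arc length t
        x += math.cos(theta) * ds
        y += math.sin(theta) * ds
        pts.append((x, y))
    return pts
```

The tangent heading at arc length s is s²/(2a²), so a road designer can pick `a` to match the target curvature at the end of the transition; the sampled polyline can then be fed to any road-surface generator.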
The aim may also be to create a road network as the base layout of a city. Indeed, stemming from the seminal work of (Parish and Muller, 2001), a whole family of methods first creates a road network procedurally, then creates parcels and extrudes these to create a virtual city. These methods are very powerful and expressive, but they may be difficult to control (that is, to adapt the method to get the desired result). Other works focus on control methods (Chen et al., 2008; Lipp et al., 2011; Beneš et al., 2014). Those methods suffer from the same drawback: they are not directly adapted to modelling reality. More generally, given a procedural generation method, finding the parameters so that the generated model matches the desired result is still ongoing research (inverse procedural modelling, as in (Martinovic and Van Gool, 2013) for façades, for instance).

We start from rough GIS data (the Paris road axes); thus, our modelling is based on a real road network. Then, we use basic hypotheses and a simple road model to generate more detailed data. In this way, we generate street data for a large city (Paris); the result is one street network model. We use a widespread street network model, where the skeleton is formed by street axes and intersections forming a network. Other constituents (lanes, pedestrian crossings, markings, etc.) are then linked to this network. We base all our work on a Relational DataBase Management System (RDBMS), which stores inputs, results, topology, and processing methods. This article follows the IMRAD format (Wu, 2011). In Section 2. we explain why we chose to base our work on an RDBMS, state our hypotheses, and describe how we generate the road surface and how the parameters of the resulting model can be edited. In Section 3.
we provide results of street generation and of editing. In Section 4. we discuss the results and present limitations and possible improvements.

The design of StreetGen results from a compromise between theoretical and practical considerations. First, StreetGen amplifies data using simple yet strong hypotheses. The approach is to produce correct results when the hypotheses hold, and to fall back to something more robust when they do not, so as to always have a best-guess result. Second, StreetGen has been designed to work at several different scales: it can generate street data at the city scale, and the exact same method also generates street data interactively at the street scale. Lastly, StreetGen results are used by different applications (visualisation, traffic simulation, and spatial analysis). As such, the result is a coherent street data model with enforced constraints, and we also keep links with the input data (traceability).

Introduction to RDBMS

We chose to use an RDBMS ((PostgreSQL, 2014) with (PostGIS, 2014)) at the heart of StreetGen for several reasons. First, RDBMSs are classical and widespread, which means that any application using our results can easily access them, whatever the Operating System (OS) or programming language. Second, RDBMSs are very versatile and can regroup, in one common framework, our input GIS data, a road network (with topology), the resulting street model, and even the methods to create it. Unlike file-based solutions, we put all the data in relation and enforce these relations. For instance, our model contains street surfaces that are associated with the corresponding street axis. If one axis is deleted, the corresponding surface is automatically deleted. We push this concept one step further and link result tables with input tables, so that any change in the input data automatically updates the result. Lastly, using an RDBMS offers multi-OS, multi-GIS (many clients
possible), multi-user capabilities, and has been proven to scale easily. We stress that the entirety of StreetGen is self-contained in the RDBMS (input data, processing methods, and results).

StreetGen Design Principles

Input of StreetGen

We use few input data, and accept that these are fuzzy and may contain errors. The first input is a road axis network made of polylines, with an estimated roadway width for each axis. We use the BDTopo product for Paris in our experiment, but this kind of data is available in many countries. It can also be reconstructed from aerial images (Montoya-Zegarra et al., 2014), Lidar data (Poullis and You, 2010), or tracking data (GPS and/or cell phone) (Ahmed et al., 2014). Using the road axis network, we reconstruct the topology of the network up to a tolerance, using either GRASS GIS (Neteler et al., 2012) or PostGIS Topology directly. We store and use the network with valid topology with PostGIS Topology. The second input is a roughly estimated average speed for each axis. We can simply derive it from road importance, or from road width (when a road is wide, it is more likely that the average speed will be higher). The third input is our modelling of streets and the hypotheses we make. Because the data we need can be reconstructed and the requirements on data quality are low, our method could be used almost anywhere.

Street data model

Real-life streets are extremely complex and diverse; we do not aim at modelling all possible streets in all their subtleties, but rather at modelling typical streets with a reasonable number of parameters. First, we observe that streets and urban objects are structured by the street axis. For instance, a pedestrian crossing is defined with respect to the street axis. As such, we centre our model on street axes.
Second, we observe that streets can be divided into two types: parts that are morphologically constant (same roadway width, same number of sidewalks, etc.), and transition parts (intersections, and transitions where the roadway width increases or decreases). We follow this division, so our street model is made of morphologically constant parts (sections) and transition parts (intersections). The separation between a transition and a constant part is the section limit, expressed as a curvilinear abscissa along the street axis. Third, classical streets are adapted to traffic, which means that a typical vehicle can safely drive along the street at a given speed. This means that cornerstones in an intersection do not form sharp right turns that would be dangerous for vehicle tires. The most widespread cornerstone path in this case seems to be the arc of a circle, as it is the easiest form to build during public works. Therefore, we consider a cornerstone path to be either a segment or the arc of a circle. This choice is similar to (Wilkie et al., 2012) and is well adapted to the city, but not so well adapted to periurban roads, where the curve of choice is usually the clothoid (as in (McCrae and Singh, 2009b)), because it is actually the curve used to build highways and fast roads. The surface of an intersection is then delimited by the farthest points on each axis where the border curve starts. To this base model, we add lanes, markings, etc.

Kinematic rule of thumb

We propose basic hypotheses to estimate the radius of the corners in an intersection. We emphasise that these are rules of thumb that give a best-guess result reasonably close to reality; they do not mean that the streets were actually built following these rules (which is false for Paris, for instance).
Our first hypothesis is that streets are adapted so that vehicles can drive conveniently at a given speed s that depends on the street type. For instance, vehicles tend to drive more slowly on narrow residential streets than on city fast lanes. Our second hypothesis is that, at a given speed, a vehicle is limited in the turns it can make. Considering that the vehicle follows an arc-of-circle trajectory, a radius that is too small would produce a dangerous acceleration and would be uncomfortable. Therefore, we can find the radius r associated with a driving speed s through an empirical function f(s) → r. This function is based on real observations from the French organisation SETRA (SETRA, 2006). From our street data model and these kinematic rules of thumb, we deduce that if we roughly know the type of a road, we can roughly estimate the speed of the vehicles on it. From the speed, we can estimate a turning radius, which leads to the roadway geometry. Schematically, we consider that a road border is defined by a vehicle driving along it at a given speed, while making comfortable turns.

Robust and Efficient Computing of Arcs

Goal

The hypotheses in the above section allow us to guess a turning radius from the road type. This turning radius is used to reconstruct the arcs of circles that delimit the junctions. The method must be robust, because our hypotheses are just best guesses and are sometimes completely wrong.
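As a concrete illustration, the empirical speed-to-radius mapping f(s) → r can be implemented as a small interpolated lookup table. This is only a sketch: the speed and radius values below are illustrative placeholders, not the published SETRA figures.

```python
import bisect

# Illustrative speed (km/h) -> comfortable turning radius (m) table.
# These numbers are placeholders standing in for the empirical SETRA
# observations; only the interpolation mechanics matter here.
_SPEED_KMH = [20, 30, 50, 70, 90]
_RADIUS_M = [10, 25, 75, 180, 350]

def turning_radius(speed_kmh):
    """Piecewise-linear interpolation of the empirical f(s) -> r mapping,
    clamped at both ends of the table."""
    if speed_kmh <= _SPEED_KMH[0]:
        return _RADIUS_M[0]
    if speed_kmh >= _SPEED_KMH[-1]:
        return _RADIUS_M[-1]
    i = bisect.bisect_left(_SPEED_KMH, speed_kmh)
    s0, s1 = _SPEED_KMH[i - 1], _SPEED_KMH[i]
    r0, r1 = _RADIUS_M[i - 1], _RADIUS_M[i]
    return r0 + (speed_kmh - s0) / (s1 - s0) * (r1 - r0)
```

A slower street type thus maps to a tighter corner radius, which directly drives the junction geometry.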
Given two road axes (a1, a2), each a polyline (and not a segment), each with an approximate width (w1, w2) and an approximate turning radius (r = min(r1, r2), or another choosing rule), we want to find the centre of the arc of circle that a driving vehicle would follow.

Method

Our first method was based on explicit computation, as in (Wang et al., 2014), Figure 13. However, this method is not robust: it has special cases (flat angle, zero-degree angle, one road entirely contained in another), is intrinsically two-dimensional (2D), and, most importantly, cannot be used on polylines. Yet real-world data is precisely made of polylines, due to data specifications or errors. We chose instead to use morphological and Boolean operations to overcome these limitations. Our main operators are positive and negative buffers (formally, the Minkowski sum of the input with a disk of given size) as well as surface intersection, union, etc. We are looking for the centre of the arc of circle. By definition, the centre lies at a distance d1 = w1 + r from a1 and a distance d2 = w2 + r from a2. We translate this into geometrical operations: buffer_i, the buffer of ai with di; inter, the intersection of the boundaries of the buffers, which is commonly a set of points but can also be a set of points and curves. All these places are candidate circle centres. We keep closest, the point of inter that is closest to the junction centre, so as to retain only the candidate that makes the most sense given our hypotheses.

When hypotheses are wrong

In some cases, closest may be empty (for instance, when one road is geometrically contained in another, considering their widths). In this case our method fails without damage, as no arc is created.
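For intuition, in the simplified case where the two axes are straight lines, the condition "distance d1 from a1 and d2 from a2" can be solved directly by intersecting the two offset lines. The sketch below implements only this straight-axis case; the buffer-based method in the text exists precisely because real axes are polylines.

```python
import math

def _unit(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def corner_centre(junction, dir1, dir2, d1, d2):
    """Centre of the corner arc for two straight axes through `junction`
    with directions dir1 and dir2: the point at distance d1 from axis 1
    and d2 from axis 2, inside the wedge between the axes."""
    u1, u2 = _unit(dir1), _unit(dir2)
    # Normals to each axis, oriented into the wedge between the two axes.
    n1 = (-u1[1], u1[0])
    if n1[0] * u2[0] + n1[1] * u2[1] < 0:
        n1 = (-n1[0], -n1[1])
    n2 = (-u2[1], u2[0])
    if n2[0] * u1[0] + n2[1] * u1[1] < 0:
        n2 = (-n2[0], -n2[1])
    # Intersect the two offset lines: p1 + t*u1 == p2 + s*u2.
    p1 = (junction[0] + d1 * n1[0], junction[1] + d1 * n1[1])
    p2 = (junction[0] + d2 * n2[0], junction[1] + d2 * n2[1])
    det = u1[0] * (-u2[1]) - u1[1] * (-u2[0])
    if abs(det) < 1e-12:
        return None  # (nearly) parallel axes: no unique corner centre
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-u2[1]) - ry * (-u2[0])) / det
    return (p1[0] + t * u1[0], p1[1] + t * u1[1])
```

For a right-angle junction with d1 = w1 + r = 13 and d2 = w2 + r = 12, the centre lands at (12, 13), as expected from the two offset distances.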
The guessed radius may also not be adapted to the local road network topology. This predominantly happens when a road axis is too short with respect to the proposed radius. In this case, we reduce the guessed radius to its maximal possible value by explicitly computing the maximum radius when possible. It also happens that the hypotheses regarding the radius are wrong, which creates obviously misplaced arcs. We chose a very simple way to estimate whether an arc is misplaced: a threshold on the distance between the arc and the centre of the intersection. In this case, we set the radius to a minimum that corresponds to the radius of the Paris lane separator stones (0.15 m).

Computing Surfaces from Arc Centres

Border points

Once the centre of a circle is found, we can create the corresponding arc of circle by projecting the centre of the circle onto both axes buffered by wi. In fact, we do not use a true projection, as a projection onto a polyline may be ill-defined (for instance, projecting onto the closest segment may not work). Instead, we take the closest point. Similarly, we 'project' the circle centre onto the road axis. We call these projections candidate border points.

Section and intersection surfaces

We compute the section surface by first creating border lines at the end of each section, out of the border points. The border lines are normal to a local straight approximation of the road axis. Then, we use these lines to cut the buffered road axis and obtain the section surface. At this point, it would be possible to construct the intersection surface by linking border lines to arcs, passing by the buffered road axes when necessary. We found it too difficult to do this robustly, because some of the previous results may be missing or slightly wrong due to bad input data, wrong hypotheses, or computing precision issues.
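The 'projection' used for the border points — taking the closest point on a polyline rather than an orthogonal projection — always exists and can be sketched as a scan over the segments:

```python
def closest_point_on_polyline(pt, polyline):
    """Nearest point of the polyline to pt. Used here as a robust
    substitute for projection, which may be ill-defined on polylines."""
    best, best_d2 = None, float("inf")
    for (ax, ay), (bx, by) in zip(polyline, polyline[1:]):
        abx, aby = bx - ax, by - ay
        ab2 = abx * abx + aby * aby
        # Parameter of the orthogonal foot, clamped to the segment.
        t = 0.0 if ab2 == 0 else max(0.0, min(1.0, ((pt[0] - ax) * abx + (pt[1] - ay) * aby) / ab2))
        cx, cy = ax + t * abx, ay + t * aby
        d2 = (pt[0] - cx) ** 2 + (pt[1] - cy) ** 2
        if d2 < best_d2:
            best, best_d2 = (cx, cy), d2
    return best
```

Unlike projecting onto the closest segment only, this never returns a point off the polyline, even near reflex vertices.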
We prefer a less specific method. We build all possible surfaces from the line set comprising the arcs, border lines, and buffered roads. We then keep only the surface that corresponds to the intersection.

Variable buffer

In the special case where the intersection is only a change of roadway width, an arc-of-circle transition is less realistic than a linear transition. We use a variable buffer to do this robustly. It also offers the advantage of being able to control the three most classical transitions (symmetric, left, and right) and the transition length using only the street axis. We define the variable buffer as a buffer whose radius is defined at each vertex (i.e., each point of the linestring). The radius varies linearly between vertices. One easy, but inefficient, solution to compute it is to build circles and isosceles trapezoids and then union the surfaces of these primitives.

Lanes, markings, street objects

Based on the street section, we can build lanes and lane separation markings using a buffer. Note that simply translating the centre axis would not work with polylines (Figure 10: starting from the centre line, a translation would not create a correct lane; we must use the buffer). Our input data contains an estimate of the number of lanes. Even when such data is missing, it can still be guessed from road width, road average speed, etc., using heuristics. The number of lanes could also be retrieved from various remote sensing data; for instance, (Jin et al., 2009) propose to use aerial images. We can also build pedestrian crossings along the border lines. Using intersection surfaces, we build city blocks. We use the topology of the road network to obtain the surface of the face corresponding to the desired block. Then, we use Boolean operations to subtract the street and intersection surfaces from the face. This has the advantage of still providing results when some of the streets limiting the block have not been computed.
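The variable buffer described above can be approximated numerically: sample discs along each segment, with the radius interpolated linearly between vertex radii. This sketch only answers point-membership queries (a stand-in for the exact circle-and-isosceles-trapezoid union):

```python
def in_variable_buffer(pt, vertices, radii, samples=64):
    """Approximate membership test for a buffer whose radius is defined at
    each vertex and varies linearly in between, sampled as a dense union
    of discs along each segment."""
    for (a, ra), (b, rb) in zip(zip(vertices, radii), zip(vertices[1:], radii[1:])):
        for i in range(samples + 1):
            t = i / samples
            cx, cy = a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])
            r = ra + t * (rb - ra)
            if (pt[0] - cx) ** 2 + (pt[1] - cy) ** 2 <= r * r:
                return True
    return False
```

Along an axis whose radius grows from 1 to 3, the buffered region widens linearly, which is exactly the roadway-width transition wanted here.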
Figure 11: The blocks are generated even when some parts of the street network have not been computed.

Concurrency and scaling

One big query

We emphasise that StreetGen is one big SQL query (using various PL/pgSQL and Python functions). The first advantage this offers is that it is entirely wrapped in one RDBMS transaction. This means that if, for any reason, the output does not respect the constraints of the street data model, the result is rolled back (i.e., we come back to a state as if the transaction never happened). This offers a strong guarantee on the resulting street model as well as on the state of the input data. Second, StreetGen uses SQL, which naturally works on sets (an intrinsic SQL principle). This means that computing n road surfaces is not computing n times one road surface. This is paramount, because computing one road surface actually requires using its one-neighbours in the road network graph; thus, computing each road individually duplicates a lot of work. Third, we benefit from the PostgreSQL advanced query planner, which collects and uses statistics on all the tables. This means that the same query on a small or large part of the network will not be executed the same way: the query planner optimises the execution plan to find the most effective one. This, along with extensive use of indexes, is the key to making StreetGen work seamlessly at different scales.

One coherent street model as result

One of the advantages of working with an RDBMS is concurrency (the capacity for several users to work with the same data at the same time). By default, this is true for StreetGen inputs (the road network): several users can simultaneously edit the road axis network with total guarantees on the integrity of the data.
However, we propose more, and exploit the RDBMS capacities so that StreetGen does not return a set of streets, but rather creates or updates the street model. This means that we can use StreetGen on the entire Paris road axis network, and it will create a resulting street model. Using StreetGen a second time on only one road axis will simply update the parameters of the street model associated with this axis. Thus, we can guarantee at any time that the output street model is coherent and up to date. Computing the street model for the first time corresponds to using an 'insert' SQL statement. When the street model has already been created, we use an 'update' SQL statement. In practice, we automatically mix those two statements, so that when computing a part of the input road axis network, existing street models are automatically updated and non-existing ones are automatically inserted. The short name for this kind of logic (if the result does not exist yet, then insert, else update) is 'upsert'.

This mechanism works flawlessly for one user, but is subject to a race condition with several users. We illustrate this problem with a synthetic example. The global street model is empty. User1 and User2 both compute the street model si corresponding to a road axis ri. Now, both users upsert their results into the street table. The race condition creates an error (the same result is inserted twice). We can solve this race problem with two strategies. The first strategy is that when the upsert fails, we retry it until it succeeds. This strategy offers no theoretical guarantee, even if, in practice, it works well. We chose a second strategy, based on semaphores, which works by avoiding computing streets that are already being computed.
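The upsert-plus-semaphore logic can be sketched in plain Python (in StreetGen itself this lives inside the RDBMS as mixed INSERT/UPDATE statements and semaphore tags; the class and method names below are illustrative):

```python
import threading

class StreetModelTable:
    """Toy model of the street table: 'upsert' semantics plus a semaphore
    set that tags road axes currently being computed."""

    def __init__(self):
        self._lock = threading.Lock()
        self._tagged = set()   # axes currently being computed
        self.rows = {}         # axis_id -> street model parameters

    def acquire(self, axis_ids):
        """Tag and return only the axes not already being computed, so two
        concurrent users never upsert the same street."""
        with self._lock:
            free = [a for a in axis_ids if a not in self._tagged]
            self._tagged.update(free)
            return free

    def upsert(self, axis_id, params):
        """Insert the street model if absent, otherwise update it."""
        action = "insert" if axis_id not in self.rows else "update"
        self.rows[axis_id] = params
        return action

    def release(self, axis_ids):
        with self._lock:
            self._tagged.difference_update(axis_ids)
```

A first run on an axis inserts its street model; recomputing the same axis updates it in place, and the semaphore set makes a second concurrent worker skip axes that are already tagged. (Since PostgreSQL 9.5, the pure upsert part can also be written as a single INSERT ... ON CONFLICT DO UPDATE statement.)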
When using StreetGen on a set of road axes, we use semaphores to tag the road axes that are being processed. StreetGen only considers working on road axes that are not already tagged. When the computing is finished, StreetGen releases the semaphore. Thus, any other user wanting to compute the same road axis will simply do nothing as long as those streets are being computed by another StreetGen user. This strategy offers theoretically sound guarantees, but uses a lot of memory.

StreetGen Robustness

Overall, StreetGen generates the entire Paris road network. We started by generating a few streets, then a few blocks, then the sixth arrondissement of Paris, then a fourth of Paris, then the entire south of Paris, then all of Paris. Each time we changed scale, we encountered new special cases and exceptions, and each time we had to make StreetGen more robust. We think this is a good illustration of the complexity of some real-life streets, and also of possible errors in the input data.

Quality

Overall, most Paris streets seem to be adapted to our street data model. StreetGen results look mostly realistic, even in very complex or overlapping intersections. Results are unrealistic in a few borderline cases, because of either the hypotheses or the limitations of the method. Those cases are, however, easily detected and could be solved individually. Failure 1 is caused by the fact that axes 1 and 2 form a loop; thus, in some special cases, the whole block is considered an intersection. This is rare and easy to detect. Failure 2 is caused by our method of computing the intersection surface; it could be dealt with using the variable buffer. Failure 3 is more subtle and occurs because one axis is too short with respect to the radius. It could be fixed by editing the data so that it is closer to reality, until a very good match is reached.
Scaling

The entire Paris street network is generated in less than 10 minutes (1 core). Using the exact same method, a single street (and its one-neighbours) is generated in about 200 ms, which is below the human interactive limit of about 300 ms.

SQL set operations

We illustrate the specificity of SQL (working on sets) by testing two scenarios. In the first scenario (no-set), we use StreetGen on the Paris road axes one by one, which would take more than 2 hours to complete. In the second scenario (set), we use StreetGen on all the axes at once, which takes about 10 minutes.

Concurrency

We test StreetGen with two users simultaneously computing two road axis sets sharing between 100% and 0% of their road axes. The race condition is effectively fixed, and we get the expected result.

Parallelism

We divided the Paris road axis network into eight clusters by applying the K-means algorithm to the road axis centroids. This happens within the database in a few seconds. Then k users use StreetGen to compute one cluster each (parallelism), which reduces the overall computing time to about one minute.

DISCUSSION

Street data model

Our street data model is simple and represents the roadway well, but would need to be detailed in some aspects. Lanes are not a priority at the moment, and cannot have different widths or types (bus lanes, bicycle lanes, etc.). Our model is just a first step towards modelling streets. In particular, we do not integrate any urban objects in it. We could easily extend our street data model to add street objects, positioned using the distance to the roadway and a curvilinear abscissa along the street axis, and oriented with respect to the street axis.
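The spatial clustering used for parallelism above can be sketched with a few lines of Lloyd's K-means on the road-axis centroids (the in-database version is equivalent in spirit; the deterministic initialisation from the first k points is a simplification of this sketch):

```python
def kmeans_clusters(points, k, iters=50):
    """Plain Lloyd's K-means on 2D centroids: assigns each road axis to one
    of k spatial clusters, one cluster per parallel StreetGen worker."""
    centres = [tuple(p) for p in points[:k]]   # simple deterministic init
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centre by squared distance.
        for i, (x, y) in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: (x - centres[c][0]) ** 2 + (y - centres[c][1]) ** 2)
        # Update step: move each centre to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centres[c] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return assign, centres
```

Two well-separated groups of centroids end up in two different clusters, which is all the spatial load-balancing needed here.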
Kinetic hypothesis

Overall, the kinetic hypotheses provide realistic-looking results, but are far from being true in an old city like Paris. Indeed, a great number of streets pre-date the invention of cars. We attempted to find a correlation between real-world corner radii (analysing OpenDataParis through Hough arc-of-circle detection) and the type of road or the road's average speed (from a GPS database). We could not find a clear correlation, except for fast roads; on those roads, the average speed is higher, and they have been designed for vehicles following classical engineering rules.

Precision issue

All our geometrical operations (buffers, Boolean operations, distances, etc.) rely on PostGIS (and thus GEOS, http://trac.osgeo.org/geos/). We therefore face computing precision issues, especially when dealing with arcs. The arc is a data type that is not always supported, and must thus be approximated by segments (Figure 17: approximating arcs with segments introduces errors; the error can be sufficient to incorrectly union the intersection surface). StreetGen uses various strategies to try to work around these issues. However, the only real solution would be to use an exact computation tool like CGAL (The CGAL Project, 2015). It would also allow us to compute the circle centres in 3D.

Fitting the street model to reality

StreetGen was designed from the beginning to provide a best guess of streets based on very little information. However, in some cases, we want the results to better fit reality.
For this, we created interactive behaviours so that several users can adjust the automatic StreetGen results to better match reality (using aerial images as ground truth). We did not create a Graphical User Interface (GUI), but rather a set of automatic in-base behaviours, so that editing the input data or special interaction layers interactively changes the StreetGen results. Doing so ensures that any GIS software that can read and write PostGIS vectors can be used as a StreetGen GUI. In some cases, we may have observations of street objects or sidewalks available, possibly automatically extracted from aerial images or Lidar, and thus imprecise and containing errors. We tested an optimisation algorithm that distorts the street model from the best-guess StreetGen result to better match these observations. This subject is similar to Inverse Procedural Modelling, and we feel it offers many opportunities.

CONCLUSION

In conclusion, we proposed a relatively simple street model based on a few hypotheses. This street data model seems to be adapted to a city as complex as Paris. We proposed various strategies to use this model robustly. We showed that the RDBMS offers interesting possibilities, in addition to storing data and providing facilities for concurrency. Our method, StreetGen, has ample room for improvement. We could use more sophisticated methods to predict the radius, better deal with special cases, and extend the data model to better handle lanes and add street objects. In future work, we also would like to exploit the interactive possibilities of StreetGen to perform massive collaborative editing. Such completed street models could be used as ground truth for the next step, which would be an automatic method based on detections of observations like sidewalks, markings, etc. Finding the optimal parameters would then involve Inverse Procedural Modelling.
Figure 1: StreetGen at a glance. Given road axes, reconstruct the network, find corner arcs, compute surfaces, add lanes and markings.
Figure 5: The circle-centre finding problem. Left: the classical problem; middle and right: using real-world data.
Figure 6: The method to robustly find circle centres.
Figure 9: Variable buffer for robust roadway width transitions.
Figure 13: Example results for increasingly complex intersections.
Learning analytics of the relationships among self-regulated learning, learning behaviors, and learning performance

This research aims to investigate the relationship between self-regulated learning awareness, learning behaviors, and learning performance in ubiquitous learning environments. To conduct this research, psychometric data about self-regulated learning and log data, such as the slide pages that learners read, highlight, and annotate, were collected. The access activity on the learning management system by device type was also collected, and the data, divided into high and low performers, was analyzed by applying path analysis and correlation analysis. The results indicated that the slide pages which learners read for a duration of between 240 and 299 s had direct positive effects on the promotion of annotation and on learning performance, and indirectly on the enhancement of self-efficacy, which was affected by other self-regulated learning factors. The results of the correlation analysis indicated that self-efficacy and test anxiety are key factors that have different effects on the number of read slide pages in high and low performers.

Introduction

Learning analytics is a key subject in educational research all over the world, as the findings of learning analytics studies can be applied to improve education, create learning support services, and establish learning models, among other improvements (Gray et al. 2014). One of the key issues in learning analytics is to collect learning logs using information and communication technologies (ICTs). As ICT advances, the methods of data collection also become more varied, in particular with ubiquitous technologies (Yin et al. 2014). Ubiquitous technologies allow us to collect not only access logs but also location data.
However, psychometric data as well as learning logs should be collected in order to analyze learners' behaviors and provide effective learning support; in particular, learning styles such as self-regulated learning (SRL) are thought to be helpful. SRL is closely linked to the concept of autonomy, particularly in the aspects of metacognition, motivation, and learning behavior (Schunk and Zimmerman 1998; Zimmerman 1986), which enables learners to take responsibility for their own learning. Goda et al. (2013) suggested that psychometric data on self-regulated learning is useful for predicting the degree of help-seeking and the learning performance of learners. Tseng et al. (2014) reported a significant relationship between learners' perception of SRL, information literacy, and information-searching strategies using the Internet. Their research indicated that information literacy plays an important role in the enhancement of SRL in the ICT era. Artino Jr. and Jones II (2012) revealed that enjoyment and frustration can be positive predictors of metacognition in online learning. Other findings suggest that the use and functions of ICT promote SRL, whereas others have focused on developing SRL support systems (e.g., Azevedo 2005; Aleven et al. 2010). If a relationship between self-regulated learning and learning behaviors is found, the results can contribute to supporting learners effectively. This research aims to examine the relationship between learning behaviors, SRL awareness, and learning performance.

Literature review

Self-regulated learning in a computer-based learning environment

When using ICT, learners can control when, what, and how they learn, without the restrictions of time, learning space, and printed materials (Cunningham and Billingsley 2003). One of the most popular platforms worldwide, the learning management system (LMS), offers the opportunity to learn outside the classroom using the Internet.
To exercise control in online learning, learners have to develop self-regulated learning (SRL) skills (Yukselturk and Bulut 2007). SRL is the active learning process used to regulate and monitor learning cognition, motivation, and behavior and to set personal learning goals, including social aspects (Wolters et al. 2003; Schunk and Zimmerman 2008). SRL is related to motivation, cognition, and self-control, as it is directed toward the accomplishment of learning purposes (Pintrich 1999; Zimmerman and Paulsen 1995). SR learners are those who can prepare a learning plan, adjust it, and apply self-control and self-evaluation (Deci et al. 1996). Goda et al. (2015) suggested that high-level SR learners can control and manage their learning plan in the context of their everyday lives in a blended learning environment. The effects of SRL seem to differ between high and low performers. Schunk and Zimmerman (1998) further compared the learning behaviors of novice and expert SRL learners in each SRL phase (see Table 1). In the forethought phase, skillful learners could articulate their final goals as well as the necessary steps toward accomplishing them; the features of both the goal and the steps toward it were constructive and clear. Skillful learners also tended to have internal motivation and high self-efficacy. In the performance/volitional phase, skillful learners enhanced their learning by monitoring the learning process. In the self-reflection phase, they sought to evaluate their learning performance independently and tended to attribute its quality to learning strategies and practice. The SRL features of the skillful learners in each phase support learning processes by helping teachers predict learning styles and learning performance. Several scholars have also conducted studies on the computer-assisted learning environment (e.g., Azevedo 2005).
Recent research has focused on SRL in ICT-based learning environments, as ICTs are now used in education and learning settings. Attitudes toward the use of ICT affect SRL. For example, Usta (2011) indicated that a negative attitude toward ICT use has a positive relationship with goal setting, time management, help seeking, and self-regulation. Greene and Azevedo (2010) indicated that learners who do well in an ICT-based environment can manage their learning using cognitive and metacognitive processes, such as ensuring the effectiveness of learning strategies, setting learning objectives, and self-monitoring. Prior work has reviewed learning support in four types of ICT-based learning environments. The first is behaviorism, such as drill and practice, in which the same questions are asked and answered repeatedly, followed by the reception of the same feedback. The second is the adaptive or intelligent tutoring system, which supports the activation of metacognition and information retrieval. The third is hypertext and hypermedia, which allow the organization of digital learning materials using linked information; hypertext and hypermedia work as open learning-material databases. The last is simulation, which supports cognitive and metacognitive learning, such as information organization, hypothesizing, observation, and learning output. As such, an ICT-based learning environment supports SRL skill acquisition by indirectly promoting the use of cognitive and metacognitive learning strategies.

Learning analytics and SRL

Learning analytics has been the subject of attention in educational research all over the world, as the findings of learning analytics studies can be applied to improve education, create learning supports, establish learning models, and so on (Yin et al. 2014). One of the key issues in learning analytics is to collect learning logs using ICT. As ICT advances, the methods of data collection have become varied. Oi et al.
(2015) investigated the relationship between learning performance and the frequency of links among pages in learning materials using logs. The results revealed that high-achievement learners tended to use cognitive learning strategies, such as linking the pages and knowledge within learning materials. Goda et al. (2015) identified seven distinct learning behavior types using learning logs: procrastination, learning habit, random, diminished drive, early bird, chevron, and catch-up. They revealed that the students with the learning habit and chevron types gained higher scores than those with the procrastination type. One of the common issues under discussion is how psychological variables affect learning performance in a learning environment using ICT (Greene and Azevedo 2010). Winne (2010) pointed out the great potential of ICT to enhance SRL research, but he also stressed the importance of psychometric data, such as beliefs and contextual thoughts, in SRL research. Psychometric data as well as learning logs should be collected in order to analyze learners' behaviors for effective learning support; in particular, learning styles such as SRL should be helpful (Roll and Winne 2015). Goda et al. (2015) focused on access log analysis and therefore did not address the relationship between SRL and learning behaviors. Yamada et al. (2015) indicated that self-efficacy, which is one of the factors of SRL, has a significant correlation with learning behaviors, such as highlighting and annotation. Their study indicated that SRL factors directly affect the notion of procrastination and lead to learning performance. However, their research was limited and did not fully investigate the relationships between SRL and learning behaviors. Time-related learning awareness, such as time-management awareness and skill, plays an important role in SRL and can affect learning awareness and performance (Zimmerman 1990; Eilam and Aharon 2003; Bernard et al.
2009; Kizilcec et al. 2016). Eilam and Aharon (2003) investigated the process and effects of students' learning plans and highlighted the differences in the learning planning process between high and low performers. Yamada et al. (2016) demonstrated the causal relationship between time-management awareness, the submission time for learning outcomes on the LMS, and learning performance. Time-related learning awareness and behaviors thus seem to impact learning awareness and behaviors according to previous research; however, these previous studies did not examine the effects of time-related learning behaviors using learning logs, or the relationships between learning behaviors, SRL awareness, and learning performance. In particular, these previous studies did not consider the perspective of ordinary learning behaviors, such as the time used for reading per page of the learning material. This study considered time-related learning behaviors, in particular the reading time for each page of the learning material, and then calculated the number of pages for each reading time segmented per minute. Accordingly, using learning logs on the learning support system, this study investigated the effects of reading time per page of the learning material on the use of learning strategies, SRL awareness, and learning performance.
If a relationship between self-regulated learning and learning behaviors is found, the results may be used to support learners effectively. Therefore, this exploratory research aims to investigate the relationships between the number of learning material slides in every reading time segment, the use of the annotation and marker functions, SRL awareness, and learning performance.

Participants and class

This research was conducted on two information technology courses. One was a 15-week course (course 1), and the other was an 8-week course (course 2). The participants were 127 freshman university students in information technology classes (93 students for course 1 (Engineering 57, Science 13, Medicine 2, Literature 2, Economics 1, Education 4, and Art Technology 1) and 34 students for course 2 (Engineering 2, Pharmacy 8, Education 7, Medicine 9, Law 5, Literature 2, Dentistry 1)). These classes were semi-obligatory for pre-service students who desired to obtain a teaching qualification. All the students had fundamental computer skills, such as Microsoft Office, email, and web browsing. The time allotted for one class per week was 90 min. Teachers asked the students to bring their laptops to the classes. In the first week, the teachers explained the usage and functions of a digital learning material reader (DLMR). The teachers distributed digital learning materials to the students through the DLMR and encouraged the students to read the materials in advance before every class. The DLMR allowed the students to access the learning materials on devices such as laptops and smartphones and to use the marking and annotation functions whenever and wherever the Internet was available. Figure 1 shows the interface of the DLMR. The DLMR promotes the use of cognitive learning strategies such as marking and annotation. The system also has zoom and word-search functions, but we did not require the learners to use any of these functions, including the marker and annotation tools.
The students were required to respond to questionnaires before starting the instruction in the first class (pre-class questionnaire). In every class, learners took the comprehension test at the beginning.

Fig. 1 The interface of the DLMR "BookLooper"

After the comprehension multiple-choice test (10 questions about the current content) in every class, the students took the lecture and worked on basic computer science contents, such as encryption technology, image processing, ontology, and programming. At the end of the last class, the teachers required the learners to respond to the same questionnaire as the pre-questionnaire (post-class questionnaire).

Data collection and analysis

Two methods were used for data collection: a questionnaire and logs. The Motivated Strategies for Learning Questionnaire (MSLQ: Pintrich and DeGroot 1990), which consists of five factors (Self-Efficacy (SE), Intrinsic Value (IV), Cognitive Strategies (CS), Self-Regulation (SR), and Test Anxiety (TA); 44 items in total, rated on a seven-point Likert scale; see the Appendix) was used for the subjective evaluation of learners' SRL awareness. The students were asked to complete the MSLQ both before and after the classes. The differences between their responses on the pre-class and post-class questionnaires were analyzed. The second method of data collection was a log that recorded the number of pages that the learners had read and their marking, bookmarking, and annotating behavior. The number of pages read was counted according to the following 1-min segments of reading duration: 10-59, 60-119, 120-179, 180-239, 240-299, 300-359, 360-419, 420-479, 480-539, 540-599, and over 600 s. However, the slides for which the duration was from 1 to 10 s were eliminated from the data, because the learners did not read the contents but simply skipped them. The reading time for each page was calculated from the differences between the timestamps of the page-flip logs (forward and back), summed per page.
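The binning scheme above can be sketched in code. The following is an illustrative reimplementation under our assumptions (the function name and input format are invented, not the authors' pipeline): page-view durations in seconds are counted into the 1-min segments, with views under 10 s discarded as skips.

```python
def bin_reading_durations(durations_sec):
    """Count page views per reading-time segment (hypothetical helper,
    not the authors' code). Views under 10 s are treated as skips."""
    labels = ["10-59", "60-119", "120-179", "180-239", "240-299",
              "300-359", "360-419", "420-479", "480-539", "540-599", "600+"]
    counts = {label: 0 for label in labels}
    for d in durations_sec:
        if d < 10:                 # skipped pages are excluded from the data
            continue
        idx = 10 if d >= 600 else int(d // 60)   # 10-59 s falls in index 0
        counts[labels[idx]] += 1
    return counts
```

For example, `bin_reading_durations([5, 45, 250, 700])` counts one page each in the 10-59, 240-299, and 600+ segments and drops the 5-s skip.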
We counted the frequency of marking and annotation in total throughout the course. The bookmark log was not used for this research, due to very few logs (49 out of 245,096 logs). Learning performance was measured as the final score.

Descriptive data and t test for SRL

The dataset, retrieved from the descriptive data and the results of the test for MSLQ factors, consisted of 121 learners' items after eliminating the missing data. MSLQ data were analyzed using the t test in order to evaluate the differences between pre- and post-course responses from the viewpoint of the improvement of SRL. Path analysis was employed to investigate the relationships between SRL, learning behaviors, and learning performance. Table 2 shows the descriptive data (averages, standard deviations) of the MSLQ factors and the results of the t test, and Tables 3 and 4 show the descriptive data of the pages that learners read, the frequency of marker and annotation function use, and the final score. The results of the t test showed that self-efficacy and test anxiety were higher in the post-class responses compared to the pre-class responses; that is, learning behaviors seemed to improve learners' self-efficacy in the class (p < 0.001, t(120) = 4.51, effect size (d) = 0.5843) but also to heighten test anxiety (t(129) = 2.21, p < 0.05, effect size (d) = 0.4754). Significant differences between the pre- and post-questionnaire results were not found for the other factors. One interesting point is that there was no significant difference between the pre- and post-stage in intrinsic value, but the SD increased even though the average decreased. In this class, the individual differences in the perception of intrinsic value appeared to be substantial. As for the descriptive data on pages read in all and each duration displayed in Table 3, many learners in these classes read the slides for 10 to 299 s, because the standard deviation did not exceed the average number of pages.
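As a rough illustration of the pre/post comparison (not the authors' actual analysis code), a paired t statistic and an effect size can be computed directly from the paired differences. Here Cohen's d is taken as the mean difference divided by the SD of the differences, which is only one of several common definitions, so it need not match the d values reported above.

```python
import math

def paired_t_and_d(pre, post):
    """Paired t statistic and Cohen's d (mean diff / SD of diffs) for
    pre/post scores of the same learners. Illustrative sketch only."""
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    mean_diff = sum(diffs) / n
    sd_diff = math.sqrt(sum((x - mean_diff) ** 2 for x in diffs) / (n - 1))
    t = mean_diff / (sd_diff / math.sqrt(n))   # paired t with df = n - 1
    d = mean_diff / sd_diff                    # one common paired-design d
    return t, d
```

With SciPy available, `scipy.stats.ttest_rel` would give the same t statistic along with a p-value.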
However, with the data on the number of pages that learners read for over 300 s, the standard deviation is higher than the average page number. This means that the individual differences in the number of pages that learners read for over 300 s increased. Some learners took a much longer time to read the page, while others did not.

Path analysis

It is important to consider the kinds of learning behavior that improved learners' SRL factors and learning performance. In order to investigate the overall relationships, a path analysis was employed. Each variable of the SRL is the average difference between the post- and pre-MSLQ questionnaires (see Fig. 2 for the results). The model fit indices are acceptable: CFI = 0.963, TLI = 0.948, RMSEA = 0.036, χ2(41) = 47.401, p = 0.228. The results indicated that the numbers of slides that learners read from 10 to 59 s and from 120 to 179 s promoted more frequent use of the marker function. However, the numbers of slides read from 180 to 239 s, from 240 to 299 s, and over 600 s had a negative effect on the use of the marker function, in contrast to the two segments that had a positive effect. The number of slides that learners read from 240 to 299 s promoted the use of the annotation function; however, the number of slides that learners read from 360 to 419 s had a negative effect on its use. The results also revealed the mediating function of SRL perception between learning behaviors and learning performance. The number of slides that the learners read from 240 to 299 s had both direct and indirect positive effects on learning performance. This time range from 240 to 299 s also promoted the use of annotations. The use of annotations promoted the perception of self-efficacy, and in turn, self-efficacy enhanced learning performance. The awareness of intrinsic value is a fundamental perception that affects other SRL factors and indirectly affects learning performance.
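The mediated paths reported here were estimated with a full path model; as a simplified, hypothetical illustration of how an indirect effect can be quantified, the product-of-coefficients approach multiplies the simple-regression slopes along a path such as annotation use → self-efficacy → performance. The real model fits all paths simultaneously with covariates, so the resulting numbers would differ.

```python
def slope(x, y):
    """Simple-regression slope of y on x (illustrative helper)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

def indirect_effect(predictor, mediator, outcome):
    """Product-of-coefficients sketch of a mediated path:
    (predictor -> mediator slope a) * (mediator -> outcome slope b)."""
    a = slope(predictor, mediator)
    b = slope(mediator, outcome)
    return a * b
```

A full analysis would instead estimate b while controlling for the predictor (and other variables), as path-analysis software does.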
The awareness of intrinsic value enhanced the awareness of cognitive learning strategies, self-regulation, and self-efficacy. The awareness of cognitive strategy use enhanced self-regulation, and consequently, self-regulation enhanced self-efficacy. Interestingly, internal value had a direct negative effect on the enhancement of learning performance; that is, learners who placed importance on the learning contents in this class gained a significantly lower test score than those who did not regard this class as important for them. However, intrinsic value indirectly had positive effects on learning performance, mediated by self-efficacy. We did not find any relationship with test anxiety.

Correlation analysis

Regarding SRL effects on learning performance, a correlation analysis (Spearman's rho) was conducted in order to investigate the relationship between SRL and learning performance, divided into two groups: high performance and low performance. The purpose of the analysis was to understand the path analysis results, as previous research indicated differences in SRL awareness between high and low performers (e.g., Zimmerman 1990; Schunk and Zimmerman 1998; Nandagopal and Ericsson 2012). The learners who gained a score higher than the average plus 1 SD were categorized into the high performance group (N = 14), and those who gained a score less than the average minus 1 SD were allocated to the low performance group (N = 19). Tables 5 and 6 show the results of the correlation analysis. The results show the differences between high and low performers. With regard to self-efficacy, the results showed no correlation with test anxiety in the high performer data; however, a high correlation was found in the low performer data.
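The grouping rule and correlation measure used in this analysis can be sketched as follows (a minimal reimplementation under our assumptions, not the authors' code): learners above the mean plus 1 SD form the high performance group, learners below the mean minus 1 SD form the low performance group, and Spearman's rho is the Pearson correlation of tie-averaged ranks.

```python
import math
import statistics

def split_performers(scores):
    """High group: > mean + 1 SD; low group: < mean - 1 SD."""
    mean, sd = statistics.mean(scores), statistics.stdev(scores)
    high = [s for s in scores if s > mean + sd]
    low = [s for s in scores if s < mean - sd]
    return high, low

def _ranks(values):
    """Tie-averaged ranks, 1-based."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den
```

`scipy.stats.spearmanr` implements the same statistic (with a p-value) and would normally be used in practice.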
Concerning the correlation between self-efficacy and the slide page number in each reading time segment (1-min segment), almost all the correlations were weak and negative in the high performer data; however, very weak to weak positive correlations were found in the low performer data. With regard to intrinsic value, very weak to weak positive correlations with the slide page number in each segment were confirmed; however, a weak negative correlation was found in the low performer data. Regarding the awareness of cognitive learning strategy use, there were weak positive correlations with actual cognitive strategy use, but no correlation was found in the low performer data. Concerning test anxiety, negative correlations with the slide page number in each segment of less than 360 s were confirmed, but mainly a weak positive correlation was found in the low performer data. On the other hand, for the time durations over 360 s, weak positive correlations between test anxiety and the number of pages that learners read were confirmed. With regard to the relationships between cognitive learning strategy use and the number of pages, there were significant differences between high and low performers. The correlation results with the high performer data confirmed negative correlations between the number of pages and marker use and between the number of pages and annotation use. On the other hand, in the low performer data, middle to strong positive correlations between the number of pages and marker use were confirmed. There were very slight negative correlations between the number of pages and annotation use, except for the number of pages with the reading time from 480 to 539 s in the low performer group.

Discussion

This research aimed to investigate the relationships between learning behaviors, SRL awareness, and learning performance. We found mainly positive relationships between them.
Table 5 The results of the correlation analysis (Spearman's rho) for high performer data (N = 14)

The number of slides that the learners read from 120 to 179 s and from 10 to 59 s promoted the frequent use of the marker function; however, the numbers of pages that learners read from 180 to 239 s, from 240 to 299 s, and over 600 s inhibited the use of the marker function. In order to consider this point in detail, we referred to the results of the correlation analysis. The results revealed a positive correlation between the number of pages and marker use, and both weak negative and positive correlations between marker use and SRL in the low performer data. The number of slides read from 240 to 299 s had both direct and indirect positive effects on the enhancement of the final score. Considering the indirect path of the number of slides that learners read from 240 to 299 s, the results showed that the learners tended to add annotations on the slides read for this duration. One possible reason for the relationship between learning behaviors and SRL is that the learning behaviors of the learners depend on the time duration. Reading comprehension requires learners to complexly process input information, such as letter and word recognition, and knowledge integration (Van Gelderen et al. 2007). Walczyk et al. (2007) suggested a relationship between the learning process and disruptive compensatory reading strategies, in that deep reading processes such as semantic encoding seem to require more time for information processing. The time duration from 240 to 299 s seems to be the threshold for engaging with the learning contents, according to the results of the path analysis. This indicates that learners attempted to comprehend the learning contents and pointed to both clear and unclear parts in the slides that they read for this duration. Therefore, annotation enhanced self-efficacy, which is one of the important elements of SRL.
These results were supported by Bernacki et al. (2012), who suggested the effectiveness of annotation in the enhancement of SRL. However, marker use did not promote any SRL elements collected in this research, which differs from the results suggested by Bernacki et al. (2012). A possible reason could be the correlations between SRL and the use of these functions. In the high performer data, weak- to middle-level positive correlations between SRL and function use were confirmed; in particular, annotation had a strong positive correlation with SRL. From these results, the effects of marker use on SRL may be restrained by annotation use. Intrinsic value is the key SRL element in the research results. Intrinsic value enhances self-efficacy, self-regulation, and the awareness of cognitive learning strategy use. In addition, it has indirect effects on learning performance, mediated by self-efficacy. Intrinsic value seems to be related to the relevance of the content to learners' situations, which enhances learning motivation (Keller 2009). When learners were aware of self-efficacy, self-regulation, and cognitive learning strategies, they seemed to gain high scores. However, one point that differed from previous research is that there is a direct negative relationship between intrinsic value and learning performance in the path analysis results. In order to investigate this point, we conducted Spearman's correlation analysis, using high and low performers' data. The results of the correlation analysis indicated very weak to weak positive correlations between intrinsic value and the number of pages in each segment, except for the duration from 300 to 359 s; however, there was a small negative correlation between the intrinsic value of this class and the number of slides read. One possible reason for this is the indirect effects of test anxiety. In the higher performer data, intrinsic value had positive relationships with the other SRL factors.
High performers seemed to control their SRL skills well, including intrinsic value, and succeeded in gaining high scores, as suggested by Schunk and Zimmerman (1998). The low performer data also indicated middle- and high-level correlations between SRL factors, but the difference between high and low performers arose from test anxiety. In the higher performer data, we found a low level of correlation between internal value and test anxiety (0.285), but in the low performer data, there was almost no correlation (0.076). In the low performer data, however, test anxiety had a middle- to high-level positive correlation with self-efficacy (0.647), but the high performer data did not indicate similar results (0.090). Low performers seemed to recognize test anxiety as an individual factor, but high performers seemed to focus on being aware of intrinsic value alongside the feeling of test anxiety. These results seem to be consistent with Schunk and Zimmerman (1998), who indicated that an expert self-regulated learner tends to focus on self-evaluation through self-reflection. Cognitive learning strategy use is one of the fundamental SRL elements for quality learning outcomes (e.g., Pintrich and DeGroot 1990). The use of markers did not affect learning performance and SRL directly, but annotation affected self-efficacy and indirectly affected learning performance in this research. A possible reason is that the annotation feature requires learners to engage cognitively with the learning material when writing a note, whereas the marker seems to demand more cognitive effort later, when interpreting what the highlight meant. Annotation, which supports cognitive learning, is one of the most effective tools and learning strategies for the enhancement of motivation and learning outcomes on hypermedia. Chen and Huang (2014) suggested that annotation supports learners' attention in learning and SRL and, as a result, enhances learning outcomes.
Shang (2016) revealed that online annotation on learning materials significantly enhances motivation and comprehension of the learning materials. In contrast, the marker is one of the most common tools used with e-books, but the use of this function did not have any significant effects on the enhancement of learning outcomes (van Horne et al. 2016). van Horne et al. (2016) also suggested that the use of highlighting was associated with delays in reading time and prior use of paper-based textbooks, which are associated with re-reading behavior. In fact, the correlation analysis revealed positive middle-level correlations between awareness and behaviors of cognitive learning strategies in high learning performers, but not in low learning performers. Annotation seems to be a "cost-effective" tool for learners to enhance their learning outcomes, compared with the marker tool. When learners use the marker function and look back at the highlighted pages, they seem to consider the meaning of the marker and the information associated with it. Annotation simply supports the learner's understanding of learning materials. This feature seems to promote the enhancement of self-efficacy and, indirectly, learning performance. However, this correlation analysis is very limited, using Spearman's rho for only 33 learners' data. Therefore, in further research, we need to increase the dataset and investigate the effects of internal value and test anxiety. One point that we should consider is that the average score of these classes was very high, 85 out of 100. This means that it is very difficult to find a significant relationship with learning performance. These points should be considered in further research.

Conclusion

This research investigated the relationship between SRL factors, learning behaviors, and learning performance.
The results showed that the number of slides read from 240 to 299 s indirectly had a positive influence on the enhancement of SRL and learning performance. Annotation, a learning behavior related to learning strategies, mediated between the slide reading and SRL. All the relationships among the SRL factors were positive, but internal value had a negative effect on the final score. In order to investigate this point in detail, we investigated the differences between high and low performers using correlation analysis. The results showed different correlations between intrinsic value and the number of slides read, with positive correlations among high performers and negative correlations among low performers. This research contributes useful suggestions to the learning analytics research field. SRL is an important educational theory and concept in education globally. Previous research has tended to capture learners' SRL awareness using questionnaires or observations (e.g., Pintrich and DeGroot 1990; Zimmerman 2008; Cho and Jonassen 2009; Jansen et al. 2016). This research examined learning behaviors related to SRL using learning logs and provided viewpoints from which to consider SRL awareness, which differs from previous studies in SRL research. Learning analytics research tends to focus on the use of logs by utilizing ICT. However, learning analytics research combined with educational psychology methods, such as those used here, can illuminate the background of learning behaviors (learning logs). Thus, learning analytics blended with educational psychology methods can provide researchers and practitioners with various viewpoints on educational evaluation. Additionally, this research makes a practical contribution toward designing effective instruction.
This research recommends that teachers design their instruction to enhance learners' self-efficacy through the awareness of cognitive learning strategies, which internal value enhances directly and which, in turn, indirectly enhances self-regulation. For instance, the introduction of useful cognitive learning strategies in a teacher's class may be a simple idea; however, such an introduction also entails the use of an effective instructional design method. Further, this research identified several problems on which researchers can focus as part of future research. In particular, future research can address the following three issues. Firstly, the effects of mobile usage should be investigated to promote mobile learning. In this study, only 4 out of 90 learners used mobile devices to read the slides. Mobile usage can enable and encourage learners to learn anywhere and at any time. Such mobility could enhance SRL awareness. Sha et al. (2012) pointed out effects on learners' academic achievement and motivation relating to the use of a mobile-based learning environment from the viewpoint of SRL. We did not find significant effects of mobile device use, due to the small number of mobile users; therefore, we should consider their suggestion in further research. Secondly, the duration of accessing days should be added as a variable for the prediction of learning performance. Regular access can influence SRL awareness and can lead to an improvement in learning performance. Lastly, the data analysis can be extended with data from other classes. This research used data from two classes, but we should include data from other classes in order to extract a useful and versatile model for teaching and learning support.

Appendix: MSLQ items (Pintrich and DeGroot 1990) (continued)

- I work on practice exercises and answer end-of-chapter questions even when I don't have to.
- Even when the study materials are dull and uninteresting, I keep working until I finish.
- Before I begin studying, I think about the things I will need to do to learn.
- I often find that I have been reading for class but don't know what it is all about. (R)
- I find that when the teacher is talking, I think of other things and don't really listen to what is being said. (R)
- When I'm reading, I stop once in a while and go over what I have read.
- I work hard to get a good grade even when I don't like a class.

(R) denotes a reflected (reverse-scored) item.
[18F]FDG-PET/CT in Idiopathic Inflammatory Myopathies: Retrospective Data from a Belgian Cohort

[18F]FDG-PET/CT is a useful tool for diagnosis and cancer detection in idiopathic inflammatory myopathies (IIMs), especially polymyositis (PM) and dermatomyositis (DM). Data deriving from Europe are lacking. We describe [18F]FDG-PET/CT results in a Belgian cohort with IIMs, focusing on patients with PM and DM. All of the cases of IIMs admitted between December 2010 and January 2023 to the Cliniques Universitaires Saint-Luc (Belgium) were retrospectively reviewed. In total, 44 patients were identified with suspected IIMs; among them, 29 were retained for final analysis. The mean age of the retained patients was 48.7 years; 19 patients were female (65.5%). Twenty-two patients had DM and seven had PM. The mean serum creatine kinase (CK) and mean CRP levels were 3125 UI/L and 30.3 mg/L, respectively. [18F]FDG-PET/CT imaging was performed for 27 patients, detecting interstitial lung diseases (ILDs) in 7 patients (25.9%), cancer in 3 patients (11.1%), and abnormal muscle FDG uptake compatible with myositis in 13 patients (48.1%). All of the ILDs detected via PET/CT imaging were confirmed using a low-dose lung CT scan. Among the patients with abnormal muscle FDG uptake on PET/CT scans (13/28), the EMG was positive in 12 patients (p = 0.004), while the MRI was positive in 8 patients (p = 0.02). We further observed a significantly higher level of CK in the group with abnormal muscle FDG uptake (p = 0.008). Our study showed that PET/CT is useful for detecting cancer and ILDs. We showed that the detection of abnormal muscle uptake via PET/CT was in accordance with the EMG and MRI results, as well as with the mean CK value, and that the presence of dyspnea was significantly associated with the presence of ILDs detected via PET/CT imaging (p = 0.002).
Introduction

Idiopathic inflammatory myopathies (IIMs) are a group of rare inflammatory disorders that affect skeletal muscle. They can be classified into four subgroups: polymyositis (PM), dermatomyositis (DM), inclusion body myositis (IBM), and immune-mediated necrotizing myositis (IMNM) [1,2]. Autoimmunity plays a key role in IIM pathogenesis, and autoantibodies can be identified in more than 50% of cases [3]. Autoantibodies that are specific to myositis are referred to as myositis-specific antibodies (MSAs), and antibodies that are commonly seen in myositis associated with connective tissue diseases (CTDs) are named myositis-associated antibodies (MAAs) [4]. During the last decade, substantial progress was made in identifying novel autoantibodies. For each MSA, a correlation has been demonstrated with specific clinical manifestations, aiding in the diagnosis, classification, and prognosis of patients in more homogeneous groups (phenotyping) [3,4]. IIMs are clinically characterized by progressive proximal muscle weakness and an elevated serum creatine kinase level. Depending on the type of myositis, other symptoms may be associated with IIMs, including fatigue, fever, dysphagia, cough, dyspnea, and arthralgia. Patients with DM also present with cutaneous lesions, including periorbital violaceous rash (also called heliotrope rash), Gottron's papules, calcinosis, Raynaud's phenomenon, skin ulcers, mechanic's hands, or periungual erythema [3,5]. An important clinical presentation of IIMs is antisynthetase syndrome (ASS), which is characterized by the association of Raynaud's phenomenon, mechanic's hands, fever, interstitial lung disease (ILD), and an inconstant cutaneous rash. This syndrome was named after the first described antibody associated with this clinical entity, anti-Jo-1, which targets histidyl-tRNA synthetase, one of the aminoacyl-tRNA synthetases.
To date, a total of eight anti-tRNA synthetase autoantibodies (ASAs) have been reported in myositis: in addition to anti-Jo-1, these are anti-PL12, anti-PL7, anti-EJ, anti-OJ, anti-KS, anti-Zo, and anti-Ha [3]. Notably, studies have demonstrated that particular clinical manifestations are associated with each individual ASA [3]. IMNM is also characterized by muscle weakness, but usually without skin or lung involvement. Autoantibodies against 3-hydroxy-3-methylglutaryl-CoA reductase (HMGCR) or signal recognition particle (SRP) are found in 66% of patients [6]. Anti-HMGCR autoantibodies have mainly been described in patients who were pre-exposed to a statin [6]. IBM is another form of inflammatory myopathy. It affects patients over 50 years old, with a predominance of distal and quadriceps muscle involvement. The progression of this disease is slow, and it does not affect the skin or the lungs. Unfortunately, this subtype of inflammatory myopathy is highly refractory to treatment [7]. PM is more a diagnosis of exclusion, and physicians should always consider IBM, DM, or IMNM before reaching this diagnosis. There are no specific autoantibodies, and muscle biopsy is the only way to definitively confirm the diagnosis [3,8]. Finally, it is important to emphasize that IIMs, particularly PM and DM, may be associated with interstitial lung diseases (ILDs) and cancer [1,2]. As previously noted, myositis autoantibodies are helpful in classifying patients. Anti-melanoma differentiation-associated gene 5 (anti-MDA5) and, as previously indicated, anti-Jo-1 autoantibodies and other ASAs are more frequently associated with interstitial lung diseases (ILDs). Anti-transcription intermediary factor 1γ (anti-TIF1γ) and anti-nuclear matrix protein 2 (anti-NXP2) are more frequently associated with cancer.
Electromyography (EMG) and magnetic resonance imaging (MRI) are diagnostic tools used to prove muscle involvement; however, they are not able to differentiate between myopathies. On EMG, PM and DM are characterized by signs of muscle membrane irritability, including fibrillation potentials, positive sharp waves (PSWs), and/or myopathic motor units [9]. The same abnormalities have been described in IBM and IMNM. MRI usually shows intramuscular T2 hyperintensity in DM [10]. However, these findings are also possible in PM and IMNM. MRI is of particular interest in diagnosing IBM, as its principal MRI characteristic is the involvement of the thigh and distal muscles [11,12]. Therefore, EMG and MRI are often performed to confirm a suspicion of IIM and, eventually, to select a preferential localization site for a biopsy [13,14]. Indeed, muscle biopsy is the gold standard to confirm the type of IIM, but this technique is invasive and requires an appropriate anatomopathological laboratory. Regarding ILD, high-resolution computed tomography (HRCT) is the gold-standard examination [15]. In the literature, several diagnostic and classification criteria for IIMs have been proposed: Bohan and Peter, Dalakas, ENMC 2004, and EULAR 2017 [2,8,16,17]. The mid-1970s Bohan and Peter criteria are still commonly used in clinical practice. Based on a combination of clinical signs (proximal and symmetrical muscle weakness, and/or typical cutaneous changes such as Gottron's sign), biological abnormalities (increase in serum muscle enzymes), EMG characteristics, and muscle biopsy results, a diagnosis of definite, probable, or possible (dermato)myositis can be retained. Of note, in this list, muscle biopsy is not a mandatory criterion.
In 2004, the ENMC criteria added clinical exclusion criteria (such as ocular weakness or toxic myopathy) and other laboratory criteria, including MRI abnormalities (increased STIR signal, i.e., edema) and MSA detection, on top of a specific histological pattern on muscle biopsy, which became mandatory; this latter criterion makes the criteria increasingly difficult to apply in clinical practice [17]. More recently, the EULAR/ACR criteria were published in 2017. A score is determined by combining easily available clinical, biological (MSA and CK enzymes), and, when a biopsy is available, histopathological findings. Above a certain cut-off point, which depends on whether a muscle biopsy was performed, a diagnosis of probable or definite IIM can be retained [8]. It is important to mention that the only MSA/MAA included in the 2017 EULAR/ACR criteria is the anti-Jo-1 antibody. The main issue with all these criteria is that they are used either as classification criteria or as diagnostic criteria, except for EULAR 2017. Moreover, they do not take into account the recent progress made in autoantibody detection and in IIM phenotyping according to autoantibody type. [18F]FDG-PET/CT is a non-invasive imaging technique, combining a CT scanner with radioactive glucose as a tracer, to detect abnormal morphological and functional changes in the whole body. In the context of IIM, especially PM and DM, it has become a useful tool for cancer screening [1,18]. Recently, in a systematic review, Bentick et al. showed that PET/CT performs well in detecting malignancies compared to the conventional work-up, with a sensitivity of 66.7-94% and a specificity of 80-97.8% [19]. More recently, [18F]FDG-PET/CT has also been used for IIM diagnosis as well as for the assessment of muscle disease activity and the detection of extra-muscular manifestations, such as ILD [19].
As an example, in the systematic review of Bentick et al., the sensitivity of PET/CT for detecting ILD, compared to HRCT, was 93-100% [19]. However, the interpretation criteria for detecting abnormal muscle or lung uptake vary widely across studies. Some studies used a visual assessment, considering muscle or lung uptake abnormal when greater than liver uptake or greater than mediastinal blood pool uptake [20-22]. The semi-quantitative and quantitative parameters used by others also differ in what is measured, from SUVmax to mean SUVmax or SUV ratios [21,23-29]. Notably, most data are limited to retrospective studies from China, Japan, Canada, France, or Spain [20-25,28,30-32]. Finally, despite recent demonstrations of its efficacy, PET/CT is not included in any of the previously detailed diagnostic or classification criteria. The objective of this study was to retrospectively describe the performance of [18F]FDG-PET/CT in a Belgian cohort with IIMs, focusing on patients with PM and DM. The main goals were to evaluate the performance of PET/CT imaging in disease diagnosis (e.g., abnormal muscle uptake) compared to the usual methods, such as MRI and EMG, as well as PET/CT performance in ILD and malignancy screening. We also aimed to evaluate a possible correlation between PET/CT results and clinical or biological abnormalities, such as serum CK levels and dyspnea.

Materials and Methods

This retrospective study was conducted at the Cliniques Universitaires Saint-Luc in Belgium, and was approved by our local institutional Ethics committee. All cases of IIM admitted between 12/2010 and 01/2023 in the Department of Internal Medicine were retrospectively reviewed. The data were collected using our institutional database (Epic electronic health record) and the database of the Internal Medicine Department.
The inclusion criteria were as follows: patients ≥18 years old, with dermatomyositis, polymyositis, or overlap syndrome. Overlap syndrome was considered when myositis was associated with either a clinical and/or an autoantibody overlap feature: scleroderma, sclerodactyly, polyarthritis, or Sjögren's syndrome, or SSA/SSB, RNP, Ku, or PM-Scl antibodies. Only patients who fulfilled the Bohan and Peter criteria [2] or the EULAR 2017 criteria [8] for the diagnosis of IIM were included in the study. Patients with immune-mediated necrotizing myositis (formerly named statin-induced myositis) and inclusion body myositis were excluded. We collected the following information: clinical and laboratory characteristics (including CK, antinuclear antibodies, and myositis-specific autoantibodies), electromyography (EMG), whole-body muscle magnetic resonance imaging (MRI), the results of skin and muscle biopsies, 18F-fluorodeoxyglucose positron emission tomography/computed tomography ([18F]FDG-PET/CT), thoraco-abdominal contrast-enhanced CT scan, and treatment and outcome (mortality). All PET/CT scans were performed prior to immunosuppressive therapy, and patients fasted for at least 6 h before [18F]FDG injection. The criterion used to interpret abnormal muscle uptake on [18F]FDG-PET/CT was the following: the PET/CT was considered positive if FDG uptake was equal to or greater than liver uptake [27,33]. The criteria used to interpret muscle biopsies were those from ENMC [17] and EULAR 2017 [8]. The criteria used to interpret EMGs were those described by Paganoni [9]. EMG was considered positive if it showed signs of muscle membrane irritability: fibrillation potentials, positive sharp waves (PSWs), and myopathic motor units [9]. MRI was considered positive if it showed intramuscular T2 hyperintensities [10]. Quantitative variables were reported as mean values, while qualitative values were shown as numbers and percentages.
Statistical tests were performed using GraphPad Prism 9 (GraphPad Software, Inc., San Diego, CA, USA). For the comparison of CK values between patients with and without abnormal FDG uptake on PET/CT, results were reported as the mean and standard deviation. The statistical analysis was performed using a non-parametric test (Mann-Whitney test) after excluding outliers (ROUT method) and verifying that the values were not normally distributed. The contingency analysis was performed using Fisher's exact test. A p-value < 5% was considered significant.

Results

In total, 44 patients were identified with idiopathic inflammatory myopathies (IIMs); among them, 29 patients were retained for the final analysis (Figure 1). The mean age was 48.7 years, and 19 patients were female (65.5%). Twenty-two patients had dermatomyositis and seven had polymyositis. One patient in our cohort had only dermatologic involvement without myopathy; a diagnosis of amyopathic dermatomyositis was retained based on a skin biopsy. The clinical and biological characteristics are presented in Table 1. The mean serum creatine kinase (CK) and mean CRP levels were 3125 UI/L and 30.3 mg/L, respectively. EMG was performed in 27 patients, and 62.9% showed signs of myositis. Whole-body muscle MRI was performed in 19 patients, and 57.8% showed signs of myositis. [18F]FDG-PET/CT imaging was performed in 27 patients, in whom ILD was detected in 7 patients (25.9%), cancer in 3 patients (11.1%), and abnormal muscle FDG uptake compatible with myositis in 13 patients (48.1%). The cancers detected via PET/CT imaging were as follows: one lung cancer, one esophageal cancer, and one ovarian cancer. The cancers that were not detected were one prostate cancer and one large granular lymphocytic leukemia.
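As a minimal illustration of the group comparison described above (a two-sided Mann-Whitney test on CK values), the exact test can be reproduced in a few lines of Python by enumerating every possible split of the pooled sample. The CK values below are hypothetical stand-ins, since individual patient values are not reported here, and the code is an illustrative reimplementation rather than the GraphPad Prism routine used in the study.

```python
from itertools import combinations

def u_statistic(xs, ys):
    # Number of (x, y) pairs with x > y; assumes no tied values.
    return sum(1 for x in xs for y in ys if x > y)

def mann_whitney_exact(x, y):
    """Exact two-sided Mann-Whitney U test for small, tie-free samples,
    enumerating all possible assignments of the pooled values."""
    pooled = list(x) + list(y)
    n, nx = len(pooled), len(x)
    mu = nx * (n - nx) / 2                 # mean of U under the null
    u_obs = u_statistic(x, y)
    extreme = total = 0
    for idx in combinations(range(n), nx):
        chosen = set(idx)
        xs = [pooled[i] for i in idx]
        ys = [pooled[i] for i in range(n) if i not in chosen]
        total += 1
        if abs(u_statistic(xs, ys) - mu) >= abs(u_obs - mu):
            extreme += 1
    return extreme / total

# Hypothetical CK values (UI/L) for the two groups (illustration only):
ck_pet_positive = [2900, 3400, 4100, 5200, 6800, 7500]
ck_pet_negative = [300, 450, 600, 800, 950, 1200]
p = mann_whitney_exact(ck_pet_positive, ck_pet_negative)
```

For these fully separated illustrative samples, the exact two-sided p-value is 2/924 ≈ 0.0022, below the 5% significance threshold used in the study.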
Three patients with cancer-associated myositis had consistent MSAs: two had anti-PL7 antibodies and one had the anti-TIF1-γ antibody. Among the patients with ILD observed on PET/CT imaging, all were confirmed via a low-dose lung CT scan (HRCT). Examples of patients with myositis, ILD, and cancer diagnosed with PET/CT imaging are presented in Figures 2 and 3. The patients were treated with corticosteroids (93%), methotrexate (62%), azathioprine (31%), mycophenolate mofetil (0.3%), rituximab (0.6%), and IV immunoglobulins (17.2%). Three patients showed complications with opportunistic infections (one endocarditis due to Candida albicans, one invasive aspergillosis, and one Nocardia infection), and the overall death rate was 13.7%. We analyzed whether the abnormal muscle uptake disclosed by [18F]FDG-PET/CT was in accordance with the EMG or whole-body muscle MRI findings.
Among the patients with abnormal muscle FDG uptake on PET/CT imaging (13/28), the EMG results were positive in 12 patients (p = 0.004), while the MRI results were positive in 8 patients (p = 0.02) (Table 2; PET/CT-positive: uptake equal to or greater than liver uptake). Among the patients without abnormal FDG uptake (15/28), the EMG results were positive in nine patients, while the MRI results were positive in only three patients. We performed the same analysis with the muscle biopsy results. Among the patients with abnormal muscle FDG uptake on PET/CT scans, myositis was confirmed via biopsy in seven patients, while in the other group, myositis was finally disclosed via biopsy in four patients (statistically not significant). We further compared the mean CK values at diagnosis in the two groups of patients, with and without abnormal muscle FDG uptake on PET/CT imaging. Interestingly, we observed a significantly higher level of CK in the group with abnormal muscle FDG uptake (p = 0.008) (Figure 4). Finally, we also wanted to evaluate the correlation between ILD detection on PET/CT imaging and symptoms such as dyspnea. Seven patients in our group were detected to have ILD via PET/CT scans, and all were confirmed with high-resolution chest CT (HRCT). Four of them had symptoms of dyspnea, while none of the twenty PET/CT-negative patients were symptomatic (p = 0.002) (Table 3).
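The dyspnea-ILD association reported above (4 of 7 ILD-positive versus 0 of 20 ILD-negative patients) can be checked with a self-contained two-sided Fisher's exact test. The sketch below uses only the standard library and is an illustrative reimplementation, not the GraphPad routine used in the study; the table counts come from the Results.

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
    sum the probabilities of every table with the same margins whose
    probability does not exceed that of the observed table."""
    (a, b), (c, d) = table
    row1, col1, n = a + b, a + c, a + b + c + d

    def prob(k):  # hypergeometric probability that the top-left cell is k
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# Dyspnea (yes/no) by ILD on PET/CT (yes/no), using the counts above:
p = fisher_exact_two_sided([[4, 3], [0, 20]])
```

Applied to this table, the test gives p ≈ 0.00199, matching the p = 0.002 reported for Table 3.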
Discussion

Our study showed that, in our cohort of IIM patients, there was a significant correlation between abnormal muscle uptake on PET/CT and EMG or MRI findings (p < 0.05), as well as between dyspnea and the presence of ILD on PET/CT imaging (p < 0.05). Interestingly, we found that the mean CK values were higher in patients with abnormal muscle uptake on PET/CT scans. Most existing studies concerning [18F]FDG-PET/CT in idiopathic inflammatory myopathies (IIMs) are retrospective trials performed in Japan and China [20-25,28,30-32]; studies from Europe are scarce [26,27,29,34,35], and our study is the first from Belgium. Several existing studies evaluated FDG uptake in muscle; however, many used inconsistent interpretation criteria, such as visual analysis or semi-quantitative analysis (SUV analysis) [20-32]. The studies of Owada, Tateyama, and Motegi et al. used visual analysis to assess FDG uptake in muscle [20-22]. In the report of Owada, FDG uptake correlated with EMG abnormalities [20]. In the study of Motegi et al., as in ours, FDG uptake correlated with MRI results and CK levels [22]. On the other hand, Tateyama et al. showed no correlation between muscle FDG uptake and MRI results or CK levels [21]. Nevertheless, the visual criteria used in these three reports were not the same. In the study of Owada, muscle uptake was considered positive if equal to or higher than liver uptake, while in the two others, uptake was considered positive if equal to or greater than the mediastinal blood pool. Concerning the studies using semi-quantitative analyses (SUVmax, mean SUVmax, and SUV ratios), the results are also inconsistent, with some reports showing muscle FDG uptake correlating with the CK level and/or the MRI findings [23-26] and others showing no correlation [21,27-29].
It is important to highlight that, in most of these studies, patients were already on corticosteroids before undergoing PET/CT, which can interfere with FDG uptake; this was not the case in our cohort. There is also a lack of standardization in the interpretation criteria used for semi-quantitative methods in these previous studies. Notably, only one study compared visual analysis to semi-quantitative methods and showed similar diagnostic accuracy [24]. Altogether, this demonstrates a crucial need for a prospective controlled trial with standardized interpretation criteria and optimal patient preparation. Recently, a promising diagnostic modality, [18F]FDG-PET/MRI, was developed. This technique has the advantage of combining PET with magnetic resonance imaging [36]. Its sensitivity ranges from 60 to 100% and its specificity from 50 to 100%, and both are highly dependent on the parameters used to quantify FDG uptake. The usefulness of this interesting technique in IIM also needs to be confirmed by a prospective trial. In our cohort, three patients were diagnosed with cancer-associated myositis via PET/CT (one esophageal cancer, one lung cancer, and one ovarian cancer). The value of [18F]FDG-PET/CT for cancer screening has been demonstrated in several studies, with a high negative predictive value [30,34,35]. Callaghan et al. also showed that PET/CT has the same ability to detect cancer as the conventional work-up (thoraco-abdominal enhanced CT scan, colonoscopy, gastroscopy, etc.) [34]. However, false-negative results are also possible, particularly with poorly avid lesions, small tumors (e.g., below the camera's spatial resolution) [37], traditionally PET/CT-negative cancers (prostate or renal cancer), or non-solid cancers, as in one of our patients. Seven patients in our group were detected to have ILD via PET/CT, and all were confirmed using high-resolution chest CT (HRCT).
Interestingly, dyspnea was significantly associated with the presence of ILD on PET/CT, as was also demonstrated in Owada et al.'s study [20]. Several studies have demonstrated the ability of PET/CT to detect ILD, with a good correlation with HRCT [21-23,28,31,32]. The anti-melanoma differentiation-associated gene 5 (MDA5) antibody is a well-known IIM antibody associated with rapidly progressive ILD (RP-ILD) [38]. Identifying RP-ILD is highly important, since its mortality rates range from 19 to 27%, with the 3-month mortality rate being over 30% [39-41]. Three previous studies used semi-quantitative methods to identify patients who may develop this potentially lethal complication [23,31,42]. Li Y and Liang J et al. both showed that the mean lung SUV can predict RP-ILD development; unfortunately, the lung SUV cut-off values differed between the two studies [23,31]. Recently, Zhang et al. showed that the PET score may be more useful than SUV for evaluating pulmonary disease activity [42]. The PET score is a five-point visual scale determined in six lung zones based on the maximum FDG uptake: one (no uptake), two (uptake ≤ mediastinal uptake), three (uptake > mediastinal but < liver uptake), four (uptake moderately above liver uptake), and five (uptake markedly above liver uptake). However, in this study, the majority of patients were already undergoing immunosuppressive therapy at the time of PET/CT, leading to a potential bias in the lung FDG uptake results. To reiterate, this highlights the need for a larger prospective cohort. Our study has several limitations. Firstly, as there are few studies on IIM [20,24,25] due to the rarity of this disease, we had to perform a retrospective analysis on a small number of patients, and we did not include control patients. Nevertheless, our results align with a few other European cohorts of PET/CT in IIM [26,34,35].
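The five-point PET score described above is defined visually; a numeric sketch of how such a score could be computed from zone SUVmax values is shown below. The boundary between "moderately" and "markedly" above liver uptake is not quantified in the original scale, so the marked_factor cut-off here is purely an illustrative assumption.

```python
def pet_score(suv, suv_mediastinum, suv_liver, marked_factor=2.0):
    """Map one lung zone's SUVmax to the 5-point visual scale.
    marked_factor is a hypothetical numeric stand-in for the visual
    'markedly above liver' judgement."""
    if suv <= 0:
        return 1                            # no uptake
    if suv <= suv_mediastinum:
        return 2                            # <= mediastinal blood pool
    if suv < suv_liver:
        return 3                            # > mediastinum but < liver
    if suv < marked_factor * suv_liver:
        return 4                            # moderately above liver
    return 5                                # markedly above liver

def total_pet_score(zone_suvs, suv_mediastinum, suv_liver):
    # The score is summed over the six lung zones, as in the original scale.
    return sum(pet_score(s, suv_mediastinum, suv_liver) for s in zone_suvs)

# Illustrative reference values and six hypothetical zone SUVmax readings:
score = total_pet_score([0.0, 1.2, 2.0, 3.5, 6.0, 6.0],
                        suv_mediastinum=1.5, suv_liver=2.5)
```

With these hypothetical readings, the six zones score 1, 2, 3, 4, 5, and 5 respectively, for a total of 20.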
Secondly, we did not perform a thorough comparison between the muscle FDG uptake detected by PET/CT and the edema or abnormal signal seen on MRI. This localization concordance analysis was not possible for several reasons, such as delays between the two imaging examinations in some patients and differences in the machines used. Detailed analyses comparing the two imaging modalities would be interesting to conduct in order to confirm the good correlation we observed. Finally, we were not able to show a significant correlation between PET/CT and muscle biopsy, which remains the "gold standard" for the diagnosis of autoimmune myositis. This was possibly due to the small number of biopsies performed in our cohort (n = 15). As a reminder, we used the Bohan and Peter and EULAR 2017 criteria to decide which patients to include in our study; for both sets of criteria, a muscle biopsy is not mandatory for IIM diagnosis [2,8].

Conclusions

Our study showed that PET/CT is useful for cancer and ILD detection, and that abnormal muscle uptake on PET/CT was significantly in accordance with EMG and MRI findings. Notably, this is the first study of its kind from Belgium. Moreover, we showed that the mean CK value was higher in patients with abnormal uptake on PET/CT imaging, and that the presence of dyspnea was significantly associated with the presence of ILD on PET/CT scans. Based on our results, a prospective trial could be designed in the future to confirm the correlations we observed between PET/CT results and those of other diagnostic tools (EMG, MRI, and CK values).

Informed Consent Statement: Patient consent was waived due to the retrospective design of the study and the analysis of anonymized data.

Data Availability Statement: All of the data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Conflicts of Interest: The authors declare no conflict of interest.
A novel mechanism for the regulation of amyloid precursor protein metabolism

Modifier of cell adhesion protein (MOCA; previously called presenilin [PS] binding protein) is a DOCK180-related molecule, which interacts with PS1 and PS2, is localized to brain areas involved in Alzheimer's disease (AD) pathology, and is lost from the soluble fraction of sporadic AD brains. Because PS1 has been associated with γ-secretase activity, MOCA may be involved in the regulation of β-amyloid precursor protein (APP) processing. Here we show that the expression of MOCA decreases both APP and amyloid β-peptide secretion and lowers the rate of cell-substratum adhesion. In contrast, MOCA does not lower the secretion of amyloid precursor-like protein (APLP) or several additional type 1 membrane proteins. The phenotypic changes caused by MOCA are due to an acceleration in the rate of intracellular APP degradation. The effect of MOCA expression on the secretion of APP and on cellular adhesion is reversed by proteasome inhibitors, suggesting that MOCA directs nascent APP to proteasomes for destruction. It is concluded that MOCA plays a major role in APP metabolism and that the effect of MOCA on APP secretion and cell adhesion is a downstream consequence of MOCA-directed APP catabolism. This is a new mechanism by which the expression of APP is regulated.

Introduction

Alzheimer's disease (AD) is the most prevalent cause of dementia in the elderly. AD is in part characterized pathologically by the presence of extracellular plaques consisting of deposits of the amyloid β-peptides (Aβs) derived from β-amyloid precursor protein (APP). APP is cleaved by three proteolytic activities, α-, β-, and γ-secretases (Mills and Reiner, 1999; De Strooper and Annaert, 2000). α-Secretase cleaves APP within the Aβ peptide sequence, whereas β-secretase cleaves APP at the NH2 terminus of the Aβ peptide sequence.
APPsα and APPsβ are the NH2-terminal fragments generated by α- and β-secretases, respectively, whereas the remaining fragments are cleaved at the COOH terminus of the Aβ peptide sequence by γ-secretase. Mutant genes encoding the presenilins (PSs) PS1 and PS2 have been linked to early onset familial Alzheimer's disease (FAD) (Sherrington et al., 1995), and an increased deposition of Aβ in plaques has been associated with these mutations (Scheuner et al., 1996). There are several proteins that interact with the PSs (Van Gassen et al., 2000). Among these is the modifier of cell adhesion protein (MOCA) (Kashiwa et al., 2000; Chen et al., 2001). Although MOCA has 40% sequence homology with DOCK180 and contains SH3 and Crk binding domains, its function was unknown. Because MOCA binds PS1 and because PS1 has been associated with γ-secretase activities in APP processing (Wolfe and Haass, 2001), it was asked whether MOCA is involved in the regulation of APP metabolism.

MOCA reduces APP and Aβ secretion

To investigate the effect of MOCA on APP protein secretion, a rat nerve cell line called B103 (Schubert et al., 1974) was stably transfected with APP695 and again with a plasmid harboring the full-length MOCA cDNA. B103 cells normally express little or no APP (Schubert et al., 1989) and no MOCA. The expression of MOCA was detected in transfected clones of B103 (APP695/MOCA) at levels similar to or less than MOCA expressed in the mouse hippocampus (Fig. 1, A and D). Secreted proteins from the stably transfected cells containing either an empty vector or the MOCA gene were first studied by Western blotting using antibody 6E10, which recognizes APP and amino acid residues 1-17 of human Aβ (Kim et al., 1990). The release of secreted APP (APPs) was decreased dramatically in all independently isolated clones expressing MOCA relative to clones containing the empty vector (Fig. 1 A).
To determine the specificity of the MOCA effect on APP secretion and to eliminate the possibility of clonal variation, several control experiments were performed. First, a transcriptional change was ruled out because the abundance of APP mRNA was not altered, by both Northern blot and reverse transcription (RT)-PCR analysis (Fig. 1 B). Second, transient transfection of MOCA into B103 (APP695) cells led to the reduction of APP secretion in an uncloned population (Fig. 1 D). Third, the secretion of APP was not affected by the overexpression of DOCK180, a protein with 40% homology to MOCA (Fig. 2 D). Fourth, the secretion of endogenous APP751 was reduced in HEK293T cells expressing MOCA (Fig. 1 D). Lastly, since APP is a type I integral membrane protein, we asked whether MOCA alters the expression or secretion (shedding) of six other proteins. The effect of MOCA expression on the secretion of another large extracellular protein, β-laminin, was examined; no difference was observed between cells transfected with APP and APP plus MOCA (Fig. 1 A). Neural cell adhesion molecule (N-CAM) is involved in both homotypic and heterotypic cell-cell adhesion and is expressed as several membrane-bound and secreted isoforms through an alternative splicing mechanism (Gower et al., 1988). No effect on the levels of intracellular N-CAM expression and/or N-CAM secretion was observed in B103 cells expressing MOCA (Fig. 1 A). A single band with a molecular weight of ~130 kD is detected intracellularly, whereas the secreted N-CAM was aggregated into higher molecular weight complexes (>300 kD), as described by others (Tavella et al., 1994).
Notch-1 is involved in crucial cell fate decisions during development, and the processing of Notch is similar to the γ-secretase-mediated cleavage of APP (Greenwald, 1998; De Strooper et al., 1999; Struhl and Greenwald, 1999; Ye et al., 1999). Fig. 1 C shows that the expression of the (intracellular) full-length (mol wt ~300 kD) and truncated transmembrane (mol wt ~120 kD) forms of Notch is only slightly altered in B103 cells expressing MOCA.

Figure 1. (A) The levels of APP secreted by two clones (clones 7 and 8) stably transfected with APP and MOCA and detected with the 6E10 antibody were significantly lower than those from cells only expressing APP. The expression level of another secreted protein, β-laminin, was not affected by MOCA. Identical amounts of protein (80 μg) were loaded in each of the six lanes, and Western blot assays were performed. The secretion of N-CAM and the intracellular levels of N-CAM were also not affected by MOCA. Actin levels were used as a loading control. (B) The levels of APP mRNA in the corresponding cells were not significantly different, as shown by Northern hybridization and RT-PCR analysis. (C) MOCA had a small effect on Notch-1 expression. Actin was used as a loading control. Notch-1 expression was quantitated and presented as percent expression in APP695. (D) The secretion of APP was significantly reduced relative to controls in B103 cells transiently transfected with MOCA and in HEK-293T cells, which secrete endogenous APP751. The middle panels show MOCA expression, and the bottom panel shows that the secretion of β-laminin was not affected by MOCA in HEK293T cells. The quantitation of the APPs, laminin, N-CAM, and Notch-1 expression was determined by scanning the respective blots of several clones expressing MOCA (n = 3) and presented as the percentage of B103 APP695 (100%). (E) Expression of N-cadherin, E-cadherin, and APLP2 in B103 and HEK-293 cells, and the secretion of APLP2 in B103 cells. Actin served as a loading control.
Finally, previous studies have shown that the processing or metabolism of the amyloid precursor-like proteins (APLPs) and cadherins is regulated by PSs (Naruse et al., 1998; Marambaud et al., 2002). No difference in E-cadherin or APLP2 expression was observed, nor was there a decrease in APLP2 secretion. In contrast, the expression level of N-cadherin was significantly increased in MOCA-expressing B103 and 293T cells (Fig. 1 E). The above data show that MOCA expression leads to the reduced secretion of APP but not of other type 1 membrane proteins. Because PS1 is associated with the γ-secretase cleavage of APP (Wolfe and Haass, 2001), and because in situ hybridization shows that MOCA and PS1 are expressed in overlapping areas of the brain and are colocalized in cells expressing both proteins (Kashiwa et al., 2000), it was asked whether the effect of MOCA on APP secretion is altered by PS1 expression or activity. The endogenous expression of PS1 was detected in B103 cells using an antibody specific to rat PS1. Fig. 2 A shows the abundant processed NH2-terminal fragments of PS1 (mol wt ~19-21 kD). A very weak band with a mol wt of ~90 kD was also observed, which may be a complexed form of PS1 (unpublished data). To block PS1-related γ-secretase activities, several inhibitors specific for γ-secretase were used. All of these agents have been shown to decrease Aβ production, with concomitant increases in the levels of the corresponding COOH-terminal fragments, at the concentrations we used. γ-Secretase inhibitor II is a transition-state analogue that selectively inhibits the γ-secretase cleavage of APP and Notch-1 (Wolfe et al., 1998; De Strooper et al., 1999; Berezovska et al., 2000).
Several dipeptidyl aldehydes (γ-secretase inhibitor III [Z-LL-CHO], γ-secretase inhibitor IV [2-naphthoyl-VF-CHO], γ-secretase inhibitor V [Z-LF-CHO], and calpain inhibitor III [Z-VL-CHO, Calp III]) were also tested (Higaki et al., 1995; Figueiredo-Pereira et al., 1999; Sinha and Lieberburg, 1999). No effect of γ-secretase inhibitors II and V on APP secretion was found in B103 cells expressing MOCA. APP secretion was partially restored by γ-secretase inhibitors III and IV and by Calp III in B103 cells expressing MOCA and in wild-type cells. As controls, MG132 and lactacystin dramatically affected APP secretion and intracellular expression (see the following). All of the secretase inhibitors were functional because they all increased the accumulation of COOH-terminal stubs (Fig. 2 B). There was much less accumulation of these molecules in MOCA-expressing cells, probably because of the limited substrate (APP) availability. Similar results were also observed in HEK293 cells stably transfected with MOCA (unpublished data). In addition, the effect of the overexpression of human PS1 on APP secretion was studied (Fig. 2 C). The full-length wild-type or a mutant PS1 gene was stably transfected into the B103 (APP695/MOCA) cells, and APPs was assayed. Fig. 2 C shows that APPs release was further reduced, relative to cells expressing MOCA alone, in four independently isolated clones coexpressing MOCA and wild-type (WT) PS1 or the mutant (FAD) PS1 (L392V). The effect of MOCA on APP secretion is summarized in Fig. 2 D, which also shows human PS1 overexpression (Fig. 2 D, bottom).

Figure 2 legend (fragment). [...] was lower than that in cells only transfected with mutant PS1 (L392V). Human PS1 holoproteins (hPS1FL) and the NH2-terminal fragment (hPS1NTF) were detected using a monoclonal antibody recognizing PS1. The overexpression of the DOCK180 protein in B103 cells is also indicated.
To determine the effect of MOCA expression on Aβ peptide production, we transfected the MOCA gene into a B103 cell line, which contains the FAD Swedish form of APP (APPsw) and secretes Aβ peptides into the medium (Xu et al., 1999). The levels of APPsw secretion were significantly decreased in cells expressing MOCA (Fig. 3 A). The intracellular levels of APPsw were also decreased by MOCA (Fig. 3 B). Therefore, the effects of MOCA on APPsw secretion were similar to those on wild-type APP (Fig. 1). Aβ secretion into the culture medium was then measured by immunoprecipitation and radioautography (Fig. 3 C) and independently by the sandwich ELISA (Fig. 3 D). The level of Aβ in the medium derived from B103 (APPsw/MOCA) cells was considerably decreased compared with B103 (APPsw) cells (Fig. 3, C and D). We could not detect the intracellular expression of Aβ in any of the B103 (APP695) or B103 (APPsw) cell lines using the techniques employed to assay extracellular Aβ. The above data show that MOCA decreases the secretion of both APP and Aβ.

Reduced APP secretion is due to APP degradation

The decreased secretion of APP caused by MOCA may be due to a blockage of the secretion pathway, which would cause an intracellular accumulation of APP, a decrease in its rate of synthesis, or an increase in the rate of APP degradation. To distinguish between these alternatives, we first examined the intracellular levels of APP. Full-length APP was reduced in B103 cells containing APP695/MOCA relative to APP695 cells as defined by two antibodies, 6E10 and 22C11 (Fig. 4, A and B). In addition, the level of expression of the COOH-terminal fragment of APP was diminished in cells stably transfected with MOCA as determined by immunoprecipitation with the CT-15 antibody, which recognizes the last 15 COOH-terminal amino acid residues of APP695 (Fig. 4 C).
These data rule out the possibility that MOCA expression is enhancing the intracellular accumulation of APP and suggest that the decreased APP secretion caused by MOCA expression is not due to a direct blockage of the secretion pathways. Because there is no increased intracellular accumulation of APP in cells expressing MOCA, we investigated the effect of MOCA on the turnover of APP by pulse-chase experiments. Antibodies 6E10 and CT-15 specifically immunoprecipitate the intracellular APP holoprotein, whereas antibodies GID and 22C11 performed less well (Fig. 5 A). 6E10 was used in the following experiments. B103 cells expressing either APP695/vector or APP695/MOCA were labeled for 10 min in [35S]methionine-containing media and chased for 15, 30, 45, and 60 min in "cold," serum-free conditioned media.

[Figure 3 legend. Effect of MOCA on Aβ secretion. (A) The levels of secreted and intracellular APPsw were examined in several B103 clones stably transfected with both APPsw and MOCA by Western blot analysis. The optical density of APPsw secretion was quantitated by NIH Image, normalized to the APPsw vector-alone level, and the data are presented as the percentage change plus or minus the standard error of the mean (n = 3). (B) Intracellular APP was assayed by Western blotting with antibody 6E10 and quantitated as above. (C) The [35S]methionine-labeled (16 h) secreted Aβ peptides were immunoprecipitated with 6E10 antibody, separated in SDS-polyacrylamide gels, and visualized by exposure to X-ray film. The optical density of Aβ secretion was quantitated by NIH Image, and the data are presented as the percentage of change plus or minus the standard error of the mean (n = 3). (D) The secreted Aβ1-40 peptides were measured by using a sandwich ELISA kit (Biosource International). Data from four separate experiments were combined and are presented as mean ± SEM pg of Aβ per mg cellular protein (n = 16/condition).]
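The densitometric quantitation used throughout these figures (optical density normalized to the vector-alone control and reported as a percentage change plus or minus SEM, n = 3) amounts to a short calculation. The following is a minimal sketch of that calculation; the optical density values are hypothetical placeholders, not data from the paper.

```python
import statistics

def percent_change(samples, controls):
    """Normalize each replicate to the mean of the control replicates and
    report the mean percentage change plus/minus SEM across replicates."""
    control_mean = statistics.mean(controls)
    changes = [100.0 * (s - control_mean) / control_mean for s in samples]
    mean = statistics.mean(changes)
    sem = statistics.stdev(changes) / len(changes) ** 0.5
    return mean, sem

# Hypothetical optical densities (arbitrary units), n = 3 per condition
vector_od = [1.00, 0.95, 1.05]  # APPsw/vector control
moca_od = [0.40, 0.45, 0.38]    # APPsw/MOCA

mean, sem = percent_change(moca_od, vector_od)
print(f"APPsw secretion change: {mean:.1f}% +/- {sem:.1f}% (SEM)")
```

With n = 3, SEM is the sample standard deviation divided by the square root of 3, matching the usual small-n error-bar convention.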
In control B103 (APP695/vector) cells, the APP protein level decreased gradually (Fig. 5 B). In contrast, the intracellular APP protein level in B103 (APP695/MOCA) cells rapidly decreased. The initial levels after the 10-min labeling period were also lower than controls in cells expressing MOCA, a result which would be predicted if nascent proteins are very rapidly degraded after synthesis. There was a temporal increase in the amount of APP protein in the medium of both cell types (Fig. 5 B), but the overall level of APP accumulation in the medium is decreased by the expression of MOCA (Figs. 1-3). Therefore, it is likely that MOCA facilitates the degradation of APP. Finally, the data in the following paragraphs rule out the remaining alternative that MOCA expression down-regulates APP synthesis, since in the presence of proteasome inhibitors the levels of extracellular APP are similar in cells expressing MOCA and controls.

Proteasomes mediate APP degradation

To independently verify the data on APP degradation, we studied the effect of protease inhibitors on the secretion and the intracellular accumulation of APP. Leupeptin inhibits lysosomal proteases, whereas MG132 inhibits membrane protein degradation through the ubiquitin-proteasome pathway (Soriano et al., 1999). Both APP secretion and the intracellular accumulation of APP were significantly increased in the APP695/MOCA cells by 10 µM MG132; no effect was observed in cells treated with 100 µM leupeptin (unpublished data). More strikingly, the secretion of APP in cells expressing MOCA was enhanced in concert with the increasing duration of MG132 treatment and was nearly restored to control levels after 16 h (Fig. 5 C). These data again show that APP degradation is affected by MOCA. They also confirm the Northern and RT-PCR data, showing no effect of MOCA on APP transcription, and rule out the possibility that the expression of MOCA reduces APP synthesis. To confirm the above data on protein degradation and identify the mechanism which might be involved in the MOCA regulation of APP degradation, the effects of additional protease inhibitors were tested. Cells were treated with chloroquine (50 µM) or NH4Cl (5 mM). Lactacystin, β-lactone, and epoxomicin, which specifically target the proteasome and do not inhibit lysosomal protein degradation (Craiu et al., 1997; Fenteany and Schreiber, 1998; Meng et al., 1999), effectively restored the level of APP secretion in MOCA-containing cells (Fig. 6 A). Similar effects were also observed in cells treated with the peptide aldehydes, ALLN and ALLM, which inhibit proteasomes but also inhibit lysosomal cysteine proteases and calpains (Sherwood et al., 1993; Zhang et al., 1999).

[Fig. 5 legend fragment: …The cell lysates were collected and immunoprecipitated with antibody 6E10 (arrow is APP). The level of APP secretion was also determined at the same time. The optical density of APP expression was quantitated by NIH Image, and the data are presented as the percentage change after normalization to the initial levels of APP (intracellular) or to the highest levels of APPs (n = 3). (C) The effects of the protease inhibitor MG132 on the secretion of APP. Cells were incubated in the presence and absence of MG132 (10 µM) for the indicated time periods. The identical amount of protein collected from the serum-free conditioned medium was subjected to SDS-PAGE analysis, and the APP levels were determined by Western blot analysis. The secretion of APP was increased in APP695/MOCA (clone 8) cells treated with MG132 as a function of time. The optical density of APP secretion was quantitated from lower-exposure images by NIH Image, and the data are presented as the percentage increase in APP, plus or minus the standard error of the mean (n = 3).]
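The pulse-chase quantitation described above (each chase time point expressed relative to the initial, end-of-pulse intracellular APP signal) is a simple decay normalization. A minimal sketch follows; the band intensities are hypothetical placeholders illustrating the reported trend (gradual loss in vector cells, rapid loss and a lower starting level in MOCA cells), not data from the paper.

```python
def percent_remaining(intensities):
    """Express each chase time point as a percentage of the initial
    (end-of-pulse) intracellular APP band intensity."""
    initial = intensities[0]
    return [100.0 * v / initial for v in intensities]

# Hypothetical band intensities at 0, 15, 30, 45, and 60 min of chase
vector_cells = [100, 80, 62, 50, 40]  # APP695/vector: gradual decline
moca_cells = [60, 30, 15, 8, 4]       # APP695/MOCA: rapid decline, lower start

print(percent_remaining(vector_cells))
print(percent_remaining(moca_cells))
```

Normalizing each cell line to its own initial signal separates the turnover rate from the lower absolute starting level in MOCA-expressing cells.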
In contrast, two lysosomal protease inhibitors, chloroquine and ammonium chloride (Caporaso et al., 1992), did not reverse the MOCA effects on APP secretion. As another negative control, phosphoramidon, a metalloprotease inhibitor, did not affect APP secretion, consistent with the previous reports (Parvathy et al., 1998). The effects of these inhibitors on the intracellular levels of APP were also tested and were comparable to the corresponding effects on APP secretion (Fig. 6 B). These data were confirmed by a kinetic pulse-chase analysis. Cells with and without MOCA were labeled for 10 min with [35S]methionine and chased in complete medium in the presence of 10 µM MG132. Fig. 7 A shows that nascent APP molecules are stabilized by proteasome inhibition and that the rate of loss of intracellular APP in cells expressing MOCA becomes similar to that of MOCA-deficient cells. These data strongly support the involvement of proteasomes in the regulation of APP degradation by MOCA. To determine if MOCA itself is affected by proteasome inhibition, MOCA expression patterns were studied as a function of time by Western blotting and confocal microscopy in the presence of MG132. The accumulation of intracellular MOCA increased after MG132 treatment (Fig. 7 B). Confocal images showed that under normal conditions MOCA is widely dispersed in the cytoplasm of B103 nerve cells (Fig. 7, C-2). MG132 treatment caused an increase in MOCA accumulation in the cytoplasm, and after 16 h of treatment with MG132, there was MOCA accumulation in the perinuclear areas (Fig. 7, C-5). These data suggest that, like many other proteins, MOCA metabolism is regulated by the proteasome pathway.

MOCA reduces cell-substratum adhesion

The profound decrease in APP secretion caused by MOCA should have biological consequences.
APP has multiple functions, but its ability to mediate cell-substratum adhesion has been well documented (Schubert et al., 1989; Chen and Yankner, 1991; Schubert and Behl, 1993; Jin et al., 1994; Beher et al., 1996; Coulson et al., 1997). Because the inhibition of APP synthesis by antisense nucleotides blocks cell-substratum adhesion (Coulson et al., 1997), it would be predicted that the expression of MOCA has a similar effect and that this phenotype would be reversed by proteasome inhibitors, which restore APP accumulation and secretion. This is indeed the case when the abilities of B103 nerve cells, expressing APP, MOCA, or both, to adhere to the extracellular matrix protein laminin are compared. Fig. 8 A shows that the expression of APP695 in B103 cells, which normally lack APP, increases the rate of cell-substratum adhesion to laminin. The expression of MOCA alone in B103 cells lacking APP695 has no effect on adhesion. In contrast, cells expressing both APP and MOCA adhere to laminin at a rate which is indistinguishable from cells expressing no APP. However, when cells expressing APP and MOCA are exposed to the proteasome inhibitor lactacystin for 5 h before the adhesion assay, there is an increase in the rate of adhesion to a level indistinguishable from cells expressing APP but no MOCA (Fig. 8 B). Lactacystin does not alter the rate of adhesion of cells which do not express MOCA or of wild-type B103 cells. The adhesion data are in agreement with those which show that MOCA expression increases APP breakdown and decreases its secretion (Figs. 1, 2, and 5) and that APP accumulation and secretion can be restored by proteasome inhibitors (Figs. 5 and 6).

Discussion

The above data show that MOCA decreases the secretion of APP and Aβ and also reduces the rate of cell adhesion. These results reflect the acceleration of intracellular APP degradation for the following reasons.
(a) There is no accumulation of intracellular APP, nor is there a decrease in APP mRNA, in cells expressing MOCA. (b) In pulse-chase experiments, the intracellular APP protein level is rapidly reduced in cells expressing physiological levels of MOCA relative to cells not expressing MOCA. (c) The level of APP secretion and the remaining COOH-terminal fragments of APP are decreased by MOCA to a similar extent. (d) Several proteasome inhibitors increase the accumulation of intracellular APP and reverse the effect of MOCA on APP secretion and cell adhesion; lysosomal protease inhibitors are ineffective. (e) Pulse-labeled APP is also stabilized by proteasome inhibitors in MOCA-expressing cells, and, unlike in the absence of proteasome inhibitors, its rate of intracellular decline is indistinguishable from that in cells not expressing MOCA. APP is a type I membrane-spanning protein whose secretion is regulated by a variety of factors including growth factors, neurotransmitters, phorbol esters, extracellular matrix molecules, and stress (Schubert et al., 1989; Mills and Reiner, 1999; De Strooper and Annaert, 2000). The mechanisms involved in the regulation of APP secretion include alterations in APP phosphorylation (Caporaso et al., 1992), the modification of protein glycosylation (Galbete et al., 2000), alternative gene splicing (Shepherd et al., 2000) and transcription (Ciallella et al., 1994), and also changes in protein degradation (Checler et al., 2000). Although these mechanisms are diverse, it is likely that they are shared with many type I membrane proteins. APP is probably the most studied molecule of this class because of its medical importance. The intracellular sites of APP metabolism still remain controversial, with α-secretase cleavage described at the plasma membrane, in the Golgi, or in the post-Golgi secretory vesicles (Mills and Reiner, 1999).
Similarly, β- and γ-secretase activities have been identified in the trans-Golgi network (Xu et al., 1997), the endoplasmic reticulum/intermediate compartment (Cook et al., 1997), and the endosome/lysosome system (Haass et al., 1992). Our data do not distinguish between these alternatives but show that secretion of APP and Aβ are both reduced in the presence of MOCA. Because we do not observe an intracellular accumulation of APP or intermediate breakdown products of APP, the effect of MOCA on APP processing most likely occurs before APP reaches the cell surface. The pleiotropic effects of MOCA on APP degradation may occur in the ER, which is consistent with the subcellular localization of both PS1 and MOCA (Doan et al., 1996; Kashiwa et al., 2000; Xia et al., 2000). Proteins destined for membranes or secretion are translocated into the ER, folded, assembled, and transported to a cellular destination or secreted (Hurtley and Helenius, 1989). Incorrectly folded proteins, unassembled subunits of multisubunit complexes, and mutated proteins are rapidly eliminated from the cell; misfolded proteins are translocated to proteasomes and degraded (Suzuki et al., 1998). However, in the case of unfolded proteins the conventional route may not be taken because excessive accumulation of these macromolecules within the ER lumen might lead to their aggregation and precipitation, thereby blocking the secretory pathway (Klausner and Sitia, 1990). Because APP binds to the molecular chaperones Bip/GRP78 and HSC73, and misfolded proteins bound to Bip/GRP78 are degraded, it has been suggested that APP can be retained in the ER as a nascent polypeptide and degraded (Yang et al., 1998; Kouchi et al., 1999). This idea is consistent with data suggesting that the degradation pathway for APP in the ER participates in APP secretion and is distinct from γ-secretase cleavage (Bunnell et al., 1998) and that the proteasome is involved in APP processing (Hare, 2001).
Because APP secretion is restored by proteasome inhibitors and intracellular accumulation of APP is not observed in the absence of proteasome inhibitors, the secretion pathway is not blocked by MOCA. Consistent with previous data (Bunnell et al., 1998), we were unable to isolate a ubiquitin-conjugated APP complex even after the treatment with proteasome inhibitors, and we did not observe any APP intermediates caused by MOCA expression. Proteasome inhibitors increase the stability of both total and pulse-labeled APP in cells (Figs. 6 and 7) and restore the secretion rate to near control levels in MOCA-expressing cells (Fig. 5 C and Fig. 6). They also increase the intracellular accumulation of MOCA over long periods of time (Fig. 7). Although it cannot be formally ruled out in any studies that proteasome inhibitors block an unknown function to generate the resultant phenotype, we feel that this is unlikely in the experiments described here for two reasons. First, in an experiment where cells are pulse labeled for 10 min and chased in the presence of a proteasome inhibitor there was a rapid inhibition of APP breakdown, bringing it up to the rate of loss of intracellular APP in non-MOCA cells due to secretion (Fig. 7). The effect occurred well before any significant accumulation of other proteins like MOCA. The accumulation in proteins in the ER in the presence of proteasome inhibitors can sometimes indirectly inhibit the secretory pathway; in our case, APP secretion is enhanced. Second, a variety of structurally diverse proteasome inhibitors reversed the effect of MOCA on APP secretion, whereas ␥-secretase inhibitors, lysosomal protease inhibitors, and a metalloprotease inhibitor have no effect. These results strongly suggest that proteasomes are involved in MOCA-induced APP degradation. These data also suggest that nascent APP may pass through an ER environment in which complexes for both protein degradation and protein assembly coexist. 
In the absence of MOCA, the precursor protein follows the secretory pathway, whereas the expression of MOCA directs a significant fraction of APP to proteasomes where it is degraded. This novel MOCA-mediated pathway presents yet another way in which the expression of specific proteins may be controlled. Aβ amyloid peptides are also generated from various cellular compartments, including the ER, the Golgi apparatus, the trans-Golgi network, lysosomes, and endosomes, through either a constitutive secretion pathway or through an endocytotic pathway in which cell surface APP moves to the lysosomes or endosomes where Aβ is produced (Mills and Reiner, 1999; De Strooper and Annaert, 2000). Aβ secretion is decreased by MOCA in our studies, probably because of the rapid degradation of APP, therefore reducing the APP source for the generation of Aβ. Because the overall production of Aβ is reduced by MOCA, MOCA expression may help to reduce Aβ production in the central nervous system. It follows that the loss of MOCA function could lead to AD. PS1 controls several aspects of APP metabolism (Sisodia, 2000) and protein breakdown (Niwa et al., 1999; Katayama et al., 2000). PS1 is also required for "γ-secretase" cleavage of Notch-1. However, the proteolytic cleavage of APP and Notch are differentially facilitated (Capell et al., 2000). The PS1-dependent γ-secretase processing of APP appears to be nonselective and occurs at multiple sites within the APP transmembrane domain. This is in contrast to the highly selective PS1-dependent processing of Notch (Yu et al., 2001). Our data show that, unlike PS1, MOCA has a minimal effect on Notch-1 degradation and that of five additional membrane proteins (Fig. 1). We also studied the effects of several selective γ-secretase inhibitors on APP secretion modulated by MOCA (Fig. 2 B). Consistent with previous observations, there was little effect of these γ-secretase inhibitors on APP secretion in the absence of MOCA.
In contrast to the proteasome inhibitors, among the γ-secretase inhibitors tested, only γ-secretase inhibitors III and IV and Calp III partially revert the effect of MOCA on APP secretion. γ-Secretase inhibitor II, which has no effect, is thought to be an aspartyl protease inhibitor, which selectively inhibits the γ-secretase cleavage of APP and Notch-1 proteolysis (Wolfe et al., 1998; De Strooper et al., 1999; Berezovska et al., 2000; Esler et al., 2000). The other γ-secretase inhibitors, including III, IV, V, and the potent calpain inhibitor III, are dipeptidyl aldehydes targeting cysteine proteases and may block proteasome enzymes, which may account for their inconsistent effects on MOCA-modulated APP secretion. Finally, the additive effect of PS1 on MOCA-decreased APP secretion was demonstrated (Fig. 2 C). This observation is consistent with the fact that PS1 lowers the secretion of APP in yeast (Evin et al., 2000). MOCA also functions effectively in the presence of mutant PS1. Together, the above data show that the effect of MOCA is very different from that of the PS1-associated γ-secretase activity. MOCA contains an SH3 domain, interacts with the Crk adaptor protein, and shares 40% homology with DOCK180 (Kashiwa et al., 2000). Because DOCK180 interacts with Rac and other small G-proteins, MOCA may also interact with small G-proteins involved in protein breakdown or mediate phosphorylation events between proteins involved in APP trafficking and metabolism. APP is a potent cell adhesion molecule which binds to both heparin and other extracellular matrix molecules (Schubert et al., 1989; Beher et al., 1996; Wu et al., 1997). Cells which express APP adhere more rapidly to extracellular matrix proteins than cells which do not (Fig. 8) (Schubert and Behl, 1993). Conversely, the inhibition of APP synthesis by antisense techniques blocks cell-substratum adhesion (Coulson et al., 1997).
The expression of MOCA in B103 nerve cells, which do not make APP, has no effect on cellular adhesion to laminin but reduces the rate of adhesion of cells which express APP (Fig. 8). Therefore, the effect of MOCA on cell-substratum adhesion is tightly coupled to APP expression. When the rapid breakdown of APP caused by MOCA is blocked by proteasome inhibitors, there is a return to the normal rate of adhesion for B103 cells expressing APP (Fig. 8 B). The simplest explanation for these results is that in the presence of MOCA newly synthesized APP is degraded at such a rapid rate that very little reaches the cell surface, so that it is unable to participate in adhesive interactions. Because one adhesion-dependent biological activity of APP is the regulation of neurite outgrowth (Jin et al., 1994), the expression of MOCA during development could regulate cell migration and axon pathfinding. Therefore, the aberrant expression of MOCA or its loss could have pathological consequences in addition to those caused by altered Aβ secretion.

Antibodies and chemicals

Monoclonal antibody 22C11, which recognizes amino acid residues 66-81 of APP695, was purchased from Roche. Anti-GID, a rabbit polyclonal antibody against amino acid residues 175-186 of APP695, was provided by Dr. Greg Cole (University of California, Los Angeles, CA) and has been well characterized (Schubert et al., 1989). CT-15, a rabbit polyclonal antibody against the last 15 COOH-terminal amino acid residues of APP695, was provided by Dr. Edward Koo (University of California, San Diego, CA). Monoclonal antibody 6E10, which recognizes amino acid residues 1-17 of human Aβ, was obtained from Dr. Richard Kiscsak (New York State Institute for Basic Research, Staten Island, NY) or purchased from Senetek. An affinity-purified polyclonal antibody, which recognizes amino acid residues 2,012-2,027 of MOCA, was generated in rabbits (Kashiwa et al., 2000).
A monoclonal antibody, recognizing the NH2-terminal amino acid residues 21-80 of human PS1, and a polyclonal antibody, recognizing the amino acid residues 303-316 of mouse PS1, were purchased from Chemicon and Oncogene, respectively. β-Laminin antibody (sc-6018), N-CAM antibody (sc-1507), and Notch antibody (sc-6014) were purchased from Santa Cruz Biotechnology, Inc.; N- and E-cadherin antibodies were from Transduction Laboratories; and APLP antibody was from Dr. Gopal Thinakaran (University of Chicago, Chicago, IL). The following chemical reagents were bought from Sigma-Aldrich: chloroquine, NH4Cl, phosphoramidon, ALLN, ALLM, lactacystin, clasto-lactacystin β-lactone, leupeptin, and MG132. Epoxomicin was bought from Biomol. γ-Secretase inhibitors II, III, IV, and V and calpain inhibitor III were purchased from Calbiochem.

Cells and transfection

The neuronal cell line B103 (Schubert et al., 1974) was grown in DME supplemented with 10% heat-inactivated FBS. B103 cells were stably transfected with APP695 using G418 selection (Schubert et al., 1989) and with various plasmids by Lipofectamine 2000 (Invitrogen), using puromycin selection for MOCA or hygromycin for PS1. The stably transfected cells were subsequently cloned and screened for protein expression by Western blot analysis.

Western blotting, metabolic labeling, immunoprecipitation, and ELISA

For Western blotting, cells were washed twice with ice-cold PBS and lysed in lysis buffer (1% Triton, 50 mM Hepes, pH 7.5, 50 mM NaCl, 5 mM EDTA, 1 mM Na3VO4, 50 mM NaF, 10 mM Na4P2O7, plus a mixture of protease inhibitors [Complete Mini; Roche]). For secreted protein analysis, semiconfluent cultures (10^6 cells in 100-mm tissue culture dishes) were washed twice with serum-free medium and incubated for 20 h in 4 ml serum-free medium. The growth-conditioned medium was then desalted by passage through a Sephadex G25 column, and 10% of the material was used to determine protein content.
For a given experiment, equal amounts of protein (usually 15 µg) were lyophilized and resuspended in 50 µl of sample buffer. Protein concentrations were determined by Coomassie Plus (Pierce Chemical Co.). The same amount of protein from each sample was separated on Novex precast 10% polyacrylamide gels (Invitrogen) and transferred to Immobilon membranes (Millipore). The membranes were blocked with 5% nonfat milk in Tris-buffered saline for 1 h at room temperature. After overnight incubation at 4°C with primary antibodies, the antigens were detected with HRP-conjugated secondary antibodies (Bio-Rad Laboratories) using an ECL kit (Amersham Pharmacia) and exposed to film. For pulse-chase experiments, cells were grown to 80% confluency and incubated in methionine-free DME for 90 min. Cells were labeled with [35S]methionine (400 µCi/ml in methionine-free DME plus 5% dialyzed FBS) for 10 min at 37°C, and the medium was replaced with serum-free DME medium plus N1 supplement (Sigma-Aldrich). The media were collected, and cells were lysed after different time periods. Protein concentrations were determined, and the same amounts of protein were immunoprecipitated with antibodies at 4°C overnight. 25 µl of anti-mouse IgG agarose (Roche) was then added to each sample and incubated at 4°C for 2 h on a rocker platform. The immunoprecipitates were collected by centrifugation and washed four times with washing buffer (0.1% Triton, 20 mM Hepes, 150 mM NaCl, 10% glycerol). The agarose beads were resuspended in 30 µl SDS-PAGE sample buffer and boiled for 3 min to release the proteins. After 2 min of centrifugation, the supernatants were separated on 10% Tris-glycine gels. For Aβ analysis, 10-20% acrylamide Tricine gels and longer incubation times were used. The gels were dried, subjected to autoradiography, and quantitated by NIH Image.
Aβ production was also measured by a sensitive fluorescence-based sandwich ELISA assay using a kit from Biosource International according to the manufacturer's instructions.

Northern hybridization and RT-PCR

Total RNA was isolated from the cells and brain tissues using Trizol reagents (Invitrogen), and mRNA was purified using an mRNA purification kit (Amersham Pharmacia Biotech). 2 µg of mRNA from each sample were separated on 1% agarose gels and transferred onto Zeta membranes (Bio-Rad Laboratories, Inc.). Northern hybridization was performed in UltraHyb buffer (Ambion) with an APP cDNA fragment probe labeled with 32P-dCTP using a Rediprime kit (Amersham Pharmacia Biotech). RT reactions were performed for each RNA sample using 1 µg of total RNA in RT buffer composed of 10 mM DTT, 20 µM each of dATP, dCTP, dGTP, and dTTP, and 1 µM of oligo(dT). The solution was heated to 65°C for 5 min, cooled to 37°C for 10 min, and then incubated in the presence of 25 U of AMV RT at 42°C for 1 h. Master mixes for the PCR reactions were used for each sample. The PCR reaction mixture contained forward and reverse primers (10-20 pmol each), dNTPs (200 µM each, final concentration), 1× PCR buffer, Taq DNA polymerase (0.5 U) (Roche), and 1 µl of the RT mixture as the source of cDNA. The primers used for the PCR reaction were as follows: 5′-ATGGATGCAGAATTCCGACATGAC-3′ (forward) (nt 1,933-1,956) and 5′-CTAGTTCTGCATCTGCTCAAAGAA-3′ (reverse) (nt 2,235-2,212) for the APP gene (sequence data available from GenBank/EMBL/DDBJ under accession no. Y00264). Amplification was performed at 94°C for 40 s, 56°C for 1 min, and 72°C for 1 min, for 35 cycles. PCR reactions with primers and RNA but without the RT reaction were conducted as controls. After amplification, each sample was electrophoresed on a 1.5% agarose gel and visualized by ethidium bromide staining.

Adhesion assay

The cell adhesion assays were performed as described previously (Schubert et al., 1989).
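As a quick consistency check on the RT-PCR design above, the expected amplicon size follows directly from the APP primer coordinates (forward nt 1,933-1,956, reverse nt 2,235-2,212 on GenBank Y00264). The helper function below is our own illustration, not from the paper.

```python
def amplicon_length(fwd_start, rev_end):
    """Length of a PCR product spanning fwd_start..rev_end, inclusive,
    in 1-based coordinates on the sense strand."""
    return rev_end - fwd_start + 1

# Coordinates of the APP primers described in the RT-PCR protocol
print(amplicon_length(1933, 2235))  # -> 303
```

A 303-bp product is what one would expect to see on the 1.5% agarose gel after ethidium bromide staining.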
Exponentially growing B103 (APP695/Vector) and B103 (APP695/MOCA) cells were labeled with [3H]leucine for 15 h. The cells were pipetted from the culture dishes and washed three times by centrifugation with Hepes-buffered medium containing 0.4% BSA (Calbiochem). No trypsin or chelating reagents were used. Aliquots of 0.2 ml containing 5 × 10^4 cells were pipetted into 35-mm plastic Petri dishes coated with 5 µg mouse laminin and containing 2 ml of the above medium. The cells did not attach to uncoated dishes. At the indicated times, the dishes were swirled 10 times, the medium was aspirated, the remaining attached cells were dissolved in 3% Triton X-100, and their isotope content was determined. The data are plotted as the percent of input cells (radioactivity) that adhered at the indicated time and are presented as the average of triplicate plates. Variation between duplicates was <5%.

Immunostaining and laser confocal imaging

Cultured cells were fixed in 4% paraformaldehyde for 20 min and permeabilized with 0.2% Triton X-100 for 30 min. The fixed cells were blocked in 2% BSA in PBS for 1 h and incubated with primary antibodies followed by fluorescent-conjugated secondary antibodies (Molecular Probes). The cells were then mounted under glass coverslips with antifading medium containing 4% N-propyl gallate (Sigma-Aldrich). The cells were examined with a Carl Zeiss MicroImaging, Inc. LSM 5 PASCAL laser scanning microscope. 0.5-µm-thick serial optical sections of the cells were recorded using the Carl Zeiss MicroImaging, Inc. LSM 5 Image Examiner software to obtain images with pixel intensity within a linear range.
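The adhesion assay above reports the percent of input cells (radioactivity) that adhered at each time point, averaged over triplicate plates. A minimal sketch of that calculation follows; the scintillation counts are hypothetical placeholders, not data from the paper.

```python
def percent_adhered(attached_cpm, input_cpm):
    """Mean percent of input radioactivity that adhered, averaged
    over replicate plates (attached and input counts paired per plate)."""
    values = [100.0 * a / i for a, i in zip(attached_cpm, input_cpm)]
    return sum(values) / len(values)

# Hypothetical triplicate counts (cpm) at one time point
input_counts = [5000, 5100, 4950]     # total input radioactivity per plate
attached_counts = [2100, 2200, 2000]  # radioactivity remaining after aspiration

print(f"{percent_adhered(attached_counts, input_counts):.1f}% of input adhered")
```

Pairing each plate's attached counts with its own input counts keeps plate-to-plate labeling differences from biasing the average.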
STUDYING THE SCIENTIFIC-PHILOSOPHICAL HERITAGE OF EAST RENAISSANCE INTELLECTUALS IN THE DEVELOPMENT OF PHILOSOPHICAL EDUCATION. The development of philosophical education is an interconnected process tied to human development. In further developing philosophical education, it is important to pay special attention to the comprehensive study of the scientific-philosophical heritage of the Oriental thinkers, in particular the scholars of the Islamic world, who made a worthy contribution to the development of our society. For this purpose, it is first of all necessary to produce research grounded in the original sources, as well as new-generation textbooks, books, and program guides. During the Soviet regime, the social and philosophical heritage of our people was suppressed: materialist and atheist tendencies were artificially promoted without regard to specific historical conditions, and researchers of other schools of thought were condemned as bourgeois. Drawing on the original sources, Uzbek scholars have studied in detail the philosophical views of Oriental philosophers such as Forobi, Ibn Rushd, Ibn Bajja, Ibn Sina, and Abu al-Ghazali. Orientalists such as A. L. Kazibberdiev, S. Serebryakov, Azkul Karim, and Alber Nasri have carried out serious scientific work translating these philosophers' treatises, commenting on each concept, and compiling translation dictionaries of their works. This article therefore analyzes the views of scientists of the Oriental Renaissance period on the study of this scientific-philosophical heritage and highlights its importance for the development of philosophical education in Uzbekistan. Introduction. Nowadays, the essence of reforms in the field of education based on the "National model" is not only to ensure the priority of national values, but also to build an education system that draws on the great achievements of the world science and education system while fostering national self-promotion.
In this regard, the need for philosophical education and for developing philosophical thinking in young people must be addressed on the basis of new paradigms. Building young people's philosophical thinking on the centuries-old heritage of world philosophy, and above all of national philosophy, and embedding this capacity for thought in the continuous education system, contributes to the further development of our national mentality on the basis of the national ideology. Today it is important to study the fundamental works of Oriental scholars in the development of philosophical education. In this regard, President Sh. M. Mirziyoev said: "The issue of further developing fundamental research remains constantly in our attention today. We see that the countries which rapidly develop fundamental research have achieved considerable progress compared with other developing economies. It is no accident that the world's scientific achievements have been reached through fundamental research. Therefore, full support for the fundamental sciences and the provision of this sector with gifted young cadres are on the agenda as one of the important tasks of our state" [Mirziyoev Sh., 2017, p. 171]. From this point of view, philosophical thinking may, depending on its content and influence, divide or unite the members of a society, raise or lower the standing of a state in the world, or lead nations either to advancement or to degradation. There are several reasons why there is a strong need for philosophy today. The first and foremost is the process of globalization and integration, which covers all aspects of life in every country of the world. In the context of globalization it is impossible, without a broad philosophy, to conduct dialogue between different countries or to find ways of resolving emerging conflicts.
Serious changes in the development of the fundamental sciences became the second factor creating a strong need for philosophy. Physics, physiology, psychology and, in general, all the major fields of science long pursued their research independently of philosophy, but this is no longer tenable. When quantum mechanics, general relativity, neurophysiology and other fields emerged, the complex and numerous problems they faced could not be solved within the narrow sphere of a single science, and the need arose to think within broad philosophical horizons. These two factors have further demonstrated that no knowledge or activity can replace a deep philosophical culture. Indeed, philosophy as true wisdom is a spiritual value expressing man's perception of the universe and nature, his direct relationship to existence, and his way of being. It is therefore important, in further developing philosophical education, to pay special attention to a comprehensive study of the philosophical heritage of the Oriental thinkers, in particular the scholars of the Islamic world, who made a worthy contribution to the development of our society. In his Address to the Oliy Majlis of the Republic of Uzbekistan, President Shavkat Mirziyoev set out the main objectives of social development for 2019, "The Year of Active Investments and Social Development": "In particular, in studying the ancient and rich history of our Homeland, we need to strengthen scientific research and support the activities of scholars in the humanitarian sphere. The evaluation of the past must be absolutely unbiased and, most importantly, free of any ideological views" [Mirziyoev Sh. M., lex.uz]. For this purpose, it is first of all necessary to produce genuine research and a new generation of textbooks, books and programme guides.
During the Soviet regime the social and philosophical heritage of our people was suppressed: materialistic and atheistic interpretations were artificially imposed without regard for the specific historical conditions, and researchers who thought otherwise were condemned as bourgeois. In studies of the scientific-philosophical heritage of the Oriental Renaissance scholars made during that period, the materialistic spirit therefore prevailed and historical truths were distorted. In this sense, an objective study, in the name of historical justice, of the influence of our scholars on the development of world science and philosophy is a demand of the time. Here are the words of our first President, Islam Karimov: "The oldest stories and writings created by the minds and geniuses of our ancestors, the thousands of manuscripts kept today in the treasuries of our libraries, including samples of folklore, history, literature, art, politics, ethics, and valuable works of philosophy, medicine, mathematics, mineralogy, chemistry, astronomy, architecture, farming and other spheres, are our great spiritual wealth. Peoples with such a great heritage are rarely found in the world." A comprehensive study of the spiritual heritage left by our ancestors serves as an important factor in the development of philosophical education in Uzbekistan. The importance of the issues raised in this article is also highlighted in the Decree of the President of the Republic of Uzbekistan Shavkat Mirziyoev of 23 June 2017 "On Measures to Establish the Islamic Culture Center in Uzbekistan" at the Cabinet of Ministers of the Republic of Uzbekistan: "Libraries and archives, great scholars and thinkers, saints, scholars and the religious schools founded by them are preserved in our country and abroad.
Imam al-Bukhari, Imam Termizi and other great scholars contributed greatly to the development of the Islamic religion; manuscript and lithographed books, historical proofs and documents, archaeological findings, artefacts, contemporary scientific research works, books and collections, and video and photo documents are to be gathered for scientific research, on a scholarly basis, into figures such as Hakim Termizi, Abu Mansur Moturudi, Abu Muin Nasafi, Kaffol Shashi, Abdulkholiq Gijduvoniy, Najmiddin Kubro, Burhoniddin Marginiani, Bahouddin Nakshband and Khoja Ahror Valiy, and for wide propagation of their scientific and spiritual courage and great human qualities" [1]. Certain results, even great discoveries, can be observed in the life of society where philosophical, religious and secular doctrines, ideals, ideas and activity are in harmony, as in the scientific heritage of the Eastern scholars of the ninth to twelfth centuries. The ideology of every era rests on philosophical, religious and secular roots. But where each of these roots tries to subordinate the others, various disagreements and negative consequences follow. It is no secret that in pre-independence studies in this field the dominant ideology of the society was reflected, to one degree or another, in every sphere: some studies conducted at that time are known to have had a one-sided, atheistic character. In this article, therefore, these sources are approached critically and scientifically. The results of the analysis show that the scientific and philosophical heritage left by the scholars of the Oriental Renaissance, with its rich and informative ideas, has attracted great interest not only for the development of our country's science but for world science as a whole.
It is difficult to achieve definite results without knowing how foreign scholars, especially Western ones, have drawn on the rich experience of studying the philosophical heritage of the Oriental Renaissance in the development of philosophical education in Uzbekistan. However, study and evaluation of the experience gained by Western researchers shows that the question of the philosophical heritage of the Oriental Renaissance has not always been treated objectively. Western civilization and science studied the spiritual heritage of our ancestors and tried to absorb those aspects in line with their own social spirit that were useful for their own benefit and development. Aspects considered unnecessary for the advancement of Western civilization, or incomprehensible to them, or alien to their inner spirit, were dismissed as bid'ah, religious superstition, backwardness and ignorance. Even some European scholars have misunderstood or misinterpreted the true nature of Oriental culture and its socio-philosophical ideas. For example, the Russian philosopher V. Solovev believed: "In the Muslim world there is no positive science (secular science is meant) and no theoretical theology, but only certain peculiar dogmas of the Qur'an and mass philosophical concepts derived from the Greeks and from experimental data" [Mukhtarov O. M., 2015, p. 46]. In the views of this philosopher we can see an attitude that treats the Oriental peoples as impoverished. G. Vamberi's remarks are also notable: "No Asian nation, except the Japanese, can develop itself independently toward progress and renewal. Asians can achieve culture only through the direct or indirect influence of Europe" [Vamberi G., 1913, p. 707]. This philosopher, too, looks down on the Oriental peoples.
Moreover, one Western philosopher, the German thinker Carl F. (1922), described Oriental thought as "painted with supernatural dyes," adding that it is not necessary to dwell on a worldview of these nations so far removed from our European thinking [2, p.234]. Yet many of the features noted above in the scientific-philosophical heritage of the Oriental Renaissance are not alien to us. It may be said that those very sides which from a European perspective seemed "backward" or "defective" can serve as a solid foundation for the elevation of our spiritual outlook. Given that the totalitarian socialist ideology of the past kept national philosophy caged for over seventy years, it is acknowledged that much remains to be done in the CIS to re-examine the scale of the changes that took place during this period in world philosophical thinking, to recover neglected or forgotten spiritual values, and to restore philosophy to its original standing. It must also be admitted that in present-day Western culture the ancient standing of philosophy as a science has been increasingly eroded, and this forgotten position of philosophy now needs to be restored [3, p.88]. A one-sided study of this problem can therefore fall into subjective judgment. For example, some Uzbek and Russian scholars saw in this field only factual data and the reflection of their own ideological views in translating and publishing these works. It is worth mentioning that the translation and publication of several major works by Uzbek and Russian scholars was first carried out in the middle and at the end of the twentieth century, by scholars and philosophers such as E. A. Frolova, M. T. Stepanyants, A. Agnatenko, N. A. Ivanov, G. S. Shaimuhambetova and others. In these studies the researchers tried to give a general picture of the world, especially of the East and the West.
At the same time, each author paid attention to analysing the philosophical problems they required. Modern civilization demands a revision of the historical development of humanity and of the criteria for identifying the development of scientific knowledge within the Islamic religion. In response to this demand, scholars such as Thomas P. Flint, Michael K. Rhee, Ali Akbar Vilayati, Ardakani Riza Dovari, Berns, Birincjer Rida, Vundt V., Oldenberg G., Gold, Limen Oliver, A. Korben, Seyyid Hussein Nasr, M. Mutahkhari, A. A. Ruby, Chittie Williams and others have worked in this area [4, p.65]. There are also centres of scientific activity in many countries. In particular, the Centre for Islamic and Middle Eastern Studies was established in Birmingham, England, specializing in the study of the philosophical foundations of Islam. It is located in the Department of Philosophical and Religious Studies of the University of Birmingham, which also houses an Eastern Manuscripts Department. Here is kept one of the oldest manuscripts of the Qur'an, the holy book of Islam. To determine the age of the manuscript, scientists at an Oxford laboratory established that it was written between 568 and 645, which indicates that one of the oldest copies of the Qur'an has been well preserved in this manuscript. The Centre has opened a master's programme based on scientific research, and there are ample opportunities for researchers who wish to work in this area. Besides the study of the Islamic sciences, these curricula focus on Islamic history and philosophical doctrines. The formation of Oriental philosophy in the tenth and eleventh centuries was also directly influenced by ancient philosophy, above all by the philosophy of Aristotle and Plato. That is why the philosophical literature pays special attention to the term "Eastern Peripatetism".
Among the representatives of Eastern Peripatetism are philosophers such as Ibn Sina, Abu Nasr Farabi, Abu Rayhon Beruniy, Ibn Tufail and Ibn Rushd. It was through these great authors that Western philosophical thought in the Middle Ages began to evolve. From these facts, however, one cannot conclude that the roots of Eastern and Western philosophy are one and the same, because the philosophical doctrines of the East and of Europe differ significantly in their modes of philosophical thinking, in how they understand the world of concepts, and in how they analyse and solve problems. This makes it possible to analyse Western philosophical views critically. Drawing on original sources, Uzbek scholars have studied in considerable depth the philosophical views of Oriental philosophers such as Forobi, Ibn Rushd, Ibn Sina and Abu al-Ghazali, and Uzbek philosophers and other researchers have substantiated the great contribution of these thinkers to the history of the development of philosophical knowledge [5, p.34]. In our view, it is impossible to develop a philosophical doctrine adequate to the transformations taking place in the life of society and in the minds of the people simply by repeating the ancients. To do this, one needs to be aware of worldwide developments and to pay particular attention to a distinct, concrete approach. Orientalists such as A. L. Kazibberdiev, S. Serebryakov, Azkul Karim and Alber Nasri have done serious scientific work translating these philosophers' treatises, commenting on each concept and compiling translation dictionaries of their works. At the same time, the findings of scholars such as Z. Munavvarov, A. Hasanov, M. Imomnazarov and Z. Husnidinov, who have studied the scientific and philosophical heritage of the Oriental Renaissance scholars from the standpoint of new scientific evidence and of the ideas of tolerance, also deserve mention [6, p.9].
Until now, however, the work of Uzbek, Russian and Western scholars and philosophers on the scientific and philosophical heritage of the Eastern Renaissance has not been analysed as a whole. In addition, research on Islamic philosophy has been undertaken in the Arab-Islamic countries. For example, the Department of Islamic Philosophy at Cairo University in Egypt annually holds international conferences on various issues of Islamic philosophy and regularly publishes the conference materials. One of the most important research works on Islamic philosophy, M. Fahri's "History of Islamic Philosophy", was published in Persian in Tehran in 1983, translated by Nasrullo Pervi Jawadi. Scientific research on Islamic philosophy has also been undertaken in the Republic of Turkey, including the monographs of Prof. M. Bayrakdar and I. Abdulhamid [7, p.4]. The analysis of these studies shows that scholars of Islamic philosophy have tried to prove, not only through their scientific views but also through their practical work, that philosophical training is not a field of knowledge too difficult to understand. Our great compatriot Abu Nasr al-Farabi described philosophy as follows: "When knowledge of a subject is acquired, one is educated in this respect; and if the meaning of what has been created is understood and rests on reliable evidence, and we have confidence and imagination in it, then we speak philosophically about this knowledge" [8. Forobi, 1993, pp. 183-184]. In the book Al-Huruf (The Letters), Forobi says: "If religion obeys a philosophy that has been perfected in all its common aspects, then it is true and right. But if religion was formed in the era of analytical philosophy, yet outside the context of rhetoric, dialectics and sophistics (that is, with the help of whatever means were available), then the religion that obeys it is false and erroneous.
In many cases it will be misleading from beginning to end. Philosophy also holds the primary position with respect to religion, because philosophy is the foundation and religion its pillar; more precisely, religion is an instrument of philosophy" [9]. According to Ibn Sino, all the philosophical sciences are divided into two parts, theoretical and practical: the purpose of the theoretical part is to know the truth, and the purpose of the practical part is to achieve happiness. The first kind, in Ibn Sina's account, acquaints us with our own conduct and is called "practical knowledge", because the benefit of this knowledge is needed so that we may be assured of salvation in this world and so that our affairs are well ordered. The second tells us the state of things so as to form us spiritually and make us happy; this knowledge, which is explained for its own sake, is called theoretical [10, p.23]. Conclusion. In conclusion, philosophical education is a unique form of general culture and self-identification: the logic of the world as it is manifested in a particular culture, and the way in which a person's place in society is evaluated. Philosophy focuses not only on studying the essence of the human being and on providing a methodology for the development of the other sciences, but also on studying the internal capacities and perspectives of human thinking. Scientific and theoretical doctrines that have existed for thousands of years are widely used to form the human mind. Studying the history of society and determining its future depend on the essence of the philosophical outlook. Secondly, philosophy is the manifestation of human thinking in the form of the most general concepts, knowledge, conclusions and the general outlook.
A person's self-awareness, the psychological analysis of his essence, and all his goals and the activities associated with them are linked to historical lessons, the demands and possibilities of the time, prospects and new needs, scientific conclusions and values, and fairness toward other people. Therefore human self-awareness, the recognition of others, and the way a person determines his own existence belong to the philosophical problems and are among the most complex of them. Thirdly, there are concepts playing a definite ideological-theoretical role that affect the path of philosophy and are embodied in methodological principles. Indeed, philosophy summarizes, accelerates and unites the knowledge and experience gained in different countries of the world at different times. In this context, the convergence of the sciences and the combination of their problems create new opportunities for expanding the scientific and practical activity of humanity. Fourthly, the practical activity of human beings and the development of science have never weakened the need for philosophical thinking; on the contrary, they have intensified it. Human beings not only derive systematic knowledge of their own essence, of society, nature and thinking, but on this basis seek to produce conclusions important for the development of scientific thought and practice. As a result, new scientific discoveries see the world: this is a unique achievement of science. It is no secret that today's view of philosophy is changing dramatically. Forming the younger generation, its thinking and its outlook is one of the topical issues of the day, since it improves young people's attitude toward themselves and the world; the younger generation, after all, is the continuation of tomorrow.
Fifth, the peculiarity of philosophical thinking is that such thinking is subject to rationality, internal harmony, consistency and proof. It must be said from the outset that these characteristics collide with people's emotional, irrational and value-laden modes of argument. Any renewal in our society is aimed, directly or indirectly, at strengthening the national idea and ideological immunity, and plays an important role in the intensive development and transformation of social life. For this reason, a thorough study of the philosophical heritage of the Oriental Renaissance philosophers is today an important basis for the development of philosophical education in our country.
2019-08-23T18:29:43.087Z
2019-07-30T00:00:00.000
{ "year": 2019, "sha1": "173d8e95619b4dbea7c949ff475d39748595ab49", "oa_license": null, "oa_url": "http://www.t-science.org/arxivDOI/2019/07-75/PDF/07-75-20.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5ea8d94dad5b5e5bd0615446e4650cb7788d0602", "s2fieldsofstudy": [ "Philosophy", "Education" ], "extfieldsofstudy": [ "Philosophy" ] }
5968766
pes2o/s2orc
v3-fos-license
Iterative Methods for Sparse Signal Reconstruction from Level Crossings This letter considers the problem of sparse signal reconstruction from the timing of its Level Crossings (LCs). We formulate the sparse Zero Crossing (ZC) reconstruction problem in terms of a single 1-bit Compressive Sensing (CS) model. We also extend the Smoothed L0 (SL0) sparse reconstruction algorithm to the 1-bit CS framework and propose the Binary SL0 (BSL0) algorithm for iterative reconstruction of the sparse signal from ZCs in cases where the number of sparse coefficients is not known to the reconstruction algorithm a priori. Similar to the ZC case, we propose a system of simultaneously constrained signed-CS problems to reconstruct a sparse signal from its Level Crossings (LCs) and modify both the Binary Iterative Hard Thresholding (BIHT) and BSL0 algorithms to solve this problem. Simulation results demonstrate superior performance of the proposed LC reconstruction techniques in comparison with the literature. I. INTRODUCTION UNIFORM sampling is a popular strategy in conventional Analog to Digital (A/D) converters. However, an alternative technique is Level Crossing (LC) sampling [1][2][3][4], which samples the input analog signal whenever its amplitude crosses any of a predefined set of reference levels. LC-based A/Ds represent each LC by encoding its quantized time instance along with an additional bit that represents the value of the level crossed at that time instant [3]. LC sampling generates signal-dependent non-uniform samples and benefits from certain appealing properties in comparison with the conventional uniform sampling technique. It reduces the number of samples by automatically adapting the sampling density to the local spectral properties of the signal [5,6]. Furthermore, LC-based A/Ds can be implemented asynchronously and without a global clock. This in turn leads to reduced power consumption, heating and electromagnetic interference [7,8].
A seminal work by Logan [9] showed that signals with octave-band Fourier spectra can be uniquely reconstructed from their zero crossings up to a scale factor. This is a sufficient but not necessary condition for LC signal reconstruction. Previous works on LC signal reconstruction have mostly relied on lowpass [10,11] or bandpass [9] signal assumptions, and there are few prior works that utilize sparsity [12,13]. Boufounos et al. [13] formulate the zero crossing reconstruction problem as minimization of a sparsity-inducing cost function on the unit sphere, and Sharma et al. [12] use the Basis Pursuit (BP) and Orthogonal Matching Pursuit (OMP) [14] techniques to reconstruct the signal from LC samples. Both [12,13] formulate the LC reconstruction problem in terms of a conventional Compressive Sensing (CS) [15] reconstruction model. Contributions: In this work, we utilize the emergent theory of 1-bit CS [16,17] to formulate the LC problem. We show how the LC problem can be addressed by a system of simultaneously constrained signed-CS problems and modify the Binary Iterative Hard Thresholding (BIHT) and Binary Smoothed L0 (BSL0) algorithms to solve this problem. For further reproduction of the results reported in this paper, MATLAB files are provided online at ee.sharif.edu/∼boloursaz. The rest of this paper is organized as follows. In Section II we formulate the LC problem in terms of 1-bit CS models. Section III presents the proposed BSL0 and the modified BIHT and BSL0 algorithms. Section IV provides the simulation results, and finally Section V concludes the paper. II. PROBLEM FORMULATION In this section, we formulate the problem of sparse signal reconstruction from level crossings and address the similarities and differences between this problem and a typical 1-bit CS problem. A. Zero Crossing (ZC) Reconstruction Suppose x(t) = Σ_{n=0}^{N} a_n cos(n ω_0 t) for t ∈ [0, d]. Also define the spectral support as S = {n | a_n ≠ 0}.
Now, the sparse signal assumption imposes that K = |S| << N. Also denote by x[m] = x(mT), m = 0, 1, ..., M − 1 the uniform samples taken from x(t) at rate 1/T << N ω_0/π, significantly below Nyquist, in which (M − 1)T = d. It is obvious that a ZC-based A/D can extract y(t) = sign(x(t)) from the ZC time instances and the initial sign of x(t). Hence, we have y[m] = sign(x[m]). Now, in vector notation we can write

y = sign(Φa),   (1)

in which Φ is the M × (N + 1) cosine dictionary with entries Φ_{m,n} = cos(n ω_0 mT), so that x = Φa. Note that in (1), we need to estimate the sparse coefficient vector a from the sign measurements y. Of course, reconstruction is only possible up to a scale factor. Hence, we need to add the norm constraint ||a||_2 = 1, which yields a typical 1-bit CS problem

min_a ||a||_0  s.t.  y = sign(Φa), ||a||_2 = 1,   (2)

that can be solved by the Matching Sign Pursuit (MSP) [18], Binary Iterative Hard Thresholding (BIHT) [17], 1-bit Bayesian Compressive Sensing [19] or any other 1-bit CS reconstruction algorithm. Once a is estimated, the sparse analog signal x(t) is recovered at infinite accuracy. B. Level Crossing (LC) Reconstruction Now consider the multi-level scenario in which the temporal instances of the signal crossings with a predefined set of reference levels are encoded and transmitted to the receiver. Let us denote the set of levels by L = {l_{−L/2}, ..., l_0, ..., l_{L/2}}. Similar to the single-level case (zero crossings), the temporal instances of the crossings provide the following signals:

y_i(t) = sign(x(t) − l_i),  l_i ∈ L.   (3)

Similarly, let us denote by x[m] = x(mT), m = 0, 1, ..., M − 1 the uniform samples taken from x(t) at rate 1/T << N ω_0/π, significantly below Nyquist. Now, in vector notation we can write

y_i = sign(Φa − l_i 1),  i = −L/2, ..., L/2,   (4)

in which the vectors x and a and the matrix Φ are the same as defined in Subsection II-A and the vectors y_{−L/2}, ..., y_0, ..., y_{L/2} contain the corresponding sign values.
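The measurement model y = sign(Φa) above can be sketched in a few lines of code. This is a minimal illustrative example with toy sizes and frequencies of our own choosing (N, M, K, w0, T are not the paper's simulation parameters):

```python
import math

# Sketch of the zero-crossing measurement model y = sign(Phi a).
# Toy sizes chosen for illustration, not taken from the paper.
N, M, K = 15, 12, 2          # harmonics, sub-Nyquist samples, sparsity
w0, T = 1.0, 0.35            # fundamental frequency and sampling period

# Cosine dictionary: Phi[m][n] = cos(n * w0 * m * T), so x = Phi a
Phi = [[math.cos(n * w0 * m * T) for n in range(N + 1)] for m in range(M)]

# A K-sparse coefficient vector a with support S = {3, 7}
a = [0.0] * (N + 1)
a[3], a[7] = 1.0, -0.5

def sign(v):
    return 1 if v >= 0 else -1

# Uniform sub-Nyquist samples x[m] and their signs y[m]
x = [sum(Phi[m][n] * a[n] for n in range(N + 1)) for m in range(M)]
y = [sign(xm) for xm in x]
```

The reconstruction task of (1)-(2) is then to recover the K nonzero entries of a (up to scale) from the ±1 vector y alone.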
Now, in order to solve the above system of signed-CS problems simultaneously, we define the vector y as

y = [y_{−L/2}^T, ..., y_0^T, ..., y_{L/2}^T]^T,   (5)

in which the matrix Φ_{(M(L+1))×(N+L+2)} is made by concatenation of the Φ matrices and the level vectors: the block row corresponding to level l_i is [Φ | l_i 1] with the level column placed in the i-th augmented position, so that each y_i = sign(Φa − l_i 1) is reproduced when the last L + 1 entries of the augmented unknown vector equal −1 (6). Hence, to estimate the sparse vector of coefficients a, we need to solve the constrained signed-CS problem

min_a ||a||_0  s.t.  y = sign(Φa), with the last L + 1 entries of a fixed to −1.   (7)

Note that wherever we replace the term "1-bit CS" with "signed-CS" throughout this paper (e.g., in referring to (7)), we are emphasizing the difference between that problem and a typical 1-bit CS problem regarding the scaling ambiguity. For example, problem (7) is well-posed and does not need the additional norm constraint. In Section III, we propose efficient algorithms to solve (7). III. THE PROPOSED ALGORITHMS In this section, we present our proposed algorithms. In Subsection III-A we present the Binary Smoothed L0 (BSL0) algorithm proposed for solving (2) in the case where the sparsity number K is not known for reconstruction. Subsequently, in Subsection III-B, we present our proposed algorithms for solving the sparse LC problem (7). A. The Binary Smoothed L0 (BSL0) Algorithm Some previously proposed 1-bit CS reconstruction algorithms (e.g., BIHT) need prior knowledge of the sparsity number K for reconstruction. However, K is usually not known to the reconstruction algorithm in the real-world scenario considered in this paper. To cope with this problem, we propose the Binary Smoothed L0 (BSL0) algorithm. Note that although the simulation results for BSL0 are provided for the ZC/LC scenario in this paper, the algorithm is also applicable to the general scenario of 1-bit CS. The basic SL0 algorithm was proposed in [20,21] for finding sparse solutions to under-determined systems of linear equations. The main idea of SL0 is to apply the Graduated Non-Convexity (GNC) technique [22] and approximate the discontinuous l_0 norm by a sequence of continuous functions, enabling the use of continuous minimization techniques.
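Before turning to the algorithms, the augmented-vector construction behind (4)-(7) can be made concrete with a small numerical check. This is a sketch under our own toy sizes and level values (not the paper's): it verifies that sign(Φa − l_i·1) coincides with the sign of the block row [Φ | l_i·1] applied to the augmented vector whose last entries are −1.

```python
import math

# Toy instance: M samples, N+1 harmonics, three reference levels.
M, N = 6, 4
w0, T = 1.0, 0.5
Phi = [[math.cos(n * w0 * m * T) for n in range(N + 1)] for m in range(M)]
a = [0.0, 0.8, 0.0, -0.3, 0.0]
levels = [-0.5, 0.0, 0.5]

def sign(v):
    return 1 if v >= 0 else -1

# Direct form (4): one sign vector per level, stacked as in (5).
y = []
for l in levels:
    y.extend(sign(sum(Phi[m][n] * a[n] for n in range(N + 1)) - l)
             for m in range(M))

# Augmented form: a_tilde = [a; -1, ..., -1] (one -1 per level), with
# each level value placed in the matching column of its block row.
a_tilde = a + [-1.0] * len(levels)
y_aug = []
for i, l in enumerate(levels):
    for m in range(M):
        row = Phi[m] + [l if j == i else 0.0 for j in range(len(levels))]
        y_aug.append(sign(sum(r * v for r, v in zip(row, a_tilde))))

assert y == y_aug  # the two formulations agree measurement by measurement
```

Pinning the trailing entries to −1 is what removes the scaling ambiguity noted for problem (7): a rescaled solution could no longer keep those entries at −1.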
In this work, we apply the same idea to find the solution to the 1-bit CS problem as stated in (2). To this end, we solve the following problem iteratively:

min_a F_σ(a) + λ J(a) + θ(||a||_2^2 − 1)^2,   (8)

in which J(a) = ||[Y(Φa)]_−||_1, Y = diag(y) and [.]_− denotes the negative function, i.e., ([a]_−)_i = [a_i]_− with [a_i]_− = a_i if a_i < 0 and 0 else. Also, we have lim_{σ→0+} F_σ(a) = ||a||_0. Note that the first term of the cost function (F_σ(a)) enforces sparsity, the second term (J(a)) enforces consistency of the solution with the set of sign measurements, and the third term ((||a||_2^2 − 1)^2) forces the final solution onto the unit sphere to avoid scaling ambiguity. The idea is to decrease σ along the iterations to better approximate the l_0-norm while increasing λ and θ to enforce the sign and norm constraints. As stated in [20], there exist several different choices for the l_0-norm approximation function F_σ(a), and in this research we assume F_σ(a) = Σ_{m=0}^{N} (1 − exp(−a_m^2/σ^2)). Hence, considering a set of fixed parameters (σ, λ, θ), the inner gradient descent algorithm uses the direction

g = ∇F_σ(a) + (λ/2) Φ^T (sign(Φa) − y) + 4θ(||a||_2^2 − 1)a.   (9)

Precisely speaking, (9) is in fact a sub-gradient of the cost function, because the second term ((λ/2) Φ^T (sign(Φa) − y)) is a sub-gradient of λJ(a), as proved in [17]. Algorithm 1 gives the formal presentation of the proposed BSL0 algorithm. B. The Sparse LC Problem In this subsection, we modify both the BIHT [17] and the BSL0 algorithms to solve the sparse LC problem (7). Note that the only difference between the sparse ZC model formulated in (2) and the sparse LC model (7) is the constraint on the sparse coefficient vector a. Also note that C = {a ∈ R^{N+L+2} | a_{N+2:N+L+2} = −1_{(L+1)×1}}, the set of all real vectors whose last L + 1 entries equal −1, is convex. Hence, to enforce this constraint, we can simply project the solution onto C at each iteration. As C is convex, this projection will not hamper convergence of the overall iterative algorithm.
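The sub-gradient step described around (8)-(9) can be sketched as follows. This is an illustrative re-implementation, not the authors' MATLAB code; the function name and all parameter values (sigma, lam, theta, mu) are our own assumptions.

```python
import math

def sign(v):
    return 1 if v >= 0 else -1

def bsl0_step(a, Phi, y, sigma, lam, theta, mu):
    """One BSL0-style sub-gradient descent step on the cost
    F_sigma(a) + lam*J(a) + theta*(||a||^2 - 1)^2, following (8)-(9)."""
    Np1, M = len(a), len(Phi)
    # gradient of F_sigma(a) = sum_m (1 - exp(-a_m^2 / sigma^2))
    g = [(2.0 * am / sigma**2) * math.exp(-am**2 / sigma**2) for am in a]
    # sub-gradient of lam*J(a): (lam/2) * Phi^T (sign(Phi a) - y)
    r = [sign(sum(Phi[m][n] * a[n] for n in range(Np1))) - y[m]
         for m in range(M)]
    for n in range(Np1):
        g[n] += (lam / 2.0) * sum(Phi[m][n] * r[m] for m in range(M))
    # gradient of theta * (||a||_2^2 - 1)^2
    nrm = sum(am * am for am in a) - 1.0
    for n in range(Np1):
        g[n] += 4.0 * theta * nrm * a[n]
    # descent step with step size mu
    return [a[n] - mu * g[n] for n in range(Np1)]
```

In an outer loop one would shrink sigma (better l0 approximation) while growing lam and theta (tighter sign and norm enforcement), as the text describes.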
For the scenarios in which K is known prior to reconstruction, the modified BIHT algorithm solves

min_a ||[Y(Φa)]_−||_1  s.t.  ||a_{1:N+1}||_0 ≤ K, a ∈ C.   (11)

To solve (11), we propose Algorithm 2, which is composed of a Gradient Descent (GD) step followed by projection onto both C and the K-sparse signal space. Algorithm 2 provides the stepwise presentation of the BIHT algorithm modified for LC reconstruction. IV. SIMULATION RESULTS In this section we demonstrate the efficient performance of the proposed ZC/LC reconstruction algorithms on random sparse signals generated according to the model presented in Subsection II-A and provide comparisons with previous works. B. LC Reconstruction Performance by Modified Signed-CS Considering the sparse LC problem addressed in Subsection II-B, Fig. 2 provides the final reconstruction SNR values achieved by the modified BIHT and the modified BSL0 algorithms for different numbers of reference levels L. Note that the signal and algorithm parameters are the same as in Subsection IV-A and the levels are placed uniformly in the dynamic range of the input signal. C. Comparison with the Literature for Sparse Octave-Band Signals As both prior works on sparse ZC/LC reconstruction [12,13] have considered octave-band signals for simulations, we also report the simulation results for the same scenario for the sake of comparison. To this end, we limit the harmonics to the interval n = 201, ..., 400 and plot the probability of successful recovery by (2) against the sparsity factor in Fig. 3. Following the literature, reconstruction SNR values > 20 dB are considered successful recovery in this simulation. Note that this figure compares the performance of the 1-bit CS approach to ZC reconstruction in this paper with the conventional CS approach taken by [12,13]. As observed in this figure, migrating to the 1-bit CS model improves the reconstruction performance for sparser signals, while the conventional CS approach (i.e., [12,13]) performs better as the sparsity factor increases.
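Returning to the modified BIHT of Section III-B: the per-iteration projection it describes (onto the convex set C, plus hard thresholding onto the K-sparse space) can be sketched as below. The function name and sizes are hypothetical; this illustrates only the projection, not the full Algorithm 2.

```python
def project_C_and_Ksparse(a_tilde, n_coeffs, K):
    """Project after a gradient step:
    (i) keep only the K largest-magnitude entries among the first
        n_coeffs coefficients (hard thresholding onto the K-sparse set);
    (ii) pin the trailing entries to -1 (projection onto the convex set C).
    """
    coeffs = list(a_tilde[:n_coeffs])
    keep = sorted(range(n_coeffs), key=lambda i: abs(coeffs[i]),
                  reverse=True)[:K]
    thresholded = [coeffs[i] if i in keep else 0.0 for i in range(n_coeffs)]
    return thresholded + [-1.0] * (len(a_tilde) - n_coeffs)

print(project_C_and_Ksparse([0.1, -2.0, 0.5, 3.0, 9.9, 9.9], 4, 2))
# -> [0.0, -2.0, 0.0, 3.0, -1.0, -1.0]
```

Because C is convex and affine in the trailing coordinates, step (ii) is an exact Euclidean projection, consistent with the convergence remark in Section III-B.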
Innovative “Green” Tribological Solutions for Clean Small Engines

Introduction

Since its invention in the last quarter of the nineteenth century and during all the twentieth century, two-stroke engines penetrated many industrial, automotive and handheld applications where engines with high specific power, simple design, light overall weight and low cost are required. Presently, two-stroke engines are commonly used in motorcycles, scooters, chainsaws, agricultural machinery, railway grinding machines, outboard applications, etc.

Usually, the moving parts of a two-stroke motor are lubricated either by using a mixture of oil with fuel or by pumping oil from a separate tank. Both designs use a total-loss lubrication method, with the oil being burnt in the combustion chamber. Therefore, the lubricating oil must meet specific requirements: it must have an optimal balance of light and heavy oil components to lubricate at high temperature, it must produce no deposits (carbon soot and other) on moving parts, and it should be ash-less. In addition, the oil should provide good protection of moving parts at high speed under deceleration of the engine with the throttle closed, when the engine usually suffers from oil starvation. Also, two-stroke engines produce more contaminants than four-stroke engines, due to oil burning in the combustion chamber. Therefore, it is very important to reduce these contaminants to meet ecological requirements.
The most challenging issue of the European technological strategy resides in the complete substitution of fossil-based fuels and lubricating oils with renewable, eco-friendly and high-performance materials. Esters and polyglycols were identified as alternative base oils because of their high biodegradability, low toxicity, low ash formation and absence of polymer components, in [1]. Synthetic esters are characterized by their polar structure, high wear resistance, good viscosity-temperature behaviour and miscibility with non-fossil fuels. Ester-based oils can be blended with various components like antifoam agents, oxidation inhibitors, pour-point depressants, antirust agents, detergents, anti-wear agents, friction reducers, viscosity index improvers, etc., to create environmentally friendly prototype engine oils and to meet the changing environmental requirements in low-sulphur fuels and other alternative fuels and their application to engine oils. Low metal additive content and clean-burn characteristics result in less engine fouling, with much reduced ring stick and lower levels of dirt build-up on ring grooves, skirts and under crowns. Owing to the presence of polar ester groups in the molecule, which have higher adhesion to metal surfaces, esters have much better lubricity than hydrocarbons. The performance of ester-based lubricating oils can be further improved by selecting a proper base oil and additive package.

Another important problem is related to the performance of the fuel injector system when biofuels are used. Diesel injection nozzles consist of a body (usually in Ni-Cr steel) and a needle valve (high-speed steel, HSS), fitted together with very strict tolerances. The design of the nozzle, i.e., the number of orifices, their diameters, positions and drilling angles, depends on the specific engine application. The current trend is to use multi-hole nozzles with very small holes with a diameter of only 0.10-0.14 mm in order to improve the fuel atomization and flow pattern.
Heat treatments are applied to the body and the needle to obtain the necessary hardness both on the surface and in the core of the parts and to face the following problems:

• fatigue failures at high-stress areas due to repeated pulses of very high injection pressures;

• thermal shocks.

Adequate finishing of the orifice surfaces is also very important to optimize the erosion resistance. The usage of new diesel blends, characterized by different physical and chemical properties as compared with traditional fuels, could lead to modifications both in the choice of materials and in the geometry, positioning or surface finishing of the orifices to ensure the correct spray pattern. This work describes the results of our recent studies aimed at solving the problems related to the introduction of new eco-friendly oils and lubricants.

Oil characterization

Three different synthetic ester base oils have been selected to formulate three prototype engine oils with the same additive composition. These oils are different mixtures of fully saturated polyglycol-ester and mono-ester types and are designated SEMO 4, SEMO 5 and SEMO 10. The same additive package has then been added to the three bases. After comparative characterization of these prototype oils and selection of the oil with the best tribological performance (SEMO 10), a new improved formulation was developed based on the selected lubricating oil, designated SEMO 36. In addition, a conventional mineral oil for two-stroke engines was used as the reference oil. The additive package composition of the reference oil is different, but it is ash-free, as are the other SEMO oils.

Oil viscosity was characterized according to the ASTM D-445-06 standard procedure in [3], and the viscosity index was determined using ASTM D-2270-04 in [4].
The deposit-forming tendency of the oils was characterized by the Coker test at 250 °C during 12 h. Some physical and rheological properties of the lubricating oils are shown in Table 1. Among the prototype lubricating oils, SEMO 10 has the lowest viscosity both at 40 and 100 °C, the highest flash point and the lowest deposit-forming tendency.

Unleaded petrol (EN228) and bioethanol E85 (a mixture of 85% ethanol with 15% gasoline) were selected to test the miscibility of the lubricating oils with standard and alternative fuels. For this purpose, two different lubricant/fuel ratios were used. Regarding miscibility method A (90% lubricant in fuel), SEMO 10 as well as SEMO 5 demonstrated good miscibility both with unleaded petrol and with E85. Compared to this, the results for the 2% mixtures according to method B differed. All tested lubricants proved to be perfectly miscible with EN228 fuel, whereas only SEMO 36 demonstrated to be fully miscible with E85. According to both miscibility methods, the reference oil was only miscible with EN228. SEMO 36, when compared to its original prototype SEMO 10, has a much higher viscosity. The flash point for this lubricant is lower than for SEMO 10 but still higher than 200 °C. Wettability of the surface of the cylinder liner by the lubricating oil is important for corrosion and wear protection of the piston rings and cylinder liner at start-up, when the temperature of the components is low. In this work, the wetting characteristic of the tested oils was determined using the sessile drop method. The resulting contact angles of the drops of various oils on the honed surface of the cylinder liner are shown in Table 2.
The same method could not be used to determine the wettability of the piston ring because of the small width of the ring. Therefore, the following procedure for qualitative comparison of the wettability of the piston ring by different oils in [12] was applied: 1 µl of oil was placed on the circular flat surface of the phosphated cast iron piston ring and then, after 30 s, the extension of the oil drop along this surface was measured. The contact angle for SEMO 36 oil on the honed cast iron was the highest among all the tested lubricating oils. The contact angles of SEMO 5 and SEMO 4 were very similar to each other and only slightly lower than for SEMO 36. SEMO 10 had the lowest contact angle and the largest drop spread of all the tested oils. The behaviour of the drop spread of the tested lubricating oils over the piston ring surface is similar to that of the contact angle, bearing in mind that large contact angle values correspond to small spread distances.

Biodegradability and toxicity of the lubricating oils were examined according to the recommendations of the Organisation for Economic Co-operation and Development (OECD) in [5]. Biodegradability of the lubricating oils was tested using the OECD 301F Manometric Respirometry Method, consisting of the measurement of oxygen uptake by a stirred solution of the test substance in a mineral medium inoculated with micro-organisms, in [6]. Toxicity of the lubricating oils was studied using the "Alga, Growth Inhibition Test" OECD 201 in [7] and the "Daphnia Magna" 24 h Acute Immobilisation Test OECD 202 in [8]. In the "Alga, Growth Inhibition Test", selected green algae were exposed to various concentrations of the test oils over several generations under defined conditions. The results of the biodegradability test are shown in Table 3.
As expected, all synthetic ester base oils successfully passed the biodegradability test, while the reference mineral oil was not biodegradable according to the standard procedure OECD 301. Biodegradation of SEMO 5 and SEMO 10 exceeded 70%. In toxicity tests both with Alga and with Daphnia Magna, the oils were classified as not harmful for aquatic organisms according to the standard procedures OECD 201 and 202 (see Table 4).

Tribological evaluation according to DIN 51834-2

Tribological evaluation of the lubricating oils was done using a ball-on-disk configuration with reciprocating motion according to the standard procedure DIN 51834-2 in [9]. Ball and disk were made of 100Cr6 steel. The ball, 10 mm in diameter, performed reciprocating motion with a stroke of 1 mm and a friction frequency of 50 Hz. The normal load was 50 N during a short run-in period of 45 s and 300 N during the 60 min test. The ball and the disk were immersed in the lubricating oil, whose temperature during the test was held constant at 50 °C. The friction force was measured as a function of time. The friction coefficient was calculated as the ratio of the tangential force to the normal force.

After test completion, the diameter of the wear scar on the ball was measured using an optical microscope and, from these data, the volume wear of the ball was calculated for each lubricating oil tested.

The evolution of the friction coefficient in the friction evaluation tests is shown in Figure 1. The oils with low additive content (SEMO 4, SEMO 5 and SEMO 10) showed an interval of frictional instability after the run-in period. In the instability period, which lasted from 400 up to 800 s, there are some sharp peaks indicating surface damage and seizure, probably due to micro-welding. The reference lubricating oil had a less pronounced instability period without sharp peaks, while SEMO 36 did not present any instability. Final values of the friction coefficient after 60 min and the diameters of the wear scar on the ball are shown in Table 5.

Table 5.
Friction coefficient and wear of the ball in the tribological evaluation test of oils (DIN 51834-2) in [12].

The volume of worn material of the ball was estimated geometrically on the basis of the diameter of the wear scar using the following equation (1), where R = 5 mm is the radius of the ball and a is the radius of the circular wear scar.

The wear specific energy, E_w, that is, the ratio of the energy E dissipated during friction to the unit mass of worn material Δm, is an important characteristic which shows the ability of a material to resist wearing. This is a complex parameter taking into account both friction, which characterizes the energy supply to the material in the friction zone, and wear intensity. This parameter is considered a very useful tool to compare standard tribological evaluation and simulated tests in [10]. The wear specific energy was determined using the following equation in [12] (2), where v_m is the mean sliding velocity obtained with a reciprocating frequency of 50 Hz and a 1 mm stroke, F_N is the normal load, μ_fr is the friction coefficient, and t_i and t_f are respectively the initial and final time points of the friction test interval.

In this study, only ball wear was determined, as specified by DIN 51834-2. So, the absolute value of the wear specific energy could not be determined, since wear of the disk was not measured. However, by using the ball mass loss in the denominator of eq. (2), an upper-bound estimate of the wear specific energy can be obtained. This upper bound can be used for qualitative comparison of the anti-wear properties of the lubricating oils under constant friction conditions. These values, determined using eq. (2), are shown in Figure 2. SEMO 36 and the reference oil have much higher values of the wear specific energy than the other oils. Therefore, these lubricating oils improve the wear protection of the contacting surfaces, since much larger energy must be dissipated to produce the same wear as compared to the SEMO 4, SEMO 5 and SEMO 10 lubricants.

Figure 2.
Friction coefficient and wear specific energy in [12].

Piston ring/cylinder liner simulation

The tribological simulation was performed using a cast iron phosphated piston ring and a cast iron cylinder liner in a reciprocating motion configuration. The samples for the tests were cut from real engine parts (a Minsel M165 two-stroke engine manufactured by Abamotor Energía), keeping the original curved surfaces and surface finishing. The conformal contact between the piston ring and the cylinder counterpart was reproduced by placing a piston ring on a suitable frame, A, and fixing it by means of a clamp, B (Figure 3). Wear of the components was determined by weighing and geometry measurements.

A - frame for fixing the piston ring segment, B - fixing clamp, C - base with oil bath for fixing the cylinder liner sample.

The mass change of the piston ring segments and cylinder liner sample was determined by weighing the components before and after the friction tests. Since the mass change can be due to two competing processes, (i) wear-out and (ii) deposit formation from the oil at elevated temperature, estimation of wear by weighing can give erroneous results. Indeed, after the tests the surface colour became yellowish and remained so after solvent cleaning, indicating some sparingly soluble deposits formed on the surface due to a chemical reaction. Therefore, in addition to the determination of the mass change, the worn volume was calculated from the surface geometry. Surface morphology of the friction zone was studied using white-light confocal microscopy at three different zones along the wear track on the cylinder liner sample. The acquired 3D surface images were 0.5 mm wide in the direction of friction, and each image contained 138 cross-section profiles of the wear track, yielding in total 414 profiles for each sample. Firstly, the cross-section profiles were averaged for each sample and then among the different samples tested with the same lubricating oil. Worn volume of the samples of cylinder liner
was calculated as the product of the mean cross-section area of the groove and the total length of the groove. The cross-section area was determined by numerical integration of the cross-section profiles, and then the worn mass was calculated from the worn volume using the density of cast iron.

Surface chemical composition of the friction zone of the cylinder liner samples was characterized using Energy Dispersive X-Ray Spectroscopy (EDS).

The evolution of the friction coefficient in time during friction between the piston ring segment and a piece of the cylinder liner is shown in Figure 4. It is possible to highlight the increase of the coefficient of friction μ_fr for lubricants SEMO 4 and SEMO 5, overtaking the constant value reached by the reference lubricant. In fact, for SEMO 4 and SEMO 5, the friction coefficient gradually rose during the experiment (90 min) and did not stabilize. The growth was almost linear in time. The initial friction coefficient was about 0.2 and the final one about 0.33 in both cases. SEMO 10 and SEMO 36 showed different behaviour. The initial values were 0.2 and 0.14 for SEMO 10 and SEMO 36, respectively. At the beginning, after a run-in period, the friction coefficient increased and reached a maximum. For SEMO 36 the maximum was reached usually between 100 and 200 s from the beginning of the test, while for SEMO 10 the period of increase was longer and the maximum was reached after 700 to 1700 s from the beginning of the test. After reaching the maximum, the friction coefficient decreased slowly and stabilized at 0.14 and 0.11 for SEMO 10 and SEMO 36, respectively. The friction coefficient of lubricant SEMO 10 showed a slow decline until reaching a constant value lower than the reference one. The friction coefficient for the improved lubricant SEMO 36 levelled out rapidly at a very low value and showed less scatter, probably due to some sort of surface deposition on the contact surfaces.
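For concreteness, the ball-wear volume of eq. (1) and the wear-specific-energy upper bound of eq. (2) can be sketched as below. Since eq. (1) is not reproduced in the text, the standard spherical-cap relation is assumed here, and the reciprocating mean sliding velocity v_m = 2 × stroke × frequency is likewise an assumption:

```python
import math

def wear_volume(a_mm, R_mm=5.0):
    """Spherical-cap estimate of ball wear volume (mm^3) from the wear-scar
    radius a. The chapter's eq. (1) is not shown, so the standard cap
    formula V = pi*h^2*(3R - h)/3 is assumed, consistent with the stated
    ball radius R = 5 mm."""
    h = R_mm - math.sqrt(R_mm**2 - a_mm**2)          # cap height
    return math.pi * h**2 * (3.0 * R_mm - h) / 3.0

def wear_specific_energy(mu_fr, F_N, t_i, t_f, delta_m,
                         stroke_mm=1.0, freq_hz=50.0):
    """Upper-bound wear specific energy per eq. (2):
    E_w = v_m * F_N * mu_fr * (t_f - t_i) / delta_m,
    with the mean reciprocating sliding velocity v_m = 2 * stroke * frequency
    (an assumption: two strokes per cycle)."""
    v_m = 2.0 * (stroke_mm / 1000.0) * freq_hz       # m/s
    E = v_m * F_N * mu_fr * (t_f - t_i)              # dissipated energy, J
    return E / delta_m                                # energy per unit worn mass
```

With the DIN 51834-2 parameters (1 mm stroke, 50 Hz, 300 N, 60 min), a constant friction coefficient of 0.1 and a ball mass loss of 1 mg would give an upper bound of about 1.08 × 10⁷ J/g, i.e., about 0.011 GJ/g, illustrating the order of magnitude of the values plotted in Figure 2.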
The averaged cross-section profiles of each liner sample tested are reported in Figure 5. Different magnitude scales are used for better visualization of the mean contact surface profile. It is possible to notice the very good performance of the lubricant SEMO 10 and its improvement in the lubricant SEMO 36. Samples tested using SEMO 4 and SEMO 5 had deep grooves with a maximum depth of 22 to 25 µm. The samples tested using the reference oil and SEMO 10 had shallower grooves with a maximum depth of 4 to 5 µm. The surface of the samples tested using SEMO 36 oil had some thin scratches in the direction of friction, while grooves had not formed.

Figure 5. Average cross-section profile of the friction zone of cylinder liner samples tested using different lubricants in [12].

Tribology - Fundamentals and Advancements

Figure 6 shows images of the friction zone of the piston ring segments after friction simulation tests with different lubricating oils. Wear and damage of the surface as a function of the oil used were similar to those on the cylinder liner. In tests with SEMO 4 and SEMO 5, the material in the friction zone was heavily damaged. The wear can be classified as being of the adhesive type, with intensive plastic deformation and edging. When the reference oil and SEMO 10 oil were used in the tests, the damage of the material was less pronounced than for SEMO 4 and SEMO 5, but the wear in all cases was of the adhesive type. Only small damage was observed on the piston ring segments when using SEMO 36. In this case, only the summits of the circular grooves of the piston ring presented some wear and deformation. From the point of view of hydrodynamic lubrication these results may seem surprising, since, with the same additive composition, a lower wear rate occurred for the thinner oil (SEMO 10 in our case) than for the more viscous oils (SEMO 4 and SEMO 5). Therefore, these results lead to the following conclusions: 1) the lubrication regime should be of a boundary type, and 2) surface protection against
wear for the SEMO 10 and SEMO 36 oils seems to result from the formation of a surface layer through adsorption of oil components or tribochemical reactions between the oil components and the base material. Results of the mass change measurements of the components are shown in Table 6. The worn mass calculated from the worn volume is plotted vs. the measured mass change in Figure 7 (dots). The experimental data are fitted by a linear function with two adjusted parameters, slope and intercept (dashed line in Figure 7). The solid line is a linear fit with a fixed slope of 1 and an adjusted intercept. The coefficients of determination for these linear regressions are 0.983 and 0.949, respectively, indicating a statistically significant linear relationship between the mass change and the worn mass determined from the geometry of the groove. Therefore, deposit formation does not have much influence on the mass change, and the latter can be used as a measure of the components' wear in these tests. The upper bound of the wear specific energy was determined in accordance with eq. (2), using the cylinder liner mass change in the denominator. The final friction coefficient and wear specific energy are shown in Figure 8. SEMO 36 oil showed the best antifriction and wear-resistance characteristics among all the tested lubricants. The friction coefficient was almost half of that for the reference oil, while the specific wear energy was 7.8 times higher than for the reference oil. In comparison with the ball-on-disk tests, the wear specific energy for the SEMO 36 lubricant was much lower in the tribological simulation test; however, the oil temperature in these two tests was different. When the ball-on-disk evaluation tests were performed at the same temperature as in the simulation test (200 °C), the value of the wear specific energy was similar to that in the simulation test: 0.14 GJ/g in the ball-on-disk test at 200 °C vs.
0.18 GJ/g in the piston ring/cylinder liner simulation test. Although these values are only upper-bound estimates of the real values, they are close to one another. According to the structural-energetic approach in [10], this means that the dominating wear mechanism in both cases is the same. Then, a significant decrease in the wear specific energy from 3.97 to 0.14 GJ/g with a temperature increase from 50 to 200 °C implies a change in the dominating wear mechanism at higher temperature. It can be stated that, under the applied experimental conditions, the chemical compositions of the base oil and the additives had a greater influence on the tribological performance of the lubricants than their rheological properties.

Surface characterization

Surface chemical composition of the friction zone of the cylinder liner samples was characterized using Energy Dispersive X-Ray Spectroscopy (EDS). Table 7 shows the surface chemical composition for three different surfaces:

• the friction zone of the cylinder tested using the SEMO 36 lubricant,

• the untouched surface of the same cylinder, and

• a reference surface (a sample not immersed in the oil).

Silicon and manganese were alloying elements of the base material and did not show important variations in their concentration, whereas the most important variation was in the carbon and oxygen content. There was no significant difference for the other elements, since the oils had no metal-containing additives. Figure 9 shows the surface concentration of four elements relative to iron. After the test, during which a cylinder was immersed in the SEMO 36 oil and heated at 200 °C, the carbon and oxygen concentrations on the untouched surfaces were slightly higher than on the reference sample, i.e., the sample not immersed in the oil. However, the carbon and oxygen concentrations increased drastically on the surface of the friction zone, where one in every four atoms was carbon. Also, in contrast to the untouched surface and the reference surface, on the surface of the friction zone the carbon concentration was higher than the oxygen one. One can infer
from these data that friction induced tribochemical reactions between the oil components and the base material to form a surface layer enriched with carbon and oxygen. This surface layer, or sliding lacquer, may protect the mating surfaces from adhesion and/or damage, yielding lower friction and wear, in [10].

Figure 9. Surface concentration of elements relative to iron in [12].

Experimental evaluation in real two-stroke engines (Minsel M165)

After the previous simulation tribological tests, the performance of the oils was evaluated in real two-stroke engines (Minsel M165) with a swept volume of 158 cm³, a stroke of 54 mm, a compression ratio of 7.1:1, a power (ISO 1585) of 3.53/4.8 kW/HP, a maximum torque of 120 Nm and a 4500 rpm rotation speed. Scuffing tests were performed using various lubricating oil-petrol mixtures in order to evaluate the lubricating performance of the lubricants under extreme load conditions. The test conditions applied are shown in Table 8, and the tested oil-fuel compositions are shown in Table 9. Figure 10 shows photographs of the engine components after the scuffing tests in which the reference mineral oil was used in a mixture with pure petrol and with bioethanol. An increase in the bioethanol content of the fuel led to a decrease in carbon soot deposition on the engine cylinder and piston. Also, when bioethanol was used, the surface was less damaged under extreme working conditions. Figure 11 shows photographs of the engine components after scuffing tests using a SEMO 10-petrol mixture. Some seizure between the compression piston ring and the cylinder was observed when using a mixture of SEMO 10 with petrol. Several vertical abrasion marks formed in the exhaust zone of the cylinder, where the temperature was higher. However, the piston and cylinder were quite clean, with only some carbon soot deposits in the exhaust zone. The state of the cylinder head was quite healthy and clean in the intake zone, and the carbon residues were considered normal.
Figure 12 shows pictures of the engine components after the scuffing test using SEMO 36 lubricating oil with petrol and bioethanol fuels. When using a mixture of SEMO 36 with bioethanol E85 or petrol, no scuffing or seizure was observed. Only light scratches were found on the cylinder surface, which were more pronounced when using petrol. In this case, carbon soot deposits formed intensively on the top part of the piston. The piston and cylinder were very clean when using bioethanol.

In addition, gaseous emissions from the engine were analyzed for various fuel-oil mixtures with different proportions of bioethanol to petrol: 20%, 30% and 85%. The gas emissions were measured using Directive CE 2002/88, Portable, SH3 modality as reference limits.

The differences in power and consumption were negligible when using bioethanol E10 and E20. When compared with petrol, the NOx emissions showed an increasing trend, and the emissions of CO and CH diminished in tests with bioethanol and the reference oil. When using E85, the reference mineral oil was not miscible, but the newly developed oil SEMO 36 was totally miscible. When using bioethanol E85, a considerable reduction in engine power was observed, yielding values 13% to 22% lower than in the tests with petrol. At the same time, fuel consumption increased slightly, between 7% and 20%, and gaseous emissions were considerably reduced (see Table 10). When using SEMO 36, the reduction in NOx emission was the most significant as compared with the other gases, probably due to the lower temperature generated.
Life cycle

The lifecycle analysis for a two-stroke engine fed by petrol and E85 was carried out using the model M 165 Minsel engine running in a tiller during 1000 h, whose characteristics are shown in Table 11. Two fuel + oil pairs, named "Cleanengine systems", were compared with the conventional system for the same engine working in the same application. In the alternative Cleanengine system I, the engine was fed by a mixture of bioethanol E20 and mineral oil. In the alternative Cleanengine system II, the engine was fed by bioethanol and the newly developed, advanced and biodegradable lubricating oil SEMO 36. The fuel and oil consumption for the conventional and the two alternative systems is shown in Table 12. The Eco-indicator 99 methodology was used as the impact assessment method. The components of the environmental impact are shown in Figure 13 a), while the total environmental impact is shown in Figure 13 b). Almost all components of the environmental impact, as well as the total environmental impact, were higher for the fossil fuel. However, climate change was more affected by the renewable system.

The global environmental impact evaluated by lifecycle assessment tools for the Cleanengine systems I and II using bioethanol was lower than for the reference system using petrol. The comparison between the two alternative systems, Cleanengine I and Cleanengine II, showed that the latter had a slightly higher environmental impact due to higher fuel and lubricant consumption, which can be related to the lower calorific value of ethanol compared to petrol. While the reduction of the environmental impact is attributed to the reduction in emissions, the use of a biodegradable, nontoxic lubricant will further reduce the environmental impact of the Cleanengine II system.
Figure 13. Results of the life-cycle and environmental impact analysis for the conventional and two alternative systems: a) components of the environmental impact, b) total environmental impact in [12].

Nozzles for future engines

Compared to conventional liquid hydrocarbon fuels, biofuels exhibit considerable differences in their physical properties, which significantly influence the injector flow as well as the primary and secondary spray break-up processes. As a consequence, the spray mixture formation of biofuels is considered to be largely different from that of conventional fuels under engine operating conditions, with severe consequences on the combustion and emission characteristics. Hence, injection and combustion system optimization, as well as optimization of the injector configuration (number of nozzle holes, diameter, spray targeting, etc.), for biofuels requires detailed knowledge of how the fuel properties influence the injector flow and spray atomization characteristics. Optimization of the nozzle materials and design is an important task which will open new markets and enlarge the number of potential customers for eco-friendly applications.

Tribological evaluation

Different metal-doped DLC coatings were developed by the Physical Vapour Deposition (PVD) method. Friction and wear tests were carried out using an SRV tribometer with a cylinder-on-disc configuration under lubricated conditions. The coatings were deposited on steel cylinders and disks. The cylinder, 15 mm in diameter, performed reciprocating motion with a stroke of 2 mm and a friction frequency of 50 Hz. The normal load was 50 N during a short run-in period of 30 s and 200 N during the 60 min test. The cylinder and the disk were immersed in fluids whose temperature during the test was held constant at 25 °C. Both the Cr- and Ti-DLC coatings had good friction and wear behaviour, and they could be a good alternative to improve the tribological properties of the actual uncoated nozzles.
Corrosion characterization

Corrosion resistance of different materials and coatings used for nozzle fabrication (Cr- and Ti-DLC) was characterized using electrochemical impedance spectroscopy and potentiodynamic polarization techniques in order to determine the kinetic parameters and the corrosion mechanisms of these materials in NaCl 0.5 M or K2SO4 0.2 M in [12]. The base nozzle material, uncoated X82WMo steel, was also characterized under corrosion conditions and compared with DLC-coated samples of the same material. The electrolyte used in these tests was K2SO4 0.2 M. The Cr-DLC coating offered excellent corrosion protection. The coating did not exhibit any pores or defects, effectively protecting the substrate during immersion.

The open-circuit potential (OCP) was measured during 2200 s in order to analyze the samples' tendency with exposure time. After that, electrochemical impedance spectroscopy was performed in a frequency range from 10 kHz to 10 mHz. Once the impedance measurements were finished, a potentiodynamic potential sweep was applied from OCP − 0.2 V to OCP + 0.6 V at a scan rate of 0.5 mV/s.
The coated nozzles had a more positive potential than the reference ones. For all surfaces, the OCP was stable after the first 2200 s of immersion. The difference between the three nozzles regarding the impedance results was very notable. The Cr- and Ti-DLC coated samples had semicircular Nyquist diagrams, implying that the electrolyte did not reach the substrate during immersion in the solution. The coating acted as an effective protective barrier. The uncoated nozzle had lower corrosion resistance; two time constants could be clearly distinguished from two maxima in the Nyquist plots. Table 13 shows the parameters obtained from equivalent-circuit simulation of the experimental data, and Figure 16 shows the equivalent circuits used in the simulation process. Polarization curves for the coated nozzles are shown in Figure 17. The Cr-DLC coating had passive behaviour and a low corrosion current of the order of 10⁻⁹ A for potentials near the OCP. The Ti-DLC coating also had passive behaviour in a wide zone of the anodic region. During the engine test the engine worked for 50 hours at full load (3000 rpm). Biodiesel B30, a mixture of FAME (100% biodiesel) with diesel B at a rate of 30%, was used as fuel.

Nozzle characterization after test in the engine

Scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDS) were used for characterization of the nozzle geometry after the engine tests. The Cr-DLC coating had better behaviour than the Ti-DLC one. The microanalysis showed that for all the coatings the deposited layer on the needle persisted after the test, with the exception of the tip, where the Ti-DLC layer had detached. Additionally, the spray-hole geometries of the nozzle body were analysed after an endurance test with two different fluids: reference standard fuel and 30% biodiesel.
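The semicircular Nyquist response reported for the coated samples is the signature of a simple Randles-type equivalent circuit (a solution resistance in series with a charge-transfer resistance parallel to a double-layer capacitance). The sketch below illustrates this; the circuit topology and element values are purely illustrative, not the fitted values of Table 13:

```python
import numpy as np

def randles_impedance(freqs_hz, R_s=50.0, R_ct=1e5, C_dl=1e-6):
    """Complex impedance of a Randles-type equivalent circuit:
    Z(w) = R_s + R_ct / (1 + j*w*R_ct*C_dl).
    All element values here are hypothetical placeholders."""
    w = 2.0 * np.pi * np.asarray(freqs_hz, dtype=float)
    return R_s + R_ct / (1.0 + 1j * w * R_ct * C_dl)
```

Sweeping the frequency from 10 kHz down to 10 mHz, as in the procedure above, and plotting −Im(Z) against Re(Z) traces the single semicircle observed for the barrier-type coatings; a second time constant, as seen for the uncoated nozzle, would add a second arc.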
Figure 19 shows scanning electron microscope (SEM) images of the nozzle body tip before the engine test (the real part and its corresponding silicone model for analysis of the internal characteristics of the orifices), whereas Figures 20 and 21 show the nozzles after the tests with standard diesel fuel and B30 fuel, respectively. Though large quantities of carbonaceous deposits could be observed on the free surfaces for both fuels, no deposits were found on the internal surfaces of the spray holes. Finally, the nozzle deposits were analyzed by thermal gravimetric analysis (TGA), which showed no substantial difference in deposit composition between the nozzles operated with standard diesel and with the B30 blend.

Conclusions

The fully formulated prototype lubricants based on synthetic esters had low toxicity for aquatic organisms (algae and Daphnia magna) and high biodegradability evaluated by the manometric respirometry method.

Among the three developed prototype lubricating oils, SEMO 10 had the best tribological performance, comparable with that of the reference mineral oil. Further improvement of the tribological properties of this lubricating oil was achieved by additive re-formulation. The resulting lubricant, SEMO 36, exceeded the reference mineral oil in tribological performance.

Our findings indicated that, in addition to the rheological properties of the lubricating oil, deposit build-up was an important factor controlling the tribological performance of the oil, both in simulation experiments and in real two-stroke engines. Two kinds of deposits, carbon soot and a transparent sliding lacquer, were observed on the engine components after the tests. Build-up of the transparent sliding lacquer was especially important in the case of SEMO 36 oil, and it was associated with a considerable reduction in both wear rate and friction coefficient. For SEMO 36, surface chemical analysis of the friction zone showed important changes in surface chemical composition, marked especially by an increase in carbon and oxygen content.
It is evident that the formation of the sliding deposits stemmed from tribochemical reactions between the oil components and the base materials (cast iron and steel). The chemical state of the carbon and oxygen atoms on the surface of the friction zone should be investigated further for a better understanding of these mechanisms.

Tests in real two-stroke engines were performed using mixtures of the developed lubricant with petrol or bioethanol. In both cases, no seizure between the piston ring and the cylinder liner was observed. When using bioethanol, the engine components were clean, without significant carbon soot deposits.

Engine power slightly decreased and fuel consumption slightly increased, on a volumetric basis, when bioethanol E85 was blended with the newly developed lubricating oil SEMO 36. However, these results might be related to the lower calorific value of ethanol compared with petrol. Besides, the new lubricating oil improved scuffing resistance in combination with miscible lubricants and significantly reduced the environmental impact. In addition to low toxicity and high biodegradability, emissions of CO, NOx and hydrocarbons from engines lubricated with the newly developed lubricants were lower than with traditional mineral oil and well below the limits established for portable applications.

In conclusion, a new generation of lubricating oils for two-stroke engines has been developed, combining low friction, good protection against wear and scuffing, no ash residue, and low carbon soot or other deposit formation. These lubricating oils are compatible with bioethanol E85.

Application of the Cr DLC coating on injection nozzles significantly increased the corrosion resistance and improved behaviour in the engine test. Though the Ti DLC coating also improved the substrate corrosion resistance, its performance in the engine test was worse than that of the Cr DLC coating. Deposit chemical composition and nozzle performance did not vary significantly in the endurance tests when standard diesel was substituted by the B30 blend.
Figure 1. Evolution of friction coefficient over time during tribological evaluation tests of the following oils: a) SEMO 4, b) SEMO 5, c) reference oil, d) SEMO 10, e) SEMO 36. Inset in graph e) shows the initial part of the plot together with the curve of the normal load [12].

Figure 3. Experimental set-up for piston ring/cylinder liner simulation. The piston ring segments performed a reciprocating motion with a stroke of 1 mm at a friction frequency of 40 Hz. Normal load was 50 N during a short 45 s run-in period and 300 N during the 90 min test. During the test, the piston ring segment and the cylinder liner sample were immersed in the oil, whose temperature was kept constant at 200 °C.

Figure 4. Evolution of friction coefficient over time during the piston ring/cylinder liner simulation test: a) SEMO 4, b) SEMO 5, c) reference oil, d) SEMO 10, e) SEMO 36. Inset in graph e) shows the initial part of the plot together with the curve of the normal load [12].

Figure 6. Optical images of the friction zone of the piston ring segments after friction simulation tests using different lubricants. The scale of each image is the same and is shown by a scale bar [12].

Figure 7. Mass wear determined from the geometry of the groove vs. mass change of the cylinder liner samples. The dashed line is a linear regression of experimental data with two adjusted parameters: slope and intercept. The solid line is a linear regression with a fixed slope of 1 [12].

Figure 8. Friction coefficient and wear specific energy in friction simulation tests [12].

Figure 10. Macro images of two-stroke engine components after scuffing test using a mixture of mineral oil with petrol (a, d), bioethanol E10 (b, e) and bioethanol E20 (c, f) [12].

Figure 11. Macro images of two-stroke engine components after scuffing test using a mixture of SEMO 10 lubricating oil with petrol: a) piston, b) cylinder, c) exhaust side, d) intake [12].
Table 12. Parameters of the conventional and alternative systems used in the life-cycle analysis.

Surface morphology of the friction zone was studied using white-light confocal microscopy. The averaged cross-section profiles for each tested sample are shown in Figure 14. The very good performance of the coatings is apparent: the uncoated samples had deeper grooves, with a maximum depth of 5.45 µm. Cr DLC tested against AGIP fuel performed better than Ti DLC. Two different magnitude scales, Z, are used for better visualization of the mean contact surface profile: from 5 to -5 µm for the coated discs (Cr DLC and Ti DLC lubricated with AGIP ref and B50) and from 16 to -16 µm for the uncoated reference samples.

Figure 14. Average cross-section profile of the friction zone of disc samples (uncoated reference, Cr and Ti DLC) tested using different fuels, AGIP and B50. Different magnitude scales are used for better visualization of the mean contact surface profile.

Figure 15. Nyquist diagrams. Impedance data of the coated and uncoated nozzles in K2SO4.

Figure 16. Equivalent circuits used for the experimental data simulation: circuit A) for the Cr DLC nozzle; circuit B) for the uncoated nozzle; circuit C) for the Ti DLC coating.

Figure 17. Polarization curves of coated and uncoated nozzles immersed in K2SO4.

Figure 18. The engine on the test bench and the tested nozzles installed on the engine.

Table 2. Contact angle and oil spread distance.

Table 7. Surface chemical composition (at.%) of the cylinder liner samples tested with SEMO 36 lubricating oil; the reference cylinder was neither immersed in nor heated in lubricating oil.

Innovative "Green" Tribological Solutions for Clean Small Engines, http://dx.doi.org/10.5772/55836

Table 8. Experimental conditions for scuffing tests of real two-stroke engines.

Table 9. Oil-petrol combinations tested in a real two-stroke engine scuffing test.

Table 10.
Emission of gases from the two-stroke engine tested with different lubricating oil-petrol combinations [12].

Table 11. Characteristics of the engine used in the life-cycle analysis.

Table 13. Equivalent circuit parameters of the coated and uncoated nozzles.

Tribology - Fundamentals and Advancements

Cr DLC and Ti DLC notably improved the substrate corrosion behaviour, reducing its corrosion current by several orders of magnitude (see Table 14).

Table 14. Corrosion current of coated and uncoated nozzles calculated using the Tafel approach.

The injectors were tested in the Minsel M-430 engine manufactured by Abamotor Energía, SL. The parameters of the engine and the test conditions are shown in Table 15.

Table 15. Characteristics of the engine used in the engine tests to evaluate the different alternative nozzles (Cr DLC and Ti DLC).
Prevalence and incidence of post-traumatic stress disorder and symptoms in people with chronic somatic diseases: A systematic review and meta-analysis

Introduction

Comprehensive evidence on the prevalence and incidence of post-traumatic stress disorder (PTSD) and post-traumatic stress symptoms (PTSS) in people with chronic somatic diseases (CD) is lacking.

Objective

To systematically and meta-analytically examine the prevalence and incidence of PTSD and PTSS in people with CD compared with people without CD.

Methods

MEDLINE, Embase, and PsycINFO were searched from inception (1946) to June 2020. Studies reporting point, 12-month, or lifetime prevalence, or 12-month incidence, of PTSD and PTSS in people with CD were selected and reviewed in accordance with PRISMA guidelines by two independent reviewers. Risk of bias was assessed with a combination of the Newcastle-Ottawa Scale and the recommendations of the Cochrane Collaboration for non-comparative studies. Pooled estimates were calculated using random-effects meta-analyses. Between-study heterogeneity was assessed using the I2 statistic.

Results

Data were extracted from studies reporting on point prevalence (k = 60; n = 21,213), 12-month prevalence (k = 3; n = 913), and lifetime prevalence (k = 6; n = 826). 12-month incidence estimates were not available. The pooled estimate for the point prevalence of PTSD (k = 41) across CD was 12.7% (95% CI, 8.6 to 18.4%) and 19.6% for PTSS (13.2 to 28.1%; k = 24). Individuals with cerebrovascular disorders (k = 4) showed the highest pooled point prevalence of PTSD (23.6%, 95% CI, 16.8 to 32.0%), those with cardiovascular diseases the lowest (6.6%, 1.9 to 20.9%; k = 5). The pooled 12-month prevalence of PTSD (k = 3) was 8.8% (95% CI, 5.5 to 13.5%) and the lifetime prevalence (k = 6) was 12.1% (7.6 to 18.5%). Pooled estimates of PTSD prevalence in people with compared to those without CD showed an odds ratio of 9.96 (95% CI, 2.55 to 38.94; k = 5).
Conclusion

Post-traumatic stress disorder and PTSS are common and substantially more frequent in people with CD than in those without. Earlier detection and treatment of this comorbidity might improve mental and physical health, reduce the incidence of further diseases, and reduce mortality.

Clinical trial registration

https://osf.io/9xvgz, identifier 9xvgz.

Introduction

Chronic somatic diseases (CD) such as cardiovascular diseases, cancers, respiratory diseases, and diabetes account for 71% of all deaths worldwide; this is equivalent to 41 million deaths per year (1). More than 40% of people with CD also have a mental disorder; this is twice the 12-month prevalence of people without CD (2). Most studies refer to anxiety, depression, and somatoform disorders (2), while knowledge of the prevalence and incidence of other mental disorders such as post-traumatic stress disorder (PTSD) is limited. PTSD is a trauma- and stressor-related disorder caused by a traumatic event, such as threatened death, serious injury [including a life-threatening condition such as CD (3)], or sexual violence (4). From the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) to DSM-5, the requirements for the qualifying traumatic event were restricted: a threat to physical integrity, and thus a CD diagnosis as such, is no longer considered a criterion for a traumatic event that can serve as the basis for a PTSD diagnosis (3,4) unless the CD is associated with increased mortality (4). Increased mortality has been demonstrated for CD such as neurological conditions (5), musculoskeletal disorders (6), heart disease, cerebrovascular disease and cancer (7), among others. Thus, a CD, which may be experienced as an aversive event given the experience of diagnosis and medical treatment, may be a possible trigger for elevated post-traumatic stress symptoms (PTSS) and, given increased mortality, may also be a possible trigger for PTSD (4).
On the other hand, it has already been shown that PTSD can be a risk factor for developing CD (8)(9)(10) and a negative prognostic factor for disease outcomes (11)(12)(13) and treatment adherence (12,14). Even PTSS show a significant negative impact on CD severity, treatment adherence, health problems, and functional impairment (15)(16)(17)(18). People with PTSD show metabolic dysfunction, alterations in inflammatory pathways, and neuroendocrine dysfunction that have not been demonstrated in people without PTSD (8,19,20). This pathophysiology is associated with CD, the incidence of further CD (such as cardiovascular disease and/or diabetes), poorer health recovery, and worse treatment outcomes (10). Prevalences of PTSD or PTSS in people with CD can provide insights into the frequency and relevance of this comorbidity. Studies on the prevalence of PTSD usually refer to specific diseases, such as chronic pain, with a pooled mean prevalence of 9.7% (95% CI, 5.2 to 17.1) (21), or cardiovascular disease, with an average prevalence of 12% (0 to 38%) (22). Overall, systematic reviews exist for only a few diseases (e.g., chronic pain, cancer, acute coronary syndrome, cardiovascular disease) (21)(22)(23)(24), with prevalences differing by the type of specific disease [e.g., cancer type or chronic pain type (21,25)]. These systematic reviews report higher prevalences for assessment with self-report questionnaires than with structured interviews (21,24,25), and show inconsistent results regarding the effects on prevalence of the moderators time of PTSD assessment since CD diagnosis or initial treatment, setting, age, and gender (21,23,25). Preliminary findings also suggest a substantially increased risk of PTSD in people with CD compared to those without CD. For example, organ transplant recipients have been shown to be two to five times more likely to have comorbid PTSD than the general population (26).
Studies that report the incidence of PTSD or PTSS following a CD event may provide insight into the impact of a CD event on PTSD or PTSS. Discrepant estimates for specific CD limit comparisons between different diseases. Integrated information about the prevalence and incidence of PTSD or PTSS in people with CD (across CDs) is lacking, but is essential for an overall insight into the relevance of the topic. Therefore, the present systematic review and meta-analysis aimed to examine the prevalence and incidence of PTSD and PTSS in people with CD, compared with people without CD.

Methods

The predefined protocol was registered at the Open Science Framework (OSF; identification number 9xvgz; date of registration: February 28th, 2020).

Inclusion criteria and outcomes

We predefined the following inclusion criteria: (1) original studies without restriction regarding publication status (i.e., peer-reviewed full-text journal articles, non-peer-reviewed full-text manuscripts, and conference abstracts) to avoid data limitation; (2) reporting on or including data allowing the calculation of point (≤4 weeks), 12-month, or lifetime prevalence, or 12-month incidence, of PTSD [i.e., PTSD diagnosed through structured interviews or assessed according to DSM-5 (4), ICD-10 (28), or prior DSM/ICD versions] and/or PTSS (i.e., an assessment providing an indication of elevated PTSS but not based on a classification system) in people with (3) CD [defined following Kampling et al. (29), who specified a list of ICD-10 diagnoses meeting the criteria of CD]. Studies had to be available in (4) English or German. If no full text was available even after contacting the authors, the study was excluded.

Literature search

The literature search (including backward searches) was performed using the electronic databases MEDLINE, Embase, and PsycINFO from 13.03.2020 to 24.07.2020. Publications from their inception to June 2020 were considered. The search string combined terms related to CD/CD events with terms related to PTSD/PTSS.
Proximity operators were used and MeSH (Medical Subject Headings) terms were applied where appropriate. The search strategy is provided in Supplementary Table 1 and was validated using the PRESS guideline (30).

Data extraction

Two reviewers (FL and PG) independently screened the titles and abstracts of all studies retrieved by the search, reviewed the full texts of all potentially relevant articles, and extracted data from eligible full-text articles. Discrepancies were resolved by discussion with a third reviewer (LS). Data collected included author, publication year, country, sex, age, CD diagnosis, PTSD/PTSS, index trauma, type of PTSD/PTSS measurement instrument, time of PTSD/PTSS measurement after CD diagnosis or initial treatment, sample size, and estimates. If results were based on the same study sample, the most comprehensive and recent publication was considered. For intervention studies with multiple measurement time points, baseline assessment data were included in order to evaluate cohorts prior to the study interventions. To account for the sample dependence of multiple estimates within a study, the most recent eligible PTSD or PTSS assessment date for the prevalence or incidence estimate and the most commonly used measurement tool were extracted. If information or data were missing, the corresponding authors of the study were contacted and asked for further information.

Assessment of study quality

Two researchers (FL and PG) independently assessed the quality of each included study using a risk of bias (RoB) appraisal instrument based on the Newcastle-Ottawa Scale (31) and the recommendations of the Cochrane Collaboration for the assessment of risk of bias in non-comparative studies (32).
This combined instrument addresses the quality of (1) sample representativeness, (2) prospective scheduling, (3) transparent, non-selective reporting of sample characteristics and outcomes, (4) sample size, (5) assessment, (6) data quality, and (7) comparability against a control group (full details on scoring are provided in Supplementary Appendix 2). Studies were classified as having a high (1 to 3 points), moderate (4 to 6 points), or low (7 to 9 points) risk of bias. Discrepancies were resolved by discussion with a third reviewer (LS).

Data synthesis and analysis

Prevalence and incidence of PTSD or PTSS in people with CD were calculated by pooling the study-specific estimates using a generalized linear mixed model (GLMM). GLMM is an elaborate approach advocated especially for proportions (33). Since the effect sizes are based on continuous outcome data, maximum likelihood was used to estimate the GLMM (34). Prevalences and incidences are reported as percentages with corresponding 95% confidence intervals (CI). Binary data were pooled as odds ratios (OR) using the Bakbergenuly sample-size method (35), and the Paule-Mandel estimator was used to estimate the between-study variance (34). The Bakbergenuly sample-size method uses weights based on study-level sample size to estimate the overall effect, leading to a smaller bias in the between-study variance than conventional methods (35). Estimates of PTSD or PTSS in people with versus without CD are reported as OR with corresponding 95% CI. Forest plots were created to visually assess heterogeneity. Heterogeneity between studies was quantified using the I2 statistic, the ratio of between-study variance to total variance in the meta-analysis, and was tested for significance using the Q statistic. A substantial level of heterogeneity was indicated by an I2 value of 60% or greater (36).
The larger the I2, the greater the heterogeneity within the meta-analysis, because the between-study scatter then proportionally outweighs the random scatter within the studies (36). Records were defined as outliers if their CI did not overlap the CI of the pooled effect and thus differed significantly, because records with a high sampling error are likely to deviate markedly from the pooled effect (37). Potential moderators of prevalence and incidence estimates were examined using random-effects meta-regressions (36). Following the recommendations of Schwarzer et al. (38) and Borenstein et al. (39), meta-regression analysis was conducted in the case of ≥10 studies per outcome (subgroup). From a number of ≥2 events per subcategory of a variable, the results of the meta-regressions are described descriptively (40,41). The influence of each study on the pooled estimate was assessed using sensitivity analysis (leave-one-out analysis). Publication bias was investigated by funnel plot and Egger's test (42). All analyses were performed using the packages tidyverse, meta, and metafor in R version 4.0.2 (43). Statistical tests were two-sided and used a significance threshold of P < 0.05.

Changes to the a priori study registration were that, in addition to narrative reviews, case reports, and studies lacking full-text availability, secondary literature such as meta-analyses and systematic reviews was excluded; however, the reference lists of the secondary literature were used for the additional manual search. In the case of substantial heterogeneity (>60%), data were pooled and presented with reports on heterogeneity and its interpretation, and possible reasons for the statistical heterogeneity were explored. In order to use a consistent method, meta-regressions were performed when there was substantial heterogeneity in the population (I2 > 60%) and when there were ≥10 studies per outcome.
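The relationship between Cochran's Q and I2 described above can be reproduced from study-level counts. Below is a minimal Python sketch using inverse-variance weights on the logit scale; it is a simplified stand-in for the GLMM-based pooling in the meta/metafor R packages actually used, and any counts passed in are hypothetical:

```python
import math

def logit_prevalence_heterogeneity(events, totals):
    """Cochran's Q and I^2 (%) for study-level prevalences, computed on
    the logit scale with inverse-variance weights. A fixed-effect Q is
    used, and I^2 = max(0, (Q - df) / Q) * 100, as in standard
    random-effects meta-analysis."""
    yi, vi = [], []
    for e, n in zip(events, totals):
        p = e / n
        yi.append(math.log(p / (1.0 - p)))   # logit-transformed prevalence
        vi.append(1.0 / e + 1.0 / (n - e))   # approximate sampling variance
    w = [1.0 / v for v in vi]
    y_bar = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    q = sum(wi * (y - y_bar) ** 2 for wi, y in zip(w, yi))
    df = len(yi) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2
```

Identical study prevalences give Q = 0 and I2 = 0, while widely discrepant prevalences push I2 toward 100%, i.e., into the "substantial" range (>60%) referred to above.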
The changes are shown in Supplementary Table 3.

Results

Study selection and characteristics

A total of 6,103 references were identified through the electronic database searches and 26 additional studies through backward searches. After removal of duplicates, a total of 4,616 studies were screened; of these, 589 full-text articles were reviewed, and 64 were included in the systematic review. Reasons for exclusion of full texts were: wrong population, wrong study design, language, intensive care unit setting, no CD or CD not on the list, not exclusively CD, no prevalence/incidence calculable, no PTSD/PTSS measured, same population sample, no availability of full text, and others (for the PRISMA flow diagram see Figure 1). Included studies were conducted in 17 different countries, predominantly in the USA (31 studies; 48%). 134 records from 60 references (n = 21,213) reported on the point prevalence of PTSD or PTSS; three records from three references (n = 913) reported on the 12-month prevalence of PTSD, and nine records from six references (n = 826) reported on the lifetime prevalence of PTSD. No references were found for the 12-month incidence of PTSD or PTSS in people with CD. The most commonly used questionnaires were the Impact of Event Scale (Revised), the PTSD Checklist, and the Post-traumatic Stress Diagnostic Scale. The Structured Clinical Interview for DSM was the most frequently used structured interview. Nine different categorized CD were extracted (Tables 1, 2). Sample sizes varied between 12 and 6,542 individuals. The mean age of individuals across studies was 41.14 years (SD = 8.72); 55.5% were female and 84.2% Caucasian. CD was diagnosed at a mean age of 21.52 years (SD = 15.64), lasting from then on a mean of 8.03 (SD = 4.58) years. With respect to study quality, four (6.3%) studies were at low, 53 (82.8%) at moderate, and seven (10.9%) at high risk of bias. A summary of selected study characteristics is presented in Supplementary Tables 4-6.
See Supplementary Appendix 7 for the references of the included studies.

Point prevalence of PTSD or PTSS in people with CD

The most common measurement was self-report questionnaires (70.7% for PTSD, and 91.9% for PTSS); the remaining point prevalence estimates were collected by means of structured interviews. The duration of CD was 9.69 (SD = 6.08) years for PTSD and 5.36 (SD = 4.44) years for PTSS.

Figure 1. Selection process of primary studies.

12-month prevalence of PTSD across CDs, and their moderators

All included studies assessed 12-month PTSD prevalence using a structured interview; studies assessing PTSS were not available. No outliers were detected. The number of studies was too small (<10 studies) to calculate meta-regressions. Descriptively, an estimate for specific CDs of 14.3% for nervous system diseases (k = 1) and a weighted mean of 6.5% for malignant neoplasms (k = 2) can be reported.

PTSD in people with CD compared to controls without CD

All studies on the comparison between PTSD or PTSS in people with versus without CD assessed PTSD, with 87.5% using self-report questionnaires; studies assessing PTSS were not available. Meta-analytic pooling (k = 8) yielded an OR of 7.09 (95% CI, 2.49 to 20.17) (Supplementary Figure 11), with substantial heterogeneity, I2 = 88.4% (Q = 60.51, P < 0.001, τ2 = 10.05). After eliminating five outliers (k = 3), the heterogeneity decreased to a moderate I2 = 36.6% (Q = 6.31, P = 0.178, τ2 = 0.71) and the estimate yielded an OR of 9.96 (95% CI, 2.55 to 38.94) (Figure 6). The number of studies was too small (<10 studies) to calculate meta-regressions.
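The pooled OR comparisons above rest on study-level 2x2 tables. Below is a minimal sketch of a single-study odds ratio with a Wald 95% CI; the counts shown are hypothetical, and the actual pooling across studies used the Bakbergenuly sample-size method rather than this simple calculation:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = CD group with PTSD,  b = CD group without PTSD,
    c = controls with PTSD,  d = controls without PTSD."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, for illustration only:
or_, lo, hi = odds_ratio_ci(a=20, b=80, c=5, d=95)
# or_ = 4.75; the CI excludes 1, i.e. PTSD is more frequent in the CD group
```

The standard error is computed on the log scale, which is why the CI is asymmetric around the OR, as in the pooled estimate of 9.96 (2.55 to 38.94) reported above.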
Descriptively, people with versus those without specific CD showed increased risk.

Publication bias

Egger's test for point prevalence estimates of PTSD (intercept = 0.91; 95% CI, −1.4% to −3.2%; P = 0.45) (Supplementary Figure 12) and PTSS (intercept = 0.78; 95% CI, −2.7% to −4.3%; P = 0.67) (Supplementary Figure 13) in people with CD does not indicate funnel plot asymmetry and thus publication bias. The numbers of studies on the 12-month and lifetime prevalence of PTSD in people with CD were too small to test for small-study effects with Egger's test.

Discussion

People with CD show seven- to ten-fold increased odds of PTSD compared to people without CD. This impressive number is matched by a pooled point prevalence of 12.7% for PTSD and 19.6% for PTSS in people with CD. Moreover, PTSS and PTSD seem to represent a particularly increased comorbidity risk for people with CD, given the substantially lower OR reported for other mental disorders such as depression (OR = 1. The apparent finding that the point prevalence of PTSS in people with CD was higher than that of PTSD is an inherent part of the construct definitions: for PTSD, more clinical symptoms must be present to fulfill the criteria of the DSM or ICD classification systems than for elevated PTSS, even though both manifestations lead to symptom worsening of the CD, reduced treatment adherence, psychological distress, functional impairment (14)(15)(16)(17)(18), poor quality of life (46), and long-term health decline (47). Furthermore, PTSD was predominantly assessed by structured interviews, while PTSS was mainly assessed by questionnaires, with the latter being associated with higher prevalence rates than the former (23). These methodological factors need to be considered when interpreting the prevalence rates of PTSS and PTSD.
Interestingly, prevalence rates for PTSD (6.6 to 23.6%) and PTSS (9.4 to 50.3%) varied widely across CD, although non-significantly. Given known differences in disease severity, treatment methods (25), prospects of remission or improvement, disease impairments, mental adjustment to CD, and rehabilitation (48), these non-significant results may be interpreted methodologically, in view of the small number of studies per subcategory, the lack of studies for certain CDs such as gastrointestinal, skin, or kidney diseases, and varying methods of assessing PTSD and PTSS. Prevalences for specific CD categories and their moderators could provide additional information about specific risk groups. For example, there was a significant increase in the point prevalence of PTSS over the duration of CD. Additionally, people with CD and comorbid PTSD had a longer average duration of CD than those with CD and comorbid PTSS. Assuming that PTSS could lead to PTSD if it worsens, and that PTSS can worsen with the duration of CD, the longer average duration of CD in PTSD compared to PTSS could indicate a temporal association between symptom worsening and longer duration of CD. Swartzman et al. (25) found the opposite for the duration of cancer, in the sense of a symptom reduction. Nevertheless, overall the data suggest that the duration of CD is associated with the symptom severity of PTSS and PTSD. For clinical practice, this would mean that mental health screening is important both at initial diagnosis and during the course of CD. Hence, a long way of scientific effort remains to unravel the prevalence rates and the differential impact of different CD on PTSD and PTSS, with longitudinal consideration of symptom severity.
Further moderator analyses, exclusion of outliers, and sensitivity analyses could not resolve the substantial heterogeneity of the studies addressing the prevalence of PTSD or PTSS in CD; therefore, the prevalence estimates should be considered with caution. The differences in prevalence could have resulted from different study designs with different levels of methodological quality, study populations, sample sizes, sampling methods, data collection, and collaboration of study participants. Furthermore, demographic protective factors, such as a high level of education, being in a relationship or married, being employed, higher economic status, and social support seem to play a role in the occurrence of PTSD or PTSS in CD (49,50), and could explain heterogeneity.

Forest plot of lifetime prevalence of post-traumatic stress disorder (PTSD) in people with chronic somatic diseases (CD).

Forest plot of point prevalence of post-traumatic stress disorder (PTSD) in people with chronic somatic diseases (CD) compared with controls without chronic somatic diseases (CD), after eliminating outliers.

Also, the dose-response effect of traumatic stress could have an impact on heterogeneity, as a higher number of different lifetime traumatic event types (i.e., more intense traumatization) was associated with a higher probability of point and lifetime PTSD and with a reduced probability of long-term spontaneous remission from PTSD (51). Moreover, the assessment approach, in which only CD is examined as the index trauma compared to any index trauma, could have a substantial impact on PTSD and PTSS prevalence (52). By including studies with different methodological approaches, the limitation of missing data can be reduced. Qualitative statements on the relevance of the topic can only be made on the basis of data pooling; a purely descriptive presentation of results could mean that many new findings go unrecognized.
In order to meta-analyze this heterogeneity with the current data, the variables mentioned would have to have been collected in the primary studies; these factors explaining the heterogeneity are of interest for future research. A distinctive feature of PTSD compared to other mental disorders is its explainable etiology (4,53). A CD diagnosis with increased mortality, its possible worsening, its treatment, and the challenging behavioral and cognitive-emotional responsibilities it entails (e.g., coping with a diagnosis or adhering to complex treatment schedules) are disease-related distressors (48) that may traumatize (3,24,54,55). PTSD has a negative impact on CD medication and treatment adherence, especially if the PTSD was induced by the CD with increased mortality or by a related medical event: CD treatment can serve as an aversive reminder and reinforce the avoidance behavior characteristic of PTSD (24,56,57). In addition to the etiological association, PTSD is associated with poorer health behaviors, which may be risk factors for developing CD and may negatively affect CD outcomes (58-60). These bidirectional associations and the increased comorbidity risk of PTSD in people with CD should be given more attention in clinical practice to support the mental and physical health of people with CD. Incidences of PTSD in people with CD might point to the etiological association; however, analysis of 12-month incidences was not possible given the lack of available primary studies. Prospective studies of comorbid CD and PTSD often address the incidence of CD in people with PTSD (58). Hereby, PTSD was associated with the onset of (self-reported) CD in a dose-response manner: women with the highest number of PTSD symptoms had a nearly two-fold increased risk of type 2 diabetes mellitus compared with women without PTSD or PTSS (58). A two-fold incidence of other CD, such as coronary heart disease, was also found in twins with PTSD compared to their co-twins without PTSD (59).
These findings indicate a bidirectional relationship between PTSD and CD, arguing for a comprehensive medical history, including a psychosocial one, in the treatment of CD. This is especially relevant since early diagnosis and treatment of PTSD in CD, in addition to reducing PTSD (61,62), leads to a decrease in depressive symptoms (63), sleep problems, and various chronic somatic health complaints such as back pain and cough (64), as well as a reduction of hypertension (65) and cardiovascular risk (66).
Limitations
When interpreting the results, some limitations of our systematic review and meta-analysis should be considered. Random error is a general problem despite the total number of 22,952 subjects on which the analysis was based. Access to the original study data, and thus to individual-level epidemiologic data, was not available. Study populations were drawn mainly from industrialized nations, and only German- or English-language studies were included, which limits worldwide generalizability. Although this meta-analysis included different categories of CD, most studies included people with malignant neoplasms, cardiovascular diseases, musculoskeletal disorders or nervous system diseases, so the results are dominated by these CD categories. Causal attributions between CD and PTSD or PTSS cannot be drawn based on the data.
Conclusion
The results suggest that PTSD is a common mental health comorbidity in people with CD. The very high OR for PTSD in people with CD compared to people without CD suggests a specific link between PTSD and CD beyond the generic fact of increased mental burden in people with CD. Systematic early PTSD screening in people with CD may facilitate identification of individuals in need of support. Earlier detection and treatment of comorbid PTSD or PTSS in people with CD might provide a means to improve health outcomes, treatment adherence, and quality of life (62,67).
Moreover, we should increase awareness of this specific comorbidity in somatic health care, as clinical practice and treatment guidelines today focus mainly on depression as a comorbidity (68, 69).
Data availability statement
The original contributions presented in this study are included in this article/Supplementary material; further inquiries can be directed to the corresponding author.
2023-01-18T14:38:11.406Z
2023-01-18T00:00:00.000
{ "year": 2023, "sha1": "c4937abfcce5fbc5759cdb4bf2257bc05becb1f9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "c4937abfcce5fbc5759cdb4bf2257bc05becb1f9", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
201653967
pes2o/s2orc
v3-fos-license
Circulating but not faecal short-chain fatty acids are related to insulin sensitivity, lipolysis and GLP-1 concentrations in humans Microbial-derived short-chain fatty acids (SCFA) acetate, propionate and butyrate may provide a link between gut microbiota and whole-body insulin sensitivity (IS). In this cross-sectional study (160 participants, 64% male, BMI: 19.2–41.0 kg/m2, normal or impaired glucose metabolism), associations between SCFA (faecal and fasting circulating) and circulating metabolites, substrate oxidation and IS were investigated. In a subgroup (n = 93), IS was determined using a hyperinsulinemic-euglycemic clamp. Data were analyzed using multiple linear regression analysis adjusted for sex, age and BMI. Fasting circulating acetate, propionate and butyrate concentrations were positively associated with fasting GLP-1 concentrations. Additionally, circulating SCFA were negatively related to whole-body lipolysis (glycerol), triacylglycerols and free fatty acids levels (standardized (std) β adjusted (adj) −0.190, P = 0.023; std β adj −0.202, P = 0.010; std β adj −0.306, P = 0.001, respectively). Circulating acetate and propionate were, respectively, negatively and positively correlated with IS (M-value: std β adj −0.294, P < 0.001; std β adj 0.161, P = 0.033, respectively). We show that circulating rather than faecal SCFA were associated with GLP-1 concentrations, whole-body lipolysis and peripheral IS in humans. Therefore, circulating SCFA are more directly linked to metabolic health, which indicates the need to measure circulating SCFA in human prebiotic/probiotic intervention studies as a biomarker/mediator of effects on host metabolism. healthy participants 14,15 . In addition, 24 weeks of 10 g/day inulin propionate ester protected against body weight gain as compared to inulin only in overweight individuals 16 . 
Potential mechanisms may include a SCFA-induced inhibition of energy intake possibly mediated via the stimulation of glucagon-like peptide 1 (GLP-1) and PYY secretion, increased intestinal gluconeogenesis, increased skeletal muscle fat oxidation and improved lipid buffering capacity of adipose tissue 11 . However, increased microbial-derived acetate formation has been associated with increased body weight gain and insulin resistance in diet-induced obese rats 17 . Additionally, increased faecal SCFA have been reported in overweight and obese compared to lean participants 2,3,8,18,19 , yet it is difficult to interpret the latter data since faecal SCFA reflect the net result of colonic production and absorption 20,21 . Even though faecal SCFA are commonly used as an indicator of microbial fermentation, faecal SCFA may not accurately reflect in vivo colonic fermentation since approximately 95% of colonic SCFA are absorbed and only the remaining 5% are excreted in feces [22][23][24][25] . To obtain more information on the validity of faecal SCFA as biomarker for metabolic health effects, the associations between faecal and circulating SCFA concentrations and parameters of metabolic health were studied in a relatively large cohort of 160 participants with a wide range of body mass indices (BMI) and glucometabolic status. Using multiple regression analysis, we analysed the relationship between faecal and fasting circulating SCFA with fasting glucose, insulin, circulating lipids (free fatty acids (FFA), triacylglycerols (TAG), glycerol), insulin resistance index (homeostasis model assessment of insulin resistance (HOMA-IR)), gut hormone concentrations (PYY, GLP-1), fasting substrate utilization and inflammation markers including lipopolysaccharide-binding protein (LBP), tumour necrosis factor alpha (TNF-α), interleukin 6 (IL-6) and interleukin 8 (IL-8). 
We further investigated the relationship between faecal and fasting circulating SCFA profiles and the peripheral insulin sensitivity index (M-value) as measured via the gold-standard hyperinsulinaemic-euglycemic clamp technique. Results Mean age of the participants was 49.6 ± 14.7 years and 66.2% of participants were male, with a mean BMI of 29.8 ± 4.4 kg/m 2 , a mean fasting glucose of 5.6 ± 0.6 mmol/L and a mean HOMA-IR of 3.7 ± 1.5 (Table 1). In the subgroup, peripheral insulin sensitivity (M-value) was measured in 93 overweight to obese, prediabetic men (n = 72) and women (n = 21) with a mean age of 59.0 ± 7.1 years and a mean BMI of 31.8 ± 3.1 kg/m 2 . Associations between faecal and circulating SCFA concentrations. Faecal acetate and butyrate were not associated with their respective circulating concentrations, while faecal propionate was positively associated with circulating propionate (standardized (std) β = 0.262, P = 0.002). Faecal SCFA were not related to metabolic parameters. None of the faecal SCFA were significantly associated with fasting GLP-1, PYY, FFA, TAG, glycerol, glucose or insulin concentrations, HOMA-IR, inflammatory markers or fasting substrate oxidation, either with or without adjustment for age, sex and BMI. In the subgroup analysis, faecal SCFA were not associated with peripheral insulin sensitivity (Table 2). Fasting circulating SCFA were related to fasting GLP-1, lipid metabolites and insulin sensitivity. All three circulating SCFA were positively associated with fasting GLP-1 concentrations (Table 3). Additionally, circulating acetate, propionate and butyrate were negatively associated with fasting glycerol, TAG and FFA, respectively. Also, circulating butyrate was negatively associated with fasting glucose. These relationships remained significant after adjustment for age, sex and BMI (Table 3, Supplementary Figure 1). Circulating SCFA were not associated with fasting PYY, LBP, IL-6, IL-8 and TNF-α.
Furthermore, circulating SCFA were not related to fat and carbohydrate oxidation, expressed as percentage of energy expenditure. In the subgroup analysis of overweight/obese, prediabetic individuals, peripheral insulin sensitivity was measured using the M-value derived from the euglycemic-hyperinsulinaemic clamp technique. We found that circulating acetate was negatively associated with peripheral insulin sensitivity (M-value), whereas circulating propionate was positively related to peripheral insulin sensitivity (Table 3, Supplementary Figure 1). The relationships between circulating SCFA and insulin sensitivity remained significant after adjustment for age, sex and BMI. Discussion We investigated the relationship between faecal and fasting circulating SCFA with fasting plasma metabolites, gut hormones, substrate metabolism and inflammatory markers in a cohort with a wide range of BMI and glucometabolic status. This study shows that only circulating, but not faecal, SCFA concentrations were related to fasting plasma glucose, FFA, TAG and glycerol, GLP-1 and insulin sensitivity, also after adjustment for age, sex and BMI. Contrary to previous human studies, faecal SCFA were not related to BMI, whereas circulating butyrate and propionate were inversely associated with BMI. Circulating plasma propionate seems to be the most reflective of its respective faecal concentrations, whilst faecal acetate and butyrate were not related to their respective circulating concentrations. In line with this, previous literature reports that SCFA flux into the circulation and uptake in peripheral tissues, rather than microbial SCFA production per se, is of importance for metabolic health [26][27][28] . Our data emphasize the need to measure circulating SCFA in human prebiotic/probiotic intervention studies as a biomarker/mediator of effects on host metabolism.
To our knowledge, this is the first study providing evidence that fasting circulating SCFA are positively associated with fasting plasma GLP-1 in humans. High colonic SCFA production is linked to increased GLP-1 and PYY secretion through binding of SCFA to GPR41/43 on the enteroendocrine L-cell 29 . Further, a one-year dietary fiber intervention (wheat bran, 24 g/d) increased circulating SCFA concentrations, accompanied by increased GLP-1 concentrations, in hyperinsulinemic participants 30 . Yet, little is known about the contribution of circulating SCFA to GLP-1 secretion during the fasted state. Circulating SCFA may stimulate GLP-1 secretion from the visceral, basolateral side of enteroendocrine L-cells, as observed in isolated rat colons 31 . Besides enteroendocrine L-cells, pancreatic α-cells have been suggested to contribute to systemic GLP-1 concentrations in the fasted state 32,33 , but whether circulating SCFA act as stimuli for GLP-1 secretion warrants further investigation. In contrast to GLP-1, we did not find an association between circulating and faecal SCFA and fasting PYY. This contrasts with human and in vitro studies reporting a stimulatory effect of SCFA on PYY secretion 12,34,35 ; however, to what extent SCFA and/or dietary fibres contribute to fasting PYY secretion remains to be investigated. Although the mechanisms remain to be elucidated, the present data indicate that, despite being the net result of production, uptake and tissue utilization, circulating SCFA are more directly linked to metabolic health than faecal SCFA. In our study population, only circulating, but not faecal, SCFA were associated with fasting plasma metabolites. Circulating acetate was negatively associated with fasting free glycerol, an indicator of whole-body lipolysis. This is consistent with in vitro and human in vivo studies reporting that acetate has an anti-lipolytic effect 13,[36][37][38] .
This may be beneficial for metabolic health in the long term, since partial inhibition of adipose tissue lipolysis may reduce systemic lipid spillover, thereby attenuating ectopic lipid accumulation 39 . Further, circulating propionate was negatively associated with fasting TAG, which might be explained by the activating effect of propionate on lipoprotein lipase (LPL) in adipose tissue leading to increased TAG extraction, as shown in vitro 40 . Findings on the effect of butyrate are contradictory, showing both pro- and antilipolytic effects of butyrate in white adipose tissue models 38,41 . Thus, circulating SCFA may be negatively related to systemic glycerol, FFA and/or TAG, suggesting that increased circulating SCFA may reduce systemic lipid overflow with a potential beneficial effect on ectopic lipid accumulation and insulin sensitivity. Nevertheless, with respect to markers of insulin sensitivity, neither fasting circulating nor faecal SCFA were related to fasting insulin or HOMA-IR in the total study population. Yet, fasting circulating butyrate, but not acetate and propionate, was negatively associated with fasting glucose. This is consistent with rodent studies showing that butyrate administration may have glucose-lowering effects and may improve insulin sensitivity in the postprandial state 42,43 . In obesity, insulin resistance and T2DM, the abundance of butyrate-producing bacteria is reduced, which may to some extent explain the inverse association between circulating butyrate and fasting glucose in our study [44][45][46] . In the subgroup analysis including prediabetic individuals with obesity, circulating acetate was negatively associated with peripheral insulin sensitivity.
This is in contrast with previous rodent studies reporting a beneficial role of acetate in insulin sensitivity 34 , and with two small-scale human cross-sectional studies in obese women and morbidly obese individuals, which reported no association and a positive association, respectively, between circulating acetate and insulin sensitivity measured via hyperinsulinemic-euglycemic clamp 47,48 . Additionally, when acetate was administered colonically, overweight participants showed increases in fasting fat oxidation, energy expenditure, and PYY secretion 12,13 , reflective of positive effects on metabolic health. Interestingly, a kinetic study showed that intravenously infused acetate remains longer in the circulation in individuals with T2DM, suggesting disturbed acetate tissue uptake and metabolism in the context of metabolic disorders 34 . Further, exogenous and endogenous acetate production, but not colonic acetate absorption, differed between hyperinsulinemic and normoinsulinemic individuals after rectal infusion of sodium acetate 37,49,50 . Thus, our findings may reflect an altered endogenous acetate metabolism rather than an altered microbial-derived acetate production in metabolically compromised individuals. In contrast to fasting circulating acetate, fasting circulating propionate was positively associated with clamp-derived insulin sensitivity. Propionate has been reported to stimulate glucose uptake in 3T3-L1 adipocytes and C2C12 skeletal muscle cells in vitro and to improve insulin sensitivity (HOMA-IR) in mice fed a high-fat diet 51,52 . Possible mechanisms include an increase in peripheral glucose uptake via increased GPR41 stimulation, suppression of hepatic de novo lipogenesis and increased formation of beneficial odd-chain fatty acids in the liver 53 . The main limitation of our study is the cross-sectional design, which limits causal inference.
Further, we cannot account for endogenous SCFA production, splanchnic and liver extraction or tissue utilization in this study 54,55 . Secondly, measures of GLP-1 and SCFA in the postprandial state would have been valuable. However, the study's major strength is the availability of faecal and fasting circulating SCFA in combination with metabolic markers in a relatively large cohort with a broad range of BMI and metabolic health status. This enabled us to investigate the relationship between faecal and fasting circulating SCFA concentrations and markers of lipid and energy metabolism as well as insulin sensitivity measured by the gold-standard hyperinsulinemic-euglycemic clamp. For the first time, we showed that fasting circulating, but not faecal, SCFA were related to whole-body lipolysis, fasting GLP-1 and insulin sensitivity in the fasted state. Furthermore, our study calls for urgently needed mechanistic studies in humans concerning the relationship between SCFA, GLP-1 secretion and lipid metabolism. In conclusion, our data show that circulating but not faecal SCFA are linked to circulating GLP-1 concentrations, whole-body lipolysis and peripheral insulin sensitivity in humans. Of note, this highlights that circulating SCFA are more directly linked to metabolic health parameters. Therefore, our data indicate the need to measure circulating SCFA as a biomarker/mediator of effects on host metabolism in future human prebiotic/probiotic intervention studies. This may provide interesting leads for future research, which should aim to modulate SCFA availability in the systemic circulation and its impact on peripheral tissue function. Eligibility of the participants was assessed via a general health questionnaire, medical history and anthropometry during an initial screening visit.
Exclusion criteria were as follows: use of antibiotics, prebiotics, or probiotics 3 months before the study, diagnosis of T2DM, gastrointestinal or cardiovascular diseases, abdominal surgery, a life expectancy shorter than 5 years, and following a hypocaloric diet. Participants did not use β-blockers, lipid- or glucose-lowering drugs, anti-oxidants, or chronic corticosteroids. All protocols were reviewed and approved by the local Medical Ethical Committee (MUMC+) and conducted in accordance with the Declaration of Helsinki (revised version, October 2008, Seoul, South Korea). Written informed consent was obtained from all participants. Methods Study design. This cross-sectional analysis included metabolic parameters as well as faecal and fasting circulating SCFA concentrations from previously performed intervention studies 12,13,[57][58][59] . In the present study, we collated and analyzed study data at baseline, and thus prior to the respective interventions. In all studies, sample collection was performed after an overnight fast, and measurements were conducted according to the same standard operating procedures. Two days prior to the baseline investigation day, participants were asked to refrain from intense physical activity and alcohol consumption, and to collect a faecal sample. In the evening before the investigation day, the participants consumed a standardized low-fiber meal. Used data sets. The data set included baseline data from the following human in vivo intervention studies: an intervention study in prediabetic, overweight-obese individuals on the effect of antibiotics on insulin sensitivity (Clinical trial No.
NCT02241421) (3), an intervention study in prediabetic, overweight-obese individuals on the effect of dietary fiber (galacto-oligosaccharides) on insulin sensitivity (Clinical trial No. NCT02271776) (4), an intervention study in normoglycemic, normal to overweight individuals on the effect of dietary fibers on gastrointestinal transit (Clinical trial No. NCT02491125) (5), and lastly two acute studies investigating the effect of different mixtures of SCFA in normoglycemic, overweight to obese individuals on human substrate and energy metabolism (Clinical trial No. NCT01826162 (6), Clinical trial No. NCT01983046 (7)). Baseline investigation day. After an overnight fast (>10 h), participants came to the laboratory by car or public transport. Anthropometry was measured including height, weight and waist to hip ratio. After inserting a cannula into the antecubital vein, blood samples were taken to measure plasma metabolites, hormones and inflammatory markers in the fasted state. After the blood sampling, participants were in a resting, half-supine position and fasting substrate oxidation was measured for 30 min using an open circuit ventilated hood system (Omnical, MUMC+, Maastricht, the Netherlands). Fat and carbohydrate oxidation were calculated according to the equations of Weir and Frayn 60,61 , assuming that protein oxidation accounted for 15% of total energy expenditure. Hyperinsulinaemic-euglycaemic clamp. Peripheral insulin sensitivity was determined in a subgroup of overweight/obese, prediabetic individuals via hyperinsulinaemic-euglycemic clamps as previously described 57,58 . In short, a cannula was inserted into an antecubital vein for infusion of glucose and insulin. To measure blood glucose, a second cannula was inserted into a superficial dorsal hand vein, which was arterialized by placing the hand into a hotbox (~50 °C). 
A priming dose of insulin (Actrapid, Novo Nordisk, Gentofte, Denmark) was administered during the first ten min (t0-t10 min) and insulin infusion was thereafter continued at 40 mU/m 2 /min for 2 h (t10-t120 min). By variable infusion of a 20% glucose solution, plasma glucose concentrations were maintained at 5.0 mmol/L. Peripheral insulin sensitivity (M-value, mg·(kg·min) −1 ) was calculated from the mean glucose infusion rate during the steady state of the clamp (last 30 min, stable blood glucose concentration at 5.0 mmol/L) 62 . A high M-value represents high insulin sensitivity (i.e., more glucose needs to be infused to maintain euglycemia during insulin infusion). Analysis of faecal and circulating SCFA. Faecal samples were collected at home and stored in the subjects' freezer at −20 °C for a maximum of two days before the baseline investigation day, transported on dry ice, and stored on arrival at the university at −80 °C. Faecal acetate, propionate, and butyrate were measured by gas chromatography-mass spectrometry (Dr. Stein and Colleague Medical Laboratory, Mönchengladbach, Germany) as previously described 63 . Plasma sample preparation for circulating SCFA analysis was performed as reported previously 64 . In short, deproteinization was performed by mixing 1 part plasma (v/v) with 2 parts methanol acidified with 1.5 mmol/l hydrochloric acid. Subsequently, samples were vortex-mixed vigorously and immediately centrifuged at 50,000 × g in a Biofuge Stratos centrifuge (Heraeus, Dijkstra Vereenigde, Lelystad, the Netherlands) for 15 min at 4 °C. 100 μl aliquots of the clear plasma supernatant were transferred into glass micro-insert vials and stored in the Combi-PAL until analysis. Samples were calibrated against external standards. The reversed-phase separation was performed on an X-select ODS 2.5 µm column (150 mm × 2.1 mm I.D., Waters, Breda, the Netherlands), mounted in a Mistral Spark column oven (Separations, H.I. Ambacht, the Netherlands) set to 45 °C.
Samples were separated completely into the individual SCFA in a 25 min gradient cycle between an aqueous 1 mmol/l solution of sulfuric acid and ethanol. Post-column, the solvent pH was raised to about 9 by mixing with 150 mmol/l ammonia in ethanol to maximize negative ionization. Samples were processed using a Combi-PAL sample processor (Interscience, Breda, the Netherlands) with Peltier-chilled sample storage compartments set to 10 °C. The system was equipped with a 50 µl sample loop. Separated SCFA were detected using an LTQ XL linear ion trap mass spectrometer (Thermo Fisher Scientific, Breda, the Netherlands), equipped with an ion-max electrospray probe. The MS was operated in MS-MS full scan negative mode. Blood collection and biochemical analysis. Blood was collected in pre-chilled EDTA tubes (0.2 mol/L EDTA; Sigma, Dorset, UK) for SCFA, insulin, glucose, FFA, TAG, free glycerol, LBP, GLP-1, TNF-α, IL-6 and IL-8 analyses during fasting conditions. For GLP-1 and PYY analysis, 20 μl of dipeptidyl peptidase-IV inhibitor (Millipore Merck, Billerica, MA, USA) was added to EDTA and Aprotinin (Becton Dickinson, Eysins, Switzerland) tubes, respectively. Samples were centrifuged at 3500 g, 4 °C for 10 minutes; plasma was aliquoted, directly snap-frozen in liquid nitrogen and stored at −80 °C until analysis. Plasma glucose concentrations were determined using a commercially available reagent kit (Glucose Hexokinase CP, Horiba ABX Pentra, Montpellier, France) involving a two-step enzymatic reaction with hexokinase followed by glucose-6-phosphate dehydrogenase resulting in D-gluconate-6-phosphate. The colorimetric reaction was measured using an automated spectrophotometer (ABX Pentra 400 autoanalyzer, Horiba ABX Pentra).
Plasma FFA concentrations were measured using a commercially available kit (NEFA-HR(2) assay, Wako, Sopachem BV, Ochten, the Netherlands) with a two-step enzymatic reaction involving acylation of coenzyme A (CoA) followed by acyl-CoA oxidase, resulting in the production of hydrogen peroxide, which in the presence of peroxidase yields a blue-purple pigment measured colorimetrically using an automated spectrophotometer (ABX Pentra 400 autoanalyzer, Horiba ABX Pentra). Plasma TAG were determined using a commercially available kit (Triglycerides CP, Horiba ABX Pentra) based on enzymatic reactions involving lipoprotein lipase, glycerol kinase and glycerol-3-phosphate oxidase, resulting in the production of hydrogen peroxide as substrate of a colorimetric reaction measured using the automated spectrophotometer (ABX Pentra 400 autoanalyzer, Horiba ABX Pentra). Plasma glycerol was measured after precipitation with an enzymatic assay (Enzytec TM Glycerol, Roche Biopharm, Basel, Switzerland) involving phosphorylation of glycerol to L-glycerol-3-phosphate by glycerokinase; the colorimetric reaction was measured using an automated spectrophotometer (Cobas Fara, Roche Diagnostics, Basel, Switzerland). Plasma insulin was determined with a commercially available radioimmunoassay (RIA) kit (HI-14K Human Insulin Specific RIA, Millipore Merck) according to the manufacturer's protocol. Plasma IL-6, IL-8 and TNF-α were determined with a commercially available enzyme-linked immunosorbent assay (ELISA) kit (Human Proinflammatory II 4-Plex Ultra-Sensitive kit, Meso Scale Diagnostics, MD, USA). Plasma samples were assayed for total GLP-1 immunoreactivity using an antiserum that reacts equally with intact GLP-1 and the primary (N-terminally truncated) metabolite, as previously described 65 .
PYY concentrations were determined using a commercially available RIA kit (Human PYY (3-36) Specific RIA, Millipore Merck). Plasma LBP was measured as previously described 66 . In short, plates (Greiner Microlon 600 high binding; Sigma-Aldrich, St. Louis, MO) were coated with polyclonal anti-human LBP antibodies. Diluted plasma samples (1:5000) and a standard dilution series with recombinant LBP were added to the plate. Detection occurred with a biotinylated polyclonal rabbit anti-human LBP IgG, followed by peroxidase-conjugated streptavidin and substrate. The detection limit of the LBP assay was 200 pg/ml. Statistical analysis. Normality of data was assessed with the Kolmogorov-Smirnov procedure, and ln or Z-score transformation was used if the assumption of normality was not met. HOMA-IR was calculated as previously described 67 . In case of missing data, the participant was excluded from the analysis. Multicollinearity was checked using a variance inflation factor index <10. First, we used simple linear regression to investigate the associations between faecal and circulating concentrations of acetate, propionate and butyrate and metabolic parameters, i.e. insulin sensitivity (M-value), insulin resistance (HOMA-IR), circulating glucose, insulin, circulating lipids (TAG, FFA and glycerol), circulating inflammatory markers (IL-6, IL-8, TNF-α and LBP) and fasting substrate oxidation. Subsequently, we used multiple linear regression to test whether the associations between faecal and circulating SCFA and the aforementioned metabolic parameters were independent of the covariates sex, age and BMI. All data were analysed using SPSS 22.0 (IBM, Armonk, NY, USA) with significance set at P < 0.05. Data Availability The used intervention study data are unsuitable for public deposition due to ethical restrictions and privacy of participant data.
Data are available from these studies to any interested researcher who meets the criteria for access to confidential data. Prof. Ellen Blaak (e.blaak@maastrichtuniversity.nl) may be contacted to request study data.
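As an illustrative aside (the original analysis was run in SPSS, so the code, constants and unit conventions below are assumptions, using the commonly cited forms of the HOMA-IR, M-value and Frayn/Weir formulas), the derived quantities described in the Methods can be sketched in Python:

```python
import numpy as np

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """HOMA-IR = fasting glucose (mmol/L) * fasting insulin (uU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def m_value(infusion_rates_ml_min, body_weight_kg, glucose_g_per_ml=0.20):
    """Clamp M-value (mg/(kg*min)) from the mean infusion rate of a 20%
    glucose solution over the steady-state period (last 30 min)."""
    mean_ml_min = float(np.mean(infusion_rates_ml_min))
    mg_per_min = mean_ml_min * glucose_g_per_ml * 1000.0  # g -> mg
    return mg_per_min / body_weight_kg

def substrate_oxidation(vo2_l_min, vco2_l_min, n_g_min=0.0):
    """Fat and carbohydrate oxidation (g/min), Frayn (1983) equations;
    n_g_min is urinary nitrogen excretion (the protein correction)."""
    fat = 1.67 * (vo2_l_min - vco2_l_min) - 1.92 * n_g_min
    cho = 4.55 * vco2_l_min - 3.21 * vo2_l_min - 2.87 * n_g_min
    return fat, cho

def energy_expenditure(vo2_l_min, vco2_l_min):
    """Resting energy expenditure (kcal/min), abbreviated Weir equation."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

def standardized_betas(y, X):
    """Standardized (std) betas: OLS coefficients after z-scoring the
    outcome y (n,) and each predictor column of X (n, p), e.g. one SCFA
    plus the covariates age, sex and BMI."""
    z = lambda a: (a - a.mean(axis=0)) / a.std(axis=0)
    design = np.column_stack([np.ones(len(y)), z(np.asarray(X, float))])
    coef, *_ = np.linalg.lstsq(design, z(np.asarray(y, float)), rcond=None)
    return coef[1:]  # drop the intercept
```

For example, a steady-state infusion of 2.0 ml/min of 20% glucose in an 80 kg participant corresponds to 400 mg glucose/min, i.e. an M-value of 5.0 mg/(kg·min).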
2019-08-29T15:14:34.949Z
2019-08-29T00:00:00.000
{ "year": 2019, "sha1": "718adf3834346c2531b70cc81aa5c7989b7c0ece", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-48775-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "718adf3834346c2531b70cc81aa5c7989b7c0ece", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
118461868
pes2o/s2orc
v3-fos-license
The Distribution and Annihilation of Dark Matter Around Black Holes We use a Monte Carlo code to calculate the geodesic orbits of test particles around Kerr black holes, generating a distribution function of both bound and unbound populations of dark matter particles. From this distribution function, we calculate annihilation rates and observable gamma-ray spectra for a few simple dark matter models. The features of these spectra are sensitive to the black hole spin, observer inclination, and detailed properties of the dark matter annihilation cross section and density profile. Confirming earlier analytic work, we find that for rapidly spinning black holes, the collisional Penrose process can reach efficiencies exceeding $600\%$, leading to a high-energy tail in the annihilation spectrum. The high particle density and large proper volume of the region immediately surrounding the horizon ensures that the observed flux from these extreme events is non-negligible. INTRODUCTION Prompted by the recent paper by Bañados et al. (2009) [BSW], there has been a great deal of interest in the potential of Kerr black holes to accelerate particles to ultra-relativistic energies and thus to probe a regime of physics otherwise inaccessible. The vast majority of this work has been analytic and thus largely limited to the most simple photon and particle trajectories in the equatorial plane. Here we present a more numerical approach that focuses on calculating the fully relativistic distribution function of massive test particles around a spinning black hole. With this distribution function and a simple model for the dark matter annihilation mechanism, we can then calculate the annihilation rate and observed spectrum as a function of black hole spin and observer inclination. 
It has been noted repeatedly in recent works that the net energy gained through the Penrose process is quite modest, as is the fraction of collision products that might escape, and thus the astrophysical importance of the BSW effect is questionable (Jacobson & Sotiriou 2010; Bañados et al. 2011; Harada et al. 2012; Bejger et al. 2012; McWilliams 2013). We argue here that two primary factors (to our knowledge largely neglected in previous work) could greatly enhance the astrophysical relevance and observability of this annihilation. The first is an energy-dependent cross section for dark matter (DM) annihilation. This could take many forms, the simplest of which are p-wave annihilation (Bertone et al. 2005; Chen & Zhou 2013; Ferrer & Hunter 2013), where the cross section scales like the relative velocity, or a threshold energy, above which the cross section increases greatly. This latter assumption is a natural choice for a model that includes multiple DM species, with the more massive particles intermediate products in the annihilation process towards gamma rays [see, e.g., Zurek (2014)]. Because gravity is the only known force capable of accelerating dark matter particles to high energies, it is possible that new annihilation channels could occur around black holes that are completely inaccessible everywhere else in the universe. The other effect considered here is the relativistic enhancement of the density close to the black hole. This is due to the time dilation of observers near the horizon. In a steady-state system, one can think of dropping particles into the black hole from infinity at a constant rate Γ_∞ as measured by coordinate time t. To an observer near the black hole measuring proper time τ, an enhanced rate Γ_∞ (dt/dτ) is seen, with dt/dτ > 1. For annihilation rates that scale like the density squared, the local annihilation rate will be enhanced by (dt/dτ)^2.
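The time-dilation factor just described can be read off the Kerr metric in Boyer-Lindquist coordinates: for a static observer, dt/dτ = (1 − 2Mr/Σ)^(−1/2) with Σ = r² + a²cos²θ. A minimal numerical sketch (G = c = 1; the function names are ours and illustrative, not from the paper's code):

```python
import math

def dt_dtau_static(r, theta, a, M=1.0):
    """Lapse dt/dtau for a static observer in the Kerr metric
    (Boyer-Lindquist coordinates, G = c = 1). Static observers exist
    only outside the ergosphere, where g_tt < 0."""
    sigma = r**2 + (a * math.cos(theta))**2
    g_tt = -(1.0 - 2.0 * M * r / sigma)
    if g_tt >= 0.0:
        raise ValueError("inside the ergosphere: no static observers")
    return 1.0 / math.sqrt(-g_tt)

def local_rate_enhancement(r, theta, a, M=1.0):
    """Enhancement (dt/dtau)^2 of the local annihilation rate when the
    rate scales with density squared; one factor is returned as redshift
    on the way out, leaving a net dt/dtau enhancement at infinity."""
    return dt_dtau_static(r, theta, a, M)**2
```

For a Schwarzschild hole (a = 0) at r = 4M this gives dt/dτ = √2, i.e. a local rate enhancement of a factor of two.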
Of course, the products will get redshifted on their way back out to an observer at infinity (McWilliams 2013), but we are still left with a net enhancement of dt/dτ. Even without this relativistic enhancement, numerous models also predict an astrophysical enhancement of the dark matter density in the galactic nucleus. Adiabatic growth of the central black hole will capture a large number of particles onto tightly bound orbits, growing a steep density spike as the black hole grows (Gondolo & Silk 1999; Sadeghian et al. 2013). Gravitational scattering off the dense nuclear star cluster will also lead to a dark matter spike (Gnedin & Primack 2004), similar to the classical two-body scattering result of Bahcall & Wolf (1976). At the same time, self-annihilation (Gondolo & Silk 1999) and elastic scattering (Shapiro & Paschalidis 2014; Fields et al. 2014) will act to flatten this spike into a shallow core more similar to the unbound population. Because our approach to this problem is predominantly numerical, we can easily treat a range of black hole spins, particle distributions, and cross sections, and need not limit ourselves to special cases with analytic solutions. Therefore, we can calculate how often those extreme cases are likely to occur in a real astrophysical setting [for a notable exception to the analytic approaches of earlier work, see the exhaustive Monte Carlo calculations of Williams (1995, 2004), which explored the limits of the Penrose process in the context of Compton scattering and pair production in accretion disks and jets]. Of particular interest has been the following question: for two particles each of mass m_χ falling from rest at infinity and colliding near the black hole, what is the maximum achievable energy for an escaping photon? We find that this limit exceeds 12 m_χ for an extremal black hole with a/M = 1, significantly higher than previously published values of 2.6 m_χ (Bejger et al. 2012).
We explain the underlying reason for this discrepancy in a companion paper (Schnittman 2014).

Initial conditions

The primary goal of this paper is to calculate the 8-dimensional phase-space distribution function df(x, p) of DM particles around a Kerr black hole. Two of these dimensions are immediately removed by the assumption of a steady-state solution and stationarity of the metric, together with the mass-shell constraint on the particle momentum, leaving us with df(r, θ, φ, p_r, p_θ, p_φ). This function is further reduced to five dimensions because axisymmetry removes the dependence on φ. To calculate the distribution function, we first distinguish between two basic populations: the particles gravitationally bound and unbound to the black hole. The properties of the bound population are more sensitive to underlying astrophysical assumptions, and will be discussed below in Section 2.4. The unbound population is more straightforward: we simply assume an isotropic, thermal distribution of velocities at a large distance from the black hole. Here, "large distance" is taken to be the influence radius r_infl of a supermassive black hole with mass M, and the DM velocity dispersion is set equal to the stellar velocity dispersion σ_0 of the bulge (thus the "unbound" population considered in this paper is still gravitationally bound to the galaxy, just not to the black hole). From the "M-sigma" relation (Ferrarese & Merritt 2000) we take

M ≈ 2 × 10^7 M_⊙ (σ_0 / 100 km s^{-1})^4    (1)

and

r_infl ≡ GM/σ_0^2 ≈ 8 pc (σ_0 / 100 km s^{-1})^2.    (2)

In units of gravitational radii r_g = GM/c^2, the influence radius is typically quite large: r_infl/r_g = (c/σ_0)^2 ≈ 9 × 10^6 (σ_0 / 100 km s^{-1})^{-2}. Given this outer boundary condition, we shoot test particles towards the black hole with initial velocities drawn from an isotropic thermal distribution with characteristic velocity σ_0. As we are only interested in the distribution function relatively close to the black hole, we can ignore any particle with impact parameter greater than ≈ 1000 r_g.
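As a concrete check of Equations (1) and (2), the sketch below evaluates the M-sigma relation and the influence radius in convenient units; the constants and unit conversions are ours, not values taken from the paper.

```python
# Physical constants (SI); standard reference values.
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
PC = 3.086e16          # m

def bh_mass(sigma0_kms):
    """M-sigma relation, Eq. (1): black hole mass in solar masses."""
    return 2e7 * (sigma0_kms / 100.0) ** 4

def influence_radius_pc(sigma0_kms):
    """Influence radius r_infl = G M / sigma0^2, Eq. (2), in parsecs."""
    m = bh_mass(sigma0_kms) * M_SUN
    sigma = sigma0_kms * 1e3          # km/s -> m/s
    return G * m / sigma**2 / PC

def influence_radius_rg(sigma0_kms):
    """r_infl in gravitational radii r_g = GM/c^2, which reduces to (c/sigma0)^2."""
    return (C / (sigma0_kms * 1e3)) ** 2
```

For σ_0 = 100 km/s this returns M ≈ 2 × 10^7 M_⊙, r_infl ≈ 8–9 pc, and r_infl/r_g ≈ 9 × 10^6, consistent with the relations quoted above.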
For those particles that we do follow, we calculate their geodesic trajectories with the Hamiltonian approach described in detail in and used in the radiation transport code Pandurata. A schematic of this procedure is shown in Figure 1. As the particle moves around the black hole and passes through different finite volume elements, the discretized distribution function df(r_i, θ_j, p) is updated with appropriate weights. The great advantage of this Hamiltonian approach is that the integration variable is the coordinate time t in Boyer-Lindquist coordinates (Boyer & Lindquist 1967). Because this is the time measured by an observer at infinity, it determines the rate at which particles are injected into the system in the steady-state limit. The distribution function can then be populated numerically by assigning a weight to each bin in phase space through which the test particle passes, with the weight proportional to the amount of time t spent in that volume. The process is repeated for many Monte Carlo test particles until the 5-dimensional distribution function is completely populated.

Geodesics and Tetrads

Following Schnittman & Krolik (2013), we define local orthonormal observer frames, or tetrads, at each point in the computational volume. Depending on the population in question (i.e., bound vs. unbound), it is convenient to use either the zero-angular-momentum observer (ZAMO; Bardeen et al. 1972) or the "free-falling from infinity observer" (FFIO) tetrad. In all cases we use Boyer-Lindquist coordinates (Boyer & Lindquist 1967), where the metric can be written

ds^2 = -α^2 dt^2 + (Σ/Δ) dr^2 + Σ dθ^2 + ϖ^2 (dφ - ω dt)^2.

This allows for a relatively simple form for the inverse metric:

g^{tt} = -1/α^2, g^{tφ} = -ω/α^2, g^{rr} = Δ/Σ, g^{θθ} = 1/Σ, g^{φφ} = 1/ϖ^2 - ω^2/α^2,

with the following definitions:

Σ ≡ r^2 + a^2 cos^2 θ, Δ ≡ r^2 - 2Mr + a^2, A ≡ (r^2 + a^2)^2 - a^2 Δ sin^2 θ, α^2 ≡ ΣΔ/A, ω ≡ 2Mar/A, ϖ^2 ≡ (A/Σ) sin^2 θ.

Unless explicitly included, we adopt units with G = c = 1, so distances and times are often scaled by the black hole mass M. The ZAMO tetrad can be constructed in the standard way (Bardeen et al. 1972):

e_(t̂) = (1/α)(∂_t + ω ∂_φ), e_(r̂) = (Δ/Σ)^{1/2} ∂_r, e_(θ̂) = Σ^{-1/2} ∂_θ, e_(φ̂) = (1/ϖ) ∂_φ.

Figure 1. Schematic of our method for populating phase space with geodesic trajectories.
The test particles are injected at large radius (r_0 = 10^7 r_g) with thermal velocities of dispersion σ_0 ≪ c. Those particles passing within 1000 r_g of the black hole contribute to the tabulated distribution function in each volume element (r_i, θ_j) through which they pass, with a weight proportional to the amount of coordinate time t spent in that zone.

We designate tetrad basis vectors by hatted (μ̂) indices, while coordinate bases have normal indices. To construct the FFIO tetrad, the time-like basis vector e_(t̂) is given by the 4-velocity u^μ = g^{μν} u_ν corresponding to a geodesic with u_t = -1, u_θ = u_φ = 0; the normalization constraint u^μ u_μ = -1 then fixes the radial component:

u_r = -[-(1 + g^{tt})/g^{rr}]^{1/2}.

Then the spatial basis vectors e_(î) are constructed via a standard Gram-Schmidt method and aligned roughly parallel to the Boyer-Lindquist coordinate bases. Any vector can be represented by its components in different tetrads, whereby the components are related by a linear transformation E^μ̂_ν:

p^μ̂ = E^μ̂_ν p^ν.

These p^μ̂ are the components that we use for the tabulated distribution function. Because of the normalization constraints, we need only store three components of the 4-momentum in each spatial volume element, making the total dimensionality of the distribution function five: two space and three momentum. In Pandurata, the geodesics are integrated with a variable-time-step 5th-order Cash-Karp algorithm. This technique very naturally matches small time steps to regions of high curvature and thus areas of high resolution in the spatial grid. For each time step, a weight proportional to the coordinate time spent on that step is added to the distribution function for that particular volume of phase space. Because the particle typically remains within a single volume element for many time steps, we find that interpolation errors are small. The spatial momentum components γβ^î can be positive or negative and span many orders of magnitude.
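Since the momentum components are signed and span many decades, one practical binning scheme (our own sketch, not necessarily the paper's actual implementation) is a symmetric logarithmic grid, with the distribution function stored sparsely in a hash map keyed by the bin tuple:

```python
import math
from collections import defaultdict

def symlog_bin(x, x_min=1e-6, x_max=1e6, n_per_decade=40):
    """Map a signed value spanning many decades to an integer bin index.
    Values with |x| < x_min share bin 0; the sign is kept in the index."""
    if abs(x) < x_min:
        return 0
    decades = math.log10(min(abs(x), x_max) / x_min)
    k = 1 + int(decades * n_per_decade)
    return k if x > 0 else -k

# Sparse 5-D distribution function: only visited phase-space cells use memory.
df = defaultdict(float)

def deposit(ir, itheta, p_r, p_theta, p_phi, dt):
    """Add a weight proportional to the coordinate time dt spent in this cell."""
    key = (ir, itheta, symlog_bin(p_r), symlog_bin(p_theta), symlog_bin(p_phi))
    df[key] += dt
```

Repeated visits to the same cell simply accumulate weight in a single dictionary entry, which is the essence of the dynamic-allocation strategy described below.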
To adequately resolve the phase space and capture the relativistic effects immediately outside the black hole horizon, we find that on the order of ∼10^3 bins are required in each dimension. If the entire phase-space volume were occupied, this would correspond to an unfeasible quantity of data. Fortunately, this volume is not evenly filled, so such a hypothetical 5-dimensional array is in fact exceedingly sparse. In practice, we are able to use a dynamic memory allocation technique that stores only the non-zero elements of the distribution function. Yet even so, a well-resolved calculation can easily require multiple GB of data for a single distribution function, and adequately sampling this phase space requires on the order of ∼10^9 test particles, with each geodesic sampled over thousands of time steps. Fortunately, this is a trivially parallelizable problem, so it is relatively simple to achieve sufficient resolution in a reasonable amount of time with a small computer cluster.

Unbound Particles

As mentioned above, for the unbound population the outer boundary condition for the phase-space density at r_infl is relatively well understood. The velocity distribution is thermal with characteristic speed σ_0. (While there could be some small anisotropy in the dark matter velocity distribution at r_infl, it is unlikely to be correlated with the black hole spin. Thus the predominantly radial velocities of incoming particles will be independent of polar angle, and therefore for all intents and purposes appear isotropic from the black hole's point of view. Similarly, even if the DM velocity distribution at the influence radius is not strictly Maxwellian, this too will have little effect.) The spatial density of dark matter is measured from galactic rotation curves at kpc distances from the nucleus, and then must be extrapolated in to pc distances with a combination of observations and stellar profile modeling.
For example, in the Milky Way the DM density near the Sun is ∼0.3 GeV/cm^3, and the radial profile can be reasonably well modeled with a simple ρ ∼ R^{-1} profile, giving a density of ∼10^3 GeV/cm^3 at r_infl. Inside of r_infl there is almost certainly an additional bound component to the DM distribution (Gondolo & Silk 1999), so the unbound population described here is best understood as a strict lower bound on the phase-space density. Outside of ∼100 r_g, the unbound population can be treated as a collisionless gas of accreting particles, as in Zeldovich & Novikov (1971). In the Newtonian limit, the density and velocity dispersion can be written

n(r) = n_0 [1 + 2GM/(r σ_0^2)]^{1/2}

and

v(r) = σ_0 [1 + 2GM/(r σ_0^2)]^{1/2}.

In Figure 2 we show the spatial density of unbound particles as measured by a FFIO around a Kerr black hole with spin parameter a/M = 1, as well as the mean particle momentum as measured in that frame. We find very close agreement with the Newtonian results all the way down to r ∼ 10 r_g. The deviation of the momentum from the Newtonian solution is due largely to the special-relativistic terms proportional to the Lorentz boost γ. The proper density is governed by two competing relativistic effects: time dilation and spatial curvature. Close to the black hole, the particle's proper time τ slows down relative to the coordinate time t measured by an observer at infinity, giving a large dt/dτ. This has the effect of increasing the number density because, in a steady state, particles are injected into the system at a constant rate as measured by an observer at infinity. The injection rate measured by an observer close to the black hole is higher by a factor of dt/dτ, leading to her seeing a larger proper density. In fact, the proper density would be even higher if it weren't for another important relativistic effect: the stretching of space around a black hole.
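In units G = M = c = 1 (so that σ_0 is expressed in units of c and r in units of r_g), the Newtonian-limit profiles above reduce to simple functions of radius. A minimal sketch, with normalizations of our choosing:

```python
import math

def unbound_density(r, n0=1.0, sigma0=1e-3):
    """Newtonian-limit unbound density n(r) = n0 sqrt(1 + 2/(r sigma0^2)),
    in units G = M = c = 1 (sigma0 in units of c, r in units of r_g)."""
    return n0 * math.sqrt(1.0 + 2.0 / (r * sigma0**2))

def unbound_speed(r, sigma0=1e-3):
    """Newtonian-limit dispersion; tends to sqrt(2/r) for r << 1/sigma0^2."""
    return sigma0 * math.sqrt(1.0 + 2.0 / (r * sigma0**2))
```

For σ_0 = 10^{-3} c, this gives v(100 r_g) ≈ sqrt(2/100) ≈ 0.14 c and a density enhancement n/n_0 ≈ 141 at that radius, matching the weakly relativistic estimates quoted for r = 100M.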
Specifically, the Boyer-Lindquist radial coordinate element dr corresponds to a greater and greater proper distance as the observer approaches the horizon. This naturally gives a greater proper volume dṼ, shown as a solid curve in Figure 3. Again, we show the Newtonian value dV/dr = 4πr^2 as a dashed curve. Because the particle interaction rates scale like n^2 v dṼ, all these effects combine to increase the importance of reactions near the black hole. In Figure 4 we plot the momentum distributions of unbound dark matter particles, as observed by a FFIO in the equatorial plane, at a relatively large distance from the black hole: r = 100M. Each 1-dimensional distribution is calculated by integrating over the other two momentum dimensions. We also plot the momentum magnitude γ|β| in panel (a). Because the particles all have relatively small velocities at infinity, β_0 ≈ σ_0/c ≪ 1, their velocities in the weakly relativistic region r_g ≪ r ≪ r_0 are given by v ≈ (2GM/r)^{1/2}, corresponding to v ≈ 0.14c for r = 100M. For the three spatial components of the momentum distribution, we see a nearly isotropic velocity distribution with a few subtle but interesting deviations. First, we note that there is a slight deficit of particles with positive p_r̂. This is due to capture by the black hole of particles coming in from infinity on nearly radial trajectories. By definition, these particles also have small values of p_θ̂ and p_φ̂, depleting the distribution function in those dimensions around β = 0. While the distribution in the θ dimension is symmetric, note that the depletion in the φ distribution is offset to slightly negative values of p_φ̂. This is due to the well-known preferential capture by Kerr black holes of retrograde particles, with angular momenta aligned opposite to the black hole spin. In Figure 5, we plot the phase-space distribution for the same boundary conditions as in Figure 4, but now at r = 2M.
The difference is quite dramatic, but all the features are essentially due to the same physical mechanisms. This close to the horizon, there is a very strong depletion of outgoing particles with p_r̂ > 0, as most particles are captured by the black hole. The only particles that can avoid capture at this radius have prograde trajectories in the equatorial plane. Thus, the distribution is now peaked around p_θ̂ = 0 instead of showing a deficit. There is also a strong peak near p_φ̂ = 1 due to the relatively stable, long-lived prograde orbits that circle the black hole multiple times before getting captured or escaping back out to infinity. In fact, the distribution of coordinate momentum is significantly more lopsided towards p_φ > 0, but this is masked in Figure 5d because the plotted distribution is measured by an observer with u^φ > 0 herself. The sharp fall-off of the azimuthal distribution above p_φ̂ ≈ 1 is due to the angular momentum barrier of the black hole: particles with higher values of p_φ̂ simply never reach this small radius. To the best of our knowledge, these distribution functions have never been calculated before for a Kerr black hole. However, the particle number density can be determined analytically for a non-spinning Schwarzschild black hole in the limit σ_0 ≪ c. This allows at least one test of our numerical methods, although admittedly not a very strong one, as most of the interesting features are related to the far more complicated orbits around a spinning black hole. We follow the approach of Baushev (2009), who integrates the distribution function at fixed energy, carefully setting the angular momentum integration bounds based on which orbits are captured from a given radius. The results are shown in Figure 6, with our numerical calculation plotted as a red curve and the analytic result in black, showing perfect agreement.
Note that Baushev's expression is given for a coordinate density rather than a proper density, which also explains the sharper peak at small r.

Bound Particles

As mentioned above, the unbound population can be thought of as a lower limit on the total DM density. There will also likely be a substantial population of particles that are gravitationally bound to the black hole. As described in Gondolo & Silk (1999), the origin of the bound population is the adiabatic growth of the supermassive black hole on a timescale much longer than the typical orbital time. This physical mechanism can be understood as follows: as a marginally unbound DM particle passes within r_infl, a small amount of baryonic matter is accreted into this region, deepening the potential well just enough to capture the particle onto a marginally bound orbit. Once captured, the particle continues to orbit the black hole, conserving its orbital angular momentum as the black hole continues to gain mass. This has the effect of shrinking the radius of the orbit. Over time, more particles are captured and subsequently migrate closer to the black hole, building up a steep density spike (Gondolo & Silk 1999). Inside of the innermost stable circular orbit (ISCO), there is a sharp falloff in the density spike due to plunging trajectories (Sadeghian et al. 2013). Here we do not attempt to solve for the slope of the density spike at large radii but leave it as a free parameter, fixing the density at the influence radius as for the unbound population: n_bound(r) = n_0 (r/r_0)^{-α}. Following Gondolo & Silk (1999), we also allow for the possibility of a density upper bound n_annih due to annihilation losses occurring over very long timescales.
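A minimal sketch of this parameterized bound profile (our own helper, with the slope α and the annihilation ceiling n_annih as free parameters, as in the text):

```python
def bound_density(r, r0, n0=1.0, alpha=2.0, n_annih=None):
    """Power-law bound-population density n_bound(r) = n0 (r/r0)^(-alpha),
    optionally capped at the annihilation plateau n_annih (Gondolo & Silk 1999)."""
    n = n0 * (r / r0) ** (-alpha)
    if n_annih is not None:
        n = min(n, n_annih)
    return n
```

With α = 2 this reproduces the ρ ∼ r^{-2} spike used below, while the optional cap flattens the innermost profile where annihilation losses would otherwise deplete it.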
To populate the phase-space distribution for the bound population, we follow a similar method to that described above for the unbound particles, but instead of launching them from large radius with a limited range of impact parameters, we now launch them in situ with an isotropic thermal velocity distribution, as measured by a local ZAMO. These particles begin much closer to the black hole, so the relativistic Maxwell-Jüttner velocity distribution is used (Jüttner 1911), with the characteristic virial temperature Θ(r) = (1/2)[1 − ε_ZAMO(r)], where ε_ZAMO(r) − 1 is the specific gravitational binding energy of the ZAMO. Because many of the particles launched close to the black hole get captured, we first integrate their trajectories for a few orbital periods to ensure they are in fact on stable orbits. Only then do they contribute to the tabulated distribution function. Additionally, a small fraction of the test particles from the tail end of the velocity distribution will in fact be unbound, and these are similarly discarded. As with the unbound distribution, for each step along its trajectory the test particle contributes to the phase-space distribution a small weight proportional to the amount of time spent on that step. Yet now, instead of using the coordinate time dt, we use the proper time of the ZAMO frame from which the particles are launched, including an additional weight to ensure the appropriate radial form of the density distribution at larger radii. In Figure 7 we plot the radial density distribution and mean relative momentum of the bound particles, as measured in the ZAMO frame, in the equatorial plane around a Kerr black hole with spin a/M = 1. The density profile is constructed so that ρ(r) ∼ r^{-2} at large radii. We clearly see major differences relative to the unbound population shown in Figure 2.
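The in-situ launch step above requires drawing speeds from the Maxwell-Jüttner distribution. One simple way to do this (our own helper, not the paper's code) is inverse-CDF sampling on a grid of Lorentz factors:

```python
import math, random, bisect

def sample_juttner(theta, n, gamma_max=None, ngrid=4000, rng=random):
    """Draw n Lorentz factors from the Maxwell-Juttner distribution
    f(gamma) ~ gamma * sqrt(gamma^2 - 1) * exp(-gamma/theta),
    via inverse-CDF sampling on a grid (accurate to the grid spacing)."""
    if gamma_max is None:
        gamma_max = 1.0 + 30.0 * theta   # tail is negligible beyond ~30 theta
    gammas = [1.0 + (gamma_max - 1.0) * i / ngrid for i in range(ngrid + 1)]
    pdf = [g * math.sqrt(max(g * g - 1.0, 0.0)) * math.exp(-(g - 1.0) / theta)
           for g in gammas]
    cdf = [0.0]
    for i in range(ngrid):               # trapezoidal cumulative sum
        cdf.append(cdf[-1] + 0.5 * (pdf[i] + pdf[i + 1]))
    total = cdf[-1]
    out = []
    for _ in range(n):
        u = rng.random() * total
        i = bisect.bisect_left(cdf, u)
        out.append(gammas[min(i, ngrid)])
    return out
```

In the non-relativistic limit Θ ≪ 1 the mean kinetic energy per particle recovers the Maxwellian value, ⟨γ − 1⟩ ≈ (3/2)Θ, which provides a quick sanity check on the sampler.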
Because of the lack of stable orbits close to the black hole, the bound population declines inside r ≈ 4M, which corresponds roughly to the mean radius of the ISCO for randomly inclined orbits around a maximally spinning black hole. This effect was described in detail for non-spinning black holes in Sadeghian et al. (2013). For equatorial circular orbits, only prograde trajectories are allowed inside of r = 9M. This leads to all particles moving in roughly the same direction closer to the black hole, and explains why the relative momentum γβ_rel does not increase nearly as fast for the bound population as it does for the unbound population, which allows plunging retrograde trajectories, and thus more "head-on" collisions. In Figure 8 we show the 2D density profile in the x−z plane for both bound and unbound populations, for a/M = 0 and a/M = 1. The horizon in Boyer-Lindquist coordinates is plotted as a solid black line. For comparison purposes, the density scale is normalized to the mean value at r = 10M. In reality, the density of the bound particles could be orders of magnitude greater at these radii (Gondolo & Silk 1999). The most obvious difference here is the depletion of bound orbits inside of the ISCO, which lies at r = 6M for non-spinning black holes. For spinning black holes, the radius of the ISCO is a function of the particle's inclination angle, ranging from r = 1M for prograde orbits in the equatorial plane, to r = 5.2M for polar orbits, and r = 9M for retrograde equatorial orbits. Inside of the ISCO there is also the "marginally bound" radius, where particles with unity specific energy can exist on unstable circular orbits. This radius is also a function of inclination angle, and is plotted in Figure 8 as a dotted curve. Inside of this orbit, no bound particles will be found (for improved visibility, we have left this region white, not black, as would be required by strict adherence to the color scale).
One interesting feature of Figure 8 is that the density of the unbound population around spinning black holes doesn't show any obvious θ-dependence. It appears that the enhanced density due to long-lived prograde orbits is almost exactly countered by the lack of retrograde orbits at the same latitude.

Figure 6. Comparison of our numerical results (red) with the analytic expression (black) for the particle density derived by Baushev (2009) for a Schwarzschild black hole. The density here is defined in the coordinate, not proper, frame, leading to a much steeper rise at small r. In Boyer-Lindquist coordinates, the horizon for a non-spinning black hole is at r = 2M.

In Figure 9 we show the phase-space distribution for each of the momentum components, as measured by a ZAMO in the equatorial plane at large radius (r = 100M). Compared to the equivalent plot for the unbound distribution (Fig. 4), we see a number of significant differences. First, the fact that these particles are bound requires E < 1, and the imposed virial energy distribution results in mean velocities that are smaller than those of the unbound population by a factor of ∼√2. Second, because we require stable, long-lived orbits, there is a larger depletion around p_θ̂ = 0 and p_φ̂ = 0, as those trajectories are all captured by the black hole and thus do not contribute at all to the distribution function. Similarly, we see a larger asymmetry due to the preferential capture of retrograde orbits with p_φ̂ < 0. In Figure 10 we plot the same momentum distribution functions, now at r = 2M. Here the contrast with the unbound population (Fig. 5) is even greater. The only stable orbits at this radius are prograde, nearly circular, nearly equatorial orbits. This results in a relatively narrow distribution clustered around u^μ̂ = [√2, 0, 0, 1] in the ZAMO frame. This narrower range of allowed velocities will have a profound impact on the shape of the annihilation spectrum, as we will see in the following section.
ANNIHILATION PRODUCTS

Once we have populated the distribution function, we can calculate the annihilation rate given a simple particle-physics model for the dark matter cross section. Again, it is simplest to work in the local tetrad frame. Including special relativistic corrections (Weaver 1976), the local reaction rate is given by

R(x) = ∫ d^3p_1 ∫ d^3p_2 f(x, p_1) f(x, p_2) σ_χ v_rel γ_rel / (γ_1 γ_2),    (12)

where γ_1 and γ_2 are the Lorentz factors of the two particles as measured in the tetrad frame, v_rel is their relative velocity (with γ_rel the corresponding Lorentz factor), and σ_χ is the annihilation cross section (potentially a function of the relative velocity). R(x) has units of [events per unit proper volume per unit proper time], so we multiply by dτ/dt to get the rate observed by a distant observer. The distribution function f(x, p) is calculated numerically using the methods of Section 2. As discussed there, the numerical representation of f can have upwards of 10^8 elements, so the direct integration of equation (12) is generally not computationally feasible. Instead, we use a Monte Carlo sampling algorithm that picks random momenta for each particle, with an appropriate weight based on the magnitude of f and the size of the discrete phase-space volume. The spatial integration, however, is carried out directly, looping over the coordinates r and θ. This is shown schematically in Figure 11. For each volume element, a large number (typically ∼10^6) of pairs of particles are sampled, and for each pair a center-of-mass tetrad is created. The total energy in the center-of-mass frame is given by

E_com = √2 m_χ (1 − g_μν u_1^μ u_2^ν)^{1/2},

where m_χ is the rest mass of the DM particle and u = p/m_χ is the particle 4-velocity. The 4-velocity of the center-of-mass frame is then given by

u_com^μ = m_χ (u_1^μ + u_2^μ) / E_com.

The center-of-mass tetrad is constructed with e_(t̂) = u_com. The spatial basis vectors are totally arbitrary, as they are only needed to launch photons with an isotropic distribution in the center-of-mass frame.
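In a local orthonormal tetrad the metric is Minkowskian, so the center-of-mass quantities above reduce to flat-space expressions. A small sketch (our illustration, with 4-velocities given as tetrad components):

```python
import math

ETA = (-1.0, 1.0, 1.0, 1.0)   # Minkowski metric in a local orthonormal tetrad

def dot(u, v):
    """Minkowski inner product of two 4-vectors (signature -+++)."""
    return sum(e * a * b for e, a, b in zip(ETA, u, v))

def com_energy_and_velocity(u1, u2, m_chi=1.0):
    """Center-of-mass energy and 4-velocity for two equal-mass particles:
    E_com = sqrt(2) m sqrt(1 - u1.u2),  u_com = m (u1 + u2) / E_com."""
    e_com = math.sqrt(2.0) * m_chi * math.sqrt(1.0 - dot(u1, u2))
    u_com = tuple(m_chi * (a + b) / e_com for a, b in zip(u1, u2))
    return e_com, u_com
```

Two particles at rest, u = (1, 0, 0, 0), give E_com = 2 m_χ, while a head-on pair with Lorentz factor γ gives E_com = 2 γ m_χ; in every case u_com comes out correctly normalized to −1.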
Two photons, labeled k_3 and k_4 in Figure 11, are launched in opposite directions, each with energy E_com/2 in the center-of-mass frame. We then transform back to a coordinate basis for the geodesic integration of the photon trajectories to a distant observer. As in , for the photons that reach infinity, Pandurata can generate an image and spectrum of the emission region. An example is shown in Figure 12 for the annihilation signal from the unbound population around an extremal black hole, limiting the emission signal to the region r < 100M. While the flux clearly increases towards the center of the image, because the density and velocity profiles are relatively shallow (see Fig. 2 above), the net flux is actually dominated by emission from large radii. These annihilation events are not very relativistic, and so produce a strong, narrow peak in the observed spectrum, centered at the DM rest-mass energy. The annihilation events occurring closer to the horizon sample a much more energetic population of particles. Restricting ourselves to only those events where the center-of-mass energy is greater than 1.5× the combined rest mass of the annihilating particles, we can zoom in to the center of Figure 12. The result is shown in Figure 13, now focusing on the inner region within r < 6M. At these small radii, the effects of black hole spin become much more evident. One such effect is the characteristic shape of the Kerr shadow, defined by the impact parameter of critical photon orbits (Chandrasekhar 1983). The observed flux is clearly asymmetric, as the prograde photons originating from the left side of the image have a much greater chance of escaping the ergosphere and reaching a distant observer. There is another interesting feature of Figure 13 that we believe is novel to this work: namely, the purple lobes emerging from the "mid-latitude" regions near the center of the image. These are regions of greater photon flux, albeit very highly redshifted.
Recall, this image is created by considering only annihilations with moderately high center-of-mass energy. Near the equatorial plane, extreme frame dragging ensures that the velocity dispersion is highly anisotropic, with most of the DM particles and their annihilation photons getting swept along on prograde, equatorial orbits. Above and below the plane, the DM distribution is more isotropic, leading to a more isotropic distribution of outgoing photons. Yet if one goes too far off the midplane, it becomes more difficult for the photons to escape. At the mid-latitudes, there is just enough frame dragging for photons to escape, yet not so much that they get deflected away from the observer. The spectrum corresponding to this image is also plotted in Figure 13. Not surprisingly, the red and blue wings of the annihilation line shown in Figure 12 come from the most relativistic events. As pointed out by Piran & Shaham (1977), even reactions with very high center-of-mass energies will typically lead to photons with low energies as measured at infinity, thus explaining the red tail of the annihilation spectrum. The high-energy tail above E = 2m_χ is due exclusively to Penrose-process reactions, where one of the annihilation photons has negative energy and gets captured by the black hole (Penrose 1969; Piran et al. 1975). Earlier analytic work predicted that the maximum energy attainable from the collisional Penrose process was 2.6 m_χ for particles falling from rest at infinity (Harada et al. 2012; Bejger et al. 2012). Because our calculation is fully numerical, it was able to reveal previously unknown trajectories leading to very high efficiencies with E > 10 m_χ, as seen in Figure 13.
Closer inspection revealed that these high-energy photons are created when an infalling retrograde particle collides with an outgoing prograde particle that has just enough angular momentum to reflect off the centrifugal barrier, providing the necessary energy and momentum for the annihilation photon to escape the black hole (Schnittman 2014; Berti et al. 2014). Due to the strong forward-beaming effects within the ergosphere, the escaping photon flux is highly anisotropic, with the peak flux and highest-energy photons emitted in the equatorial plane. Figure 14 shows the predicted annihilation spectra for observers at different inclination angles, for the same DM profile as shown in Figure 13. Again, we restrict ourselves to the highest-energy reactions with E_com > 3m_χ. It is also instructive to plot the annihilation flux as a function of the emission radius. In Figure 15 we show both the observed flux (solid curves) and the flux that gets captured by the black hole (dashed curves) as a function of radius, integrated over all observing angles. The emission is further subdivided by the center-of-mass energy of the annihilating particles. Of course, the photons emitted closer to the black hole have a greater chance of getting captured. For the unbound population, the total escape fraction ranges from f_esc = 93% at r = 10M down to f_esc(2M) = 14% and f_esc(1.1M) = 0.25%. At small radius, these numbers are somewhat smaller than those calculated by Bañados et al. (2011), who only considered critical trajectories in the equatorial plane, where the escape probability is greatest. Yet at large radius, our distribution includes particles with typically greater impact parameters, and thus a greater chance of escape. Another interesting feature of the curves in Figure 15 is the very sharp cutoff above a critical radius for each energy bin. This is a natural consequence of conservation of energy.
Because all unbound particles come in from rest at infinity with E = m_χ, the available kinetic energy in the center-of-mass frame is simply the gravitational potential energy ∼ M m_χ/r at that radius. For example, to reach a center-of-mass energy 10% above the rest-mass energy, the particles must fall within r ≈ 10M. Also note that inside r ≈ 4M most of the photons are captured, while outside of this radius most escape. This is in close agreement with what we found for plunging orbits inside the ISCO of a Schwarzschild accretion flow in . On the other hand, for the bound population of DM particles (Fig. 15b), which by definition are not plunging, we find that the photon escape fraction is more than 90% at all radii, greatly increasing the relativistic effects observable from infinity. This is consistent with the classic calculation by Thorne (1974), which found that for thin accretion disks limited to circular, planar orbits outside the ISCO, the fraction of emission ultimately captured by the black hole was never more than a few percent, even for maximally spinning black holes where the majority of the flux emerges from extremely close to the horizon. As we showed in Schnittman (2014), the peak energy attainable from particles falling in from infinity is a strong function of the black hole spin. Now, considering the full phase-space distribution function of the particles, we can see how the shape of the spectrum depends on spin. In Figure 16 we plot the flux seen by an equatorial observer, again limited to the high-energy annihilations with E_com > 3m_χ. For even marginally sub-extremal spins, the peak photon energy falls precipitously. As the spin decreases further, the number of collisions with E_com > 3m_χ also decreases, thereby reducing the total flux observed. Lastly, decreasing spin also increases the critical impact parameter for capturing prograde photons, making it harder for the annihilation flux to escape to infinity.
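That last point can be quantified with the standard equatorial circular-photon-orbit formulas (Bardeen, Press & Teukolsky 1972); a quick sketch in units G = c = M = 1 (our illustration, not code from the paper):

```python
import math

def photon_orbit_radius(a, prograde=True):
    """Equatorial circular photon-orbit radius (Bardeen et al. 1972), M = 1:
    r_ph = 2 {1 + cos[(2/3) arccos(-a)]} for prograde orbits (+a retrograde)."""
    sign = -1.0 if prograde else 1.0
    return 2.0 * (1.0 + math.cos(2.0 / 3.0 * math.acos(sign * a)))

def critical_impact_parameter(a, prograde=True):
    """Impact parameter of the critical photon orbit: b_ph = 3 sqrt(r_ph) -/+ a."""
    r_ph = photon_orbit_radius(a, prograde)
    return 3.0 * math.sqrt(r_ph) - (a if prograde else -a)
```

For a = 0 this gives r_ph = 3 and b_ph = 3√3 ≈ 5.2, while for a = 1 the prograde values shrink to r_ph = 1 and b_ph = 2: as spin decreases, the critical impact parameter for prograde photons grows, consistent with the trend described above.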
Recall from Section 2.3 above that the density of the unbound distribution scales like n ∼ r^{-1/2}. From the rate calculation in equation (12) we see that the annihilation rate [events/s/cm³] scales like R(r) ∼ r^{-3/2}. Including the volume factor dV = 4πr² dr, we can write the differential annihilation rate as dR/dr ∼ r^{1/2}. In other words, the unbound contribution to the annihilation signal diverges at large radius. In practice, the outer boundary can be set at the black hole's influence radius, typically 10^6 to 10^7 r_g. This means that the observed signal will essentially be a delta function in energy, with only small perturbations from the relativistic contributions at small r, and thus measuring spin from annihilation lines would be a very challenging prospect indeed. Two possible effects provide a way around this problem, each with its own additional uncertainties. One possibility is that the annihilation cross section is a strong function of energy, increasing sharply above some threshold energy. This is admittedly rather speculative, and in conflict with leading DM models of self-annihilation (Bertone et al. 2005). On the other hand, we do not even know what the dark matter particle is, or if there are many DM species making up a rich "dark sector," with all the beauty and complexity of the standard model particles (Zurek 2014). One could easily imagine a DM analog of pion production via the collision of high-energy protons, in which case the only reactions could occur immediately surrounding a black hole, the ultimate gravitational particle accelerator. In this case, by construction, the annihilation rate is dominated by the region immediately surrounding the black hole. Another possibility is that the DM density is dominated by a population of bound particles.
As described above in Section 2.4, this population arises through the adiabatic growth of the black hole through accretion, capturing marginally unbound particles while also making the bound particles ever more tightly bound (Gondolo & Silk 1999; Sadeghian et al. 2013). This process will generally lead to a much steeper density profile, such as the n ∼ r^{-2} distribution we use here. In this case, the differential reaction rate scales like dR/dr ∼ r^{-5/2}, so the annihilation spectrum is now dominated by the particles at the smallest radii. In both cases (energy-dependent cross sections and a large bound population), the relativistic effects described in Section 2.3 (expanded proper volume and time dilation) push the most important interaction region to even smaller radii, and thus the annihilation spectra are even more sensitive to the black hole spin. In Figure 17 we show the annihilation spectra for both the bound and unbound populations for a variety of spins, now including emission out to r = 1000M. The relative amplitudes are somewhat arbitrary, because we do not know what the relative densities of the two populations might be (see discussion below in Sec. 4), but it is almost certain that the bound population should dominate, possibly even by many orders of magnitude (Gondolo & Silk 1999). At the same time, the unbound signal will be even narrower and have a greater amplitude peak than shown here, as it is dominated by low-velocity particles at large radius.

Figure 11. For a given phase-space distribution f(x, p), the annihilation rate is calculated in each discrete volume element around the black hole. Every annihilation event samples the distribution function to get the momenta for the two dark matter particles p_1 and p_2 and produces two photons k_3 and k_4 with an isotropic distribution in the center-of-mass frame. The product photons then propagate along geodesics until they reach a distant observer or get captured by the black hole.
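The competing radial scalings for the two populations can be checked with a one-line integral of the power-law differential rates. A minimal sketch, assuming an inner boundary near the horizon at r = 2M and an outer boundary of 10^6 M (both illustrative choices, not values fixed by the text):

```python
def inner_fraction(p, r_in, r_split, r_out):
    """Fraction of the total rate emitted between r_in and r_split,
    for a power-law differential rate dR/dr ~ r**p (p != -1)."""
    F = lambda a, b: (b ** (p + 1) - a ** (p + 1)) / (p + 1)
    return F(r_in, r_split) / F(r_in, r_out)

# Unbound population: n ~ r**-1/2  ->  dR/dr ~ r**1/2 (diverges outward).
# Bound population:   n ~ r**-2    ->  dR/dr ~ r**-5/2 (converges inward).
print(inner_fraction(0.5, 2.0, 100.0, 1e6))   # ~1e-6: outer radii dominate
print(inner_fraction(-2.5, 2.0, 100.0, 1e6))  # ~0.997: inner radii dominate
```

This makes the qualitative point quantitative: for the unbound distribution, essentially none of the signal comes from the relativistic region near the hole, while for the bound n ∼ r^{-2} spike, essentially all of it does.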
So while their overall amplitudes are uncertain, the detailed shapes of the spectra away from the central peak are relatively robust, depending only on the properties of geodesic orbits near the black hole. In this broad part of the spectrum, the bound and unbound signals show very different behavior. For non-spinning black holes, no particle can remain on a bound orbit inside of r = 4M (see Fig. 8), so there are no annihilation photons coming from just outside the horizon, and these are the photons that produce the most strongly redshifted tail of the spectrum. As the spin increases and the ISCO moves to smaller and smaller radii, the line becomes steadily broader. On the other hand, the unbound particles are found all the way down to the horizon, where they can annihilate to highly redshifted photons regardless of the black hole spin. Comparing Figures 5 and 10, we see that the unbound particles probe a much greater volume of momentum space at small radii. This in turn leads to a greater chance of producing the extreme Penrose particles that characterize the blue tail of the spectrum. Because all the bound particles are essentially on the same prograde, equatorial orbits, it is much more difficult to achieve annihilations with large center-of-mass energies, so the high-energy cutoff in the spectrum is much closer to the classical result for a single particle decaying into two photons in the ergosphere (Wald 1974).

Figure 12. Simulated image and spectrum of the annihilation signal from unbound dark matter out to a radius r = 100M around a Kerr black hole. The observer is located in the equatorial plane. While the brightness peaks towards the black hole, the total flux is dominated by annihilations at large radii. The central shadow is clearly seen, blocking emission coming from the far side of the black hole. The photon energy E is scaled to the dark matter rest mass m_χ.
In short, for bound particles the red tail of the spectrum is a better probe of black hole spin, while for the unbound population, the blue tail is the more sensitive feature. But in both cases, higher spin leads to a broader annihilation line.

OBSERVABILITY

In addition to the dependence on the dark matter density profile, the amplitude of the annihilation spectrum will also depend on the unknown dark matter mass and annihilation cross section. At this point, it is only possible to use existing observations to set upper limits on these unknown parameters. One major obstacle that has plagued nearly all observational efforts to detect dark matter annihilation is the existence of more conventional astrophysical objects such as active galactic nuclei (AGN), pulsars, and supernova remnants, all of which are powerful sources of high-energy gamma rays. One solution to this problem is to focus on nearby dwarf galaxies. Yet for our purposes, it turns out that the strongest upper limits actually come from the most massive galaxies with the most massive central black holes.

Figure 13. Simulated image of the annihilation signal around an extremal Kerr black hole, now considering only annihilations with E_com > 3m_χ. The observer is located in the equatorial plane with the spin axis pointing up. While the image appears off-centered, it is actually aligned with the coordinate origin. The photon energy E is scaled to the dark matter rest mass m_χ.

Figure 15. Flux reaching infinity (solid curves) and getting captured by the black hole (dashed curves), as a function of the center-of-mass energy and radius of annihilation, for both bound and unbound populations. The black hole spin is maximal. Note that the scale on the y-axis is arbitrary, and depends strongly on the annihilation cross sections and peak density. The radial flux profile, on the other hand, is a robust result for these populations.
Massive elliptical galaxies have the added advantage of being relatively quiescent, both in nuclear activity and star formation [e.g., Schawinski et al. (2007)]. As mentioned above, the annihilation signal from the unbound population will be dominated by flux at large radius. It is difficult enough to spatially resolve even nearby black holes' influence radii with HST, much less with gamma-ray telescopes, so any potential annihilation signal will tell us little about the black hole itself. Prospects for detection of an unambiguous black hole signature improve if we consider annihilation models that include an energy dependence to the dark matter cross section. For example, p-wave annihilation mechanisms will have cross sections proportional to the relative velocity between the two annihilating particles [see Chen & Zhou (2013); Ferrer & Hunter (2013) and references therein]. Unfortunately, from equation (11) we see that this would only lead to an additional factor of r^{-1/2} in the integrand of equation (12), which would still be dominated by the contributions from large r. This effect is shown in Figure 18, which plots the predicted spectra for two annihilation models: σ_χ(v) = const (black curves) and σ_χ(v) ∝ v (red curves). The black hole spins considered are a/M = 0 (dashed curves) and a/M = 1 (solid curves), and in all cases only the unbound population is included. Integrating out to r = 10^4 M, we see only a slight difference in the shape of the spectrum, with the σ_χ(v) ∝ v model leading to a slightly broader peak (all curves are normalized to give a peak amplitude of unity). Another possible annihilation model is based on a resonant reaction at some energy above the DM rest mass, as suggested in Baushev (2009). If the cross section increases sharply around a given center-of-mass energy, this would have the effect of focusing in on a relatively narrow volume of physical space around the black hole, as in Figure 15.
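The claim that a σ_χ(v) ∝ v cross section softens, but does not cure, the large-radius dominance can be checked numerically: for unbound particles the typical relative velocity scales like v ∼ √(M/r), so the extra factor of v turns dR/dr ∼ r^{1/2} into dR/dr ∼ r^0, whose integral still grows linearly with the outer boundary. A sketch, using the text's Figure 18 integration limit of r = 10^4 M and an illustrative inner boundary at r = 2M:

```python
def fraction_inside_100M(p_wave, r_in=2.0, r_out=1.0e4, n=100_000):
    """Fraction of the total annihilation rate produced at r <= 100M,
    by midpoint-rule integration of the scaling-law integrand."""
    dr = (r_out - r_in) / n
    inner = total = 0.0
    for i in range(n):
        r = r_in + (i + 0.5) * dr
        v = (1.0 / r) ** 0.5                      # v ~ sqrt(M/r), with M = 1
        drdr = r ** 0.5 * (v if p_wave else 1.0)  # dR/dr, up to constants
        total += drdr * dr
        if r <= 100.0:
            inner += drdr * dr
    return inner / total

print(fraction_inside_100M(False))  # s-wave (sigma = const): ~1e-3
print(fraction_inside_100M(True))   # p-wave (sigma ~ v): ~1e-2, still small
```

The p-wave weighting boosts the inner contribution by roughly a factor of ten, yet well over 99% of the rate still comes from r > 100M, consistent with the nearly unchanged spectral shape in Figure 18.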
Alternatively, the cross section could abruptly increase above a certain threshold energy, if new particles in the dark sector become energetically allowed, analogous to pion production via proton scattering. In either the resonant or threshold models for the annihilation cross section, one might imagine a pair of heavier, intermediate dark particles getting created and then annihilating to two photons as in the direct annihilation model. If, for example, the mass of these intermediate particles is 1.5 m_χ, then the observed spectrum would look like those plotted in Figures 13 and 16. With a significant increase in the cross section above such an energy threshold, these relativistically broadened spectra could in fact dominate over the narrow line component produced by the rest of the galaxy. A less exotic option would be a simple density enhancement due to the bound population. If this is sufficiently large, it would easily dominate over the rest of the galaxy and also produce a characteristically broadened line sensitive to both the black hole spin magnitude and its orientation relative to the observer. Somewhat ironically, one of the things that could ultimately limit the strength of the annihilation signal from bound dark matter is annihilation itself. If the adiabatic black hole growth occurred at high redshift, then in the subsequent ~10^10 years, the bound population will get depleted via self-annihilation at an accelerated pace due to its high density (Gondolo & Silk 1999; Gonzalez-Morales et al. 2014). On the other hand, if the black hole grows through mergers, or experiences even a single merger since the last extended accretion episode, it is quite likely that the bound dark matter population could be completely disrupted.
The details of such an event are beyond the scope of this paper, but could be modeled by following test particles bound to each black hole through the merger, via post-Newtonian calculations (Schnittman 2010) or numerical relativity (van Meter et al. 2010). The observational challenge is readily apparent: the black holes with the largest bound populations will tend to be in gas-rich galaxies with a lot of accretion and high-energy nuclear activity that could overwhelm the DM annihilation signal. The more massive black holes, residing in gas-poor quiescent galaxies, are also more likely to have lost their cloud of bound dark matter through a history of mergers. Even in the event that a gas-rich spiral galaxy hosts a quiescent nucleus, the black holes in those galaxies tend to have lower masses (Kormendy & Ho 2013). While the relation between black hole mass and dark matter density is quite complicated for the bound population, it is relatively straightforward to calculate for the unbound population, which we can take as a lower bound on the DM density. Recall that the influence radius r_infl is the distance within which the gravitational potential is dominated by the black hole, as opposed to the nuclear star cluster or dark matter halo. From equation (2) we see that the influence volume scales like r_infl^3 ∼ M^{3/2}, while the total mass enclosed is, by definition, of the order of M. If the dark matter and baryonic matter have similar profiles (by no means a certainty!), then more massive black holes should have lower surrounding DM density, with n_infl ∼ M^{-1/2}. Because the unbound DM density falls off more rapidly outside the central core, the annihilation flux F_unbound will be dominated by the contribution from around r_infl, so we can estimate F_unbound ∼ n_infl^2 σ_χ v_infl r_infl^3 D^{-2} ∼ M^{3/4} D^{-2}, with D the distance to the black hole and the mean velocity at the influence radius v_infl = σ_0.
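These mass scalings can be verified mechanically by tracking powers of M. The sketch below assumes an M-σ relation σ_0 ∝ M^{1/4} (our assumption; it is the slope needed to reproduce the quoted r_infl^3 ∼ M^{3/2} scaling, given r_infl = GM/σ_0^2), and the structural flux estimate F ∼ n^2 σ_χ v r^3 / D^2 is likewise our reconstruction rather than an equation stated explicitly in the text:

```python
from fractions import Fraction as Fr

# Powers of M carried by each quantity (sigma_0 ~ M**(1/4) assumed).
sigma0 = Fr(1, 4)
r_infl = 1 - 2 * sigma0   # r_infl = G*M / sigma_0**2  ->  M**(1/2)
vol_infl = 3 * r_infl     # influence volume r_infl**3  ->  M**(3/2)
n_infl = 1 - vol_infl     # enclosed mass ~ M  ->  n_infl ~ M**(-1/2)

# Unbound flux F ~ n_infl**2 * v_infl * r_infl**3 at fixed distance D:
flux_unbound = 2 * n_infl + sigma0 + vol_infl

print(vol_infl, n_infl, flux_unbound)  # 3/2 -1/2 3/4
```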
If we consider a threshold-energy annihilation model where all the flux comes from inside a critical radius r_crit ∼ few × r_g, then the density scales like n_crit ∼ n_infl (r_infl/r_crit)^{1/2} ∼ M^{-3/4}, while the relative velocity scales like v_crit ∼ σ_0 (r_infl/r_crit)^{1/2} ∼ M^0. The net flux then scales like F_crit ∼ n_crit^2 σ_χ v_crit r_crit^3 D^{-2} ∼ M^{3/2} D^{-2}. In both cases, it appears that the brightest sources will be the closest, as opposed to the most massive. Now consider the case where the annihilation signal is dominated by the bound contribution, the bound density is in turn limited by a self-annihilation ceiling as in Gondolo & Silk (1999), and there is a threshold energy above which the cross section greatly increases. In this case, the flux is simply proportional to the total volume within the critical radius, so F_bound ∼ M^3 D^{-2}. With this scaling, the greatest flux will actually come from more distant, more massive black holes. For example, NGC 1277, with a mass of 1.7 × 10^10 M_⊙ and at a distance of 20 Mpc (van den Bosch et al. 2012), could give an observed flux over a thousand times greater than our own Sgr A*! Recent works by Fields et al. (2014) and Gonzalez-Morales et al. (2014) have argued that current Fermi limits on gamma-ray flux from Sgr A* and nearby dwarf galaxies with massive black holes already place the strongest limits on annihilation from DM density spikes. Based on the arguments above, we believe that even stronger limits should come from more distant, massive galaxies. The other important advance presented in the present work is that, for either the energy-dependent cross sections or the steep density spikes, the annihilation signal will be dominated by the region closest to the black hole, and thus a fully numerical, relativistic rate calculation is absolutely essential. Lastly, we should mention that gamma rays, while the primary observable feature explored in this work, are not the only promising annihilation product.
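The NGC 1277 comparison follows from the F_bound ∼ M^3 D^{-2} scaling. A quick check (the NGC 1277 mass and distance are from the text; the Sgr A* values of ~4 × 10^6 M_⊙ at ~8 kpc are standard numbers we assume here, and overall normalizations cancel in the ratio):

```python
# Flux ratio under F_bound ~ M**3 / D**2 (normalization cancels).
M_ngc, D_ngc = 1.7e10, 20.0e3  # NGC 1277: M_sun, kpc (from the text)
M_sgr, D_sgr = 4.0e6, 8.0      # Sgr A*: M_sun, kpc (assumed standard values)

ratio = (M_ngc / M_sgr) ** 3 * (D_sgr / D_ngc) ** 2
print(f"{ratio:.1e}")  # ~1e4: "over a thousand times greater" than Sgr A*
```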
High-energy neutrinos could also be produced in some annihilation channels, particularly those with energy-dependent cross sections like p-wave annihilation (Bertone et al. 2005). While neutrinos obviously present many new detection challenges, the successful commissioning of new astronomical observatories like IceCube makes this approach an exciting prospect (Aartsen et al. 2013). Furthermore, the non-DM backgrounds may contribute significantly less confusion in the neutrino sky.

DISCUSSION

As apparent in the previous section, there are still far too many unknown model parameters to allow for quantitative predictions of the annihilation flux from dark matter around black holes. Sadeghian et al. (2013) put it best: "There are uncertainties in all aspects of these models. However one thing is certain: if the central black hole Sgr A* is a rotating Kerr black hole and if general relativity is correct, its external geometry is precisely known. It therefore makes sense to make use of this certainty as much as possible." We have attempted to follow their advice to the best of our ability. Thus, in order of decreasing confidence, the results in this paper can be summarized by the following:

• For a given DM density n_infl and velocity dispersion σ_0 at the black hole's influence radius, the fully relativistic, 5-dimensional phase-space distribution has been calculated exactly for any black hole spin parameter, covering the region from r_infl all the way down to the horizon.

• Given this distribution function and a model for dark matter annihilation, the observed gamma-ray spectrum can be calculated by following photons from their creation until they are either captured by the black hole or reach the observer.
• Two important relativistic effects serve to increase the annihilation rate as compared to a purely Newtonian treatment: time dilation near the black hole effectively raises the density of the unbound population in a steady-state distribution being fed from infinity; and transforming from coordinate to proper distances greatly increases the interaction volume in the region immediately around the black hole (see Fig. 3).

• Our numerical approach has unveiled previously overlooked orbits that can produce annihilation photons with extreme energies, far exceeding previous estimates for the maximum efficiency of the collisional Penrose process (Schnittman 2014). The peak energy attainable for escaping photons is a strong function of the black hole spin.

• The population of bound dark matter has also been calculated numerically, although this depends on two additional physical assumptions: a local isothermal velocity distribution with a virial-like temperature; and an overall radial power law for the density, as found in Gondolo & Silk (1999) and Sadeghian et al. (2013). Including only the long-lived stable orbits, we found that the density peaks in the equatorial plane somewhat outside of the ISCO, forming a thick, co-rotating torus around the black hole spin axis. Because the bound population is not plunging towards the horizon, the emerging flux has a much greater chance of escaping the black hole.

• The annihilation spectra from both the bound and unbound populations are sensitive to the spin parameter, but in opposite ways: the unbound spectrum varies mostly in the high-energy cutoff, with higher spins allowing higher-energy annihilation products; the bound population moves closer and closer to the horizon with increasing spin, giving a stronger redshifted tail to the annihilation spectrum. Both bound and unbound spectra become more sensitive to observer inclination with increasing spin, as the spherical symmetry of the system is broken.
• For dark matter particle physics models with an energy-dependent cross section (particularly one that increases with center-of-mass energy), the annihilation spectrum will be a more sensitive probe of the black hole properties. For DM models incorporating a rich population of dark-sector species, black holes may be the most promising way to accelerate these particles and observe their interactions.

• The shape of the annihilation spectra is relatively robust, but the normalization is highly dependent on uncertain parameters such as the dark matter density profile and cross section. If the unbound density profile follows the baryonic matter, with the shallow slopes seen in core galaxies, the observed flux should be a relatively weak function of black hole mass. If, on the other hand, the annihilation signal is produced by the most relativistic population within r_crit ∼ few × r_g, then the signal could scale like M^3 and thus be dominated by the most massive black holes in the local Universe.

While this paper has treated the bound and unbound particles separately, future work will also consider the self-interaction between these two populations (Shapiro & Paschalidis 2014; Fields et al. 2014), which may lead to a single, self-consistent steady-state distribution with a density slope between −1/2 and −2. Future work will also focus on developing a robust framework in which we can use existing and future gamma-ray observations to constrain various parameters of the particle physics (e.g., m_χ, σ_χ(E), and the annihilation mechanism, i.e., line vs. continuum) and astrophysical models (n_infl, the bound distribution normalization and slope, and the black hole mass, spin, and inclination). While initial work will focus on setting upper limits on reaction rates by looking at quiescent galaxies, our ultimate ambition is nothing short of an unambiguous detection of dark matter annihilation around supermassive black holes.
FH535 Suppresses Osteosarcoma Growth In Vitro and Inhibits Wnt Signaling through Tankyrases

Osteosarcoma (OS) is an aggressive primary bone tumor which exhibits aberrantly activated Wnt signaling. The canonical Wnt signaling cascade has been shown to drive cancer progression and metastasis through the activation of β-catenin. Hence, small molecule inhibitors of Wnt targets are being explored as primary or adjuvant chemotherapy. In this study, we have investigated the ability of FH535, an antagonist of Wnt signaling, to inhibit the growth of OS cells. We found that FH535 was cytotoxic in all OS cell lines which were tested (143b, U2OS, SaOS-2, HOS, K7M2) but well tolerated by normal human osteoblast cells. Additionally, we have developed an in vitro model of doxorubicin-resistant OS and found that these cells were highly responsive to FH535 treatment. Our analysis provided evidence that FH535 strongly inhibited markers of canonical Wnt signaling. In addition, our findings demonstrate a reduction in PAR modification of Axin2, indicating inhibition of the tankyrase 1/2 enzymes. Moreover, we observed inhibition of auto-modification of PARP1 in the presence of FH535, indicating inhibition of PARP1 enzymatic activity. These data provide evidence that FH535 acts through the tankyrase 1/2 enzymes to suppress Wnt signaling and could be explored as a potent chemotherapeutic agent for the control of OS.

INTRODUCTION

Osteosarcoma (OS) is the most common primary bone tumor, with approximately 1000 new cases in the United States every year (Fletcher et al., 2002). Ninety percent of OS patients are children and young adults between the ages of 10 and 30 (Ritter and Bielack, 2010). Lack of response to the standard chemotherapy regimen is the major cause of disease progression in OS (Bacci et al., 1993; Fletcher et al., 2002).
Canonical Wnt signaling has been frequently tied to chemotherapy resistance and poor prognosis in OS; moreover, the current literature supports the concept that Wnt signaling is involved in tumor metastasis and proliferation in OS (Flores et al., 2012; Chen et al., 2015). Additionally, overexpression of Wnt-promoting factors, and under-expression of endogenous Wnt inhibitors, are correlated with disease intensity and poor prognosis (Mandal et al., 2007; Lu et al., 2012). Still, the chemotherapy regimen in OS has remained constant over the past several decades, and newly developed Wnt-targeting chemotherapeutics have not yet reached approval for clinical use in cancers. Activation of β-catenin by canonical Wnt signaling affects cell cycle progression, cellular differentiation, and susceptibility to chemotherapeutic agents (Pinto and Clevers, 2005; Basu et al., 2016; Nussinov et al., 2016). The tankyrase enzymes (TNKS1/2, also referred to as PARP5A/B) have emerged as attractive targets for regulating Wnt, due to the discovery of their role in Axin2 post-translational modification (Huang et al., 2009). The TNKS1/2 proteins are members of the poly(adenosine diphosphate-ribose) polymerase (PARP) family of enzymes, and they serve to modify Axin2 by poly(ADP-ribosyl)ation (PARylation), targeting Axin2 for destruction (Smith et al., 1998; Huang et al., 2009). Axin2 scaffolds a group of Wnt-regulating proteins known as the β-catenin destruction complex, which phosphorylates β-catenin at multiple sites. This phosphorylation targets β-catenin for ubiquitination and proteasomal destruction (Stamos and Weis, 2013). Decreased availability of Axin2 protein reduces elimination of β-catenin, allowing β-catenin to translocate to the nucleus, where it stimulates the transcription of various target genes.
FH535 is a small molecule inhibitor of canonical Wnt signaling through β-catenin, and a number of reports have tested its utility for blocking Wnt signals (Handeli and Simon, 2008; Su et al., 2015; Wu et al., 2015). While FH535 has been shown repeatedly to inhibit Wnt signaling, the specific mechanism by which FH535 acts upon Wnt remains unclear. We found that FH535 is indeed a potent inhibitor of Wnt signaling in OS, and is cytotoxic to the OS cell lines which were tested. Further experiments revealed that treatment with FH535 decreased PARylation of Axin2, as well as PARP1 auto-PARylation, indicating that FH535 acts on Wnt signaling through inhibition of the TNKS1/2 enzymes. These findings provide a conclusive mechanism for FH535 inhibition of canonical Wnt signaling, and suggest that non-specific blockade of PARP1 may confound Wnt-specific claims based on the effects of FH535. Moreover, this report provides preliminary data suggesting the utility of tankyrase inhibition in OS, and reveals critical information regarding a widely used inhibitor of Wnt signaling.

Mammalian Cell Culture

Human OS cell lines 143b, 143b-DxR, U2OS, SaOS-2, and HOS, as well as the mouse OS cell line K7M2, were cultured in 75 cm² flasks in penicillin/streptomycin-free Dulbecco's Advanced Modified Eagle Medium-F12 (DMEM-F12). The 143b-DxR cell line was derived from the 143b-wt cell line by repeated cycles of doxorubicin challenge, selection of resistant colonies, and expansion. Primary human osteoblast cells (HOBs) were obtained from cancellous bone surgical waste and established as cultured explants in accordance with procedures approved by the Mayo Clinic Institutional Review Board (IRB), as previously described by Robey and Termine (1985) and Wimbauer et al. (2012). In accordance with HIPAA and Mayo Clinic IRB authorization, a waiver was obtained, and hence informed consent was not obtained prior to use of HOBs from surgical waste.
Cells were stored in liquid-phase nitrogen prior to use. Cell cultures were passaged prior to confluency and for a total of fewer than 10 passages before final use.

Cytotoxicity Assay

Cells were plated at a density of 4 × 10^4 cells per well in 24-well polystyrene dishes 16 h prior to treatment. Drug was diluted in serum-free/penicillin-free/streptomycin-free DMEM-F12 and incubated for the desired time at 37 °C. After the treatment time, media was removed, wells were rinsed with Dulbecco's phosphate buffered saline (DPBS), and cell viability was measured by the CellTiter 96® MTS reagent assay (Promega).

Colony Forming Assay

Cells were cultured to 60-80% confluency as described above, trypsinized, and counted using a hemocytometer. Cells were diluted in DMEM-F12, plated at desired concentrations in 6-well polystyrene dishes, and allowed to settle and attach to the plate for 4 h at 37 °C. After settling, treatments were added to the dishes. Cells were incubated at 37 °C for 7-12 days until countable colonies had formed. Colonies were stained with crystal violet and counted. Groups of >50 cells were defined as a colony.

Protein Collection

Cells were plated at 5 × 10^5 cells per dish in 100 mm dishes 16 h prior to treatment. Drug was diluted in serum-free/penicillin-free/streptomycin-free DMEM-F12 and incubated for the desired time at 37 °C. After treatment, dishes were washed with DPBS, and cells were scraped from the dishes into a protease/phosphatase inhibitor solution containing a Complete Mini protease inhibitor tablet (EDTA-free, Roche), 10 mM β-glycerophosphate, and 100 µM sodium orthovanadate. Collected cells were centrifuged briefly at 1,500 RPM, lysed, and centrifuged at 14,000 RPM for 10 min to pellet nuclear sediment and debris. Supernatant was collected and protein concentration was measured using Bio-Rad Protein Assay Reagent Dye (Bio-Rad).

Immunoprecipitation

PAR-modified proteins were immunoprecipitated according to described methods (Gagne et al., 2011).
Tannic acid (200 µM) was included in wash solutions and cell lysis buffer as a PAR-glycohydrolase inhibitor. The pADPr mouse monoclonal antibody clone 10H, anti-mouse IgG, and Protein G beads were purchased from Santa Cruz Biotechnology. The cell lysis buffer described by Gagne et al. was prepared with the substitution of 20 mM pH 8.0 Tris-EDTA and 0.1% IGEPAL. Following immunoprecipitation, proteins were separated by SDS-PAGE on a 7.5% Tris-HCl gel (Bio-Rad). Membrane transfer, blocking, and antibody staining were performed according to standard western blotting methods.

mRNA Transcript Analyses

RNA was collected and purified from treated cells using Trizol reagent (Invitrogen) and the Zymo Quick-RNA™ MicroPrep kit. 2.0 µg of RNA was used to produce cDNA using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Fisher Scientific). qRT-PCR was performed on an ABI HT7000 thermocycler using SYBR GreenER (Thermo-Fisher). A total of 10 µL reaction volume containing 25 ng of sample cDNA was used for each reaction. Primer sequences used:

Luciferase Activity

Cells were plated in 12-well dishes at 8 × 10^4 cells per well. Sixteen hours after plating, cells were transfected with Super 8x Topflash (0.04 µg/well) and β-catenin (0.08 µg/well) using the FuGene 6 transfection reagent (Promega). The transfected cells were treated 24 h post-transfection for 48 h. After completion of the treatment, media was removed, cells were rinsed with DPBS, lysates were collected in Passive Lysis Buffer (Promega) according to the manufacturer's instructions, and luciferase activity was measured using the Luciferase Assay System (Promega).

Cell Cycle Analysis

143b-wt or 143b-DxR cells were seeded into 6-well polystyrene plates at 1.5 × 10^5 cells per well. Sixteen hours after plating, cells were treated with vehicle (DMSO) or treatment groups for 24 h. After treatment, cells were harvested by trypsinization, then washed with and re-suspended in ice-cold PBS.
After pelleting the cells by centrifugation (1,500 RPM for 5 min), equal volumes of PBS and 95% ethanol were added to the cells, which were stored at 4 °C until further processing. RNase A (1.5 mg/ml; Sigma-Aldrich, St. Louis, MO, United States) was added to each tube before incubating at 37 °C for 15 min. Finally, propidium iodide (100 µg/ml; Roche, Indianapolis, IN, United States) was added to each tube prior to analysis by flow cytometry (Flow Cytometry Core at the Mayo Clinic).

FH535 Is Highly Toxic in Osteosarcoma Cells, Does Not Affect Human Osteoblasts

In all OS cell lines tested, we found that treatment with FH535 was highly toxic to the OS cells (Figure 1). This observation was in contrast to an observed lack of toxicity in cultured HOBs (Figure 1A). FH535 inhibited cell viability, as measured by MTS assay, decreased colony-forming ability, and increased numbers of dead cells (Live/Dead staining) among OS cell lines at varying concentrations (Figure 1 and Supplementary Figure 1). The U2OS (human) and K7M2 (mouse) OS cell lines exhibited increased sensitivity to FH535 treatment (Figures 1B,D) compared to the other cell lines tested (143b-wt, SaOS-2, and HOS). Additionally, we developed an in vitro cellular model of acquired doxorubicin resistance in OS in order to test the efficacy of Wnt inhibition in OS that does not respond to the standard chemotherapeutic regimen. The resulting cells exhibited a high degree of resistance to doxorubicin compared to their parental line (Figure 2A) and overexpressed the ATP-binding cassette transporter family member Multidrug Resistance Protein 1 (MDR-1) (Figure 2B). These doxorubicin-resistant cells (143b-DxR) were able to be sensitized to doxorubicin by verapamil, which is a competitive inhibitor of MDR-1 (Figure 2C) (Safa, 1988).
Thus, the 143b-DxR cell line developed a mechanism of resistance which is particularly dependent on MDR-1, a substrate of β-catenin mediated transcription (Lim et al., 2008; Flahaut et al., 2009; Correa et al., 2012). The viability data showed that the 143b-DxR cell line was highly sensitive to FH535 treatment relative to its parental cell line (Figure 2D). Additionally, cell cycle analysis demonstrated G1 accumulation in the parental 143b-wt cells which had been treated with FH535, while the 143b-DxR cells accumulated in S-phase, demonstrating a response which was unique from the parental cell line, and more robust (Figure 2E).

Topflash Luciferase Reporter and Axin2 mRNA Are Inhibited by FH535, While Axin2 Protein Is Increased

While several groups have clearly shown FH535 to inhibit canonical Wnt signaling via β-catenin, the molecular target of FH535 had yet to be identified (Bjorklund et al., 2014; Gedaly et al., 2014; Liu et al., 2014). Consistent with reports in other cell models, our study demonstrates FH535 inhibition of β-catenin transcriptional activity (Topflash reporter) (Figure 3A). In further support of β-catenin inhibition, we found that Axin2 mRNA transcript levels were inhibited by FH535 treatment at 24 and 16 h in the 143b-wt, 143b-DxR, and U2OS cell lines (Figure 3B and Supplementary Figure 2A). Treatment with FH535 resulted in stabilization of Axin2 protein, a β-catenin transcriptional target (Figures 4A,B). This observation was marked in the 143b-DxR cell line (Figures 4A,B right panels), while the 143b-wt cell line showed little change in Axin2 protein, corresponding to its decreased sensitivity to FH535.

PARylation of Axin2 Is Inhibited by FH535 in Osteosarcoma Cells

The inverse responses observed in Axin2 mRNA and protein suggested that Axin2 protein accumulation was a result of decreased degradation rather than increased production. The TNKS1/2 enzymes regulate Axin2 degradation by PARylation, a mechanism which was elucidated in Huang et al. (2009). Results from recently developed TNKS1/2 inhibitors have shown stabilization of Axin2 protein, decrease in Axin2 mRNA, and Wnt inhibitory activity, comparable to the observed effects of FH535 in OS (Bao et al., 2012; Quackenbush et al., 2016). Inhibitors of TNKS1/2 block Wnt signaling by reducing the PARylation of Axin2 by TNKS1/2. Thus, to test the ability of FH535 to inhibit TNKS1/2, we performed immunoprecipitation of PAR-modified proteins using a method similar to that described by Gagne et al. (2011). The results demonstrated that Axin2 is PAR-modified in OS cells, and FH535 significantly blocked PARylation of Axin2 (Figure 4C left panel).

FH535 Blocks PARP1 Auto-PARylation and Upregulates c-MYC mRNA

We also assessed potential non-specific activity against total cellular PARylation by co-immunoprecipitation for PAR-modified PARP1 following FH535 treatment. This experiment showed clear blockade of PARP1 auto-PARylation during FH535 treatment, in addition to blockade of Axin2 PARylation (Figure 4C right panel). Additionally, we found that c-MYC mRNA was strongly increased in the 143b-wt and U2OS cell lines following FH535 treatment for 24 h (Figure 4D) or 16 h (Supplementary Figure 2B). In contrast, treatment with IWR-1, a highly specific inhibitor of the TNKS1/2 enzymes, did not induce c-MYC mRNA expression (Supplementary Figure 3) (Lu et al., 2009; Narwal et al., 2012). This finding is further evidence of FH535 inhibition of PARP1 auto-PARylation, as unmodified PARP1 acts as a co-activator of c-MYC in association with the transcription factor E2F-1 (Simbulan-Rosenthal et al., 2003). We further explored the possibility that FH535 may cause the observed blockage of Axin2 PAR-modification by inhibiting expression of TNKS1 or TNKS2, and found no change in TNKS1 or TNKS2 mRNA expression following FH535 treatment (Supplementary Figure 4).
DISCUSSION

Outcomes among OS patients with local or metastatic disease have remained constant since the introduction of neoadjuvant chemotherapy (Isakoff et al., 2015). The regimen of methotrexate, Adriamycin (doxorubicin), and cisplatin has been established as the standard of care for OS. Recent trials testing the therapeutic value of combination therapies, such as interferon α-2b or the addition of ifosfamide and etoposide, have not reported significant improvements (Marina et al., 2016). In Europe, mifamurtide has been approved for use in OS following a Children's Oncology Group trial, although it has not yet found widespread acceptance, and is not approved in the United States (Chou et al., 2009; Ando et al., 2011). These findings highlight the need for the evaluation of molecular susceptibilities in OS, and the development of corresponding targeted therapies (Cleton-Jansen et al., 2009). The development and characterization of highly specific inhibitors of Wnt signaling provides new possibilities for treatment options in OS and other Wnt-dependent cancers. The present work demonstrates the cytotoxic effects of the small molecule FH535 in OS cells and elucidates the molecular mechanism by which FH535 inhibits Wnt signaling. A number of studies have highlighted conferred dependency on Wnt in a variety of cancers, driven by mutations in APC or β-catenin (Fukushima et al., 2001; Filipe et al., 2009). Indeed, several reports have demonstrated pharmacologic inhibition or genetic silencing of TNKS1/2 to have strong tumor-inhibiting effects in cellular and animal models of cancers (Casás-Selves et al., 2012; de la Roche et al., 2014). In contrast, APC and β-catenin driver mutations in OS are not widely reported, although Wnt dependence has been clearly shown in OS. Notably, the study by Stratford and colleagues reported OS susceptibility to tankyrase inhibition (Stratford et al., 2014).
The findings in the Stratford study, together with the work reported here (Figures 1, 2), suggest that TNKS1/2-directed therapies in OS and other cancers not known to contain frequent Wnt-driver mutations may prove to be a beneficial course of action. While many groups have reported Wnt inhibition as a means to sensitize cells to various chemotherapeutic agents, our data demonstrate that chemotherapy-resistant cells may exhibit a more robust response to Wnt inhibition alone (Figure 2) (Dieudonné et al., 2012; Wickström et al., 2015; Zhang et al., 2016). Recent work has shown the upregulation of Wnt markers in response to DNA-damaging chemotherapies (Alakhova et al., 2013; Martins-Neves et al., 2016; Zheng et al., 2016). Thus, exposure to DNA-damaging molecules may select for a population of cells which depend on Wnt signaling. These reports may help explain the sensitivity of the doxorubicin-resistant cells to FH535 treatment. The data in this report indicate that FH535 may target TNKS1/2 and PARP1, resulting in Wnt inhibition and reduction in OS cell survival. The TNKS1/2 enzymes positively regulate canonical Wnt signaling by facilitating the destruction of Axin2. TNKS1/2 inhibitors reduce PARylation of Axin2, and limit canonical Wnt signaling through β-catenin. Studies which have analyzed the affinity of TNKS1/2-inhibiting compounds have shown variable specificity for the PARP-family enzymes. XAV939, a TNKS1/2 inhibitor, is reported to cross-react with PARP1 with an IC50 of 0.11 µM and with PARP2 at 2.2 µM (Huang et al., 2009; Gunaydin et al., 2012). The majority of characterized TNKS1/2 inhibitors have been shown to have affinity for either the nicotinamide subsite of TNKS1 or TNKS2, the adenosine subsite, or both subsites of the TNKS1/2 enzymes (Lehtiö et al., 2008).
Due to the high degree of conservation of the nicotinamide subsite between PARP family members, it has been hypothesized that the adenosine subsite is more desirable for TNKS1/2-specific targeting (Gunaydin et al., 2012). Indeed, adenosine-site-directed inhibition has been achieved by IWR-1 and G007-LK, as these small molecules exhibit decreased cross-reactivity with other PARPs (Voronkov et al., 2013). The data within this study (Figure 4) suggest that FH535 may be grouped with other small molecules which have been shown to inhibit both PARP1 and TNKS1/2. Detailed examination of the structure-activity relationship of FH535 in the context of PARP1 and TNKS1/2 may facilitate increased specificity through rational modification of the molecule. Still, secondary inhibition of PARP activity may be therapeutically beneficial in OS, particularly when combined with targeting of Wnt signaling, as demonstrated by in vitro data in this study.

[Figure legend: statistical significance determined by one-way ANOVA with Tukey's multiple comparison test; **** indicates p < 0.0001; "n.s." indicates no statistically significant difference.]

While this study reports the mechanism of FH535 activity in OS cell lines, and presents in vitro data that demonstrate toxicity to OS cells, further work will be required to show the broad application and in vivo potential of the drug. The doxorubicin-resistant model developed in this study is an example of one type of chemotherapy resistance, while other mechanisms may also be involved in regulating the sensitivity of OS to various chemotherapy regimens. Thus, while OS dependency on Wnt has been widely reported in vitro, as well as in animal and human samples of OS, our model may represent a single subset of human OS cells which have developed resistance to currently used chemotherapeutic agents (Martins-Neves et al., 2016).
Additional work will also be required to determine the efficacy, stability, and pharmacokinetics of FH535 in animal models of OS, as these properties have been highly variable in other molecules targeting the TNKS1/2 enzymes. In sum, this study details the susceptibility of OS, including a model of chemotherapy-resistant OS, to growth inhibition with the small molecule FH535. Additionally, the study clarifies the mechanism by which FH535 inhibits Wnt signaling, and reports the inhibition of PARP1 auto-modification in addition to TNKS1/2 blockade. Our results suggest that Wnt-targeting chemotherapeutics, such as FH535, may be promising candidates for treatment of OS.

AUTHOR CONTRIBUTIONS

CG: wrote the manuscript, analyzed data, performed experiments, and planned the study. TM: performed experiments and analyzed data. KS: performed experiments. AM: planned the study and interpreted data. MY: planned the study and interpreted data.
D3-branes in NS5-brane backgrounds

We study D3-branes in an NS5-branes background defined by an arbitrary 4d harmonic function. Using a gauge-invariant formulation of Born-Infeld dynamics as well as the supersymmetry condition, we find the general solution for the ω-field. We propose an interpretation in terms of the Myers effect.

Introduction and summary

Any configuration of parallel NS5-branes creates a non-trivial string background, described by the following fields on the transverse four dimensions: Here, V can be any harmonic function of the four transverse coordinates X^µ. These backgrounds play an important rôle in (little) string theory; they are related to various exact string backgrounds. For instance, the near-horizon geometry of k superposed NS5-branes is R_Φ × SU(2), where R_Φ is the linear dilaton background. The near-horizon geometry of NS5-branes spread on a circle [1] is related by T-duality to an orbifold of SL(2,R)/U(1) × SU(2)/U(1). In [2], an NS5-branes background is used to exhibit the effect of worldsheet instantons on T-duality. NS5-branes spread on a three-sphere provide another interesting configuration, with a dilaton everywhere finite [3]. All this motivates the study of D-brane probes in such backgrounds. Such probes have already been used in some particular cases of NS5-branes background [4,5] and in some U-dual configurations [6,7]. First, the D1-branes (we mean branes extending along one of the four transverse dimensions, with an unspecified number of flat directions parallel to the NS5-branes) are not affected by the NS5-branes and are straight lines². On the contrary, the D3-branes can take quite complicated shapes [5], in relation with the Hanany-Witten effect [8]. In this note we investigate the general properties of D3-branes in all such backgrounds. Let us summarize the results. First, a useful tool to study the shapes of D3-branes will be the gauge-invariant rewriting of the Born-Infeld equations of motion, Eq.
(2.2). We will briefly comment on the geometrical significance of this rewriting in terms of a non-symmetric second fundamental form. Then we will write the Born-Infeld and SUSY equations for D3-branes in general NS5-branes backgrounds. Those two equations turn out to be equivalent. We will find the general solution for the ω-field on the brane, Eq. (3.7). This ω-field describes the D1-brane charge of the D3-brane, enabling us to speculate about the D3-branes being formed as bound states of D1-branes via a kind of Myers effect [9].

Invariant Born-Infeld equations of motion

The Born-Infeld action reads L_BI = e^{-Φ} √(det(g_ij + ω_ij)), where ω_ij = B̂_ij + F_ij is the gauge-invariant worldvolume two-form, subject to the constraint dω = Ĥ, where dB = H. The action is gauge-invariant, as is the equation of motion for the F-field, E^k = -∂_i (δL_BI/δF_ik); but not the equation of motion for the embedding X^µ(x^i), that is E_µ = δL_BI/δX^µ - ∂_i (δL_BI/δ(∂_i X^µ)). However, it is possible to add a combination of the E^k to the equation E_µ and to obtain an equivalent, gauge-invariant equation. This was already done in [10], where the equation E_µ - E^j B_µν ∂_j X^ν = 0 was used³. Here we propose a different combination, which will turn out to have a much more interesting geometrical interpretation: Indeed this equation may be rewritten where we used the spacetime connection and the induced worldvolume connection The equation (2.2) involves the following two-form with values in the tangent space This generalizes the second fundamental form and shares its basic properties. Indeed our Ω is transverse (Ω^µ_ij ∂_k X_µ = 0), and it satisfies generalized Gauss-Codazzi equations where R_N is the curvature of the spin connection ω_ab for some orthonormal basis ξ^µ_a of the normal space; explicitly we have Thus, we were able to reformulate the Born-Infeld equations of motion in a gauge-invariant manner, using the connection with torsion Γ and the associated second fundamental form on the brane.
This suggests that those objects should contribute to the derivative corrections to the Born-Infeld action when the B-field is present, generalizing the purely gravitational terms of [11,12]. More generally, this points to the relevance of the connection Γ for the D-branes geometry, as was already noted in [13]. For the moment, we will only use the gauge invariance of Eq. (2.2) in order to study D3-branes in an NS5-branes background, without having to fix a gauge for the B-field or to find an F-field on the brane.

The case of D3-branes

In order to write the equations which determine the geometry of a D3-brane and its worldvolume two-form ω_ij, let us define this geometry by the equation K(X^µ) = const and write the most general local solution to the F-field Born-Infeld equation of motion E^k: Here ϕ is some function on the brane, and we normalize ε_ijk so that it is a tensor, ε_123 = √(det ĝ), where we define ĝ_ij = ∂_i X^µ ∂_j X_µ. We raise spacetime indices with δ^µν, not with the metric G_µν = V δ_µν. Thus, the worldvolume metric ĝ_ij, with which we raise indices, does not coincide with the standard induced metric V ∂_i X^µ ∂_j X_µ. The unknown functions K and ϕ are subject to two equations, which we write using the projector onto the brane. First, we have the gauge-invariant Born-Infeld equation. Second, we should not forget the equation dω = Ĥ: Our two equations are second-order partial differential equations. It is possible to find a first-order equation by studying the supersymmetry condition for the brane, which will turn out to be equivalent to the Born-Infeld equation. First, the background preserves the following supersymmetries: Then the D3-brane SUSY condition is This does not depend at all on the harmonic function V defining the background. With our notations, this can be rewritten in the form At any given point, the existence of such a ξ is guaranteed by the fact that v^µ v_µ = 1.
However, one should not forget that ξ should always remain in the same direction, ξ = V We can use this equation to eliminate ϕ from our expressions. In particular, the solution for the ω-field is The equations (3.2) and (3.3) both take the form Now that we have studied the local properties of the D3-branes, let us say a word about the quantization conditions. A quantized quantity is, as usual, defined for any two-cycle S² of the brane, which is the boundary of some 3-surface M: This quantity measures the RR charge of the D3-brane. Three situations can happen: first, if the background is created by localized individual NS5-branes, then the quantization is automatically satisfied. Second, if the D3 passes through a stack of NS5, then the angle at which the D3 emerges from this stack is quantized, like in [5]. Third, if the NS5 are spread, then the quantization has to be added by hand. For instance, when the NS5 are spread on a circle, the D3 has to intercept a quantized portion of the circle.

Examples and discussion

In this last section we want to give a physical interpretation of our results in terms of the Myers effect. Let us first mention a few examples, where our equations can be solved or at least reduced to differential equations.

• Case when the background is asymptotically flat, V → 1 at infinity: near infinity our D3-brane is nearly flat and we can solve the equations for its shape,

• Superposed NS5-branes: V = 1 + k l_s²/r², the partial differential equation on K reduces to a differential equation after assuming K(v^µ X_µ, r) (for any constant v^µ). See [5] for more details. In the case of R_Φ × SU(2), the S² factor of the brane can be formed from a stack of superposed D0-branes on SU(2) via the Myers effect.
The R_Φ × SU(2) background is the near-horizon geometry of the background defined by superposed NS5-branes, and in this case the D3 should be considered as a bound state of a flat D3 with a stack of D1-branes ending on the NS5s (see Figure 1). This is confirmed by computing its D1-brane charge. However, this only holds when the flat D3-brane does not go too far from the NS5s, so that the throat of the corresponding curved D3 is not as thin as the string length ℓ_s. A limiting case occurs when we consider D1-branes going to infinity without ending on any D3; then there is no D3-brane solution which would be a candidate for a bound state of those D1s, and the Myers effect does not occur. This might seem a bit strange when we look at the near-horizon limit, but one should not forget that this region suffers from a strong-coupling problem and cannot be expected to give reliable information. On the contrary, in the flat region far from the NS5s, the D1s are not expected to form any bound state. Now, we are led to conjecture that this phenomenon is general and that every supersymmetric D3-brane in an NS5-branes background is a bound state of D1s (with or without a flat D3-brane at infinity, depending on the background). The main evidence we have is the existence of the constant vector v^µ, which indicates the direction of the D1-branes of interest. Our D3 preserves the same supersymmetries as those D1-branes and should have the correct charge, as hinted by Eq. (3.7). It would be interesting to study this kind of Myers effect using the non-abelian Born-Infeld action; however, one should take into account the fact that the original D1-branes may end on a D3-brane.
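A cross-check of the harmonicity requirement: any V entering these backgrounds must be harmonic on the four transverse dimensions, and the superposed-branes profile V = 1 + k l_s²/r² used above is precisely the 4d point-source solution. Below is a quick numerical sanity check with a central-difference Laplacian; the values of k, l_s² and the test point are arbitrary illustrative choices, not taken from the paper.

```python
def V(x, k=3.0, ls2=1.0):
    """Harmonic function of k coincident NS5-branes: V = 1 + k*ls^2/r^2."""
    r2 = sum(xi * xi for xi in x)
    return 1.0 + k * ls2 / r2

def laplacian_4d(f, x, h=1e-3):
    """Central-difference Laplacian over the 4 transverse coordinates."""
    total = 0.0
    for mu in range(4):
        xp = list(x); xp[mu] += h
        xm = list(x); xm[mu] -= h
        total += (f(xp) - 2.0 * f(x) + f(xm)) / (h * h)
    return total

point = [0.7, -0.3, 0.5, 1.1]  # any point away from the branes at r = 0
print(abs(laplacian_4d(V, point)))  # ~0, up to discretization error
```

The same check applied to a 3d-style profile, 1 + k/r, fails in four transverse dimensions: harmonicity is what fixes the power r^{-2} here.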
Assessing the validity and intra-observer agreement of the MIDAM-LTC; an instrument measuring factors that influence personal dignity in long-term care facilities

Background

Patients who are cared for in long-term care facilities are vulnerable to losing personal dignity. An instrument measuring factors that influence dignity can be used to better target dignity-conserving care to an individual patient, but no such instrument is yet available for the long-term care setting. The aim of this study was to create the Measurement Instrument for Dignity AMsterdam - for Long-Term Care facilities (MIDAM-LTC) and to assess its validity and intra-observer agreement.

Methods

Thirteen items specific for the LTC setting were added to the earlier developed, more general MIDAM. The MIDAM-LTC consisted of 39 symptoms or experiences for which presence as well as influence on dignity were asked, and a single item score for overall personal dignity. Questionnaires containing the MIDAM-LTC were administered face-to-face at two moments (with a 1-week interval) to 95 nursing home residents residing on general medical wards of six nursing homes in the Netherlands. Constructs related to dignity (WHO Well-Being Five Index, quality of life and physical health status) were also measured. Ten residents answered the questions while thinking aloud. Content validity, construct validity and intra-observer agreement were examined.

Results

Nine of the 39 items barely exerted influence on dignity. Eight of them could be omitted from the MIDAM-LTC, because the thinking aloud method revealed sensible explanations for their small influence on dignity. Residents reported that they missed no important items. Hypotheses supporting construct validity, concerning the strength of correlations between personal dignity on the one hand and well-being, quality of life or physical health status on the other, were confirmed.
On average, 83% of the scores given for each item's influence on dignity were practically consistent over 1 week, and more than 80% of the residents gave consistent scores for the single item score for overall dignity.

Conclusion

The MIDAM-LTC has good content validity, construct validity and intra-observer agreement. By omitting 8 items from the instrument, a good balance between comprehensiveness and feasibility is realised. The MIDAM-LTC allows researchers to examine the concept of dignity more closely in the LTC setting, and can assist caregivers in providing dignity-conserving care.

Background

In light of the ageing population and the fact that people live a relatively longer period of time with chronic diseases and disabilities, concerns about losing personal dignity may increasingly arise [1,2]. Personal dignity is a type of dignity which relates to a sense of worthiness, is individualistic, tied to personal goals and social circumstances, and can be taken away or enhanced by circumstances or acts of others [3-5]. It should be distinguished from basic dignity, which is the inherent dignity of each human being and can be regarded as a universal and inalienable moral quality [1,6]. Earlier studies have shown that loss of personal dignity is associated with depression, hopelessness, a desire for death [7] and requests for euthanasia and physician-assisted suicide [8-11]. It is this type of dignity that is therefore important to understand, assess and preserve within the context of health care. The dignity concept can contribute to care in the last phase of life because it goes beyond assessment of physical and psychosocial health status and includes one's perception of worthiness, both as an individual and in relation to close others and society [12-14]. By now, there is a substantial amount of knowledge on how patients nearing death [15-17] and older people in nursing homes [18-22] understand the concept of dignity.
The majority of these studies had a qualitative design, and described the factors that can preserve or undermine personal dignity, and their interrelatedness. Some of these empirical studies have served as a basis for the development of a measurement instrument for dignity. An example is the Patient Dignity Inventory, a 25-item list which was validated in patients in a palliative care program (predominantly cancer patients with a life expectancy of less than 6 months) [23,24]. Another instrument targeting dying patients is the dignity card-sort tool, which can be used to rank factors influential in the loss or preservation of dignity at life's end [25,26]. Recognizing the need for an instrument that is applicable to a more general patient population, our research group has developed the Measurement Instrument for Dignity AMsterdam (MIDAM) and examined its content validity in people with one or more advance directive(s) [27]. A setting for which no such measurement instrument is yet available is the long-term care setting. Compared to the general patient population, some aspects probably become more important for those who live permanently in an institution. Patients who are cared for in long-term care facilities not only face threats to dignity arising from functional and/or cognitive decline; they are also confronted with an unfamiliar living environment and little privacy, are often heavily reliant on staff, and increasingly lack social networks, making them rather vulnerable to losing personal dignity [18,28]. A measurement instrument can give insight regarding those who are most at risk of losing dignity, and can be used to better target more effective, dignity-conserving care to an individual patient. Therefore, the aim of this study is to provide the long-term care setting with a valid and reliable measurement instrument.
In this article we describe how the already existing MIDAM was adapted to create the MIDAM-LTC, and how we tested its content validity, construct validity and intra-observer agreement in a sample of 95 nursing home residents. Furthermore, we examined possibilities to reduce the length of the instrument, in order to make it feasible for use in practice.

Design and study population

The starting point for this study was our earlier developed measurement instrument for self-perceived dignity, retrospectively named MIDAM [27]. This instrument consists of 26 items (symptoms or experiences) categorized in 4 domains: (I) evaluation of self in relation to others, (II) functional status, (III) mental state and (IV) care and situational aspects. On the basis of the results of an extensive qualitative interview study among 30 nursing home residents [22,29], we added items specific for long-term care facilities to the MIDAM, i.e. items with regard to the way residents are treated by nursing home staff, living circumstances in the nursing home, living in a group, limited capacity of nursing home staff, sense of belonging and sense of meaning (see Table 1). These six themes were frequently mentioned by nursing home residents in the interview study, but not adequately reflected in the MIDAM. In a process of reflection and interaction, we formulated 13 items following the structure of the original MIDAM items. We abundantly added items in order to be comprehensive. The extension "for Long-Term Care facilities" was added to the name of the instrument: MIDAM-LTC (see Additional file 1). To test the instrument's psychometric properties, data were collected on the general medical wards (long-stay units for people with physical illnesses) of six nursing homes in the Netherlands. Nursing home residents were recruited with help from a unit manager, elderly care physician or the most important nurse on the ward.
Eligible residents were all those on the wards who were cognitively able to understand the instructions and questions. Because many nursing home residents could no longer write, an additional criterion was that a resident had to be able to communicate with an interviewer, who would administer the questionnaire face-to-face and fill in the answers. All eligible residents received an information letter approximately one week before the interviewers came to the nursing home, and they could indicate whether or not they wanted to participate in the study at the time the interviewer visited them one week later. By consenting to participate in the study, the nursing home resident also gave us permission to ask his/her contact person in the care record (closest relative), elderly care physician and responsible nurse for information about the mental and physical health status of the resident.

The questionnaire

For each item (symptom or experience) in the MIDAM-LTC, respondents were first asked whether it applied to their life (thinking about the past 2 days). Only if they answered affirmatively were they asked to what extent this influenced their sense of dignity on a five-point scale (see Table 2). We deliberately distinguished between these two questions, in order to prevent respondents not suffering from a certain symptom or experience rating its putative association with dignity. In addition, we asked respondents to rate their sense of dignity on a 10-point scale (1 = sense of dignity completely lost, 10 = sense of dignity completely intact). Besides these questions about personal dignity, the questionnaire asked for respondent characteristics, and contained measurements of related constructs like the WHO-Five Well-being Index, the EQ-5D, and a question to rate quality of life (on a 10-point scale).
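The two-step scoring just described (a presence gate, then a five-point influence score asked only for present items) can be modelled so that absent items never carry an influence value. A minimal sketch in Python; the field and item names are our own invention rather than taken from the instrument.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ItemResponse:
    """One MIDAM-LTC item: a presence gate, then influence on dignity (1-5)."""
    item: str
    present: bool
    influence: Optional[int] = None  # only asked when the item is present

    def __post_init__(self):
        if not self.present and self.influence is not None:
            raise ValueError("influence is only scored for present items")
        if self.influence is not None and not 1 <= self.influence <= 5:
            raise ValueError("influence must be on the 1-5 scale")

def high_influence_count(responses):
    """Number of items scored 4 or 5 (the count used in hypothesis 1 below)."""
    return sum(1 for r in responses if r.present and r.influence in (4, 5))

answers = [
    ItemResponse("pain", present=True, influence=4),
    ItemResponse("loss of privacy", present=True, influence=2),
    ItemResponse("feeling a burden", present=False),
]
print(high_influence_count(answers))  # -> 1
```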
[Table 1. Origin of items added to the MIDAM in order to create the MIDAM-LTC: themes from the qualitative study [22,29], items already in the MIDAM [27], and items added to create the MIDAM-LTC.]

In order to assess the resident's mental and physical health status, the Cognitive Performance Scale [30], Barthel Index [31] and Karnofsky Performance Status Scale [32] were presented to the nurses. The elderly care physician and closest relative provided information about the resident's diseases.

Procedure

We first piloted the questionnaire with three nursing home residents, after which a few items of the MIDAM-LTC were reformulated. Subsequently, the questionnaire was administered face-to-face to all participants by four interviewers (among whom MOV and IG) and took approximately 30 minutes for all questions, of which the MIDAM-LTC lasted about 20 minutes. Residents were handed a card with the answering options to help them choose their answer. Ten residents were asked to fill in the questionnaire by using the 'think aloud' method [33]. This method was used to elicit data on nursing home residents' thought processes as they responded to the items and to assess whether they understood the questions as we intended them. To be able to study intra-observer agreement, we visited the nursing homes one week later and asked 49 residents to fill in the questionnaire a second time with help from another interviewer than the week before. We intended to have two measurements of half of the participants, and asked the residents who were available at the time of our second visit. Only a few declined to participate twice. We chose a time interval of one week, which we estimated to be long enough to prevent recall, though short enough that major changes in personal dignity were unlikely to occur [34]. The study was approved by the Medical Ethics Committee of the VU University Medical Center.
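Intra-observer agreement over the one-week interval then reduces, per item, to the share of residents whose two scores are practically consistent. The sketch below takes "practically consistent" to mean differing by at most one scale point; that tolerance is our assumption for illustration, not a definition given in the text, and the scores are invented.

```python
def percent_consistent(week1, week2, tolerance=1):
    """Share of paired test-retest scores differing by at most `tolerance`.

    Pairs with a missing score (None) at either administration are skipped.
    The one-point tolerance is an assumption, not the paper's definition.
    """
    pairs = [(a, b) for a, b in zip(week1, week2)
             if a is not None and b is not None]
    if not pairs:
        return None
    return sum(abs(a - b) <= tolerance for a, b in pairs) / len(pairs)

scores_t1 = [4, 2, 5, 3, 1, 4]     # one item's influence scores, week 1
scores_t2 = [4, 3, 5, 1, 1, None]  # same residents, week 2 (one missing)
print(percent_consistent(scores_t1, scores_t2))  # -> 0.8
```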
Analysis of psychometric properties

Each item of the MIDAM-LTC can be considered a causal indicator, meaning that items are not expected to correlate with each other, and that even a single symptom or experience may suffice to undermine dignity, while a low sense of dignity need not necessarily imply that someone suffers from all the symptoms listed [35,36]. Therefore, performing a factor analysis or calculating a total score of all items was not meaningful. Instead, each item had to be considered separately, while we kept in mind that the instrument needed to be comprehensive, yet take as little effort and time as possible to fill in, in order to be feasible for use in practice. Content validity was determined through two approaches. To examine whether all items in the MIDAM-LTC were relevant for personal dignity, we first calculated the mean scores per item for influence on dignity and the percentage of respondents who indicated that the item (if present) influenced their dignity quite a lot or very much (score 4 or 5). Because a valid ground for omitting items in models with causal indicators is that they occur too infrequently to be worth reporting [36], we decided to omit those items that fulfilled two criteria: a mean score for influence on dignity lower than 2.50, and fewer than 25% of respondents indicating that the item influenced their dignity quite a lot or very much (score 4 or 5). Comprehensiveness was assessed by asking the residents who answered the questionnaire while thinking aloud whether they missed any items that influenced their dignity. To assess construct validity, we tested several hypotheses about the relation between dignity and related constructs (Table 3), based on expectations arising from our qualitative interview study. Pearson's correlation coefficients between the constructs were calculated.
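The two omission criteria described above can be expressed as a short check. A minimal sketch follows; the example score lists are hypothetical illustrations, not data from the study:

```python
def should_omit(influence_scores):
    """Apply the study's two omission criteria to one item's influence-on-dignity
    ratings (1-5 scale, collected only from respondents to whom the item applied).
    The item is omitted only if BOTH criteria hold:
    mean score < 2.50 AND fewer than 25% of respondents scored it 4 or 5."""
    n = len(influence_scores)
    mean_score = sum(influence_scores) / n
    pct_high = 100 * sum(1 for s in influence_scores if s >= 4) / n
    return mean_score < 2.50 and pct_high < 25

# Hypothetical rating lists for two items
print(should_omit([1, 1, 2, 2, 5]))  # mean 2.2, 20% scored 4-5 -> True (omit)
print(should_omit([1, 2, 4, 5, 5]))  # mean 3.4, 60% scored 4-5 -> False (keep)
```

Note that an item failing only one of the two criteria is retained, which matches the conjunctive ("two criteria") phrasing in the Methods.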
According to Cohen, we classified a correlation coefficient over 0.5 as a strong relation, 0.3 to 0.5 as moderate, 0.1 to 0.3 as small, and below 0.1 as no relation between constructs at all [37]. In examining the intra-observer agreement after one week, we were especially interested in absolute measures of agreement. Since Intraclass Correlation Coefficients and Cohen's kappa values are relative measures of agreement [38], we calculated agreement percentages between the two observations. First, we compared the single item scores for overall personal dignity on both measurements. Next, percentages of agreement were calculated for each item's influence on dignity in two different ways: we distinguished between exact agreement and agreement if we allowed the scores to differ by one point on the 5-point scale. We hereby reasoned that an item could not undermine dignity if it was not present in a resident's life (a non-affirmative answer on the first question) and scored it accordingly on the second question (score 1 on the 5-point scale). In this way, we could also take the first question regarding presence into account in calculating the agreement percentages. In addition, we looked at each item's mean score for influence on dignity on both measurements, and tested whether these differed from each other with paired sample t-tests. Finally, average scores and average agreement percentages across all items' influence on dignity were calculated.

Table 3 Hypotheses to assess construct validity

Hypothesis 1: The number of items where people indicate that it influences their dignity (very) much (score 4 or 5) correlates strongly with the single item score for overall personal dignity (on a scale from 1 to 10). Explanation: Although even a single symptom or experience may suffice to violate dignity for an individual nursing home resident, we expect that, on the study population level, the more items influence dignity to a large extent, the lower the single item score for personal dignity.

Hypothesis 2: Both the score for quality of life (on a scale from 1 to 10) and the score on the WHO Well-Being Scale (on a scale from 1 to 100) correlate moderately to strongly with the single item score for personal dignity (on a scale from 1 to 10), though the correlation with the WHO Well-Being Scale is stronger than with the score for quality of life. Explanation: These expectations arise from the results of our interview study [22,29], in which we noticed that many nursing home residents associated 'quality of life' with their physical health status. Because personal dignity encompasses relational aspects as well, we expect 'well-being', which might have a more holistic connotation, to be more closely related to the concept of dignity.

Hypothesis 3: Both the score on the Karnofsky Performance Status Scale and the score on the Barthel Index correlate low to moderately with the single item score for personal dignity (on a scale from 1 to 10), though the correlation with the Barthel Index is stronger than with the Karnofsky Performance Status Scale. Explanation: Whereas the Barthel Index measures physical functioning on 10 Activities of Daily Living, and the Karnofsky Performance Status Scale by a single question, we expect more variation in the scores on the Barthel Index, and therefore a stronger correlation with personal dignity. However, we expect these correlations to be low to moderate, since physical functioning is only one aspect of personal dignity.

Sample characteristics

In total, 131 residents were approached to participate in this study. Twenty-one residents declined to participate, two residents were absent on the days the interviewers were in the nursing home, and two residents died after they had received the information letter. A further eleven residents were excluded by the time of the interview, as it turned out that they were cognitively unable to understand the questions.
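The agreement computation described in the Methods (recode absent items to score 1, then count exact matches and matches within one point on the 5-point scale) can be sketched as follows. The paired ratings are hypothetical; `None` is our own convention for "item did not apply":

```python
def agreement_percentage(first, second, tolerance=0):
    """Percentage of paired ratings of one item's influence on dignity
    (1-5 scale) that agree, optionally allowing a one-point difference.
    An item that did not apply to a resident (None) is recoded to score 1,
    mirroring the recoding described in the Methods."""
    recoded = [(a if a is not None else 1, b if b is not None else 1)
               for a, b in zip(first, second)]
    agree = sum(1 for a, b in recoded if abs(a - b) <= tolerance)
    return 100 * agree / len(recoded)

# Hypothetical ratings for one item at the first and second measurement
t1 = [3, None, 5, 2, 4]
t2 = [3, 1,    4, 4, 4]
print(agreement_percentage(t1, t2, tolerance=0))  # exact agreement: 60.0
print(agreement_percentage(t1, t2, tolerance=1))  # within one point: 80.0
```

The `tolerance=1` variant corresponds to the "allowed the scores to differ by one point" condition reported in the Results.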
This resulted in 95 participating residents, whose characteristics are shown in Table 4. The majority of respondents were female, and they had resided in the nursing home for an average of 744 days (median 575 days). The most frequently reported diseases were heart diseases, rheumatoid arthritis and stroke. Respondents rated their sense of dignity on average as 7.3 (SD 1.6) on the 10-point scale.

Content validity

In Table 5, the items are ranked according to the mean scores given for influence on dignity, and ordered per domain. The percentage of nursing home residents who agreed that an item applied to their life ranged from 98.9% ('Using medical-technical aids') to 5.3% ('Not looking well-groomed' and 'Not made any meaning or lasting contribution'), and was highest in the domain 'Functional status'. Most of the 13 added items specific to long-term care facilities were frequently present in the study population. However, their mean influence on sense of dignity was generally rather small. The highest mean scores for influence on dignity were instead found in the domain 'Evaluation of self in relation to others', and ranged from 3.25 ('Feeling worthless for friends and family' and 'Not treated with respect by caregivers') to 1.87 ('A changed physical appearance'). Nine items barely exerted influence on dignity according to the formulated criteria (marked with an asterisk in Table 5). Five of them were items that were specifically added for the long-term care setting.

Table 4 notes: Many respondents had several diseases. The most prevalent diseases are listed in the table, but many others were mentioned (e.g. Crohn's disease, cataract, pneumonia, polyneuropathy, ALS, aneurysm, hydrocephalus, epilepsy). (3) The Barthel Index assesses ability to perform activities of daily living: 0 = total dependence, 20 = maximum independence. (4) A higher score on the WHO-Five Well-Being Index (from 0 to 100) indicates more well-being.
Table 4 notes (continued): (5) The EQ-5D assesses health-related quality of life on 5 dimensions (mobility, self-care, usual activities, pain/discomfort and anxiety/depression): -0.33 = severely disabled on all domains, 1 = perfect health.

Table 5 notes: (1) Presence: between 0 and 2 missing observations per item. (2) Influence on dignity: between 0 and 2 missing observations per item (of the respondents who indicated that an item applied to their life). *For this item the mean score for influence on dignity is lower than 2.50, and less than 25% of the nursing home residents to whom the item applied indicated that it influenced their dignity quite a lot or very much (score 4 or 5); this item will therefore be removed from the instrument. **Although the criteria for omission are met by this item, we decided to keep it in the instrument in a reformulated phrase.

Before definitively removing these nine items from the instrument, we listened to the recordings of the 10 nursing home residents who filled in the MIDAM-LTC while thinking aloud, to find out the reason for the small extent to which an item, on average, influenced dignity. This analysis revealed that some symptoms or experiences, although present in a resident's life, did not undermine dignity because the resident was satisfied with the way things were. For example, the majority of the residents who agreed that they had a small room did not long for a bigger room ("What do I need it for?"). More or less the same argumentation was given for the small influence on dignity of the items 'Little opportunity to shower' ("I don't want to shower more often, it makes me tired") and 'Having little contact with other residents' ("They are too demented or fighting with each other, so I'd rather be on my own").
The items 'Receiving little time from nurses' and 'A lot of different nurses' hardly undermined dignity, because residents ascribed the presence of both items to the circumstances rather than to nurses' unwillingness ("They are terribly busy because of the lack of staff. If I really need them, they will make time for me"). 'Being forgetful' and 'A changed appearance' were regarded and accepted as belonging to the ageing process and therefore as not undermining dignity ("Yes, I have more wrinkles, grey hair and I forget things occasionally, but may I? I'm 82 years old!"). An extra reason why 'A changed appearance' barely exerted influence on dignity was that some nursing home residents said they had lost weight, which they actually regarded as positive. There was one item for which no sensible explanation could be found for the low scores on a) presence and b) influence on dignity: 'Mentally unable to take decisions'. Possibly, these low scores were a consequence of our decision to include only respondents who were cognitively able to understand the questions. It might also be that the way the item was phrased discouraged respondents from indicating that it applied to them. We therefore chose to keep this item in the instrument and reformulate it more mildly into 'I feel unable to take major decisions'. In addition, the thinking aloud method revealed that the item 'Feeling bored and experiencing every day as the same' consisted of two different aspects; nursing home residents could experience every day as the same while not feeling bored. Therefore, we decided to reformulate this item into 'All days seem colourless to me'. Furthermore, the items 'Not able to carry out usual activities' and 'Lost fighting spirit' were interpreted differently by the residents. For some, usual activities were activities they used to do in the past (e.g. cycling), whereas others thought of activities they did in the nursing home (e.g. reading and watching TV).
To correct for these different interpretations, and to include only those activities participants currently have a need for, this item needs to be reformulated into 'Not able to carry out activities I would like to do'. As for fighting spirit, some interpreted this as 'enjoying all organized activities in the nursing home', whereas others considered it 'standing up for themselves'. However, this latter item does not need to be reworded, as it concerns the same character trait that lies at the root of these two different manifestations. Since the purpose of the MIDAM-LTC is to give insight regarding those who are most at risk of losing dignity, high scores on items are merely a signal to start questioning the source of dignity-related distress. Finally, no items were missed by the nursing home residents.

Construct validity

We tested our first hypothesis (see Table 3) without the eight items that we had decided to omit, as described above. The number of items where residents indicated that it influenced their dignity (very) much (score 4 or 5) correlated moderately with the single item score for overall personal dignity (r = -0.49), just missing the threshold to be classified as a strong correlation. Our second hypothesis was supported by the data: Pearson's r for the relation between the single item scores for quality of life and personal dignity was 0.50, and between the WHO Well-Being Scale and personal dignity 0.53. Unfortunately, we could test our third hypothesis only partially. In line with our expectations, the correlation between the Barthel Index and personal dignity was low (r = 0.23). However, scores on the Karnofsky Performance Status Scale were sometimes unrealistically high (for some residents even '100', which means perfectly healthy), indicating that some nurses probably did not understand the question, or had the average nursing home resident in mind as a reference when answering it.
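The Cohen bands used to interpret these correlations can be written as a small helper. This is a sketch: the paper gives only the ranges, so the handling of values exactly at 0.3 and 0.1 is our assumption:

```python
def cohen_strength(r):
    """Classify the magnitude of a Pearson correlation coefficient using
    the bands from Cohen [37]: >0.5 strong, 0.3-0.5 moderate,
    0.1-0.3 small, <0.1 no relation. Sign is ignored."""
    m = abs(r)
    if m > 0.5:
        return "strong"
    if m >= 0.3:       # boundary placement at 0.3 is our assumption
        return "moderate"
    if m >= 0.1:       # boundary placement at 0.1 is our assumption
        return "small"
    return "none"

# Applied to the correlations reported in the Results
print(cohen_strength(-0.49))  # moderate (just below the 0.5 threshold)
print(cohen_strength(0.53))   # strong (WHO Well-Being Scale vs dignity)
print(cohen_strength(0.23))   # small (Barthel Index vs dignity)
```

This reproduces the classifications in the text, including the r = -0.49 result "just missing the threshold" for a strong relation.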
Calculating a correlation coefficient between this scale and the single item score for dignity was therefore not appropriate.

Intra-observer agreement

Table 6 shows the percentages of agreement for each item's influence on dignity, as well as average agreement percentages across all items of the MIDAM-LTC (again excluding the eight omitted items). The average exact agreement percentage for all items combined was 70.6%, increasing to 83.4% when we allowed a one-point difference on the five-point scale. In the latter condition, individual items' agreement ranged from 59.6% to 95.8%. No significant differences between the mean scores for influence on dignity on the two measurements existed, neither for all items combined nor for any individual item (data not shown). Of all nursing home residents who rated their overall personal dignity on both measurements, 50.0% gave the exact same score on the single item 10-point scale. A further 30.4% of the residents differed only one point in their ratings.

Discussion

By implementing all changes and omitting some items, the revised MIDAM-LTC consists of 31 items (see Additional file 1), realising a good balance between comprehensiveness and feasibility. The MIDAM-LTC has good basic standard psychometric properties. Content validity refers to the extent to which the concept of interest is comprehensively represented by the items of an instrument. That no aspects were missed provides evidence for the good content validity of the MIDAM-LTC. In addition, the 13 added items specific to long-term care facilities were derived from an extensive qualitative interview study with 30 nursing home residents [22,29], so these items were considered relevant by the target population.
Given that we abundantly added items in our efforts to be comprehensive, it is not surprising that five of these 13 added items were not found to have a large influence on personal dignity (although frequently present) and could be omitted from the instrument. In contrast, only three items from the general MIDAM [27] were found to barely influence dignity, demonstrating the validity of these pre-existing items across different settings. Construct validity, applicable in situations in which there is no gold standard, refers to whether the instrument provides the expected scores, based on existing knowledge about the construct [39]. Our expectations regarding the extent to which personal dignity correlated with other constructs were virtually all supported by the data, and the one expectation that was not (formulated in the first hypothesis) came very close to confirmation. This shows that the MIDAM-LTC has good construct validity.

Table 6 notes: (1) Between 0 and 4 missing observations per item. (2) Before calculating these percentages, items were recoded to exert no influence on dignity (score 1 on the 5-point scale) if they did not apply to a nursing home resident.

Although dignity appears strongly related to quality of life, it is noteworthy that nursing home residents generally rated their dignity higher (with a mean of 7.3 out of 10) than their quality of life (with a mean of 6.6 out of 10). Firstly, this finding suggests that personal dignity is a resilient construct. Most nursing home residents seem able to withstand the various physical and psychological challenges they face, making a severe undermining of dignity the exception rather than the norm. Secondly, this higher score for dignity suggests that nursing home residents can distinguish between personal dignity and quality of life, despite the overlap in physical, socio-psychological and spiritual aspects reflected in both.
Whereas overall quality of life exceeds health-related quality of life, which in turn is more than health status alone [40], dignity may go beyond quality of life because it also encompasses one's perception of being worthy of respect, both from oneself and from others. While quality of life is defined as a subjective integration of all aspects of one's life deemed relevant [41], personal dignity may place more emphasis on the evaluation of oneself in close relation to others. In our qualitative studies, we found that relational and societal aspects could undermine a resident's dignity, but could also preserve or enhance it [22,29]. For example, being socially involved with others, receiving good professional care and social support, and being amongst disabled others, and therefore less prone to exposures of disrespect from the outer world, could help residents to maintain or regain their dignity. The presence of these preservative factors may explain the relatively high score given for personal dignity in this study. Adequate intra-observer agreement is supported by the more than 83% of residents who gave a practically consistent score for each item's influence on dignity over a week, and by the more than 80% who did so for the single item score for overall personal dignity. Some lower agreement scores were found for individual items that were more prone to fluctuate over time (e.g. 'feeling guilty about calling on the nurses a lot'). Similarly good results were obtained in a study on the psychometric properties of the Patient Dignity Inventory [24]. Test-retest reliability was there measured over a 24-hour time frame, which might be too short to ensure no recall bias was introduced. That we found these high percentages even after one week implies that personal dignity and the items in the MIDAM-LTC are quite stable.
Strengths and limitations

The MIDAM-LTC is, to our knowledge, the first instrument measuring dignity specifically targeted at the population living in long-term care facilities. Although nursing homes are only one type of facility providing long-term care, we believe that the MIDAM-LTC is relevant for all people living in any kind of long-term care institution. Since the adjustment of the instrument is based upon the perspectives of nursing home residents, who are probably the most severely disabled patients within the long-term care setting, it is likely that the added items cover the whole range of relevant aspects influencing the dignity of persons living in institutions. By using the think-aloud method with 10 nursing home residents, we gained valuable insight into the thought processes they engaged in when responding to the questionnaire. However, in interpreting their answers, we must be aware that answers obtained by the think-aloud method may be more socially desirable than the answers received from the other nursing home residents. Our study was limited to the experiences of residents who were able to think and communicate about dignity. Although applicable to people who suffer from mild cognitive decline, the instrument cannot provide relevant information for people with more severe cognitive incapacities. Another limitation is that the MIDAM-LTC can only detect whether a certain symptom or experience undermines personal dignity, and not what preserves dignity. It might be that the absence of certain symptoms or experiences could actually improve one's sense of dignity, e.g. if a resident is one of the few still able to go to the toilet independently. However, an instrument measuring both undermining and preservative factors would require a different structure, and would possibly become too complex to be understood by respondents.
To provide the long-term care setting with a feasible instrument, we therefore focused only on the factors that undermine dignity, as they are more relevant for practice.

Conclusions

The MIDAM-LTC appears to be a reliable and valid instrument for the assessment of factors influencing personal dignity in the long-term care setting. By reducing the number of items listed, the feasibility of the instrument for use in practice has increased. It allows researchers to examine the concept of dignity more closely in long-term care, e.g. by investigating the distribution of sources of dignity-related distress across various patients. Caregivers working in long-term care institutions could use the MIDAM-LTC to assist them in providing dignity-conserving care, by identifying the factors that undermine a patient's personal dignity.
Mapping and quantification of ferruginous outcrop savannas in the Brazilian Amazon: A challenge for biodiversity conservation

The eastern Brazilian Amazon contains many isolated ferruginous savanna ecosystem patches (locally known as 'canga vegetation') located on ironstone rocky outcrops on the tops of plateaus and ridges, surrounded by tropical rainforests. In the Carajás Mineral Province (CMP), these outcrops contain large iron ore reserves that have been exploited by opencast mining since the 1980s. The canga vegetation is particularly impacted by mining, since the iron ores are associated with this type of vegetation, and currently little is known regarding the extent of canga vegetation patches before mining activities began. This information is important for quantifying the impact of mining, in addition to helping plan conservation programmes. Here, land cover changes of the canga area in the CMP are evaluated by estimating the pre-mining area of canga patches and comparing it to the current extent of canga patches. We mapped canga vegetation using geographic object-based image analysis (GEOBIA) from 1973 Landsat-1 MSS, 1984 and 2001 Landsat-5 TM, and 2016 Landsat-8 OLI images, and found that canga vegetation originally occupied an area of 144.2 km2 before mining exploitation. By 2016, 19.6% of the canga area in the CMP had been lost due to conversion to other land-use types (mining areas, pasturelands). In the Carajás National Forest (CNF), located within the CMP, the original canga vegetation covered 105.2 km2 (2.55% of the total CNF area), and in 2016, canga vegetation occupied an area of 77.2 km2 (1.87%). Therefore, after more than three decades of mineral exploitation, less than 20% of the total canga area has been lost. Currently, 21% of the canga area in the CMP is protected by the Campos Ferruginosos National Park.
By documenting the initial extent of canga vegetation in the eastern Amazon and the extent to which it has been lost due to mining operations, the results of this work are the first step towards conserving this ecosystem.

Introduction

Several studies have investigated conservation and threats to biodiversity and ecosystem services in tropical rainforests [1]. Deforestation rates in the Amazon, the largest remaining tropical forest in the world, have also been well studied [2]. However, little information is available regarding the unique ecosystems found on ironstone rocky outcrops on the tops of plateaus and ridges. In the Carajás Mineral Province (CMP), located in the eastern Amazon, these ferruginous outcrop savanna ecosystems are called "canga" [3] and occur within a dense forest matrix typical of the Amazon rainforest biome [4]. Canga vegetation, also associated with the presence of iron ore, is known to exist in at least two more regions in Brazil, namely, the Quadrilátero Ferrífero, or Iron Quadrangle [5], and the lateritic banks at Corumbá [6]. There are other types of open vegetation in the Amazon (Fig 1), but they are different from canga vegetation and are determined by different soil conditions (lateritic or very poor sandy soils). In 1967, geologists from United States Steel discovered these ferruginous outcrops on top of the ridges of the CMP, which is one of the most important metallogenic provinces in the world and contains large deposits of iron, as well as manganese, nickel, copper and gold [7]. Significant investments in mineral and ore exploration and exploitation have occurred over the past four decades [8].

(Fig 1 caption, partially recovered: "... (2), and Serra do Cipó (3). In Bahia State (BA), the vegetation occurs in Chapada Diamantina (4). In Pará State (PA), the vegetation occurs in Serra dos Carajás (5), Maraconaí (6), and Maicuru (7). In Amazonas State (AM), the vegetation occurs in Serra dos Seis Lagos [24], while in Mato Grosso do Sul (MS), the vegetation occurs in Morraria de Urucum (9). N1, N4, N5, N8, S2, S11, S23, S38, and S43 are examples of geomorphic units located in the study area. B) The Shuttle Radar Topography Mission (SRTM) elevation map of the study area with canga vegetation before mining implementation. The red and black lines represent the boundaries of the Carajás National Forest (CNF) and the Campos Ferruginosos National Park (CFNP) protected areas, respectively. The digital elevation model (SRTM, 1 arc-second) was obtained from USGS Earth Explorer (https://earthexplorer.usgs.gov) and the CNF and CFNP shapefiles from ICMBIO (http://mapas.icmbio.gov.br/i3geo/datadownload.htm). All other layers and photos were produced by the authors and are copyright-free.")

(Funding and competing interests, partially recovered: "... (AMG), and seventh (JTFG) authors were supported by CNPq through research scholarships. The specific roles of this author are articulated in the 'author contributions' section. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: MFC is employed by Vale S. A. This does not alter our adherence to PLOS ONE policies on sharing data and materials.")

Brazil's constitution and National Forest Code require that, in order to obtain a mining license, no net loss of biodiversity and only minimal environmental impacts can occur. Licensing processes demand basic information about the biota and environmental services associated with future mining areas [9, 10]. Mining activities must be conducted so as to control their interference with the environment. Hence, a Degraded Area Recovery Plan (PRAD in Portuguese) must be presented when the environmental viability of the project is assessed [11].
Compliance with legal demands brought opportunities for research, which has contributed to increasing knowledge about the flora of the canga in the CMP [4] and in other Brazilian mining sites located in Minas Gerais, Bahia, and Mato Grosso do Sul (Fig 1). However, it is clear that there are few floristic links between the Amazonian canga and the species found in the Brazilian cerrado, which are prevalent in the Amazonian savannas on lowlands, sandy soils, or the tops of plateaus, as seen between Venezuela and Suriname [12]. Two locations within our study area, the cangas of the Carajás National Forest (CNF) and the Campos Ferruginosos National Park (CFNP), contain 856 seed plant species, most of which are herbs (40%), with 24 endemic species. As for invasive plants, the same two localities contain 17 exotic invasive plant species, most of them located in the recently created CFNP [12]. However, knowledge of plant growth strategies and other factors that could affect the dynamics of recovery or rehabilitation of canga vegetation is still very limited [13]. Canga plateaus surrounded by evergreen forests are considered isolated entities, although little is known about dispersal between plateaus. Recent genetic analyses have demonstrated that two perennial morning glories (Ipomoea spp.) exhibited gene flow between these canga plateaus, and genetic diversity in these species was not influenced by the size of the plateaus [14]. Other work focusing on obligate cave dwellers revealed decreasing community similarity with increasing distance between caves, suggesting that these organisms do indeed move between caves and plateaus [15]. As opposed to the suppression of canga vegetation by mining in the Iron Quadrangle (Minas Gerais State, southeastern Brazil), which began in the 18th century, the suppression of canga vegetation in the CMP began only recently, in the 1980s.
Some authors have described the environmental degradation of canga vegetation in Brazil [16,17] and of a similar vegetation type in the ironstone ranges of Australia [18,19], recommending the establishment of protected areas to guarantee their conservation. In the CMP, seven protected areas are in place, pursuing a balance between mining and conservation. On the one hand, protected areas safeguard licensed mining operations from illegal activities through a green protected belt; on the other hand, the mining companies participate in the protection of natural areas, preventing fires and undesired human occupation through regular surveillance [20]. This kind of protection appears to have been achieved inside the protected areas of the Carajás region, where the forests are mostly undisturbed. In contrast, the areas surrounding Carajás (the Itacaiúnas River watershed) have lost 70% of their natural land cover (forests) over the past 40 years due to agriculture and cattle grazing [21]. Future expansion of mining is regulated by the recently published CNF Management Plan, which allows mining to expand until it reaches 14% of the total CNF area [22]. However, the CNF Management Plan does not specify the minimum extent of canga that must be preserved. Hence, the current loss of canga areas is still a challenge, since the areas of loss have so far only been estimated from analogue aerial photographs [23], which are not sufficiently accurate, unlike the orthorectified satellite images used in this study. Accurate mapping and quantification of canga vegetation areas within the eastern Amazon could be a first step to guide conservation strategies. Other authors have already discussed the importance of conserving the canga ecosystem and its rich biodiversity [12].
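As a point of reference for this loss quantification, the headline area figures reported in the abstract are internally consistent, as a quick arithmetic check shows (the km2 values and percentages are the study's own; the derived CNF total area is our inference from the stated percentages):

```python
# Canga area before mining in the whole CMP (km^2) and the fraction lost by 2016
cmp_original = 144.2
cmp_lost_fraction = 0.196
cmp_lost = cmp_original * cmp_lost_fraction   # ~28.3 km^2 converted to other uses
cmp_remaining = cmp_original - cmp_lost       # ~115.9 km^2 of canga left

# Inside the CNF: 105.2 km^2 of canga was 2.55% of the reserve, which implies
# a total CNF area of ~4125 km^2; 77.2 km^2 in 2016 then gives the 1.87% share.
cnf_total = 105.2 / 0.0255
share_2016 = 77.2 / cnf_total * 100

print(round(cmp_remaining, 1), round(share_2016, 2))  # → 115.9 1.87
```

The check also shows that the relative loss inside the CNF alone, (105.2 - 77.2) / 105.2, is about 26.6%, higher than the 19.6% figure for the CMP as a whole.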
Human disturbances are a major threat and began in the 18th century with vegetation suppression and anthropogenic burning in support of early mining activities, the cattle industry, eucalyptus plantations and wood extraction [17]. Currently, the harvesting of ornamental plants (such as orchids), road construction, urbanization, and invasive species are also considered important threats [17]. In this study, we aim to evaluate the land-cover and land-use (LCLU) changes in the canga vegetation of the CMP (eastern Brazilian Amazon) during the cycle of mining operations in order to quantify the impact of mining on canga vegetation. The objectives of this study are (1) to present a geographic object-based Landsat image classification to quantitatively assess the extent of canga areas in the study area before mining projects were implemented, in 1973; (2) to determine the extent of canga and forest areas around the beginning of mining activities (1984) and at different snapshots in time afterwards, specifically 2001 and 2016; and (3) to assess the average rate of canga vegetation suppression by land-use changes from one snapshot in time to another. This study is important due to the current lack of effective mapping and quantification of changes in canga vegetation area using orthorectified satellite images during mining operations. This gap in our current knowledge hinders the national and local understanding of canga vegetation in a setting of open-pit mining inside protected areas, as well as the determination of the necessary next steps to protect this vegetation.

Materials and methods

This project was carried out in the Carajás National Forest under permission of IBAMA (SIS-BIO 35594-2).

Study area

The study site is represented by the CMP ridges in the eastern Brazilian Amazon [7]. This region is recognized as a major Neoarchean tectonic province of the Amazonian Craton [25].
Geologically, this region is called the Carajás Formation, and it is composed of banded iron formations (BIFs) represented by jaspilites, with mafic rocks situated above and beneath them. Andesites, basalts, volcanoclastic materials, and gabbro are also present [26]. During the formation of these iron-rich deposits, the weathering of the Carajás Formation rocks occurred under humid climate conditions that allowed the formation of an extensively weathered profile on basic volcanic and BIF rocks. This alteration mantle contains iron-aluminous laterite, haematitic breccia, and ortho- and para-conglomerates [27], and acts as a surface crust on the tops of some ridges, regionally represented by the Carajás Ridge [28]. The climate in the region is classified as the Aw type according to Köppen [29]. The region experiences high annual rainfall (~2,000 mm). Peak precipitation occurs during the rainy season between January and March, while the driest season occurs between June and August. Monthly temperatures vary between 25°C and 26°C, with absolute minimum temperatures between 16°C and 18°C from July to October, and maximum temperatures between 34°C and 38°C during all other months [30]. In the CMP, canga vegetation occurs over laterites, haematite breccia and conglomerates on top of some ridges, with altitudes that range from 280 m to 904 m and average 670 m (Fig 1). In this paper, we subdivided the study site into seven geomorphic units: North (N1-N9), East (L1-L3), South (S1-S17), Tarzan (S18-S28), Bocaina (S29-S40), Cristalino (S43-S45), and Pium and São Felix (SF1-SF3) ridges. The nomenclature uses the letters N, S, L and SF to indicate the North ("Norte"), South ("Sul"), East ("Leste") and São Felix ridges [31], respectively (Fig 1). Mining projects in the CMP began in 1984 with the implementation of the N4-N5 mines. Later, the East Ridge and S11D mines began operating in 2012 and 2016, respectively [32].
It is important to emphasize that the largest iron ore mines, N4-N5 and S11D, occur inside the CNF. Only the East Ridge mine is located outside of the CNF protected area.

Remote sensing dataset, digital image processing and field data collection

Four Landsat images were used in this study. The 1973 Landsat-1 MSS image, with an 80 m spatial resolution, was used only to provide a visual observation of the canga areas before the first cycle of Amazon settlement. The 1984 and 2001 Landsat-5 TM and 2016 Landsat-8 OLI images were acquired in the Level 1 Terrain (L1T) format. The images were orthorectified with 30 m pixels to the Universal Transverse Mercator (UTM) 22S zone projection and datum WGS84. The 1984, 2001 and 2016 Landsat images were all converted to ground reflectance in percentages. For each Landsat image, we derived the Normalized Difference Vegetation Index (NDVI) to discriminate between vegetated areas and exposed soil [33]. Fieldwork was conducted in 2014 and 2015 to determine the LCLU classes (e.g., canga, forest, and water) using panoramic digital photographs. During the fieldwork, 166 ground control points (GCPs) were collected using a differential global positioning system (DGPS) with reliable real-time positioning through the OmniSTAR mode for decimetre-level accuracy. These GCPs were used to validate the 2016 Landsat-8 OLI image classification. Training and validation samples were defined per class based on the GCPs. These data were also complemented by Google Earth Pro online high-resolution imagery. Regardless of the up to thirty-year difference between the images and the field data acquisition, all georeferenced field descriptions were taken into account.

Measuring land-cover and land-use changes

To estimate the canga area at the four snapshots in time, we used a multiresolution segmentation algorithm based on the homogeneity definition [34].
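The NDVI derived above is the standard normalized band ratio; for Landsat-5 TM it uses band 4 (NIR) and band 3 (red), and for Landsat-8 OLI band 5 (NIR) and band 4 (red). A minimal sketch in Python/NumPy (array and variable names are illustrative, not from the study's processing chain):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).

    Inputs are reflectance bands (arrays or scalars); values range from -1 to 1,
    with dense vegetation near +1 and exposed soil/rock near 0.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# e.g. a vegetated pixel (high NIR, low red) vs. bare laterite (similar NIR and red)
veg, bare = ndvi(0.45, 0.05), ndvi(0.25, 0.20)
```

In practice, pixels where both bands are zero (e.g. nodata margins) should be masked before the division.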
The three-date segmentation was conducted using all of the ground reflectance bands from the 1984 and 2001 Landsat-5 TM and 2016 Landsat-8 OLI images, since they have the same spatial resolution (30 m). They were segmented using a weight of five for the near-infrared band and the NDVI index, and a weight of one for all other bands. The three-date segmentation was copied and used to classify the LCLU classes based on the 1973 Landsat-1 MSS, 1984 and 2001 Landsat-5 TM and 2016 Landsat-8 OLI images. This process followed geographic object-based image analysis (GEOBIA), combining the advantages of quality human interpretation with the capacities of quantitative computing [35]. For the purpose of detecting LCLU changes, we carried out a single segmentation process over the three single-date images, as used by Desclée et al. [36] and Duveiller et al. [37]. Multi-date segmentation allows the three single images to be compared based on objects with the same geometry, delineating spatially and spectrally consistent segments and avoiding misclassification, which makes the process more accurate and rapid and reduces the additional effort of outlining polygons [38]. All Landsat TM and OLI bands (ground reflectance values) of the 1984, 2001 and 2016 images were used as input layers in the segmentation process. Fig 3 illustrates the step-by-step multi-date segmentation and classification process in a small mining site in the study area. The segmentation approach was developed in two levels (Fig 3). Small objects were created in the level 1 segmentation (~1 ha), corresponding to approximately 9 Landsat TM and OLI pixels. Later, objects generated during the level 1 segmentation were grouped into coarser objects in level 2, with a size of 4 ha, equivalent to 36 Landsat TM and OLI pixels. The level 2 segmentation aimed to reduce the number of objects and increase the size of polygons to facilitate visual interpretation and change detection analysis.
Regarding the definition of the scale parameter (h_sc), several unsupervised and supervised methods are available to define its optimal value [39]. However, the selection of appropriate scale parameters depends heavily on trial-and-error exploration, which is iterative and time-consuming [40], because there is no obvious mathematical relationship between scale parameters and the success of the segmentation [39]. The segmentation shape (w_sp), compactness (w_cp) and scale (h_sc) parameters were set to 0.1, 0.5, and 10, respectively, for all images. The h_sc parameters at the level 1 and level 2 segmentations were 10 and 5, respectively, producing image segmentations with minimum object sizes of approximately 1 and 4 ha, respectively. During the automated classification of each image, we adopted membership functions describing specific properties of the objects based on all Landsat spectral bands and elevation data obtained from the Shuttle Radar Topography Mission (SRTM). This approach allowed various features to be integrated in the description of classes by logical operators. The selection of features was assisted by a separability analysis of the comparable classes. Each class was classified separately in the domain of the image object level using the filter class "unclassified", in the following order: i) forest; ii) water; iii) pasturelands; iv) mines; and v) canga vegetation. It is important to emphasize that i) the 1973 Landsat-1 MSS image represents the canga areas before the mining project, ii) the 1984 Landsat-5 TM image coincides with the year of the Carajás Mining Project installation, iii) the 2001 Landsat-5 TM image represents the mid-term age of the mining project, and iv) the 2016 Landsat-8 OLI image represents the current condition, when all iron ore exploitation projects (N4-N5, S11D and Serra Leste mines) were already in operation.
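To make the roles of the shape, compactness and scale parameters concrete: in multiresolution segmentation [34] (the Baatz-Schäpe formulation), two adjacent segments are merged only while the resulting increase in heterogeneity stays below the square of the scale parameter. The sketch below is our single-band reconstruction for illustration, not code from the study; the default weights mirror the settings reported above (shape weight 0.1, hence color weight 0.9, and compactness 0.5):

```python
import math

def merge_cost(n1, std1, l1, b1, n2, std2, l2, b2,
               n_m, std_m, l_m, b_m, w_color=0.9, w_cmpct=0.5):
    """Heterogeneity increase caused by merging segments 1 and 2 into segment m.

    n = pixel count, std = spectral standard deviation (single band for brevity),
    l = perimeter, b = perimeter of the segment's bounding box.
    Defaults mirror the paper's settings: shape weight w_sp = 0.1 (so the color
    weight is 0.9) and compactness weight w_cp = 0.5.
    """
    d_color = n_m * std_m - (n1 * std1 + n2 * std2)
    cmpct = lambda n, l: l / math.sqrt(n)    # compactness heterogeneity
    smooth = lambda l, b: l / b              # smoothness heterogeneity
    d_cmpct = n_m * cmpct(n_m, l_m) - (n1 * cmpct(n1, l1) + n2 * cmpct(n2, l2))
    d_smooth = n_m * smooth(l_m, b_m) - (n1 * smooth(l1, b1) + n2 * smooth(l2, b2))
    d_shape = w_cmpct * d_cmpct + (1 - w_cmpct) * d_smooth
    return w_color * d_color + (1 - w_color) * d_shape

# a merge is accepted only while merge_cost(...) < h_sc ** 2, so a larger scale
# parameter h_sc tolerates more heterogeneity and yields larger image objects
```

This is why h_sc = 10 at level 1 versus 5 at level 2 controls object size, while w_sp and w_cp only trade spectral homogeneity against object shape.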
The LCLU changes were also analysed based on the "from-to" spatiotemporal change detection approach [35,41] to recognize the trajectories of the thematic classes from 1973-1984, 1984-2001, 2001-2016, and 1973-2016. We identified five classes that did not change over the period of investigation (forest-forest, savanna-savanna, lake-lake, mine-mine, and pasturelands-pasturelands) and tracked the possible change trajectories related to the "from-to" conversions of forests to pasturelands, forests to mines, canga to mines, canga to pasturelands, pasturelands to mines, and lakes to mines.

Assessing the classification accuracy of LCLU classes

An object-based accuracy assessment differs from pixel-based validation in its sampling units, i.e., objects vs. pixels [42]. However, a generally accepted approach is that classified polygons can be validated by GCPs [43]. To assess the classification accuracy of the 2016 Landsat-8 OLI image, the 166 GCPs collected during fieldwork along accessible roads were used. As older GCPs and thematic maps were unavailable for the 1973 Landsat-1 MSS and the 1984 and 2001 Landsat-5 TM images, approximately 154, 159 and 137 validation points, respectively, were generated by stratified random sampling using the PCI Geomatica 2016 software. The accuracy of the Landsat image classifications was then assessed using non-normalized and normalized confusion matrices [44]. The producer and user accuracies [45], Kappa per class, Kappa index of agreement and overall accuracy were also determined [43].

Results

To assess the relationship between the Landsat image interpretations and terrain features, field campaigns were conducted in the study area to improve the GEOBIA analysis, aiming to identify and map the different land cover and land use units. The multiresolution classification based on the GEOBIA analysis effectively classified the canga vegetation and its related mines.
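The accuracy measures used in the assessment (overall, producer's and user's accuracies, and the Kappa index of agreement) are all derived from the confusion matrix. A compact sketch (we assume rows hold the classified map and columns the reference samples; conventions vary, so check against your matrix layout):

```python
import numpy as np

def accuracy_metrics(cm):
    """Accuracy measures from a square confusion matrix.

    Assumed layout (conventions vary): rows = classified map, columns = reference
    samples. Producer's accuracy = 1 - omission error; user's = 1 - commission error.
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    overall = diag.sum() / n
    producers = diag / cm.sum(axis=0)          # per reference-class totals
    users = diag / cm.sum(axis=1)              # per map-class totals
    chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    kappa = (overall - chance) / (1 - chance)  # agreement beyond chance
    return overall, producers, users, kappa
```

For example, a two-class matrix [[45, 5], [5, 45]] gives an overall accuracy of 0.9 and a Kappa of 0.8, since 50% agreement would already be expected by chance.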
Fig 5 shows these classes distributed within the study site over the years: before mine implementation, indicating the pristine area of canga vegetation (Fig 5A), and its current extent together with the area of mining activities in 2016 (Fig 5B). Based on random samples collected from the Landsat images, the results indicated that the overall accuracies and Kappa indices were approximately 98% (S1 Table). The overall accuracies indicate that the large majority of segments were correctly identified according to the reference data (random samples). Some lake pixels were classified as artificial lakes in mines, while some mine samples were classified as forests (e.g., reclaimed areas colonized by grasslands) and canga vegetation, whose spectral responses are very similar to those of outcrops in mines. The highest omission errors occurred in lakes (18.2%) for the 2001 Landsat-5 TM image, while the highest commission errors occurred in the pasturelands class for the 1973 Landsat-1 MSS image (11.8%). The confusion between forests, pasturelands and mines can be explained by the regeneration of small patches of secondary forest in pasture areas and the revegetation of open pit mines with grasses. Misclassifications of segments belonging to these classes can be observed in S1 Table. The classification shows that canga vegetation in the CMP occupied an area of 144.2 km² in 1973, before the implementation of the Carajás N4-N5 mines and open pit mining exploitation (Table 1, Fig 5A). Table 1 presents the canga area before the implementation of mining projects (1973) and its extent in 1984, 2001 and currently (2016) at the different geographical sites. Canga vegetation was converted to mines on the North, South and East Ridges, but to different extents. On the North Ridge, where the N4 and N5 mines are located, 45.6% of the canga vegetation was lost between 1973 and 2016. On the South and East Ridges, where the S11D and SL mines are located, 13% and 8% of the canga vegetation were lost, respectively.
On the Tarzan, Bocaina, Cristalino, Pium and São Felix Ridges, the canga areas remained unchanged (Fig 5C). Inside the CNF protected area, where the three largest iron ore mines are located, there was 105.2 km² of canga vegetation before mining implementation, representing 2.5% of the CNF area and 73% of the total area of canga vegetation in the Carajás region. The N4 (16.6 km²), S11D (16.2 km²), S11A (14.5 km²), N1 (12.1 km²) and N5 (11.8 km²) ridges contained the largest canga areas in the study site. Over the past three decades, mining activities suppressed 28.3 km² of the canga area in the CMP, especially on the N4 (13.0 km²), N5 (8.9 km²), and S11D (6.0 km²) ridges within the CNF. This area represents 1.8% of the CNF protected area as of July 2016. Outside of the CNF, there was 39 km² of canga vegetation on the Bocaina, Cristalino, East and São Felix Ridges. The majority of this area (38.6 km²) remains conserved, and the suppression of 0.4 km² of canga area is associated with the implementation of the SL mine on the East Ridge (Fig 5). Based on the area of canga vegetation suppressed at each site (Table 1), the rates of suppression were calculated from the moments that mining activities began: 1984, 2012 and 2014 for the N4-N5, SL and S11D mines, respectively. In the SL mine, the rate of canga suppression was approximately 0.1 km² yr⁻¹ from 2012 to 2016. In the S11D mine, the suppression rate reached 2 km² yr⁻¹ from 2013 to 2016, while in the N4-N5 mines, the rate decreased from 0.9 km² yr⁻¹ in 1984-2001 to 0.5 km² yr⁻¹ in 2001-2016. S2 Table lists the minor LCLU changes (less than 1 km²). Between 1984 and 2001, LCLU changes were most notable in the conversion from canga to mines (8.8 km²) and from forests to mines (5.7 km²).
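The suppression rates quoted above are simple area-over-time quotients, and the overall loss percentage follows directly from the totals; the reported figures can be checked with the areas and operating periods stated in the text:

```python
# suppressed canga area (km^2) divided by years of operation, as reported in the text
sl_rate   = 0.4 / (2016 - 2012)   # SL mine, active since 2012
s11d_rate = 6.0 / (2016 - 2013)   # S11D mine suppression window, 2013-2016
loss_pct  = 28.3 / 144.2 * 100    # share of the 1973 canga area suppressed by 2016

print(round(sl_rate, 1), round(s11d_rate, 1), round(loss_pct, 1))  # 0.1 2.0 19.6
```

The printed values reproduce the text's ~0.1 and 2 km² yr⁻¹ rates and the 19.6% overall loss reported in the Discussion.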
From 2001 to 2016, 15.4 km² of canga and 10.6 km² of forests were converted to mines in the N4-N5 mines, while 0.4 km² and 6 km² of canga were converted to mines in SL and S11D, respectively. During the entire period investigated, the main land cover changes were associated with conversions from forests to mines (26.6 km²) and canga to mines (24.3 km²). Table 2 shows all quantifications of LCLU changes between the periods of investigation.

Discussion

In this paper, we mapped and quantified canga vegetation as well as the changes in LCLU classes in the Carajás region, focusing mainly on the effects of mining operations. Previously, the canga area in Brazil was estimated at approximately 261.6 km², with 102 km² in the Iron Quadrangle and 103 km² in the Carajás region [23]; however, the methods used for these estimates were not fully described. Our results show that the Carajás region originally included 144.2 km² of canga vegetation, a figure 40% higher than the previous estimate [23]. As mentioned before, previous studies do not describe how the canga area was estimated. It is probable that the canga area was calculated from analogue aerial photographs whose spatial distortion was not corrected. Mining activities suppressed 28.3 km² of canga (19.6%), while large areas of canga remain conserved in the Carajás region (115.9 km²). This area corresponds to 80.4% of the pristine canga area and represents one of the largest conserved canga ecosystem areas in Brazil. In the Iron Quadrangle (Minas Gerais State), the total area covered by canga vegetation is approximately 100 km² [46]. However, this estimated extent must be reviewed due to the methods used to estimate the area, which were based on uncorrected analogue aerial photographs. According to Sonter et al. [10], 17.6 km² of canga area has been cleared by mining activities there.
The percentage of canga suppression in the Iron Quadrangle (approximately 17%) is similar to that in the Carajás region. The high rates of canga vegetation suppression observed in the S11D mine are associated with the early stages of mine implementation. According to the S11D project, the useful lifespan of the mine is approximately 30 years (http://www.vale.com/en/initiatives/innovation/s11d/pages/default.aspx). Hence, the canga vegetation suppression observed over the past three years (~6 km²) will increase by less than 10% over the next 30 years, and the suppression rate will fall to approximately 0.2 km² yr⁻¹. In the N4-N5 mines, the main ramp-up period is represented by the changes from 1984 to 2001, when canga suppression was almost twice as high as in the open pit phase from 2001 to 2016. However, if demand for iron ore increases over the next several years, the useful lifespan of the mine will decrease and the rates of canga vegetation suppression may increase. A similar process has been observed in terms of Amazon rainforest losses due to the large demands on Brazil's natural resources, including land, timber, minerals and hydroelectric potential, demands mainly driven by high commodity prices [47,48]. The results of Sonter et al. [49] supported the hypothesis that global demand for steel drives extensive land-use changes in the Iron Quadrangle, where increased steel production was correlated with increased iron ore production and mine expansion. Consequently, this process is also responsible for increasing charcoal production and the expansion of subsistence crops. In a future study, this hypothesis will be evaluated in relation to the Carajás mining projects. The accurate mapping of areas is vital to guiding conservation strategies, especially for canga vegetation in the eastern Brazilian Amazon where the iron ores are located.
Many authors have already described the threats to canga and the challenges of conserving this vegetation [16,17,50,51]. Among them, vegetation suppression for open pit mining, fire and invasive plant species have been noted as the major threats [17]. The greatest challenge in conserving canga vegetation is to manage biological invasions and create protected areas to avoid species losses [23,51]. The canga areas in the Carajás region are partially (67%) within the CNF. This category of protected area allows for sustainable use, including sustainable mining activities. The CNF has also contributed to forest conservation, with surveillance improving the ability to inspect different anthropogenic impacts associated with fire, human settlements and gold digging. Otherwise, other economic activities such as livestock and agriculture would have threatened the natural land cover, mainly the tropical rainforests, which would have been completely converted to pasturelands or croplands, as observed in adjacent areas [21]. To improve the conservation of canga vegetation in this area, the Brazilian Institute for Biodiversity Conservation (ICMBio) created the Campos Ferruginosos National Park (CFNP) in June 2017, an integral conservation unit financed by offset strategies for mining operations. This park includes the Bocaina and Tarzan Ridges, totalling approximately 24 km² of canga vegetation, which represents 21% of the canga area in the CMP, where mining activities will not be allowed. Hence, an additional step was taken towards canga vegetation conservation in the Amazon region. In addition, the recently published CNF Management Plan defines the areas to be protected and those that can be mined and divides these areas into categories, which include i) preservation areas, covering 15% of the total area of the CNF, where human activities are not allowed; ii) transition areas, covering 15% of the CNF, with mixed areas of conservation and management; iii) mining areas, where mining activities can be conducted, covering 14% of the CNF area; iv) forest areas designated for sustainable management (50% of the CNF area); and v) special and public use areas, covering 6% and designed for infrastructure and general use of the CNF [52].

Fig 5. Digital elevation model extracted from the Shuttle Radar Topography Mission (SRTM) showing canga areas (yellow polygons) (A) before the implementation of mining projects and (B) currently, with mining areas (blue polygons). The limits of the two protected areas are shown: the Carajás National Forest (red line) and the Campos Ferruginosos National Park (black line), created in 1998 and 2017, respectively. The São Felix Ridge is illustrated as an inset window, 160 km from the South Ridge. (C) The canga areas converted to mining structures between the 1970s and 2016 on the North, South and East Ridges, and over the total canga area. The letters N, S and SL represent the geographic locations of the North, South, and East Ridges, respectively. Other mines: a = Igarapé Bahia, b = Azul, and c = Project 118. The digital elevation model (SRTM, 1 arc-second) was obtained from USGS Earth Explorer (https://earthexplorer.usgs.gov) and the National Forest and National Park shapefiles from ICMBIO (http://mapas.icmbio.gov.br/i3geo/datadownload.htm). All other layers were produced by the authors and are copyright-free. https://doi.org/10.1371/journal.pone.0211095.g005

The areas of canga vegetation destined for mining consist of mines (see locations in Fig 4) that are already installed, such as N4 and N5 on the North Ridge and Azul, Igarapé Bahia, Project 118, and S11D on the South Ridge, among others that will be installed in the near future (N1, N2, N3 on the North Ridge). The Carajás ridges are a clear example of the challenges of conserving and exploiting natural resources. The growing demands of society and the quality of the Carajás iron ore have encouraged the exploitation of this resource in these areas.
However, for sustainable mining, adequate conservation strategies need to be implemented properly, and scientific research is a key aspect of this process. Heavy-metal pollution in water from iron ore exploitation has not yet been detected in the Carajás region [53]. The proper designation of areas outside the Carajás region to be protected as offsets may represent an important tool for the protection of species in the region.

Conclusion

Remote sensing data and GIS tools provided four snapshots in time, permitting the mapping of canga areas and the quantification of changes in land use. Based on the image analysis, we observed that the canga areas in the CMP are 40% larger than previous estimates. The suppression of canga vegetation was associated with the implementation of mining projects, which favoured the suppression of forest areas rather than canga areas. It is important to emphasize that the most substantial vegetation suppression occurred during the earlier stages of mine implementation, from 1984 to 2001. Later, vegetation suppression was substantially reduced during the open pit mining phase from 2001 to 2016, in which a hole is excavated from the earth's surface. After three decades of mineral exploitation, 80.4% of the canga area in the Carajás region remains untouched. Government and mining industries have used offsets to compensate for the unavoidable impacts of iron ore exploitation. Hence, the CFNP was created to protect 21% of the canga area in the CMP. We believe that mapping and quantifying the areas of canga vegetation that have already been lost can be considered the first step towards conserving this important rocky environment.
Deep Learning for Generic Object Detection: A Survey

Object detection, one of the most fundamental and challenging problems in computer vision, seeks to locate object instances from a large number of predefined categories in natural images. Deep learning techniques have emerged as a powerful strategy for learning feature representations directly from data and have led to remarkable breakthroughs in the field of generic object detection. Given this period of rapid evolution, the goal of this paper is to provide a comprehensive survey of the recent achievements in this field brought about by deep learning techniques. More than 300 research contributions are included in this survey, covering many aspects of generic object detection: detection frameworks, object feature representation, object proposal generation, context modeling, training strategies, and evaluation metrics. We finish the survey by identifying promising directions for future research.

Introduction

As a longstanding, fundamental and challenging problem in computer vision, object detection has been an active area of research for several decades. The goal of object detection is to determine whether or not there are any instances of objects from the given categories (such as humans, cars, bicycles, dogs and cats) in some given image and, if present, to return the spatial location and extent of each object instance (e.g., via a bounding box [53,179]). As the cornerstone of image understanding and computer vision, object detection forms the basis for solving more complex or higher-level vision tasks such as segmentation, scene understanding, object tracking, image captioning, event detection, and activity recognition. Object detection has a wide range of applications in many areas of artificial intelligence and information technology, including robot vision, consumer electronics, security, autonomous driving, human-computer interaction, content-based image retrieval, intelligent video surveillance, and
augmented reality. Recently, deep learning techniques [81,116] have emerged as powerful methods for learning feature representations automatically from data. In particular, these techniques have provided significant improvements for object detection, a problem which has attracted enormous attention in the last five years, even though it has been studied for decades by psychophysicists, neuroscientists, and engineers. Object detection can be grouped into one of two types [69,240]: detection of specific instances and detection of specific categories. The first type aims at detecting instances of a particular object (such as Donald Trump's face, the Pentagon building, or my dog Penny), whereas the goal of the second type is to detect different instances of predefined object categories (for example humans, cars, bicycles, and dogs). Historically, much of the effort in the field of object detection has focused on the detection of a single category (such as faces and pedestrians) or a few specific categories. In contrast, in the past several years the research community has started moving towards the challenging goal of building general purpose object detection systems whose breadth of object detection ability rivals that of humans.

Fig. 2 Milestones of object detection and recognition, including feature representations [37,42,79,109,114,139,140,166,191,194,200,213,215], detection frameworks [56,65,183,209,213], and datasets [53,129,179]. The time period up to 2012 is dominated by handcrafted features. We see a turning point in 2012 with the development of DCNNs for image classification by Krizhevsky et al. [109]. Most listed methods are highly cited and won one of the major ICCV or CVPR prizes. See Section 2.3 for details.

However, in 2012, Krizhevsky et al.
[109] proposed a Deep Convolutional Neural Network (DCNN) called AlexNet, which achieved record-breaking image classification accuracy in the Large Scale Visual Recognition Challenge (ILSVRC) [179]. Since that time, the research focus in many computer vision application areas has been on deep learning methods. A great many approaches based on deep learning have sprung up in generic object detection [65,77,64,183,176] and tremendous progress has been achieved, yet we are unaware of comprehensive surveys of the subject during the past five years. Given this time of rapid evolution, the focus of this paper is specifically that of generic object detection by deep learning, in order to gain a clearer panorama of generic object detection. The generic object detection problem itself is defined as follows: given an arbitrary image, determine whether there are any instances of semantic objects from predefined categories and, if present, return the spatial location and extent of each. An object refers to a material thing that can be seen and touched. Although largely synonymous with object class detection, generic object detection places a greater emphasis on approaches aimed at detecting a broad range of natural categories, as opposed to object instances or specialized categories (e.g., faces, pedestrians, or cars). Generic object detection has received significant attention, as demonstrated by recent progress in object detection competitions such as the PASCAL VOC detection challenge from 2006 to 2012 [53,54], the ILSVRC large scale detection challenge since 2013 [179], and the MS COCO large scale detection challenge since 2015 [129]. The striking improvement in recent years is illustrated in Fig. 1. There are few recent surveys focusing directly on the problem of generic object detection, except for the work by Zhang et al.
[240], who conducted a survey on the topic of object class detection. However, the research reviewed in [69], [5] and [240] is mostly from before 2012, and therefore predates the recent striking success of deep learning and related methods. Deep learning allows computational models consisting of multiple hierarchical layers to learn fantastically complex, subtle, and abstract representations. In the past several years, deep learning has driven significant progress in a broad range of problems, such as visual recognition, object detection, speech recognition, natural language processing, medical image analysis, drug discovery and genomics. Among the different types of deep neural networks, Deep Convolutional Neural Networks (DCNNs) [115,109,116] have brought about breakthroughs in processing images, video, speech and audio. Given this time of rapid evolution, researchers have recently published surveys on different aspects of deep learning, including those of Bengio et al. [12], LeCun et al. [116], Litjens et al. [133], and Gu et al. [71], and more recently in tutorials at ICCV and CVPR.
Although many deep learning based methods have been proposed for object detection, we are unaware of comprehensive surveys of the subject during the past five years, which is the focus of this survey. A thorough review and summarization of existing work is essential for further progress in object detection, particularly for researchers wishing to enter the field. Extensive work on CNNs for specific object detection, such as face detection [120,237,92], pedestrian detection [238,85], vehicle detection [247] and traffic sign detection [253], will not be included in our discussion.

Categorization Methodology

The number of papers on generic object detection published since the advent of deep learning is breathtaking. So many, in fact, that compiling a comprehensive review of the state of the art already exceeds the scope of a paper like this one. It is necessary to establish some selection criteria, e.g., the completeness of a paper and its importance to the field. We have preferred to include top journal and conference papers. Due to limitations on space and our knowledge, we sincerely apologize to those authors whose works are not included in this paper. For surveys of efforts on related topics, readers are referred to the articles in Table 1. This survey mainly focuses on the major progress made in the last five years; but for completeness and better readability, some early related works are also included. We restrict ourselves to still images and leave video object detection as a separate topic. The remainder of this paper is organized as follows. Related background, including the problem, key challenges and the progress made during the last two decades, is summarized in Section 2. We describe the milestone object detectors in Section 3. Fundamental subproblems and relevant issues involved in designing object detectors are presented in Section 4.
A summarization of popular databases and state-of-the-art performance is given in Section 5. We conclude the paper with a discussion of several promising directions in Section 6.

The Problem

Generic object detection (i.e., generic object category detection), also called object class detection [240] or object category detection, is defined as follows. Given an image, the goal of generic object detection is to determine whether or not there are instances of objects from many predefined categories and, if present, to return the spatial location and extent of each instance. It places greater emphasis on detecting a broad range of natural categories, as opposed to specific object category detection, where only a narrower predefined category of interest (e.g., faces, pedestrians, or cars) may be present. Although thousands of objects occupy the visual world in which we live, the research community is currently primarily interested in the localization of highly structured objects (e.g., cars, faces, bicycles and airplanes) and articulated objects (e.g., humans, cows and horses), rather than unstructured scenes (such as sky, grass and cloud). Typically, the spatial location and extent of an object can be defined coarsely using a bounding box, i.e., an axis-aligned rectangle tightly bounding the object [53,179], a precise pixel-wise segmentation mask, or a closed boundary [180,129], as illustrated in Fig. 3. To the best of our knowledge, in the current literature, bounding boxes are more widely used for evaluating generic object detection algorithms [53,179], and this is the approach we adopt in this survey as well. However, the community is moving towards deep scene understanding (from image-level object classification to single object localization, to generic object detection, and to pixel-wise object segmentation), hence it is anticipated that future challenges will be at the pixel level [129].
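Since bounding boxes dominate evaluation, benchmarks such as PASCAL VOC score a detection as correct when its Intersection over Union (IoU) with a ground-truth box exceeds a threshold (0.5 in the VOC protocol). A minimal, self-contained sketch with boxes given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # corners of the intersection rectangle (empty if ix2 <= ix1 or iy2 <= iy1)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit-offset 2x2 boxes overlap in a 1x1 region, giving IoU = 1/7, well below the usual 0.5 matching threshold.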
There are many problems closely related to that of generic object detection 1 . The goal of object classification or object categorization (Fig. 3 (a)) is to assess the presence of objects from a given number of object classes in an image, i.e., assigning one or more object class labels to a given image, determining presence without the need for location. The additional requirement to locate the instances in an image obviously makes detection a more challenging task than classification. The object recognition problem denotes the more general problem of finding and identifying objects of interest in an image, subsuming the problems of object detection and object classification [53,179,156,5].

1 To the best of our knowledge, there is no universal agreement in the literature on the definitions of various vision subtasks. Frequently encountered terms such as detection, localization, recognition, classification, categorization, verification, identification, annotation, labeling and understanding are often defined differently [5].

Fig. 5 Changes in imaged appearance of the same class with variations in imaging conditions (a-g). There is an astonishing variation in what is meant to be a single object class (h). In contrast, the four images in (i) appear very similar, but in fact are from four different object classes. Images from ImageNet [179] and MS COCO [129].

Generic object detection is closely related to semantic image segmentation (Fig. 3 (c)), which aims to assign each pixel in an image a semantic class label. Object instance segmentation (Fig. 3 (d)) aims at distinguishing different instances of the same object class, whereas semantic segmentation does not. Generic object detection also distinguishes different instances of the same object class; unlike segmentation, however, object detection includes the background region within each bounding box, which may be useful for analysis.

Main Challenges

Generic object detection aims at localizing and recognizing a broad range of natural object categories. The ideal goal is to develop general-purpose object detection algorithms that achieve two competing goals: high quality/accuracy and high efficiency, as illustrated in Fig. 4. As illustrated in Fig. 5, high quality detection must accurately localize and recognize objects in images or video frames, such that the large variety of object categories in the real world can be distinguished (i.e., high distinctiveness), and such that object instances from the same category, subject to intraclass appearance variations, can be localized and recognized (i.e., high robustness). High efficiency requires the entire detection task to run at a sufficiently high frame rate with acceptable memory and storage usage. Despite several decades of research and significant progress, the combined goals of accuracy and efficiency have arguably not yet been met.

Accuracy related challenges

For accuracy, the challenges stem from 1) the vast range of intraclass variations and 2) the huge number of object categories.

We begin with intraclass variations, which can be divided into two types: intrinsic factors and imaging conditions. For the former, each object category can have many different object instances, possibly varying in one or more of color, texture, material, shape, and size, such as the "chair" category shown in Fig. 5 (h). Even in a more narrowly defined class, such as human or horse, object instances can appear in different poses, with nonrigid deformations and different clothes.
For the latter, the variations are caused by changes in imaging conditions and unconstrained environments, which may have dramatic impacts on object appearance. In particular, different instances, or even the same instance, can be captured under a wide range of conditions: different times, locations, weather conditions, cameras, backgrounds, illuminations, viewpoints, and viewing distances. All of these conditions produce significant variations in object appearance, such as illumination, pose, scale, occlusion, background clutter, shading, blur and motion, with examples illustrated in Fig. 5 (a-g). Further challenges may be added by digitization artifacts, noise corruption, poor resolution, and filtering distortions.

In addition to intraclass variations, the large number of object categories, on the order of 10^4 - 10^5, demands great discrimination power of the detector to distinguish between subtly different interclass variations, as illustrated in Fig. 5 (i). In practice, current detectors focus mainly on structured object categories, such as the 20, 200 and 91 object classes in PASCAL VOC [53], ILSVRC [179] and MS COCO [129] respectively. Clearly, the number of object categories under consideration in existing benchmark datasets is much smaller than the number that can be recognized by humans.

Efficiency related challenges

The exponentially increasing number of images calls for efficient and scalable detectors. The prevalence of social media networks and mobile/wearable devices has led to increasing demands for analyzing visual data. However, mobile/wearable devices have limited computational capability and storage space, making an efficient object detector critical.

For efficiency, the challenges stem from the need to localize and recognize all object instances of a very large number of object categories, and from the very large number of possible locations and scales within a single image, as shown by the example in Fig.
5 (c). A further challenge is that of scalability: a detector should be able to handle unseen objects, unknown situations, and rapidly increasing image data. For example, the scale of ILSVRC [179] is already imposing limits on the manual annotations that are feasible to obtain. As the number of images and the number of categories grow even larger, it may become impossible to annotate them manually, forcing algorithms to rely more on weakly supervised training data.

Progress in the Past Two Decades

Early research on object recognition was based on template matching techniques and simple part based models [57], focusing on specific objects whose spatial layouts are roughly rigid, such as faces. Before 1990 the leading paradigm of object recognition was based on geometric representations [149,169], with the focus later moving away from geometry and prior models towards the use of statistical classifiers (such as Neural Networks [178], SVM [159] and Adaboost [213,222]) based on appearance features [150,181]. This successful family of object detectors set the stage for most subsequent research in the field.

In the late 1990s and early 2000s object detection research made notable strides. The milestones of object detection in recent years are presented in Fig. 2, in which two main eras (SIFT vs.
DCNN) are highlighted. Appearance features moved from global representations [151,197,205] to local representations that are invariant to changes in translation, scale, rotation, illumination, viewpoint and occlusion. Handcrafted local invariant features gained tremendous popularity, starting with the Scale Invariant Feature Transform (SIFT) [139], and progress on various visual recognition tasks was based substantially on the use of local descriptors [145] such as Haar-like features [213], SIFT [140], Shape Contexts [11], Histogram of Gradients (HOG) [42], Local Binary Patterns (LBP) [153] and covariance descriptors [206]. These local features are usually aggregated by simple concatenation or by feature pooling encoders, such as the influential and efficient Bag of Visual Words approach introduced by Sivic and Zisserman [194] and Csurka et al. [37], Spatial Pyramid Matching (SPM) of BoW models [114], and Fisher Vectors [166].

For years, multistage hand-tuned pipelines of handcrafted local descriptors and discriminative classifiers dominated a variety of domains in computer vision, including object detection, until the significant turning point in 2012 when Deep Convolutional Neural Networks (DCNNs) [109] achieved their record-breaking results in image classification. The successful application of DCNNs to image classification [109] transferred to object detection, resulting in the milestone Region based CNN (RCNN) detector of Girshick et al. [65]. Since then, the field of object detection has evolved dramatically and many deep learning based approaches have been developed, thanks in part to available GPU computing resources and to large scale datasets and challenges such as ImageNet [44,179] and MS COCO [129]. With these new datasets, researchers can target more realistic and complex problems, detecting objects of hundreds of categories from images with large intraclass variations and interclass similarities [129,179].
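The Bag of Visual Words aggregation mentioned above reduces to two steps: quantize each local descriptor to its nearest codeword in a learned vocabulary, then histogram the assignments. A toy sketch follows; the tiny 2-D "descriptors" and hand-picked two-word codebook stand in for real SIFT vectors and a k-means-learned vocabulary, which are our illustrative simplifications.

```python
def bow_histogram(descriptors, codebook):
    """Quantize each local descriptor to its nearest codeword and count occurrences."""
    hist = [0] * len(codebook)
    for d in descriptors:
        # Nearest codeword by squared Euclidean distance.
        best = min(range(len(codebook)),
                   key=lambda k: sum((di - ci) ** 2 for di, ci in zip(d, codebook[k])))
        hist[best] += 1
    # L1-normalize so images with different numbers of local features are comparable.
    total = sum(hist) or 1
    return [h / total for h in hist]

codebook = [(0.0, 0.0), (1.0, 1.0)]              # two "visual words"
descs = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.8)]     # three toy local descriptors
print(bow_histogram(descs, codebook))
```

The resulting fixed-length histogram is what a discriminative classifier such as an SVM would consume, regardless of how many local features the image produced.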
The research community has started moving towards the challenging goal of building general purpose object detection systems whose ability to detect many object categories matches that of humans. This is a major challenge: according to cognitive scientists, human beings can identify around 3,000 entry level categories and 30,000 visual categories overall, and the number of categories distinguishable with domain expertise may be on the order of 10^5 [14]. Despite the remarkable progress of the past years, designing an accurate, robust and efficient detection and recognition system that approaches human-level performance on 10^4 - 10^5 categories is undoubtedly an open problem.

Frameworks

There has been steady progress in object feature representations and classifiers for recognition, as evidenced by the dramatic change from handcrafted features [213,42,55,76,212] to learned DCNN features [65,160,64,175,40]. In contrast, the basic "sliding window" strategy [42,56,55] remains the mainstream for localization, although with some alternative endeavors [113,209]. However, the number of windows is large and grows quadratically with the number of pixels, and the need to search over multiple scales and aspect ratios further increases the search space. This huge search space results in high computational complexity, so the design of an efficient and effective detection framework plays a key role. Commonly adopted strategies include cascading, sharing feature computation, and reducing per-window computation.

In this section, we review the milestone detection frameworks proposed for generic object detection since deep learning entered the field, as listed in Fig. 6 and summarized in Table 10. Nearly all detectors proposed over the last several years are based on one of these milestone detectors, attempting to improve on one or more aspects. Broadly these detectors can be organized into two main categories: A.
Two stage detection frameworks, which include a pre-processing step for region proposal, making the overall pipeline two stage. B. One stage detection frameworks, or region proposal free frameworks, which do not have a separate detection proposal step, making the overall pipeline single stage.

Section 4 will build on this by discussing the fundamental subproblems involved in these detection frameworks in greater detail, including DCNN features, detection proposals, context modeling, bounding box regression and class imbalance handling.

Region Based (Two Stage Framework)

In a region based framework, category-independent region proposals are generated from an image, CNN [109] features are extracted from these regions, and then category-specific classifiers are used to determine the category labels of the proposals. As can be observed from Fig. 6, DetectorNet [198], OverFeat [183], MultiBox [52] and RCNN [65] independently and almost simultaneously proposed using CNNs for generic object detection.

RCNN: Inspired by the breakthrough image classification results obtained by CNNs and the success of selective search in region proposal for hand-crafted features [209], Girshick et al. were among the first to explore CNNs for generic object detection and developed RCNN [65,67], which integrates AlexNet [109] with the region proposal method selective search [209]. As illustrated in Fig. 7, training in an RCNN framework consists of a multistage pipeline.

SPPNet: During testing, CNN feature extraction is the main bottleneck of the RCNN detection pipeline, which requires extracting CNN features from thousands of warped region proposals per image. Noting these obvious disadvantages, He et al. [77] introduced traditional spatial pyramid pooling (SPP) [68,114] into CNN architectures. Since convolutional layers accept inputs of arbitrary sizes, the requirement of fixed-size images in CNNs is due only to the Fully Connected (FC) layers. He et al.
exploited this fact, adding an SPP layer on top of the last convolutional (CONV) layer to obtain fixed-length features for the FC layers. With SPPnet, RCNN obtains a significant speedup without sacrificing any detection quality, because it needs to run the convolutional layers only once on the entire test image to generate fixed-length features for region proposals of arbitrary size. While SPPnet accelerates RCNN evaluation by orders of magnitude, it does not produce a comparable speedup of detector training. Moreover, finetuning in SPPnet [77] is unable to update the convolutional layers before the SPP layer, which limits the accuracy of very deep networks.

Fast RCNN: Girshick [64] proposed Fast RCNN, which addresses some of the disadvantages of RCNN and SPPnet while improving on their detection speed and quality. As illustrated in Fig. 8, Fast RCNN enables end-to-end detector training (when ignoring the region proposal generation step).

Faster RCNN [175,176]: Although Fast RCNN significantly sped up the detection process, it still relies on external region proposals, whose computation is exposed as the new bottleneck in Fast RCNN. Recent work has shown that CNNs have a remarkable ability to localize objects in CONV layers [243,244,36,158,75], an ability which is weakened in the FC layers. Therefore, selective search can be replaced by a CNN for producing region proposals. The Faster RCNN framework proposed by Ren et al. [175,176] introduced an efficient and accurate Region Proposal Network (RPN) for generating region proposals. A single network accomplishes both tasks: the RPN for region proposal and Fast RCNN for region classification. In Faster RCNN, the RPN and Fast RCNN share a large number of convolutional layers. The features from the last shared convolutional layer are used for region proposal and region classification in separate branches. The RPN first initializes k reference boxes (i.e.
the so-called anchors) of different scales and aspect ratios at each CONV feature map location. Each n × n sliding window is mapped to a lower dimensional vector (e.g. 256-d for ZF and 512-d for VGG), which is fed into two sibling FC layers: an object category classification layer and a box regression layer. Unlike in Fast RCNN, the features used for regression in the RPN are of the same size. The RPN shares CONV features with Fast RCNN, thus enabling highly efficient region proposal computation. The RPN is, in fact, a kind of Fully Convolutional Network (FCN) [138,185]; Faster RCNN is thus a purely CNN based framework using no handcrafted features. For the very deep VGG16 model [191], Faster RCNN can test at 5 FPS (including all steps) on a GPU, while achieving state of the art object detection accuracy on PASCAL VOC 2007 using 300 proposals per image. The initial Faster RCNN in [175] contains several alternating training steps; this was later simplified to one-step joint training in [176].

Concurrent with the development of Faster RCNN, Lenc and Vedaldi [117] challenged the role of region proposal generation methods such as selective search, studied the role of region proposal generation in CNN based detectors, and found that CNNs contain sufficient geometric information for accurate object detection in the CONV rather than the FC layers. They showed the possibility of building integrated, simpler, and faster object detectors that rely exclusively on CNNs, removing region proposal generation methods such as selective search.

Fig. 8 High level diagrams of the leading frameworks for generic object detection. The properties of these methods are summarized in Table 10.

RFCN (Region based Fully Convolutional Network): While Faster RCNN is an order of magnitude faster than Fast RCNN, the fact that the region-wise subnetwork still needs to be applied per RoI (several hundred RoIs per image) led Dai et al.
[40] to propose the RFCN detector, which is fully convolutional (no hidden FC layers) with almost all computation shared over the entire image. As shown in Fig. 8, RFCN differs from Faster RCNN only in the RoI subnetwork. In Faster RCNN, the computation after the RoI pooling layer cannot be shared. A natural idea is to minimize the amount of computation that cannot be shared, so Dai et al. [40] proposed using all CONV layers to construct a shared RoI subnetwork, with RoI crops taken from the last layer of CONV features prior to prediction. However, Dai et al. [40] found that this naive design has considerably inferior detection accuracy, conjectured to be because deeper CONV layers are more sensitive to category semantics and less sensitive to translation, whereas object detection needs localization representations that respect translation variance. Based on this observation, Dai et al. [40] constructed a set of position sensitive score maps using a bank of specialized CONV layers as the FCN output, on top of which a position sensitive RoI pooling layer, different from the standard RoI pooling in [64,175], is added. They showed that RFCN with ResNet101 [79] could achieve accuracy comparable to Faster RCNN, often at faster running times.

Mask RCNN: Following the spirit of conceptual simplicity, efficiency, and flexibility, He et al.
[80] proposed Mask RCNN to tackle pixel-wise object instance segmentation by extending Faster RCNN. Mask RCNN adopts the same two stage pipeline, with an identical first stage (RPN). In the second stage, in parallel to predicting the class and box offset, Mask RCNN adds a branch which outputs a binary mask for each RoI. The new branch is a Fully Convolutional Network (FCN) [138,185] on top of a CNN feature map. To avoid the misalignments caused by the original RoI pooling (RoIPool) layer, a RoIAlign layer was proposed to preserve pixel level spatial correspondence. With a ResNeXt101-FPN backbone [223,130], Mask RCNN achieved top results for COCO object instance segmentation and bounding box object detection. It is simple to train, generalizes well, and adds only a small overhead to Faster RCNN, running at 5 FPS [80].

Light Head RCNN: In order to further speed up RFCN [40], Li et al. [128] proposed Light Head RCNN, making the head of the detection network as light as possible to reduce the RoI region-wise computation. In particular, Li et al. [128] applied a large kernel separable convolution to produce thin feature maps with a small number of channels, together with a cheap RCNN subnetwork, leading to an excellent tradeoff between speed and accuracy.

Unified Pipeline (One Stage Pipeline)

The region-based pipeline strategies of Section 3.1 have prevailed on detection benchmarks since RCNN [65]. The significant efforts introduced in Section 3.1 have led to faster and more accurate detectors, and the current leading results on popular benchmark datasets are all based on Faster RCNN [175]. In spite of this progress, region-based approaches can be computationally expensive for mobile/wearable devices, which have limited storage and computational capability. Therefore, instead of trying to optimize the individual components of a complex region-based pipeline, researchers have begun to develop unified detection strategies.
Unified pipelines refer broadly to architectures that directly predict class probabilities and bounding box offsets from full images with a single feed-forward CNN in a monolithic setting involving neither region proposal generation nor post-classification. The approach is simple and elegant because it completely eliminates the region proposal generation and subsequent pixel or feature resampling stages, encapsulating all computation in a single network. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance.

DetectorNet: Szegedy et al. [198] were among the first to explore CNNs for object detection. DetectorNet formulated object detection as a regression problem to object bounding box masks. They used AlexNet [109], replacing the final softmax classifier layer with a regression layer. Given an image window, one network predicts foreground pixels over a coarse grid, while four additional networks predict the object's top, bottom, left and right halves. A grouping process then converts the predicted masks into detected bounding boxes. A network must be trained per object type and mask type, so the approach does not scale to many classes. DetectorNet must also take many crops of the image and run multiple networks for each part on every crop.

OverFeat, proposed by Sermanet et al.
[183], was one of the first modern one-stage object detectors based on fully convolutional deep networks. It is one of the most successful object detection frameworks, winning the ILSVRC2013 localization competition. OverFeat performs object detection in a multiscale sliding window fashion via a single forward pass through the CNN, which (with the exception of the final classification/regression layer) consists only of convolutional layers. In this way it naturally shares computation between overlapping regions. OverFeat produces a grid of feature vectors, each of which represents a slightly different context view location within the input image and can predict the presence of an object. Once an object is identified, the same features are used to predict a single bounding box regression. In addition, OverFeat leverages multiscale features to improve overall performance by passing up to six enlarged scales of the original image through the network and iteratively aggregating the results, yielding a significantly increased number of evaluated context views (final feature vectors). OverFeat had a significant speed advantage over RCNN [65], which was proposed during the same period, but was significantly less accurate, because at that time it was hard to train fully convolutional networks. The speed advantage derives from sharing the computation of convolution between overlapping windows in the fully convolutional network.

YOLO (You Only Look Once): Redmon et al. [174] proposed YOLO, a unified detector casting object detection as a regression problem from image pixels to spatially separated bounding boxes and associated class probabilities. The design of YOLO is illustrated in Fig. 8.
Since the region proposal generation stage is dropped entirely, YOLO directly predicts detections using a small set of candidate regions. Unlike region-based approaches, e.g. Faster RCNN, that predict detections based on features of local regions, YOLO uses features from the entire image globally. In particular, YOLO divides an image into an S × S grid, and each grid cell predicts C class probabilities, B bounding box locations and confidence scores for those boxes. These predictions are encoded as an S × S × (5B + C) tensor. By dropping the region proposal generation step entirely, YOLO is fast by design, running in real time at 45 FPS, with a fast version, Fast YOLO [174], running at 155 FPS. Since YOLO sees the entire image when making predictions, it implicitly encodes contextual information about object classes and is less likely to predict false positives on background. However, YOLO makes more localization errors, resulting from the coarse division of bounding box location, scale and aspect ratio. As discussed in [174], YOLO may fail to localize some objects, especially small ones, possibly because the grid division is quite coarse, and because by construction each grid cell can only contain one object. It is unclear to what extent YOLO can translate to good performance on datasets with significantly more objects, such as the ILSVRC detection challenge.
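The S × S × (5B + C) output encoding described above is easy to make concrete: each grid cell emits B boxes of (x, y, w, h, confidence) plus C shared class probabilities. A minimal sketch of the tensor shape and of splitting one cell's flat prediction vector follows; the index layout is our illustrative choice and need not match the paper's implementation.

```python
def yolo_output_shape(S, B, C):
    """Shape of YOLO's prediction tensor: each of the S*S grid cells predicts
    B boxes of (x, y, w, h, confidence) plus C shared class probabilities."""
    return (S, S, 5 * B + C)

def decode_cell(cell_vector, B, C):
    """Split one cell's flat prediction vector into its B boxes and C class scores."""
    boxes = [tuple(cell_vector[5 * b: 5 * b + 5]) for b in range(B)]
    class_probs = cell_vector[5 * B:]
    return boxes, class_probs

# PASCAL VOC settings from the YOLO paper: S=7, B=2, C=20 gives a 7x7x30 tensor.
print(yolo_output_shape(7, 2, 20))
```

Note that because the C class probabilities are shared by the whole cell rather than attached per box, each cell can in effect report only one object class, which is one source of the small-object failures discussed above.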
YOLOv2 and YOLO9000: Redmon and Farhadi [173] proposed YOLOv2, an improved version of YOLO, in which the custom GoogLeNet [200] network is replaced with the simpler DarkNet19, combined with a number of strategies drawn from existing work, such as batch normalization [78], removing the fully connected layers, and using good anchor boxes learned via kmeans together with multiscale training. YOLOv2 achieved state of the art on standard detection tasks, like PASCAL VOC and MS COCO. In addition, Redmon and Farhadi [173] introduced YOLO9000, which can detect over 9000 object categories in real time, by proposing a joint optimization method to train simultaneously on ImageNet and COCO, with WordTree combining data from multiple sources.

SSD (Single Shot Detector): In order to preserve real-time speed without sacrificing too much detection accuracy, Liu et al. [136] proposed SSD, which is faster than YOLO [174] and has accuracy competitive with state-of-the-art region-based detectors, including Faster RCNN [175]. SSD effectively combines ideas from the RPN in Faster RCNN [175], YOLO [174] and multiscale CONV features [75] to achieve fast detection speed while still retaining high detection quality. Like YOLO, SSD predicts a fixed number of bounding boxes and scores for the presence of object class instances in these boxes, followed by an NMS step to produce the final detection. The CNN network in SSD is fully convolutional, with its early layers based on a standard architecture, such as VGG [191] (truncated before any classification layers), referred to as the base network. Several auxiliary CONV layers, progressively decreasing in size, are then added to the end of the base network. The information in the last layer, at low resolution, may be too coarse spatially to allow precise localization, so SSD uses shallower layers with higher resolution for detecting small objects. For objects of different sizes, SSD performs detection over multiple scales by operating on multiple CONV feature maps,
each of which predicts category scores and box offsets for bounding boxes of appropriate sizes. For a 300 × 300 input, SSD achieves 74.3% mAP on the VOC2007 test set at 59 FPS on a Nvidia Titan X.

Fundamental SubProblems

In this section important subproblems are described, including feature representation, region proposal, context information mining, and training strategies. Each approach is reviewed with respect to its primary contribution.

DCNN based Object Representation

As one of the main components in any detector, good feature representations are of primary importance in object detection [46,65,62,249]. In the past, a great deal of effort was devoted to designing local descriptors (e.g., SIFT [139] and HOG [42]) and to exploring approaches (e.g., Bag of Words [194] and Fisher Vector [166]) for grouping and abstracting descriptors into higher level representations, in order to allow the discriminative object parts to emerge; however, these feature representation methods required careful engineering and considerable domain expertise.

In contrast, deep learning methods (especially deep CNNs, or DCNNs), which are composed of multiple processing layers, can learn powerful feature representations with multiple levels of abstraction directly from raw images [12,116]. As the learning procedure reduces the dependency on specific domain knowledge and on the complex procedures needed in traditional feature engineering [12,116], the burden for feature representation has been transferred to the design of better network architectures.
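Before turning to network architectures, the NMS post-processing step mentioned in the SSD pipeline above deserves a concrete sketch: greedily keep the highest scoring box and discard boxes that overlap it too strongly, then repeat. The 0.5 IoU threshold and box format below are illustrative choices, not SSD's exact settings.

```python
def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: keep the highest scoring box, drop boxes
    overlapping it by more than iou_threshold, and repeat on the remainder.
    Boxes are (x1, y1, x2, y2); returns indices of kept boxes."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the second box overlaps the first heavily and is suppressed
```

Because detectors like SSD and YOLO emit many overlapping candidate boxes per object, this suppression step is what turns the dense raw predictions into a final, deduplicated detection list.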
The leading frameworks reviewed in Section 3 (RCNN [65], Fast RCNN [64], Faster RCNN [175], YOLO [174], SSD [136]) have persistently pushed detection accuracy and speed. It is generally accepted that the CNN representation plays a crucial role, and that the CNN architecture is the engine of a detector. As a result, most of the recent improvements in detection accuracy have been achieved via research into the development of novel networks. We therefore begin by reviewing popular CNN architectures used in generic object detection, followed by a review of the effort devoted to improving object feature representations, such as developing invariant features to accommodate geometric variations in object scale, pose, viewpoint and part deformation, and performing multiscale analysis to improve object detection over a wide range of scales.

Popular CNN Architectures

CNN architectures serve as network backbones in the detection frameworks described in Section 3. Representative architectures include AlexNet [110], ZFNet [234], VGGNet [191], GoogLeNet [200], the Inception series [99,201,202], ResNet [79], DenseNet [94] and SENet [91], which are summarized in Table 2; their improvements on object recognition can be seen in Fig. 9. A further review of recent CNN advances can be found in [71].

Briefly, a CNN has a hierarchical structure and is composed of a number of layers such as convolution, nonlinearity and pooling.
From finer to coarser layers, the image repeatedly undergoes filtered convolution, and with each layer the receptive field (region of support) of these filters increases. For example, the pioneering AlexNet [110] has five convolutional layers and two Fully Connected (FC) layers, where the first layer contains 96 filters of size 11 × 11 × 3. In general, the first CNN layer extracts low level features (e.g. edges), intermediate layers extract features of increasing complexity, such as combinations of low level features, and later convolutional layers detect objects as combinations of earlier parts [234,12,116,157].

As can be observed from Table 2, the trend in architecture evolution is that networks are getting deeper: AlexNet had 8 layers, VGGNet 16 layers, and more recently ResNet and DenseNet have both surpassed the 100 layer mark. It was VGGNet [191] and GoogLeNet [200], in particular, which showed that increasing depth can improve the representational power of deep networks. Interestingly, as can also be observed from Table 2, networks such as AlexNet, OverFeat, ZFNet and VGGNet have an enormous number of parameters, despite being only a few layers deep, since a large fraction of the parameters come from the FC layers. Newer networks like Inception, ResNet and DenseNet, despite their very great depth, therefore have far fewer parameters by avoiding the use of FC layers.

With the use of Inception modules in carefully designed topologies, the number of parameters of GoogLeNet is dramatically reduced. Similarly, ResNet demonstrated the effectiveness of skip connections for learning extremely deep networks with hundreds of layers, winning the ILSVRC 2015 classification task. Inspired by ResNet [79], Inception-ResNets [202] combined Inception networks with shortcut connections, claiming that shortcut connections can significantly accelerate the training of Inception networks. Extending ResNets, Huang et al.
[94] proposed DenseNets, which are built from dense blocks connecting each layer to every other layer in a feed-forward fashion, leading to compelling advantages such as parameter efficiency, implicit deep supervision, and feature reuse. Recently, Hu et al. [91] proposed an architectural unit termed the Squeeze and Excitation (SE) block, which can be combined with existing deep architectures to boost their performance at minimal additional computational cost by adaptively recalibrating channel-wise feature responses, explicitly modeling the interdependencies between convolutional feature channels; it won the ILSVRC 2017 classification task. Research on CNN architectures remains active, with new backbone networks still emerging, such as Dilated Residual Networks [230], Xception [35], DetNet [127], and Dual Path Networks (DPN) [31].

The training of a CNN requires a large labelled dataset with sufficient label and intraclass diversity. Unlike image classification, detection requires localizing (possibly many) objects in an image. It has been shown [161] that pretraining the deep model with a large scale dataset having object-level annotations (such as the ImageNet classification and localization dataset), instead of only image-level annotations, improves detection performance. However, collecting bounding box labels is expensive, especially for hundreds of thousands of categories. A common scenario is for a CNN to be pretrained on a large dataset (usually with a large number of visual categories) with image-level labels; the pretrained CNN can then be applied directly to a small dataset as a generic feature extractor [172,8,49,228], which can support a wider range of visual recognition tasks. For detection, the pretrained network is typically finetuned 2 on a given detection dataset [49,65,67]. Several large scale image classification datasets are used for CNN pretraining; among them the ImageNet1000 dataset [44,179] with 1.2
million images of 1000 object categories, the Places dataset [245], which is much larger than ImageNet1000 but has fewer classes, and a recent hybrid dataset [245] combining the Places and ImageNet datasets.

Pretrained CNNs without finetuning were explored for object classification and detection in [49,67,1], where it was shown that feature performance is a function of the layer from which the features are extracted; for example, for AlexNet pretrained on ImageNet, FC6 / FC7 / Pool5 are in descending order of detection accuracy [49,67]. Finetuning a pretrained network can increase detection performance significantly [65,67], although in the case of AlexNet the finetuning performance boost was shown to be much larger for FC6 and FC7 than for Pool5, suggesting that the Pool5 features are more general. Furthermore, the relationship or similarity between the source and target datasets plays a critical role; for example, ImageNet based CNN features show better performance [243] on object related image datasets.

Methods For Improving Object Representation

Deep CNN based detectors such as RCNN [65], Fast RCNN [64], Faster RCNN [175] and YOLO [174] typically use the deep CNN architectures listed in Table 2 as the backbone network, and use features from the top layer of the CNN as the object representation; however, detecting objects across a large range of scales is a fundamental challenge. A classical strategy to address this issue is to run the detector over a number of scaled input images (e.g., an image pyramid) [56,65,77], which typically produces more accurate detections, but with obvious limitations in inference time and memory. In contrast, a CNN computes its feature hierarchy layer by layer, and the subsampling layers in the feature hierarchy lead to an inherent multiscale pyramid.
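The layer-by-layer growth of the receptive field through such a hierarchy can be sketched in a few lines of Python. The layer specifications below are illustrative, not those of any real network:

```python
# Sketch: how the receptive field (region of support) grows through a
# stack of conv/pool layers.

def receptive_fields(layers):
    """layers: list of (kernel_size, stride). Returns RF after each layer."""
    rf, jump = 1, 1           # RF and cumulative stride ("jump") at the input
    out = []
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the RF by (k-1) input steps
        jump *= s             # subsampling multiplies the step between units
        out.append(rf)
    return out

# Three 3x3 convs (stride 1) with a 2x2 stride-2 pooling layer in between:
print(receptive_fields([(3, 1), (3, 1), (2, 2), (3, 1)]))  # [3, 5, 6, 10]
```

Note how the stride-2 layer doubles the rate of receptive field growth for every layer after it, which is exactly what produces the inherent multiscale pyramid described above.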
This inherent feature hierarchy produces feature maps of different spatial resolutions, but with inherent structural problems [75,138,190]: the later (or higher) layers have a large receptive field and strong semantics, and are the most robust to variations such as object pose, illumination and part deformation, but the resolution is low and the geometric details are lost. In contrast, the earlier (or lower) layers have a small receptive field and rich geometric details, and the resolution is high, but they capture much weaker semantics. Intuitively, the semantic concept of an object can emerge in different layers, depending on the size of the object. If a target object is small, it requires the fine detail information of earlier layers and may very well disappear at later layers, in principle making small object detection very challenging; tricks such as dilated convolutions [229] or atrous convolution [40,27] have been proposed to mitigate this. On the other hand, if the target object is large, then its semantic concept will emerge in much later layers. Clearly it is not optimal to predict objects of different scales with features from only one layer, and therefore a number of methods [190,241,130,104] have been proposed to improve detection accuracy by exploiting multiple CNN layers, broadly falling into three types of multiscale object detection:

1. Detecting with combined features of multiple CNN layers [75,103,10];
2. Detecting at multiple CNN layers;
3. Combinations of the above two methods [58,130,190,104,246,239].
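The dilated (atrous) convolution trick mentioned above enlarges the receptive field without subsampling: with dilation d, a kernel of size k covers k + (k-1)(d-1) input positions. A minimal one-dimensional sketch (the input and weights are made up for illustration):

```python
import numpy as np

# Sketch: a 1D dilated ("atrous") convolution, valid padding.

def dilated_conv1d(x, w, dilation=1):
    k = len(w)
    span = (k - 1) * dilation            # distance covered by the kernel taps
    out = np.zeros(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])
print(len(dilated_conv1d(x, w, dilation=1)))  # 8: each output sees 3 inputs
print(len(dilated_conv1d(x, w, dilation=2)))  # 6: each output sees 5 inputs
```

The dilation=2 case spans five input positions with only three weights, which is how a detector can grow its receptive field for small objects without losing spatial resolution.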
(1) Detecting with combined features of multiple CNN layers seeks to combine features from multiple layers before making a prediction. Representative approaches include Hypercolumns [75], HyperNet [103], and ION [10]. Such feature combining is commonly accomplished via skip connections, a classic neural network idea that skips some layers in the network and feeds the output of an earlier layer as the input to a later layer, an architecture which has recently become popular for semantic segmentation [138,185,75]. As shown in Fig. 10 (a), ION [10] uses skip pooling to extract RoI features from multiple layers, and the object proposals generated by selective search and edgeboxes are then classified using the combined features. HyperNet [103] follows a similar idea, aggregating multilayer convolutional features into a hyper feature representation.

(2) Detecting at multiple CNN layers [138,185] combines coarse to fine predictions from multiple layers, for example by averaging segmentation probabilities. SSD [136], MSCNN [20], RBFNet [135], and DSOD [186] combine predictions from multiple feature maps to handle objects of various sizes. SSD spreads out default boxes of different scales to multiple layers within a CNN and enforces each layer to focus on predicting objects of a certain scale. Liu et al. [135] proposed RFBNet, which simply replaces the later convolution layers of SSD with a Receptive Field Block (RFB) to enhance the discriminability and robustness of features. The RFB is a multibranch convolutional block, similar to the Inception block [200], but combining multiple branches with different kernels and convolution layers [27]. MSCNN [20] applies deconvolution on multiple layers of a CNN to increase feature map resolution before using the layers to learn region proposals and pool features.
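The first strategy, combining features of multiple layers, can be sketched as bringing feature maps to a common resolution and concatenating them along the channel axis. The shapes below are made up; real detectors also apply learned projections before combining:

```python
import numpy as np

# Sketch of type (1): fuse maps from several CNN depths into one
# "hyper feature" map by nearest-neighbor upsampling + channel concatenation.

def upsample_nearest(fmap, factor):
    # fmap: (C, H, W) -> (C, H*factor, W*factor)
    return fmap.repeat(factor, axis=1).repeat(factor, axis=2)

shallow = np.random.rand(64, 32, 32)    # high resolution, weak semantics
middle  = np.random.rand(128, 16, 16)
deep    = np.random.rand(256, 8, 8)     # low resolution, strong semantics

combined = np.concatenate(
    [shallow, upsample_nearest(middle, 2), upsample_nearest(deep, 4)], axis=0)
print(combined.shape)  # (448, 32, 32)
```

The resulting 448-channel map carries both the geometric detail of the shallow layer and the semantics of the deep layer, which is also why, as noted below, the high dimensionality of such hyper features can become a problem in itself.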
(3) Combinations of the above two methods recognize that, on the one hand, simply incorporating skip features into detection to form a hyper feature representation, as in UNet [154], Hypercolumns [75], HyperNet [103] and ION [10], does not yield significant improvements, due to the high dimensionality. On the other hand, it is natural to detect large objects from later layers with large receptive fields and to use earlier layers with small receptive fields to detect small objects; however, simply detecting objects from earlier layers may result in low performance, because earlier layers possess less semantic information. Therefore, in order to combine the best of both worlds, some recent works propose to detect objects at multiple layers, where the features of each detection layer are obtained by combining features from different layers. Representative methods include SharpMask [168], Deconvolutional Single Shot Detector (DSSD) [58], Feature Pyramid Network (FPN) [130], Top Down Modulation (TDM) [190], Reverse connection with Objectness prior Network (RON) [104], ZIP [122] (shown in Fig. 12), Scale Transfer Detection Network (STDN) [246], RefineDet [239] and StairNet [217], as shown in Table 3 and contrasted in Fig. 11.
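The shared top-down pattern in these methods can be sketched as upsampling a deeper map and merging it with a laterally projected shallower map, here by elementwise sum. All shapes are illustrative, and the 1x1 lateral projection is simulated with a random weight matrix rather than a learned one:

```python
import numpy as np

# Sketch of an FPN-style top-down merge with a lateral connection.

rng = np.random.default_rng(0)

def lateral_1x1(fmap, out_channels):
    """Simulate a 1x1 conv: project channels with a (out, in) matrix."""
    c, h, w = fmap.shape
    proj = rng.standard_normal((out_channels, c)) / np.sqrt(c)
    return np.einsum('oc,chw->ohw', proj, fmap)

def upsample2x(fmap):
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

c4 = rng.random((256, 16, 16))             # deeper bottom-up feature map
c3 = rng.random((128, 32, 32))             # shallower bottom-up feature map
p4 = c4                                    # top of the pyramid
p3 = upsample2x(p4) + lateral_1x1(c3, 256) # top-down pathway + lateral merge
print(p3.shape)  # (256, 32, 32)
```

Each pyramid level p3, p4, ... then receives its own detection head, so every scale is predicted from features that mix strong semantics with appropriate resolution.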
Table 3 Summary of the properties of representative methods for improving DCNN feature representations for generic object detection. See Section 4.1.2 for more detailed discussion. Abbreviations: Selective Search (SS), EdgeBoxes (EB), InceptionResNet (IRN). Detection results on VOC07, VOC12 and COCO were reported with mAP@IoU=0.5; the other column of COCO results was reported with a newer metric, mAP@IoU=[0.5 : 0.05 : 0.95], which averages mAP over IoU thresholds from 0.5 to 0.95 (written as [0.5:0.95]). Training data: "07"←VOC2007 trainval; "12"←VOC2012 trainval; "07+12"←union of VOC07 and VOC12 trainval; "07++12"←union of VOC07 trainval, VOC07 test, and VOC12 trainval; "07++12+CO"←union of VOC07 trainval, VOC07 test, VOC12 trainval and COCO trainval. The COCO detection results were reported on COCO2015 Test-Dev, except for MPN [233].

As can be observed from Fig. 11 (a1) to (e1), these methods have highly similar detection architectures which incorporate a top down network with lateral connections to supplement the standard bottom-up, feedforward network. Specifically, after a bottom-up pass, the final high level semantic features are transmitted back by the top-down network to combine with the bottom-up features from intermediate layers after lateral processing. The combined features are further processed, then used for detection, and also transmitted further down by the top-down network. As can be seen from Fig. 11 (a2) to (e2), one main difference is the design of the Reverse Fusion Block (RFB), which handles the selection of the lower layer filters and the combination of multilayer features. The top-down and lateral features are processed with small convolutions and combined by elementwise sum, elementwise product or concatenation. FPN shows significant improvement as a generic feature extractor in several applications including object detection [130,131] and instance segmentation [80], e.g.
using FPN in a basic Faster RCNN detector. These methods have to add additional layers to obtain multiscale features, introducing a cost that cannot be neglected. STDN [246] used DenseNet [94] to combine features of different layers and designed a scale transfer module to obtain feature maps of different resolutions. The scale transfer module can be embedded directly into DenseNet with little additional cost.

(4) Modeling Geometric Transformations. DCNNs are inherently limited in their ability to model significant geometric transformations. An empirical study of the invariance and equivalence of DCNN representations to image transformations can be found in [118]. Some approaches have been presented to enhance the robustness of CNN representations, aiming at learning CNN representations that are invariant to different types of transformations such as scale [101,18], rotation [18,32,218,248], or both [100].

Modeling Object Deformations: Before deep learning, Deformable Part based Models (DPMs) [56] were very successful for generic object detection, representing objects by component parts arranged in a deformable configuration. This DPM modeling is less sensitive to transformations in object pose, viewpoint and nonrigid deformations, because the parts are positioned accordingly and their local appearances are stable, motivating researchers [41,66,147,160,214] to explicitly model object composition to improve CNN based detection. The first attempts [66,214] combined DPMs with CNNs by using deep features learned by AlexNet in DPM based detection, but without region proposals. To enable a CNN to enjoy the built-in capability of modeling the deformations of object parts, a number of approaches were proposed, including DeepIDNet [160], DCN [41] and DPFCN [147] (shown in Table 3). Although similar in spirit, the deformations are computed in different ways: DeepIDNet [161] designed a deformation constrained pooling layer to replace the regular max pooling layer, to learn the shared visual
patterns and their deformation properties across different object classes; Dai et al. [41] designed a deformable convolution layer and a deformable RoI pooling layer, both based on the idea of augmenting the regular grid sampling locations in the feature maps with additional position offsets and learning the offsets via convolutions, leading to Deformable Convolutional Networks (DCN); and in DPFCN [147], Mordan et al. proposed a deformable part based RoI pooling layer, which selects discriminative parts of objects around object proposals by simultaneously optimizing the latent displacements of all parts.

Context Modeling

In the physical world, visual objects occur in particular environments and usually coexist with other related objects, and there is strong psychological evidence [13,9] that context plays an essential role in human object recognition. It is recognized that proper modeling of context helps object detection and recognition [203,155,27,26,47,59], especially when object appearance features are insufficient because of small object size, occlusion, or poor image quality. Many different types of context have been discussed; in particular see the surveys [47,59]. Context can broadly be grouped into one of three categories [13,59]:

1. Semantic context: the likelihood of an object being found in some scenes but not in others;
2. Spatial context: the likelihood of finding an object in some positions and not others with respect to other objects in the scene;
3. Scale context: objects have a limited set of sizes relative to other objects in the scene.
A great deal of work [28,47,59,143,152,171,162] preceded the prevalence of deep learning; however, much of this work has not been explored in DCNN based object detectors [29,90]. The current state of the art in object detection [175,136,80] detects objects without explicitly exploiting any contextual information. It is broadly agreed that DCNNs make use of contextual information implicitly [234,242], since they learn hierarchical representations with multiple levels of abstraction. Nevertheless, there is still value in exploring contextual information explicitly in DCNN based detectors [90,29,236], and so the following reviews recent work in exploiting contextual cues in DCNN based object detectors, organized into the categories of global and local context, motivated by earlier work in [240,59]. Representative approaches are summarized in Table 4.

Global context [240,59] refers to image or scene level context, which can serve as a cue for object detection (e.g., a bedroom will predict the presence of a bed). In DeepIDNet [160], the image classification scores were used as contextual features and concatenated with the object detection scores to improve detection results. In ION [10], Bell et al. proposed to use spatial Recurrent Neural Networks (RNNs) to explore contextual information across the entire image. In SegDeepM [250], Zhu et al. proposed an MRF model that scores appearance as well as context for each detection, and allows each candidate box to select a segment and score the agreement between them. In [188], semantic segmentation was used as a form of contextual priming.
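The simplest of these global context mechanisms, concatenating whole-image classification scores with per-RoI detection scores in the spirit of DeepIDNet, can be sketched as follows. The shapes and the linear rescoring step are illustrative assumptions, not the actual DeepIDNet architecture:

```python
import numpy as np

# Sketch: augment per-RoI detection scores with image-level (global
# context) classification scores, then rescore with a linear layer.

rng = np.random.default_rng(0)

num_classes = 20
roi_scores = rng.random((5, num_classes))  # detection scores for 5 RoIs
img_scores = rng.random(num_classes)       # whole-image classification scores

# Append the same global scores to every RoI's score vector.
contextual = np.concatenate(
    [roi_scores, np.tile(img_scores, (roi_scores.shape[0], 1))], axis=1)

w = rng.standard_normal((2 * num_classes, num_classes)) * 0.01  # made-up weights
rescored = contextual @ w                  # context-aware rescoring
print(contextual.shape, rescored.shape)    # (5, 40) (5, 20)
```

The intuition is exactly the bedroom/bed example above: a confident scene-level "bedroom" score can raise the detector's confidence in a borderline "bed" box.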
Local context [240,59,171] considers the local surroundings in object relations: the interactions between an object and its surrounding area. In general, modeling object relations is challenging, requiring reasoning about bounding boxes of different classes, locations, scales, etc. In the deep learning era, research that explicitly models object relations is quite limited, with representative works being the Spatial Memory Network (SMN) [29], the Object Relation Network (ORN) [90], and the Structure Inference Network (SIN) [137]. In SMN, spatial memory essentially assembles object instances back into a pseudo image representation that is easily fed into another CNN for object relation reasoning, leading to a new sequential reasoning architecture where image and memory are processed in parallel to obtain detections which in turn update the memory. Inspired by the recent success of attention modules in the natural language processing field [211], Hu et al. [90] proposed a lightweight ORN which processes a set of objects simultaneously through interactions between their appearance features and geometry. It does not require additional supervision and is easy to embed into existing networks. It has been shown to be effective in improving the object recognition and duplicate removal steps in modern object detection pipelines, giving rise to the first fully end-to-end object detector. SIN [137] considered two kinds of context: scene contextual information and object relationships within a single image. It formulates object detection as a problem of graph structure inference, where, given an image, the objects are treated as nodes in a graph and the relationships between objects are modeled as edges of the graph.

In MRCNN [62] (Fig.
13 (a)), in addition to the features extracted from the original object proposal at the last CONV layer of the backbone, Gidaris and Komodakis proposed to extract features from a number of different regions of an object proposal (half regions, border regions, central regions, the contextual region and semantically segmented regions), in order to obtain a richer and more robust object representation. All of these features are combined simply by concatenation.

Quite a number of methods, all closely related to MRCNN, have been proposed since. The method in [233] used only four contextual regions, organized in a foveal structure, where the classifiers are trained jointly end to end. Zeng et al. proposed GBDNet [235,236] (Fig. 13 (b)) to extract features from multiscale contextualized regions surrounding an object proposal to improve detection performance. In contrast to the naive way of learning CNN features for each region separately and then concatenating them, as in MRCNN, GBDNet passes messages among the features from different contextual regions, implemented through convolution. Noting that message passing is not always helpful but dependent on individual samples, Zeng et al. used gated functions to control message transmission, as in Long Short Term Memory (LSTM) networks [83]. Concurrent with GBDNet, Li et al. [123] presented ACCNN (Fig. 13 (c)) to utilize both global and local contextual information to facilitate object detection. To capture global context, a Multiscale Local Contextualized (MLC) subnetwork was proposed, which recurrently generates an attention map for an input image to highlight useful global contextual locations, through multiple stacked LSTM layers. To encode the local surrounding context, Li et al. [123] adopted a method similar to that in MRCNN [62]. As shown in Fig.
13 (d), CoupleNet [251] is conceptually similar to ACCNN [123], but built upon RFCN [40]. In addition to the original branch in RFCN [40], which captures object information with position sensitive RoI pooling, CoupleNet [251] added a branch to encode the global context information with RoI pooling.

Fig. 13 Representative approaches that explore local surrounding contextual features: MRCNN [62], GBDNet [235,236], ACCNN [123] and CoupleNet [251]; see also Table 4.

Detection Proposal Methods

An object can be located at any position and scale in an image. During the heyday of handcrafted feature descriptors (e.g., SIFT [140], HOG [42] and LBP [153]), the Bag of Words (BoW) [194,37] and the DPM [55] used sliding window techniques [213,42,55,76,212]. However, the number of windows is large and grows with the number of pixels in an image, and the need to search at multiple scales and aspect ratios further significantly increases the search space. Therefore, it is computationally too expensive to apply more sophisticated classifiers.

Around 2011, researchers proposed to relieve the tension between computational tractability and high detection quality by using detection proposals [210,209]. Originating in the idea of objectness proposed by [2], object proposals are a set of candidate regions in an image that are likely to contain objects. Detection proposals are usually used as a preprocessing step, in order to reduce the computational complexity by limiting the number of regions that need to be evaluated by the detector. Therefore, a good detection proposal method should have the following characteristics:

1. High recall, which can be achieved with only a few proposals;
2. The proposals match the objects as accurately as possible;
3. High efficiency.

A comprehensive review of object proposal algorithms is outside the scope of this paper, because object proposals have applications beyond object detection [6,72,252]. We refer interested readers to the recent surveys [86,23], which provide an in-depth analysis of many classical object proposal algorithms and their impact on detection performance. Our interest here is to review object proposal methods that are based on DCNNs, output class agnostic proposals, and are related to generic object detection.

In 2014, the integration of object proposals [210,209] and DCNN features [109] led to the milestone RCNN [65] in generic object detection. Since then, detection proposal algorithms have quickly become a standard preprocessing step, evidenced by the fact that all winning entries in the PASCAL VOC [53], ILSVRC [179] and MS COCO [129] object detection challenges since 2014 have used detection proposals [65,160,64,175,236,80].

Among object proposal approaches based on traditional low-level cues (e.g., color, texture, edge and gradients), Selective Search [209], MCG [7] and EdgeBoxes [254] are among the more popular. As the domain rapidly progressed, traditional object proposal approaches [86] (e.g. selective search [209] and [254]), which were adopted as external modules independent of the detector, became the bottleneck of the detection pipeline [175]. An emerging class of object proposal algorithms [52,175,111,61,167,224] using DCNNs has attracted broad attention.

Recent DCNN based object proposal methods generally fall into two categories: bounding box based and object segment based, with representative methods summarized in Table 5.

Bounding Box Proposal Methods are best exemplified by the RPN method [175] of Ren et al., illustrated in Fig. 14. RPN predicts object proposals by sliding a small network over the feature map of the last shared CONV layer (as shown in Fig.
14). At each sliding window location, it predicts k proposals simultaneously by using k anchor boxes, where each anchor box (the terminology "an anchor box" or "an anchor" first appeared in [175]) is centered at some location in the image and is associated with a particular scale and aspect ratio. Ren et al. [175] proposed to integrate RPN and Fast RCNN into a single network by sharing their convolutional layers. Such a design led to a substantial speedup and the first end-to-end detection pipeline, Faster RCNN [175]. RPN has been broadly selected as the proposal method by many state of the art object detectors, as can be observed from Tables 3 and 4.

Instead of fixing a priori a set of anchors, as in MultiBox [52,199] and RPN [175], Lu et al. [141] proposed to generate anchor locations by using a recursive search strategy which can adaptively guide computational resources to focus on subregions likely to contain objects. Starting with the whole image, all regions visited during the search process serve as anchors. For any anchor region encountered during the search procedure, a scalar zoom indicator is used to decide whether to further partition the region, and a set of bounding boxes with objectness scores is computed with a deep network called the Adjacency and Zoom Network (AZNet). AZNet extends RPN by adding a branch to compute the scalar zoom indicator in parallel with the existing branch.

There is further work attempting to generate object proposals by exploiting multilayer convolutional features [103,61,224,122]. Concurrent with RPN [175], Ghodrati et al.
[61] proposed DeepProposal, which generates object proposals by using a cascade of multiple convolutional features, building an inverse cascade to select the most promising object locations and to refine their boxes in a coarse to fine manner. An improved variant of RPN, HyperNet [103] designs Hyper Features which aggregate multilayer convolutional features and shares them both in generating proposals and detecting objects, via an end to end joint training strategy. Yang et al. proposed CRAFT [224], which also used a cascade strategy, first training an RPN to generate object proposals and then using them to train another binary Fast RCNN network to further distinguish objects from background. Li et al. [122] proposed ZIP to improve RPN by leveraging the commonly used idea of predicting object proposals with multiple convolutional feature maps at different depths of a network, to integrate both low level details and high level semantics. The backbone network used in ZIP is a "zoom out and in" network inspired by the conv and deconv structure [138].

Finally, recent work which deserves mention includes DeepBox [111], which proposed a lightweight CNN to learn to rerank proposals generated by EdgeBox, and DeNet [208], which introduces bounding box corner estimation to predict object proposals efficiently, replacing RPN in a Faster RCNN style two stage detector.

Object Segment Proposal Methods [167,168] aim to generate segment proposals that are likely to correspond to objects. Segment proposals are more informative than bounding box proposals, and take a step further towards object instance segmentation [74,39,126]. A pioneering work was DeepMask, proposed by Pinheiro et al.
[167], where segment proposals are learned directly from raw image data with a deep network. Sharing similarities with RPN, after a number of shared convolutional layers DeepMask splits the network into two branches to predict a class agnostic mask and an associated objectness score. Similar to the efficient sliding window prediction strategy in OverFeat [183], the trained DeepMask network is applied in a sliding window manner to an image (and its rescaled versions) during inference. More recently, Pinheiro et al. [168] proposed SharpMask by augmenting the DeepMask architecture with a refinement module, similar to the architectures shown in Fig. 11 (b1) and (b2), augmenting the feedforward network with a top-down refinement process. SharpMask can efficiently integrate the spatially rich information from early features with the strong semantic information encoded in later layers to generate high fidelity object masks.

Motivated by Fully Convolutional Networks (FCN) for semantic segmentation [138] and DeepMask [167], Dai et al. proposed InstanceFCN [38] for generating instance segment proposals. Similar to DeepMask, the InstanceFCN network is split into two branches; however, both branches are fully convolutional: one branch generates a small set of instance sensitive score maps, followed by an assembling module that outputs instances, and the other branch predicts the objectness score. Hu et al. proposed FastMask [89] to efficiently generate instance segment proposals in a one-shot manner similar to SSD [136], in order to make use of multiscale convolutional features in a deep network. Sliding windows extracted densely from multiscale convolutional feature maps are input to a scale-tolerant attentional head module to predict segmentation masks and objectness scores. FastMask is claimed to run at 13 FPS on 800 × 600 resolution images, with a slight tradeoff in average recall. Qiao et al.
[170] proposed ScaleNet to extend previous object proposal methods like SharpMask [168] by explicitly adding a scale prediction phase. That is, ScaleNet estimates the distribution of object scales for an input image, upon which SharpMask searches the input image at the scales predicted by ScaleNet and outputs instance segment proposals. Qiao et al. [170] showed their method outperformed the previous state of the art on supermarket datasets by a large margin.

Other Special Issues

Aiming at obtaining better and more robust DCNN feature representations, data augmentation tricks are commonly used [22,64,65], at training time, at test time, or both. Augmentation refers to perturbing an image by transformations that leave the underlying category unchanged, such as cropping, flipping, rotating, scaling and translating, in order to generate additional samples of the class. Data augmentation can improve the recognition performance of deep feature representations. Nevertheless, it has obvious limitations: both training and inference computational complexity increase significantly, limiting its use in real applications. Detecting objects under a wide range of variations, and especially detecting very small objects, stands out as one of the key challenges. It has been shown [96,136] that image resolution has a considerable impact on detection accuracy; therefore, among these data augmentation tricks, scaling (especially to a higher resolution input) is the most used, since high resolution inputs increase the possibility of small objects being detected [96]. Recently, Singh et al. proposed the advanced and efficient data augmentation methods SNIP [192] and SNIPER [193] to address the scale invariance problem, as summarized in Table 6. Motivated by the intuitive understanding that small and large objects are difficult to detect at smaller and larger scales respectively, Singh et al.
presented a novel training scheme named SNIP, which can reduce scale variations during training without reducing the number of training samples. SNIPER [193] is an approach for efficient multiscale training: it processes only context regions around ground truth objects at the appropriate scale, instead of processing a whole image pyramid. Shrivastava et al. [189] and Lin et al. [131] explored approaches to handle the extreme foreground-background class imbalance issue. Wang et al. [216] proposed to train an adversarial network to generate examples with occlusions and deformations that are difficult for the object detector to recognize. Other works focus on developing better methods for nonmaximum suppression [16,87,207].

5 Datasets and Performance Evaluation

Datasets

Datasets have played a key role throughout the history of object recognition research. They have been one of the most important factors for the considerable progress in the field, not only as a common ground for measuring and comparing the performance of competing algorithms, but also in pushing the field towards increasingly complex and challenging problems. The present access to large numbers of images on the Internet makes it possible to build comprehensive datasets with increasing numbers of images and categories, in order to capture an ever greater richness and diversity of objects. The rise of large scale datasets with millions of images has paved the way for significant breakthroughs and enabled unprecedented performance in object recognition. Recognizing space limitations, we refer interested readers to several papers [53,54,129,179,107] for detailed descriptions of related datasets.
Earlier datasets, such as Caltech101 or Caltech256, were criticized because of the lack of intraclass variation that they exhibit. As a result, SUN [221] was collected by finding images depicting various scene categories, and many of its images have scene and object annotations which can support scene recognition and object detection. Tiny Images [204] created a dataset at an unprecedented scale, giving comprehensive coverage of all object categories and scenes; however, its annotations were not manually verified and contain numerous errors, so two benchmarks with reliable labels (CIFAR10 and CIFAR100 [108]) were derived from Tiny Images.

PASCAL VOC [53,54], a multiyear effort devoted to the creation and maintenance of a series of benchmark datasets for classification and object detection, created the precedent for standardized evaluation of recognition algorithms in the form of annual competitions. It started with only four categories in 2005, increasing to 20 categories that are common in everyday life, as shown in Fig. 15. ImageNet [44] contains over 14 million images and over 20,000 categories, and is the backbone of the ILSVRC challenge [44,179], which has pushed object recognition research to new heights.
ImageNet has been criticized because the objects in the dataset tend to be large and well centered, making the dataset atypical of real world scenarios. With the goal of addressing this problem and pushing research towards richer image understanding, researchers created the MS COCO database [129]. Images in MS COCO are complex everyday scenes containing common objects in their natural context, closer to real life, and objects are labeled using fully segmented instances to provide more accurate detector evaluation. The Places database [245] contains 10 million scene images, labeled with scene semantic categories, offering the opportunity for data hungry deep learning algorithms to reach human level recognition of visual patterns. More recently, Open Images [106] is a dataset of about 9 million images that have been annotated with image level labels and object bounding boxes.

There are three famous challenges for generic object detection: PASCAL VOC [53,54], ILSVRC [179] and MS COCO [129]. Each challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotations and standardized evaluation software; and (ii) an annual competition and corresponding workshop. Statistics for the number of images and object instances in the training, validation and testing datasets for the detection challenges are given in Table 8. For the PASCAL VOC challenge, since 2009 the data have consisted of the previous years' images augmented with new images, allowing the number of images to grow each year and, more importantly, meaning that test results can be compared with those of previous years.

ILSVRC [179] scales up PASCAL VOC's goal of standardized training and evaluation of detection algorithms by more than an order of magnitude in the number of object classes and images. The ILSVRC object detection challenge has been run annually from 2013 to the present; see Table 7 for a summary of these datasets.
The COCO object detection challenge is designed to push the state of the art in generic object detection forward, and has been run annually from 2015 to the present. It features two object detection tasks: using either bounding box output or object instance segmentation output. It has fewer object categories than ILSVRC (80 in COCO versus 200 in ILSVRC object detection) but more instances per category (11,000 on average compared to about 2,600 in ILSVRC object detection). In addition, it contains object segmentation annotations which are not currently available in ILSVRC. COCO introduced several new challenges: (1) it contains objects at a wide range of scales, including a high percentage of small objects (e.g. smaller than 1% of image area [192]); (2) objects are less iconic and amid clutter or heavy occlusion; and (3) the evaluation metric (see Table 9) encourages more accurate object localization. COCO has become the most widely used dataset for generic object detection, with the dataset statistics for training, validation and testing summarized in Table 8. Starting in 2017, the test set has only the Dev and Challenge splits, where the Test-Dev split is the default test data, and results in papers are generally reported on Test-Dev to allow for fair comparison. 2018 saw the introduction of the Open Images Object Detection Challenge, following in the tradition of PASCAL VOC, ImageNet and COCO, but at an unprecedented scale. It offers a broader range of object classes than previous challenges, and has two tasks: bounding box object detection of 500 different classes, and visual relationship detection, which detects pairs of objects in particular relations.
Evaluation Criteria There are three criteria for evaluating the performance of detection algorithms: detection speed (Frames Per Second, FPS), precision, and recall. The most commonly used metric is Average Precision (AP), derived from precision and recall. AP is usually evaluated in a category-specific manner, i.e., computed for each object category separately. In generic object detection, detectors are usually tested in terms of detecting a number of object categories. To compare performance over all object categories, the mean AP (mAP) averaged over all object categories is adopted as the final measure of performance. More details on these metrics can be found in [53,54,179,84].

Acknowledgments. Li Liu is with the Information System Engineering Key Lab, College of Information System and Management, National University of Defense Technology, China. She is also a postdoctoral researcher at the Machine Vision Group, University of Oulu, Finland (email: li.liu@oulu.fi). Matti Pietikäinen is with the Machine Vision Group, University of Oulu, Finland (email: matti.pietikainen@ee.oulu.fi). Wanli Ouyang and Xiaogang Wang are with the Department of Electronic Engineering, Chinese University of Hong Kong, China (email: wanli.ouyang@gmail.com; xgwang@ee.cuhk.edu.hk).

Fig. 16 The algorithm for determining TPs and FPs by greedily matching object detection results to ground truth boxes.

The standard outputs of a detector applied to a testing image I are the predicted detections {(b_j, c_j, p_j)}_j, indexed by j. A given detection (b, c, p) (omitting j for notational simplicity) denotes the
predicted location (i.e., the Bounding Box, BB) b with its predicted category label c and its confidence level p. A predicted detection (b, c, p) is regarded as a True Positive (TP) if:
• The predicted class label c is the same as the ground truth label c_g.
• The overlap ratio IOU (Intersection Over Union) [53,179] between the predicted BB b and the ground truth one b_g,
IOU(b, b_g) = area(b ∩ b_g) / area(b ∪ b_g),
is not smaller than a predefined threshold ε. Here area(b ∩ b_g) denotes the intersection of the predicted and ground truth BBs, and area(b ∪ b_g) their union. A typical value of ε is 0.5.
Otherwise, it is considered a False Positive (FP). The confidence level p is usually compared with some threshold β to determine whether the predicted class label c is accepted. The main metrics (Table 9) are:
• Precision: the fraction of correct detections out of the total detections returned by the detector with confidence of at least β.
• Recall: the fraction of all N_c objects detected by the detector with a confidence of at least β.
• AP (Average Precision): computed over the different levels of recall achieved by varying the confidence β. VOC uses AP at a single IOU, averaged over all classes; ILSVRC uses AP at a modified IOU, averaged over all classes. MS COCO reports AP_coco, the mAP averaged over ten IOUs {0.5 : 0.05 : 0.95}; AP_coco^small, the mAP for small objects of area smaller than 32²; AP_coco^medium, the mAP for objects of area between 32² and 96²; and AP_coco^large, the mAP for large objects of area bigger than 96².
• AR (Average Recall): the maximum recall given a fixed number of detections per image, averaged over all categories and IOU thresholds.
For MS COCO, the AR variants are: AR_coco^{max=1}, the AR given 1 detection per image; AR_coco^{max=10}, the AR given 10 detections per image; AR_coco^{max=100}, the AR given 100 detections per image; AR_coco^small, the AR for small objects of area smaller than 32²; AR_coco^medium, the AR for objects of area between 32² and 96²; and AR_coco^large, the AR for large objects of area bigger than 96².

AP is computed separately for each of the object classes, based on Precision and Recall. For a given object class c and a testing image I_i, let {(b_ij, p_ij)}_{j=1}^{M} denote the detections returned by a detector, ranked by the confidence p_ij in decreasing order. Let B = {b_ik^g}_{k=1}^{K} be the ground truth boxes on image I_i for the given object class c. Each detection (b_ij, p_ij) is either a TP or a FP, which can be determined via the algorithm in Fig. 16. Based on the TP and FP detections, the precision P(β) and recall R(β) [53] can be computed as a function of the confidence threshold β; by varying the confidence threshold, different pairs (P, R) can be obtained, in principle allowing precision to be regarded as a function of recall, i.e. P(R), from which the Average Precision (AP) [53,179] can be found. Table 9 summarizes the main metrics used in the PASCAL, ILSVRC and MS COCO object detection challenges.

Performance A large variety of detectors has appeared in the last several years, and the introduction of standard benchmarks such as PASCAL VOC [53,54], ImageNet [179] and COCO [129] has made it easier to compare detectors with respect to accuracy. As can be seen from our earlier discussion in Sections 3 and 4, it is difficult to objectively compare detectors in terms of accuracy, speed and memory alone, as they can differ in fundamental and contextual respects. The backbone network, the design of the detection framework, and the availability of good, large-scale datasets are the three most important factors in detection.
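As a concrete illustration of the AP computation just described, here is a minimal sketch; it is not the official PASCAL or COCO evaluation code, and the function name `average_precision` and the raw area-under-P(R) integration are our own simplifications. It takes detections for one class already ranked by decreasing confidence and already labeled TP/FP:

```python
import numpy as np

def average_precision(tp_flags, num_gt):
    """AP for one object class. `tp_flags` is a binary sequence over the
    detections ranked by decreasing confidence (1 = TP, 0 = FP), and
    `num_gt` is the number of ground truth boxes for this class."""
    tp_flags = np.asarray(tp_flags, dtype=float)
    tp = np.cumsum(tp_flags)                   # TPs with confidence >= beta
    fp = np.cumsum(1.0 - tp_flags)             # FPs with confidence >= beta
    recall = tp / max(num_gt, 1)               # R(beta)
    precision = tp / np.maximum(tp + fp, 1.0)  # P(beta)
    # Integrate precision over the recall increments, i.e. the area
    # under the P(R) curve traced out as beta decreases.
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

Sweeping the confidence threshold β from high to low corresponds to walking down the ranked list, which is why the cumulative sums directly give P(β) and R(β).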
Although it may be impractical to compare every recently proposed detector, it is nevertheless highly valuable to integrate representative and publicly available detectors into a common platform and to compare them in a unified manner. There has been very limited work in this regard, except for Huang's study [96] of the trade-off between accuracy and speed of three main families of detectors (Faster RCNN [175], RFCN [40] and SSD [136]), obtained by varying the backbone network, image resolution, the number of box proposals, etc. As can be seen from Tables 3, 4, 5, 6 and Table 10, we have summarized the best reported performance of many methods on three widely used standard benchmarks. The results of these methods were reported on the same test benchmark, despite their differing in one or more of the aspects listed above. Figs. 1 and 17 present a very brief overview of the state of the art, summarizing the best detection results of the PASCAL VOC, ILSVRC and MS COCO challenges. More results can be found at the detection challenge websites [98,148,163]. In summary, the backbone network, the detection framework design and the availability of large-scale datasets are the three most important factors in detection. Furthermore, ensembles of multiple models, the incorporation of context features, and data augmentation all help to achieve better accuracy.

In less than five years since AlexNet [109] was proposed, the Top-5 error on ImageNet classification [179] with 1000 classes has dropped from 16% to 2%, as shown in Fig. 9. However, the mAP of the best performing detector [164] (which is only trained to detect 80 classes) on COCO [129] has reached just 73%, even at 0.5 IoU, illustrating clearly how object detection is much harder than image classification. The accuracy level achieved by the state-of-the-art detectors is far from satisfying the requirements of general-purpose practical applications, so there remains significant room for future improvement.
Conclusions Generic object detection is an important and challenging problem in computer vision, and has received considerable attention. Thanks to the remarkable development of deep learning techniques, the field of object detection has dramatically evolved. As a comprehensive survey on deep learning for generic object detection, this paper has highlighted the recent achievements, provided a structural taxonomy for methods according to their roles in detection, summarized existing popular datasets and evaluation criteria, and discussed performance for the most representative methods. Despite the tremendous successes achieved in the past several years (e.g. detection accuracy improving significantly, from 23% in ILSVRC2013 to 73% in ILSVRC2017), there remains a huge gap between the state of the art and human-level performance, especially in terms of open-world learning. Much work remains to be done, which we see focused on the following eight domains:

(1) Open World Learning: The ultimate goal is to develop object detection systems that are capable of accurately and efficiently recognizing and localizing instances of all object categories (thousands or more object classes [43]) in all open-world scenes, competing with the human visual system. Recent object detection algorithms are learned with limited datasets [53,54,129,179], recognizing and localizing the object categories included in the dataset, but blind, in principle, to object categories outside it, although ideally a powerful detection system should be able to recognize novel object categories [112,73]. Current detection datasets [53,179,129] contain only dozens to hundreds of categories, significantly fewer than those which can be recognized by humans. To achieve this goal, new large-scale labeled datasets with significantly more categories for generic object detection will need to be developed, since the state of the art in CNNs requires extensive data to train well. However, collecting such
massive amounts of data, particularly bounding box labels for object detection, is very expensive, especially for hundreds of thousands of categories.

(2) Better and More Efficient Detection Frameworks: One of the factors behind the tremendous success in generic object detection has been the development of better detection frameworks, both region-based (RCNN [65], Fast RCNN [64], Faster RCNN [175], Mask RCNN [80]) and one-stage detectors (YOLO [174], SSD [136]). Region-based detectors have the highest accuracy, but are too computationally intensive for embedded or real-time systems. One-stage detectors have the potential to be faster and simpler, but have not yet reached the accuracy of region-based detectors. One possible limitation is that state-of-the-art object detectors depend heavily on the underlying backbone network, which was initially optimized for image classification, causing a learning bias due to the differences between classification and detection; one potential strategy is therefore to learn object detectors from scratch, like the DSOD detector [186].

(3) Compact and Efficient Deep CNN Features: Another significant factor in the considerable progress in generic object detection has been the development of powerful deep CNNs, which have increased remarkably in depth, from several layers (e.g., AlexNet [110]) to hundreds of layers (e.g., ResNet [79], DenseNet [94]). These networks have millions to hundreds of millions of parameters, requiring massive data and power-hungry GPUs for training, again limiting their application to real-time and embedded settings. In response, there has been growing research interest in designing compact and lightweight networks [25,4,95,88,132,231], network compression and acceleration [34,97,195,121,124], and network interpretation and understanding [19,142,146].
(4) Robust Object Representations: One important factor which makes the object recognition problem so challenging is the great variability of real-world images, including viewpoint and lighting changes, object scale, object pose, object part deformations, background clutter, occlusions, changes in appearance, image blur, image resolution, noise, and camera limitations and distortions. Despite the advances in deep networks, they are still limited by a lack of robustness to these many variations [134,24], which significantly constrains their usability for real-world applications.

(5) Context Reasoning: Real-world objects typically coexist with other objects and environments. It has been recognized that contextual information (object relations, global scene statistics) helps object detection and recognition [155], especially in situations of small or occluded objects or poor image quality. There was extensive work on context preceding deep learning [143,152,171,47,59]; however, in the deep learning era there has been only very limited progress in exploiting contextual information [29,62,90]. How to efficiently and effectively incorporate contextual information remains to be explored, ideally guided by how humans are able to quickly direct their attention to objects of interest in natural scenes.

(6) Object Instance Segmentation: Continuing the trend towards a richer and more detailed understanding of image content (e.g., from image classification to single object localization to object detection), a next challenge would be to tackle pixel-level object instance segmentation [129,80,93], as object instance segmentation can play an important role in many potential applications that require the precise boundaries of individual instances.
(7) Weakly Supervised or Unsupervised Learning: Current state-of-the-art detectors employ fully-supervised models learned from labeled data with object bounding boxes or segmentation masks [54,129,179]; however, such fully supervised learning has serious limitations, as the assumption of bounding box annotations may become problematic, especially when the number of object categories is large. Fully supervised learning is not scalable in the absence of fully labeled training data, therefore it is valuable to study how the power of CNNs can be leveraged in weakly supervised or unsupervised detection [15,45,187].

(8) 3D Object Detection: The progress of depth cameras has enabled the acquisition of depth information in the form of RGB-D images or 3D point clouds. The depth modality can be employed to help object detection and recognition; there is only limited work in this direction [30,165,220], which might benefit from taking advantage of large collections of high-quality CAD models [219]. The research field of generic object detection is still far from complete; given the massive algorithmic breakthroughs over the past five years, we remain optimistic about the opportunities over the next five years.

Table 10 Summarization of the properties and performance of milestone detection frameworks for generic object detection. See Section 3 for detailed discussion. The architectures of some methods listed in this table are illustrated in Fig. 8. The properties of the backbone DCNNs can be found in Table 2.

Fig. 1
Recent evolution of object detection performance. We can observe significant performance (mean average precision) improvement since deep learning entered the scene in 2012. The performance of the best detector has been steadily increasing by a significant amount on a yearly basis. (a) Results on the PASCAL VOC datasets: detection results of winning entries in the VOC2007-2012 competitions (using only provided training data). (b) Top object detection competition results in ILSVRC2013-2017 (using only provided training data).

Fig. 4 Summary of challenges in generic object detection: thousands of real-world object classes, structured and unstructured; requiring both localizing and recognizing objects; a large number of possible object locations; large-scale image/video data.

Fig. 6 Milestones in generic object detection, based on the point in time of the first arXiv version.

Fig. 9 Performance of winning entries in the ILSVRC competitions from 2011 to 2017 in the image classification task.

Fig. 11 Hourglass architectures: Conv1 to Conv5 are the main Conv blocks in backbone networks such as VGG or ResNet.

Fig. 12 Comparison of a number of Reverse Fusion Blocks (RFB) commonly used in recent approaches.

Algorithm 1 (see Fig. 16): greedy matching of detections to ground truth boxes.
Input: {(b_j, p_j)}_{j=1}^{M}: M predictions for image I for object class c, ranked by confidence p_j in decreasing order; B = {b^g_k}_{k=1}^{K}: ground truth BBs on image I for object class c.
Output: a ∈ R^M: a binary vector indicating whether each (b_j, p_j) is a TP or FP.
Initialize a = 0;
for j = 1, ..., M do
  Set A = ∅ and t = 0;
  foreach unmatched object b^g_k in B do
    if IOU(b_j, b^g_k) ≥ ε and IOU(b_j, b^g_k) > t then
      A = {b^g_k}; t = IOU(b_j, b^g_k);
  if A ≠ ∅ then
    Set a(j) = 1, since prediction (b_j, p_j) is a TP;
    Remove the matched GT box in A from B: B = B − A.
Each prediction is thus determined to be a true positive or false positive detection (per Fig. 16).
β (Confidence Threshold): a confidence threshold for computing P(β) and R(β). For the IOU threshold ε, VOC uses a single value (typically 0.5); ILSVRC uses a modified threshold min(0.5, wh/((w+10)(h+10))), where w × h is the size of a GT box; MS COCO uses ten IOU thresholds ε ∈ {0.5 : 0.05 : 0.95}.

Fig. 17 Evolution of object detection performance on COCO (Test-Dev results). Results are quoted from [64,80,176] accordingly. The backbone network, the design of the detection framework and the availability of good, large-scale datasets are the three most important factors in detection.

Table 1 Summarization of a number of related surveys since 2000.

Table 2 DCNN architectures that are commonly used for generic object detection. Regarding the statistics for "#Paras" and "#Layers", we did not consider the final FC prediction layer. The "Test Error" column indicates the Top-5 classification test error on ImageNet1000. Explanations: OverFeat (accurate model), DenseNet201 (Growth Rate 32, DenseNet-BC), and ResNeXt50 (32×4d). AlexNet was the first DCNN and the historical turning point of feature representation from traditional methods to CNNs; in the classification task of the ILSVRC2012 competition, it achieved a winning Top-5 test error rate of 15.3%, compared to 26.2% for the second-best entry.
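The greedy TP/FP matching of Algorithm 1 (Fig. 16), together with the IOU test it relies on, can be sketched in Python as follows. This is a simplified illustration, assuming boxes in (x1, y1, x2, y2) form; `iou` and `greedy_match` are our own names, not from any official evaluation toolkit:

```python
def iou(b, bg):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(b[0], bg[0]), max(b[1], bg[1])
    ix2, iy2 = min(b[2], bg[2]), min(b[3], bg[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((b[2] - b[0]) * (b[3] - b[1])
             + (bg[2] - bg[0]) * (bg[3] - bg[1]) - inter)
    return inter / union if union > 0 else 0.0

def greedy_match(preds, gts, eps=0.5):
    """Label each prediction (already ranked by decreasing confidence)
    as TP (1) or FP (0): greedily match it to the still-unmatched ground
    truth box with the largest IOU, provided that IOU >= eps."""
    a = [0] * len(preds)
    unmatched = set(range(len(gts)))
    for j, b in enumerate(preds):
        best_k, best_iou = None, 0.0
        for k in unmatched:
            o = iou(b, gts[k])
            if o >= eps and o > best_iou:
                best_k, best_iou = k, o
        if best_k is not None:
            a[j] = 1                  # TP: this prediction claims the GT box
            unmatched.remove(best_k)  # each GT box can be matched only once
    return a
```

Because each ground truth box can be claimed only once, a duplicate detection of an already-matched object counts as a false positive, exactly as in Algorithm 1.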
[173] Designs a dense block, which connects each layer to every other layer in a feed-forward fashion; alleviates the vanishing gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters. [173] Similar to VGGNet, but with significantly fewer parameters due to the use of fewer filters at each layer. Proposes a novel block called Squeeze-and-Excitation to model feature channel relationships; it can be flexibly used in all existing CNNs to improve recognition performance at minimal additional computational cost. Results reported with the COCO2015 Test-Standard set. (Receptive Field Block, RFB): proposed RFB to improve SSD; RFB is a multibranch convolutional block similar to the Inception block [200], but with dilated CONV layers.

Table 4 Summarization of detectors that exploit context information, similar to Table 3.

Table 5 Summarization of object proposal methods using DCNN. The numbers in blue denote the number of object proposals. The detection results on COCO are mAP@IoU[0.5, 0.95], unless stated otherwise. CVPR14: among the first to explore DCNNs for object proposals; learns a class-agnostic regressor on a small set of 800 predefined anchor boxes; does not share features with the detection network. Introduced a classification network (i.e. a two-class Fast RCNN) cascade that comes after the RPN, not sharing features extracted for detection.

Table 6 Representative methods for training strategies and class imbalance handling. Results on COCO are reported on Test-Dev.

Table 7 Popular databases for object recognition. Some example images from MNIST, Caltech101, CIFAR10, PASCAL VOC and ImageNet are shown in Fig. 15.
Table 8 Statistics of commonly used object detection datasets. Object statistics for the VOC challenges list the non-difficult objects used in the evaluation (all annotated objects). For the COCO challenge, prior to 2017 the test set had four splits (Dev, Standard, Reserve, and Challenge), each with about 20K images. Starting in 2017, the test set has only the Dev and Challenge splits, with the other two splits removed.

Algorithm 1 The algorithm for greedily matching object detection results (for an object category) to ground truth boxes.

Table 9 Summarization of commonly used metrics for evaluating object detectors.
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack Recent developments in the field of Deep Learning have exposed the underlying vulnerability of Deep Neural Networks (DNNs) to adversarial examples. In image classification, an adversarial example is a carefully modified image that is visually imperceptible from the original image but can cause a DNN model to misclassify it. Training the network with Gaussian noise is an effective technique to perform model regularization, thus improving model robustness against input variation. Inspired by this classical method, we explore utilizing the regularization characteristic of noise injection to improve a DNN's robustness against adversarial attack. In this work, we propose Parametric Noise Injection (PNI), which involves trainable Gaussian noise injection at each layer, on either activations or weights, through solving a min-max optimization problem embedded with adversarial training. These parameters are trained explicitly to achieve improved robustness. To the best of our knowledge, this is the first work that uses trainable noise injection to improve network robustness against adversarial attacks, rather than manually configuring the injected noise level through cross-validation. Extensive results show that our proposed PNI technique effectively improves the robustness against a variety of powerful white-box and black-box attacks such as PGD, C&W, FGSM, transferable attack and ZOO attack. Last but not least, the PNI method improves both clean- and perturbed-data accuracy in comparison to the state-of-the-art defense methods, outperforming the current unbroken PGD defense by 1.1% and 6.8% on clean test data and perturbed test data respectively, using the ResNet-20 architecture.
Introduction Deep Neural Networks (DNNs) have achieved great success in a variety of applications, including but not limited to image classification [1], speech recognition [2], machine translation [3], and autonomous driving [4]. Despite the remarkable accuracy improvement [5], recent studies [6,7,8] have shown that DNNs are vulnerable to adversarial examples. In the image classification task, an adversarial example is a natural image intentionally perturbed by visually imperceptible variation, but one that can cause drastic classification accuracy degradation. Fig. 1 provides an illustration of an adversarial example and its original counterpart. In addition to image classification, attacks on other DNN-powered tasks have also been actively investigated, such as visual question answering [9,10], image captioning [11], semantic segmentation [12,10] and others [13,14,15]. Our proposed defense improves accuracy by 1.1% on the clean test data on ResNet-20 compared to the vanilla ResNet-20 with adversarial training. Along with the improvement on clean test data, our defense shows a 6.8% improvement in test accuracy under PGD white-box attack. Additionally, our results show improved robustness under FGSM, C&W attack and various black-box attacks.

2.1 Adversarial Attack Recently, various powerful adversarial attack methods have been proposed to fool a trained deep neural network by introducing barely visible perturbation upon the input data. Several state-of-the-art white-box (i.e., PGD [32], FGSM [7] and C&W [8]) and black-box (i.e., Substitute [33] and ZOO [18]) adversarial attack methods are briefly introduced as follows.

FGSM Attack: The Fast Gradient Sign Method (FGSM) [6] is a single-step efficient adversarial attack method, which alters each element of the natural sample x along the direction of its gradient w.r.t. the loss function, ∂L/∂x. The generation of the adversarial example x̃ can be described as:

x̃ = x + ε · sgn(∇_x L(g(x; θ), t))    (1)

where the attack is followed by a clipping operation to ensure that x̃ ∈ [0, 1].
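A minimal numpy sketch of the FGSM step just described is below; `grad_x` stands in for the back-propagated input gradient ∂L/∂x, which in practice comes from the attacked network, and the function name `fgsm` is our own:

```python
import numpy as np

def fgsm(x, grad_x, eps):
    """Single-step FGSM: move every input element by eps along the sign
    of the loss gradient, then clip back to the valid range [0, 1]."""
    x_adv = x + eps * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)
```

With eps = 8/255 this matches the attack strength later used for CIFAR-10 in this paper.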
The attack strength is determined by the perturbation constraint ε.

PGD Attack: Projected Gradient Descent (PGD) [32] is the multi-step variant of FGSM, and one of the strongest L∞ adversarial example generation algorithms. With x̃^1 = x as the initialization, the iterative update of the perturbed data x̃ can be expressed as:

x̃^{t+1} = Π_{P(x)} ( x̃^t + a · sgn(∇_x L(g(x̃^t; θ), t)) )    (2)

where P(x) is the projection space, which is bounded by x ± ε, t is the step index up to N_step, and a is the step size. Madry et al. [32] proposed that PGD is a universal adversary among all first-order adversaries (i.e., attacks that rely only on first-order information).

C&W Attack: In the C&W attack method, Carlini and Wagner [8] consider the generation of an adversarial example as an optimization problem which minimizes the L_p-norm of the distance metric δ w.r.t. the given input data x:

min_δ ||δ||_p + c · L̂(x + δ)    (4)

where δ is the perturbation added to the input data, and a proper loss function L̂ is chosen in [8] to solve the optimization problem via gradient descent; c is a constant set by the attacker. In this work, we use the L2-norm based C&W attack and take ||δ||_{p=2} as the evaluation metric to measure the network's robustness, where a higher value of ||δ||_{p=2} indicates a more robust network or a potential failure of the attack.

Black-box Attacks: The most popular black-box attack is conducted using a substitute model [33], where the attacker trains a substitute model to mimic the functionality of the target model, then uses adversarial examples generated from the substitute model to attack the target model. In this work, we specifically investigate the transferable adversarial attack [16], a variant of the substitute model attack in which the adversarial example is generated from one source model to attack another target model. The source and target models can have entirely different structures but are trained on the identical dataset.
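The PGD iteration above can be sketched as follows. This is a toy illustration: `grad_fn` is a hypothetical callback returning the input gradient of the loss, standing in for backpropagation through the real network:

```python
import numpy as np

def pgd_attack(x, grad_fn, eps, a, n_step):
    """Multi-step PGD inside the L-inf ball P(x) of radius eps around x.
    Each step moves by step size `a` along the gradient sign, then
    projects back onto P(x) and the valid input range [0, 1]."""
    x_adv = x.astype(float).copy()
    for _ in range(n_step):
        x_adv = x_adv + a * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection onto P(x)
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay in image range
    return x_adv
```

Setting a = eps and n_step = 1 recovers FGSM, which is why PGD is described as its multi-step variant.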
Moreover, the Zeroth-Order Optimization (ZOO) attack [18] is also considered. Rather than training a substitute model, it directly approximates the gradient of the target model based only on the input data and output scores, using stochastic coordinate descent.

Adversarial Defenses: Improving network robustness by training the model with adversarial examples [6,32] is the most popular defense approach nowadays. Most later works have followed this path to supplement their defense with adversarial training [34,35,32]. Additionally, among many recent defense methods, only PGD-based adversarial training can sustain state-of-the-art accuracy under attack [8,6,23]. The reported DNN accuracy on the CIFAR-10 dataset remains a major success in defending against very strong adversarial attacks [23]. Recent works have merged the concept of improving model robustness through regularization to defend against adversarial examples. Among them, a unified perspective of regularization and robustness was presented by [24]. Again, randomly pruning some activations during inference [36] or randomizing the input layer [37] serve the purpose of injecting randomness to prevent the attacker from accessing the gradient. However, these approaches achieve good success against gradient-based attacks at the cost of obfuscated gradients [23]. In order to make the model more robust to adversarial attack, several works have adopted the concept of adding a noise layer just before the convolution layer during both training and inference phases [29,38]. Even though we agree with the core idea of these works, as they certainly make the model more robust, there are some fundamental advantages of our work compared to theirs. PNI improves model robustness by regularizing the model more effectively during training. As classical machine learning has demonstrated, weight noise performs regularization even better [25].
We also show experimentally that adding noise to the weights in particular improves robustness even more. While these works [29,30] have chosen the level of noise to be injected manually, we propose to inject a different level of noise at each layer using trainable parameters, since manually choosing the noise level for different layers, even via a validation set, is not practically feasible.

Approach In this section, we first introduce the proposed Parametric Noise Injection (PNI) function and investigate the impact of noise injection on the input (to the whole DNN), weights and activations.

Parametric Noise Injection Definition. The method that we propose to inject noise into different components or locations within the DNN can be described as:

ṽ_i = f_PNI(v_i) = v_i + α_i · η;  η ∼ N(0, σ²)    (5)

where v_i is an element of the noise-free tensor v, and such v can be an input/weight/inter-layer tensor in this work. η is the additive noise term, which follows a Gaussian distribution with zero mean and standard deviation σ, and α_i is the coefficient that scales the magnitude of the injected noise η. We adopt the scheme that η shares the identical standard deviation of v:

σ = std(v)    (6)

thus the injected additive noise is correlated to the distribution of v and to α simultaneously. Moreover, rather than manually configuring α_i to restrict the noise level, we set α_i as a learnable parameter which can be optimized to improve network robustness. We name this method Parametric Noise Injection (PNI). Considering over-parameterization and the convergence of training α_i, we make the element-wise noise term (α_i · η) share the same scaling coefficient across the entire tensor. Assuming we perform the proposed PNI on the weight tensors of the convolution/fully-connected layers throughout the entire DNN, for each parametric layer there is only one layer-wise noise scaling coefficient to be optimized. We take such a layer-wise configuration as the default in this work.
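The PNI function just defined can be sketched in a few lines of numpy. This is an illustrative forward pass only; the actual implementation injects the noise inside the training graph so that α receives gradients, and the name `pni_forward` is our own:

```python
import numpy as np

def pni_forward(v, alpha, rng):
    """Parametric Noise Injection on a tensor v: add zero-mean Gaussian
    noise whose standard deviation matches std(v), scaled by the
    layer-wise learnable coefficient alpha."""
    sigma = float(v.std())
    eta = rng.normal(0.0, sigma, size=v.shape)  # fresh sample each call
    return v + alpha * eta
```

With the default initialization alpha = 0.25, the injected noise has a quarter of the tensor's own spread; alpha = 0 recovers the noise-free layer.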
Optimization In this work, we treat the noise scaling coefficient as a model parameter which can be optimized through the back-propagation training process. For the f_PNI(·) configuration which shares the noise scaling coefficient layer-wise, the gradient computation can be described as:

∂L/∂α = Σ_i (∂L/∂f_PNI(v_i)) · (∂f_PNI(v_i)/∂α)    (7)

where the sum over i is taken over the entire tensor v, and ∂L/∂f_PNI(v_i) is the gradient back-propagated from the following layers. The gradient calculation of the PNI function is:

∂f_PNI(v_i)/∂α = η    (8)

It is noteworthy that even though η is a Gaussian random variable, each sample of η is taken as a constant during back-propagation. Using the gradient descent optimizer with momentum, the optimization of α at step j can be written as:

V_j = m · V_{j-1} + ∂L/∂α_{j-1};  α_j = α_{j-1} − lr · V_j    (9)

where m is the momentum, lr is the learning rate, and V is the updating velocity. Moreover, since weight decay tends to make the learned noise scaling coefficient converge to zero, no weight decay term is applied to α during parameter updating in this work. We set α = 0.25 as the default initialization.

Robust Optimization. We expect to utilize the aforementioned PNI technique to improve network robustness. However, directly optimizing the noise scaling coefficient normally leads α to converge to a small, close-to-zero value, owing to the tendency of model optimization to over-fit the training dataset (see Table 1). In order to succeed in adversarial defense, we jointly use the PNI method with robust optimization (a.k.a. adversarial training), which can boost the inference accuracy for perturbed data under attack. Given inputs x and target labels t, adversarial training obtains the optimal solution of the network parameters θ for the following min-max problem:

min_θ E_{(x,t)} [ max_{x̃ ∈ P(x)} L(g(x̃; θ), t) ]    (10)

where the inner maximization acquires the perturbed data x̃, and P(x) is the input data perturbation set constrained by ε. The outer minimization is optimized through the gradient descent method, as in regular network training.
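The two ingredients of the α update described above, the chain-rule gradient (the sampled η acts as a constant, so ∂f_PNI/∂α = η) and the momentum step without weight decay, can be sketched as follows; the names `alpha_grad`, `momentum_step`, `grad_out` and `lr` are our own:

```python
import numpy as np

def alpha_grad(grad_out, eta):
    """dL/d(alpha) for a layer-wise alpha: sum the back-propagated
    gradient times d f_PNI / d alpha = eta over the whole tensor."""
    return float(np.sum(grad_out * eta))

def momentum_step(alpha, velocity, grad, m=0.9, lr=0.01):
    """One SGD-with-momentum update of alpha. Note: no weight decay,
    which would otherwise drive alpha towards zero."""
    velocity = m * velocity + grad
    alpha = alpha - lr * velocity
    return alpha, velocity
```

Because the sampled noise η enters the gradient directly, larger injected noise produces a larger training signal on α, which is what allows the noise level to be learned rather than hand-tuned.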
The $L_\infty$ PGD attack [32] is adopted as the default inner maximization solver (i.e., for generating x̂). Note that, to prevent label leaking during adversarial training, the perturbed data x̂ is generated using the model's predicted result on $x$ as the label (i.e., $t$ in Eq. (2)). Moreover, to balance clean-data accuracy and perturbed-data accuracy for practical applications, rather than performing the outer minimization solely on the loss of perturbed data as in Eq. (10), we minimize an ensemble loss, the weighted sum of the losses on clean and perturbed data:

$\mathcal{L}_{\text{ens}} = w_c \cdot \mathcal{L}\big(f(x;\theta), t\big) + w_a \cdot \mathcal{L}\big(f(\hat{x};\theta), t\big)$

where $w_c$ and $w_a$ are the weights for the clean-data loss and the adversarial-data loss; $w_c = w_a = 0.5$ is the default configuration in this work. Optimizing the ensemble loss with gradient descent successfully trains $f_{\text{PNI}}(\theta)$ for both the model's inherent parameters (e.g., weights, biases) and the add-on noise scaling coefficients α from PNI.

Experiment setup Datasets and network architectures. The CIFAR-10 [39] dataset is composed of 50K training samples and 10K test samples of 32×32 color images. For CIFAR-10, the classical Residual Network [40] architectures (ResNet-20/32/44/56) are used, and ResNet-20 is taken as the baseline for most of the comparative experiments and ablation studies. A more redundant network, ResNet-18, is also used to report performance on CIFAR-10, since larger network capacity is helpful for adversarial defense. Moreover, rather than including input normalization within the data augmentation, we place a non-trainable data-normalization layer in front of the DNN to perform the identical function, so the attacker can add the perturbation directly to the natural image. Note that, since both PNI and the PGD attack [32] involve randomness, we report accuracy in the format mean±std% over 5 trials to reduce error.

Adversarial attacks.
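The inner maximization can be sketched as an $L_\infty$ PGD loop (a toy version: `grad_fn` is a stand-in for the back-propagated input gradient, and in the PNI setting the noise term would be kept active inside `grad_fn`; the step size 2/255 is a commonly used choice, an assumption rather than a setup stated here):

```python
def sign(x):
    # sign of a scalar: -1, 0, or +1
    return (x > 0) - (x < 0)

def pgd_attack(x, grad_fn, eps, step, n_step):
    """L-inf PGD sketch: repeatedly ascend the loss by signed-gradient steps
    and project the perturbed input back into the eps-ball around x."""
    x_adv = list(x)
    for _ in range(n_step):
        g = grad_fn(x_adv)  # dL/dx at the current perturbed point
        x_adv = [xa + step * sign(gi) for xa, gi in zip(x_adv, g)]
        # project onto the L-inf ball: |x_adv_i - x_i| <= eps
        x_adv = [min(max(xa, xi - eps), xi + eps) for xa, xi in zip(x_adv, x)]
    return x_adv

# CIFAR-10-style budget: eps = 8/255, 7 steps, toy constant-gradient loss
adv = pgd_attack([0.5], lambda xs: [1.0] * len(xs), eps=8 / 255, step=2 / 255, n_step=7)
```

During adversarial training, the outer step would then minimize the weighted sum of the loss on `x` and the loss on the returned `adv`, with both weights set to 0.5 by default.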
To evaluate the performance of our proposed PNI technique, we employ multiple powerful white-box and black-box attacks, as introduced in Section 2.1. For the PGD attack on MNIST and CIFAR-10, ε is set to 0.3/1 and 8/255, and $N_{\text{step}}$ is set to 40 and 7, respectively. The FGSM attack adopts the same setup as PGD. The attack configurations of PGD and FGSM are identical to the setup in [34,32]. For the C&W attack, we set the constant $c$ to 0.01. ADAM [41] is used to optimize Eq. (4) with a learning rate of 5e−4. We choose 0 for the confidence coefficient $k$ used by the C&W $L_2$ attack in [8]. The attack uses 9 binary-search steps, with 10 gradient-descent iterations each. Moreover, we also evaluate the PNI defense against several state-of-the-art black-box attacks (i.e., substitute [33], ZOO [18], and transferable [16] attacks) in Section 4.2.2, to examine the robustness improvement resulting from the proposed PNI technique.

Competing methods for adversarial defense. As far as we know, adversarial training with PGD [32] is the only unbroken defense method [23]; it is labeled vanilla adversarial training and taken as the baseline in this work. Beyond that, several recent works that use a similar concept to ours in their defenses are discussed as well, including certified robustness [30] and random self-ensemble [29].

Optimization method of PNI As discussed in Section 3.1, the noise scaling coefficient is not properly trained without adversarial training (i.e., solving the min-max problem). We conduct experiments training layer-wise PNI on the weights (PNI-W) of ResNet-20 to compare the convergence of the trained noise. As tabulated in Table 1, simply performing vanilla training with a momentum-SGD optimizer completely fails as an adversarial defense: the noise scaling coefficients α converge to negligible values.
On the contrary, with the aid of adversarial training (i.e., optimization of Eq. (11)), the convolution layers at the network's front-end obtain relatively large α (the bold values in Table 1), and the corresponding evolution curves are shown in Fig. 2 (which shows only the front 5 layers of ResNet-20 [40], bold in Table 1; the learning rate of the SGD optimizer is reduced at epochs 80 and 120). Since the PGD attack [32] is taken as the inner maximization solver, the generation of the adversarial example x̂ in Eq. (2) is reformulated as:

$\hat{x}^{k+1} = \Pi_{P(x)}\Big(\hat{x}^{k} + a \cdot \mathrm{sgn}\big(\nabla_{x}\mathcal{L}\big(f_{\text{PNI}}(\hat{x}^{k};\theta), t\big)\big)\Big)$

where the difference between Eq. (2) and Eq. (12) is the presence or absence of PNI within the x̂ generation. It is noteworthy that keeping the noise term in the model for both adversarial example generation (Eq. (12)) and the model parameter update is the critical factor for PNI optimization with adversarial training, since the optimization of α is itself a min-max game: increasing the noise level enhances the defense strength but hampers the network's inference accuracy on natural clean images, while lowering α makes the network vulnerable to adversarial attack. As listed in Table 1, omitting PNI-W from the x̂ generation indeed leads to the failure of PNI optimization, and the large value (α = 5.856 in Table 1) has not converged, likely due to gradient explosion.

Effect of PNI on weight, activation and input. In this work, even though injecting noise on the weights (PNI-W) is taken as the default PNI setup, further results for PNI on activations (PNI-A-a/b), on the input (PNI-I), and in hybrid modes (e.g., PNI-W+A) are provided in Table 2 for a comprehensive study. PNI-A-a/PNI-A-b denote injecting noise on the output/input tensor of each convolution/fully-connected layer, respectively. Moreover, the PNI-A-b scheme intrinsically includes PNI-I, since PNI-I applies the noise to the input tensor of the first layer. Note that all models with PNI variants are jointly trained with PGD-based adversarial training [32], as discussed above.
Then, with the same trained model, we report accuracy with and without the trained noise term (left/right in Table 2) during the test phase. As shown in Table 2, with the noise term enabled during the test phase, PNI-W on ResNet-20 gives the best performance in defending against the PGD and FGSM attacks, in comparison to PNI at other locations. Although it is elusive to fully understand why PNI-W outperforms its counterparts, the intuition is that PNI-W is the generalization of PNI-A to each connection instead of each output unit, similar to the relation between the regularization techniques DropConnect [42] and Dropout [25]. Furthermore, we also observe that disabling PNI during the test phase leads to a significant accuracy drop when defending against the PGD and FGSM attacks, while clean-data accuracy remains at the same level as with PNI enabled. These observations raise two concerns about our PNI technique: 1) Does the improvement of clean-/perturbed-data accuracy with PNI mainly come from a reduction of attack strength caused by the randomness (potential gradient obfuscation [23])? 2) Is PNI merely a negligible trick, or does it perform model regularization that constructs a more robust model? Our answers to both questions are negative; the explanations follow below.

Effect of network capacity. In order to investigate the relation between network capacity (i.e., the number of trainable parameters) and the robustness improvement from PNI, we examine various network architectures in terms of both depth and width. For different network depths, experiments on ResNet-20/32/44/56 [40] are conducted under vanilla adversarial training [32] and our proposed PNI robust optimization method. For different network widths, we adopt the original ResNet-20 as the baseline and expand the input and output channels of each layer by 1.5×/2×/4×, respectively. As in Table 2, we report clean- and perturbed-data accuracy with and without the PNI term during the test phase.
The results in Table 3 indicate that increasing the model's capacity indeed improves network robustness against white-box adversarial attacks, and that our proposed PNI outperforms vanilla adversarial training in terms of both clean-data and perturbed-data accuracy under the PGD and FGSM attacks. This observation demonstrates that the perturbed-data accuracy improvement does not come from trading off clean-data accuracy, a trade-off reported in [34,43]. As network capacity increases, the drop in perturbed-data accuracy when disabling the PNI noise term during the test phase also becomes less significant. Although both adversarial training and PNI perform regularization, the network structure still needs careful construction to prevent the over-fitting that results from over-parameterization.

Robustness evaluation with C&W attack. Improved robustness does not necessarily mean improved test-data accuracy against any particular attack method. Typically, the $L_2$-norm-based C&W attack [8] reaches a 100% success rate against any defense; thus, the average $L_2$ norm required to fool the network gives more insight into a network's robustness in general [8]. The results in Table 4 represent the overall performance of our model against the C&W attack. Our method of training the noise parameter becomes more effective for more redundant networks. We demonstrate this phenomenon through a comparison between the ResNet-20 and ResNet-18 architectures: against the C&W attack, ResNet-18 clearly shows a larger robustness improvement over vanilla adversarial training than ResNet-20 does.

PNI against black-box attack In this section, we test our proposed PNI technique against the transferable adversarial attack [16] and the ZOO attack [18]. Following the transferable adversarial attack [16], two trained neural networks are taken as the source model ($S$) and target model ($T$).
Adversarial examples x̂_s are generated from the source model and then used to attack the target model, which is denoted $S \Rightarrow T$. We take ResNet-18 on CIFAR-10 as an example. We train two ResNet-18 models (models A and B) on the CIFAR-10 dataset to attack each other, where model A is optimized through vanilla adversarial training, while model B is trained using our proposed PNI variants (i.e., PNI-W/A-a/W+A-a) with robust optimization. Table 5 shows almost equal perturbed-data accuracy for A ⇒ B and B ⇒ A under the various PNI scenarios, which indicates that our PNI technique does not merely reduce the attack strength. For the ZOO attack [18], we test our defense on 200 randomly selected test samples under an un-targeted attack. The attack success rate denotes the percentage of test samples whose classification changes to a wrong class after the attack. The ZOO attack success rate for vanilla ResNet-18 with adversarial training is close to 80%. The robustness of PNI is more evident in Table 5, as the attack success rate drops significantly for PNI-W+A-a and PNI-W. However, PNI-A-a fails to resist the ZOO attack, even though it still maintains a lower success rate than the baseline. The failure of PNI-A-a shows that merely adding noise in front of the activation does not necessarily achieve the robustness claimed by some previous defenses [30,29].

Comparison to competing methods As discussed in Section 2.2, a large number of adversarial defense works have been proposed recently; however, most of them have already been broken by the stronger attacks proposed in [44,23]. As a result, in this work we choose to compare with the most effective defense to date, PGD-based adversarial training [32]. Additionally, we compare with other randomness-based works [29,30] in Table 6 to examine the effectiveness of PNI.
Previous defense works [43,34] have shown a trade-off between clean-data accuracy and perturbed-data accuracy, where the improvement in perturbed-data accuracy normally comes at the cost of lowering clean-data accuracy. It is worth highlighting that our proposed PNI improves both clean- and perturbed-data accuracy under white-box attack, in comparison to PGD-based adversarial training [32]. Differential Privacy (DP) [30] is a similar method that injects noise at various locations within the network. Although DP guarantees a certified defense, it does not perform well against $L_\infty$-norm-based attacks (e.g., PGD and FGSM), and to achieve a higher level of certified defense, it significantly sacrifices clean-data accuracy as well. Another randomness-based approach is Random Self-Ensemble (RSE) [29], which inserts a noise layer before every convolution layer. Their defense performs well against the C&W attack but poorly against the strong PGD attack. In our black-box attack experiments (Table 5), we demonstrate that adding activation noise may not be as effective as weight noise. Beyond that, both DP and RSE configure the noise level manually, which makes it extremely difficult to find the optimal setup, whereas in our proposed PNI method the noise level is determined by a trainable layer-wise noise scaling coefficient and the distribution of the noise-injected location.

Discussion The defense performance improvement from our proposed PNI does not come from stochastic gradients. A stochastic gradient incorrectly approximates the true gradient based on a single sample. We show that PNI does not rely on gradient obfuscation from two perspectives: 1) Our proposed PNI method passes each inspection item proposed by [23] to identify gradient obfuscation.
2) Under the PGD attack, as the number of attack steps increases, our PNI robust optimization method still outperforms vanilla adversarial training (certified as having non-obfuscated gradients in [23]).

Inspections of gradient obfuscation. The well-known gradient obfuscation work [23] enumerates several characteristic behaviors, listed in Table 7, that can be observed when a defense method relies on gradient obfuscation. Our experiments show that PNI passes each inspection item in Table 7. For item 1, all the experiments in Table 2 and Table 3 report that the FGSM attack (one-step) performs worse than the PGD attack (iterative). For item 2, our black-box attack experiments in Table 5 show that the black-box attack strength is weaker than the white-box attack. For item 3, as plotted in Fig. 3, our experiments reveal that our method can still be broken when the distortion bound is increased; PNI merely increases resistance against adversarial attacks in comparison to vanilla adversarial training. For item 5, again as shown in Fig. 3, increasing the distortion bound increases the attack success rate.

PNI does not rely on stochastic gradients. As shown in Fig. 3, gradually increasing the number of PGD attack steps $N_{\text{step}}$ raises the attack strength [32], leading to perturbed-data accuracy degradation for both vanilla adversarial training and our PNI technique. However, in both cases the perturbed-data accuracy starts to saturate and does not degrade further once $N_{\text{step}} = 40$. If PNI's success came from stochastic gradients, which give incorrect gradients owing to single-sample estimation, increasing the attack steps would be expected to eventually break the PNI defense, which is not observed here. Our PNI method still outperforms vanilla adversarial training even when $N_{\text{step}}$ is increased up to 100. Therefore, we conclude that even if PNI does involve some gradient obfuscation, stochastic gradients are not the dominant factor in PNI's robustness improvement.
Conclusion In this paper, we present a parametric noise injection technique in which the noise intensity is trained by solving a min-max optimization problem during adversarial training. Through extensive experiments, we show that the proposed PNI method outperforms the state-of-the-art defense method in terms of both clean-data accuracy and perturbed-data accuracy.
Redox Regulation of Cysteine-Dependent Enzymes in Neurodegeneration

Evidence of increased oxidative stress has been found in various neurodegenerative diseases and conditions. While it is unclear whether oxidative stress is a cause or an effect, proteins, lipids, and DNA have all been found to be susceptible to oxidant-induced modifications that alter their function. Results of clinical trials based on the oxidative-stress theory have been mixed, though data continue to indicate that prevention of high levels of oxidative stress is beneficial for health and increases longevity. Due to the highly reactive nature of the sulfhydryl group, the focus of this paper is on the impact of oxidative stress on cysteine-dependent enzymes and how oxidative stress may contribute to neurological dysfunction through this selected group of proteins.

Introduction It is clear that while oxygen is essential for life in order to produce chemical energy in the form of ATP, paradoxically, the byproducts of its metabolism generate multiple reactive oxygen species (ROS) that are associated with cellular toxicity. Specifically, in regard to neurodegeneration, there is substantial evidence that ROS are a major component of diseases including Alzheimer's, Parkinson's, and amyotrophic lateral sclerosis [1][2][3][4]. While clinical trials aimed at decreasing the burden of oxidative stress have not clearly demonstrated effectiveness, genetic research has found that high levels of antioxidant enzymes prolong life and decrease pathology. In addition, animal models have indicated that oxidative stress is an important and consistent characteristic of many forms of neurodegeneration. One particular group of proteins that appears to be intimately involved in neurodegenerative processes is the cysteine-dependent proteins.
This group includes various proteases, antioxidant enzymes, kinases, phosphatases, and other types of enzymes, as well as nonenzymatic proteins such as those that use cysteine as a structural component rather than as part of a catalytic site. More research will be needed to firmly establish the extent to which oxidative stress is causal in these diseases, but based on current understanding, therapies that reverse oxidant-induced modifications of proteins, lipids, or DNA are expected to be beneficial. This paper will highlight some selected yet significant cysteine-dependent enzymatic systems that rely on a proper redox environment for their activity and provide evidence for their redox control in neurodegenerative disease. Potential relationships to cancers will also be discussed.

Redox Sensitivity of Cysteine The amino acid cysteine is highly sensitive to redox state. This is largely due to the reactivity of anionic sulfur toward various oxidizing agents, which can form multiple types of oxidized species (see Figure 1). However, not all cysteines are equally sensitive, and this sensitivity has been exploited throughout evolution to provide protection against oxidative stress. A close examination of the variety of physiologically occurring antioxidant systems that use cysteine as a major component of their antioxidant activity, or as part of a redox "sensor," clearly demonstrates the sensitivity and evolutionary significance of cysteine as part of a protein's active center [5].

Figure 1: Diagrammatic representation of the major oxidation states of cysteine that have been found in vivo. Circles represent a protein that contains a cysteine within its primary structure. In its most reduced state, the sulfur group of cysteine is found in the form of -SH. The sulfur can be modified in a number of ways, including S-nitrosylation by nitric oxide or S-glutathionylation by glutathione, which are increasingly recognized for their importance in regulating many cysteine-containing enzymes. In addition, the sulfur group can be oxidized to sulfenic, sulfinic, and sulfonic acids, or it may form an intra- or inter-molecular disulfide bond.

For example, glutathione (GSH) consists of glutamate, glycine, and cysteine and is the major antioxidant found in brain. It is found at millimolar levels and is a major determinant of intracellular redox conditions. Cysteine itself has been shown to be the major extracellular antioxidant. Further examples of cysteine's critical role in redox balance can be found in other enzymatic systems, including the multiple enzymes involved in the maintenance of peroxiredoxins, glutaredoxins, and thioredoxins, among others. The natural role of cysteines as redox sensors is further supported by the observation that, throughout evolution, cysteines are found in transcriptional regulators modulated by oxidative stress, such as OxyR and Nrf2/Keap1 [6]. Due to the varying microenvironments that exist for cysteine within a given protein structure, cysteines are not equally reactive. For example, as discussed further below, cysteine 106 of the Parkinson's disease-linked protein DJ-1 appears to be highly sensitive to oxidative attack, while the two other cysteines within its structure are not as easily modified [7]. Such apparent specificity of cysteines within the same protein is also observed among many other proteins [8,9]. In terms of the macroenvironment, cysteine-dependent enzymes require a reducing environment for activity, which is the condition maintained in the cytoplasm, in contrast to the oxidizing extracellular space. The lysosomal compartment, however, is variable, and changes in redox state have been shown to modulate enzymatic activities located within it. For example, cathepsin activity was found to be altered by redox state, as detected by a change in the cleavage pattern produced under varying redox conditions [10].
Thus, the location of a cysteine within the overall tertiary/quaternary structure, as well as its macroenvironment (e.g., intracellular versus extracellular, or a particular organelle), plays a major role in the extent to which the cysteine is stabilized in the anionic transition state, thereby affecting its reactivity to a change in redox state.

Sources of Oxidants in Brain Environmental toxins are thought to be a significant contributor to neuronal disorders, including AD and particularly PD. These include a variety of naturally occurring and synthetic compounds that result in the production of reactive species through well-characterized chemical pathways, including the Fenton reaction and others [11]. In addition to direct chemical means, many of these environmental molecules, such as rotenone or paraquat, target mitochondria and disrupt the efficient production of energy, leading to abnormal increases in free radical production, such as superoxide [12]. The identification of these toxins and their mechanisms of action is the subject of extensive research, with a major emphasis on how their toxicity relates to the production of free radicals. Besides environmental toxins, there are also important cellular sources of oxidants localized within cytosolic and mitochondrial compartments. Cellular sources include NADPH oxidases, enzymes associated with both signal transduction and the killing of foreign organisms through the production of superoxide. Monoamine oxidase (MAO), located at the mitochondrial surface, is also a source of hydrogen peroxide. Due to MAO-B's role in the metabolism of dopamine, MAO activity is linked to PD, in part through the production of reactive oxygen species resulting from MAO-mediated dopamine metabolism [13]. Endogenous superoxide production is strongly associated with the mitochondria and can occur within the matrix, the intermembrane space, and at the outer membrane of mitochondria.
For example, reactive species are formed as part of electron transport, including at complex I (NADH-ubiquinone oxidoreductase). Complex I is considered an important source of free radical generation [14,15] and produces radicals during either forward electron flow or reverse electron transport [16]. Though debate exists about the mechanisms involved (one-site versus two-site models), the importance of superoxide and hydrogen peroxide formation through the various mitochondrial pathways should not be underestimated, as oxidants produced through the mitochondria are considered highly relevant to aging and neurodegeneration.

Oxidative Stress in Neurodegenerative Disease Over the last few decades it has become increasingly clear that the human brain is more sensitive to various forms of oxidative damage than other organs in the body. This is due in large part to the high metabolic activity of the brain and the seemingly limited capacity for repair of damage to neurons following injury. Many types of oxidizing molecules have been observed in the human brain, and their presence is associated with selective damage to brain regions linked to neurodegenerative disease. While it is uncertain to what extent the increase in reactive species causes the visible pathological hallmarks, the formation of reactive oxygen, nitrogen, or sulfur species is generally recapitulated in animal models of each disease, strongly suggesting a potential causal link. Observed biomarkers of increased oxidative stress include 4-hydroxynonenal, thiobarbituric acid-reactive substances, free fatty acid release, and acrolein formation for lipid peroxidation; 8-hydroxy-2-deoxyguanosine for DNA; and protein carbonyls, 3-nitrotyrosine, and glutathionylation for proteins. In proteins, along with cysteine, multiple amino acids are found to be modified, including lysine, methionine, histidine, and others.

Alzheimer's Disease.
AD is an age-associated progressive neurodegenerative disease that affects behavior, cognition, and memory and is characterized by two major pathological hallmarks: extracellular plaques composed primarily of Aβ and intracellular inclusions of tau protein known as tangles. It currently has no known cause or cure and remains the most common form of irreversible dementia, affecting approximately 20 million people worldwide. Oxidative damage is one of the earliest detectable changes observed in both genetic and sporadic forms of Alzheimer's disease [17]. While there are several theories about the source of the various oxidizing molecules, Aβ has been a prime candidate. Indeed, treatment of various model systems with different Aβ forms typically results in increased oxidative stress. Recent work has shown that extracellular Aβ treatment produces atypical redox effects in astrocytes compared to treatment with other oxidizing molecules, suggesting that Aβ possesses unique oxidizing properties [18]. In addition to the potential of Aβ to stimulate increased oxidative stress, there is also evidence that major antioxidant systems, such as superoxide dismutase, catalase, and others, show decreased activity associated with AD progression [19].

Examples of Oxidized Enzymes in AD. Peroxiredoxins (Prxs) are a family of peroxidases that reduce peroxynitrite and a variety of other hydroperoxides. They use a redox-sensitive cysteine within their active site, reducing peroxide substrates either through the formation of an intramolecular disulfide bond or through oxidation to sulfinic or sulfonic acid [20]. Proteomic studies of subjects with early AD found that Prx-2 was oxidized in a brain region containing significant AD-related pathology compared to age-matched controls [21].
In another study, Cumming and colleagues [22] not only showed that Prx-2 was more oxidized in AD brains, but also that treatment of cultured primary neurons with Aβ resulted in Prx oxidation that was reversible by the addition of a cysteine-specific antioxidant, N-acetylcysteine. In addition, Fang and colleagues found that Prx-2 was S-nitrosylated at the active-site cysteines Cys51 and Cys172 [23]. Protein disulfide isomerase (PDI) is a multifunctional enzyme with several family members. These enzymes possess chaperone activity mediated by catalyzing the reduction, oxidation, and isomerization of protein disulfides to maintain proper protein folding. PDI redox activity is based on the presence of two thioredoxin-like motifs (CXXC) (human PDI: Cys36/39 and Cys380/383). PDI has been found to be oxidized in AD and colocalizes with neurofibrillary tangles [24]. Though no changes in the amount of PDI have been noted in AD brain, Uehara and colleagues [25] showed that PDI was S-nitrosylated at multiple cysteines in AD brain and that such oxidation resulted in enzyme inactivation. Since PDI is important for protein folding, catalyzing cysteine-disulfide exchange, its inactivation increased the levels of misfolded proteins, leading to activation of the unfolded protein response. Calpains are calcium- and cysteine-dependent endoproteases whose active sites are sensitive to oxidative inactivation. In addition to AD, calpains play a role in multiple disease states, including cancer [26,27]. As a putative physiological regulator of key proteins associated with AD, such as amyloid precursor protein and tau among others, understanding calpains' potential dysregulation by redox status is important. Calpain's active-site cysteine (Cys105) was found to be oxidized, in vitro and in cultured cells, only in the presence of calcium [28,29].
Presumably, this is because the active site is otherwise inaccessible to oxidative attack when the enzyme is in an inactive conformation. Evidence suggests that calpain-like enzymatic activity is also inhibited in brain regions of AD associated with high pathology [30].

Parkinson's Disease. Parkinson's disease is the second most common neurodegenerative disease, characterized by loss of dopaminergic neurons, glutathione depletion, oxidative stress, and the formation of intracellular inclusions of alpha-synuclein called Lewy bodies. Similar to AD, the vast majority of PD cases are sporadic, with only 5-10% of cases due to genetic causes [31]. Much of what we understand has been gained through the use of animal models of PD that involve the administration of exogenous compounds such as 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), rotenone, and paraquat. Investigation of these compounds has strongly linked them to mitochondrial dysfunction and the abnormal production of free radicals, which generally reflects what is observed in the human disease. In the course of these studies, several highly relevant cysteine-dependent enzymes known to contribute to PD have been observed to be modified at key cysteine residues by these reactive species. As discussed above, PDI has also been found to be oxidized in samples of PD brain [25]. The potential impact of PDI in PD brain is evidenced by experiments suggesting that PDI plays a role in the folding of both synphilin-1 and alpha-synuclein [25,32], two proteins closely linked to PD. Other studies have also found links between PDI oxidation and PD [33,34]. DJ-1 activity is also altered by oxidative stress. DJ-1 is a 20 kDa protein with multiple putative activities, including protease and antioxidant functions, among others [35]. DJ-1 is strongly associated with PD because mutations in DJ-1 result in an autosomal recessive, early-onset form of Parkinson's disease.
DJ-1 contains three cysteine residues, each of which has been evaluated in response to oxidative stress. Based upon multiple studies, it is clear that only Cys-106 oxidation is an important regulatory component of DJ-1 activity. For example, Waak and colleagues [36] found that the formation of a mixed disulfide, created under oxidizing conditions between DJ-1 and apoptosis signal-regulating kinase 1 (ASK1), contributes to DJ-1's neuroprotective effects. Such protective effects are predicted to be lost under conditions that may occur with aging or with exposure to oxidative toxins such as those used in animal models of PD, including MPTP or rotenone. Another example of a cysteine-containing enzyme that is modified in PD is parkin. Parkin is a ubiquitin E3 ligase that ubiquitinates a series of proteins and contains multiple cysteines that are required for full activity. Mutation of parkin is responsible for early-onset autosomal recessive juvenile Parkinsonism. Several studies have convincingly demonstrated that parkin is S-nitrosylated in cases of PD as well as in model systems [37,38]. Such oxidation inhibits parkin's ubiquitin E3 ligase activity and therefore prevents proper ubiquitination of its substrates, leading to accumulation of misfolded proteins. In addition, parkin also appears to be sensitive to covalent modification by dopamine itself [39]. Tyrosine hydroxylase (TH) catalyzes the initial and rate-limiting step in the biosynthesis of dopamine (DA) and norepinephrine. This enzyme contains seven cysteines, some of which have been found to be important for full TH activity. Kuhn and colleagues [40] found that 4-5 cysteines were modified by quinone derivatives of DOPA, dopamine, and N-acetyldopamine, and that these modifications were prevented by various thiol-reducing agents. Further, they found that such oxidations resulted in inhibition of TH enzymatic activity.
Other evidence of TH redox sensitivity comes from Sadidi et al. [41], who found that peroxynitrite and nitrogen dioxide both inhibited TH through nitration of cysteines or through S-thiolation in the presence of GSH or cysteine. Additional discussion of redox regulation of TH can be found in a recent, excellent review [42]. Amyotrophic Lateral Sclerosis. More commonly known as Lou Gehrig's disease, ALS is the most common degenerative disease of the motor neuron system; it results in the death of motor neurons, causing muscle weakness and eventually death. Despite an annual incidence rate of one to two cases per 100,000, the etiology of the disease remains largely unknown [43]. Although multiple theories have been presented, research focusing on neurotoxicity has revealed that excessive entry of glutamate into neurons damages cell metabolism, resulting in pathologic changes [43]. It has been proposed that ALS develops when vulnerable persons are exposed to a neurotoxin at times of strenuous physical activity [44]. Examples of Oxidized Enzymes in ALS. Mutations in the Cu/Zn superoxide dismutase gene (SOD1) are associated with familial amyotrophic lateral sclerosis. Recent work has found that oxidative modification of SOD1 results in the formation of an epitope consistent with the misfolding of SOD1 observed in ALS [45]. Subsequently, Redler and colleagues [46] evaluated the effects of specific modification of Cys-111 on this conformational change and found that oxidation of Cys-111 via glutathionylation (see Figure 1) resulted in destabilization of the SOD1 dimer. This destabilization increases the potential for unfolding of the monomer and subsequent aggregation, leading to loss of SOD1 activity and promoting cell death.
Other Cysteine-Dependent Enzymes Affected by Oxidative Stress Beyond cases clearly associated with specific disease pathology, there are other physiologically regulated or pathologically modified cysteine-dependent enzymes that are equally important to consider and have been suggested to play a role in neurodegeneration. [Table 1 lists examples, including Janus kinase 2 (JAK2), epidermal growth factor receptor, and tyrosine hydroxylase; its footnote notes that for some of the listed enzymes the data suggest oxidation of a cysteine essential for activity that may or may not be part of the catalytic site in all species, and that enzymes whose cysteines serve a structural role, such as a disulfide bond, rather than a catalytic one are not listed.] For example, JAK2 is part of the JAK2/signal transducer and activator of transcription (STAT) pathway that plays a role in synaptic plasticity, cell proliferation, migration, and apoptosis. JAK2 contains a pair of cysteine residues (Cys866 and Cys917) that act as a redox-sensitive switch for its activity [47] and was shown to be inactivated by treatment of human BE(2)-C neuroblastoma cells with rotenone, a chemical used to model PD in animals [48]. Members of the caspase family are also regulated by redox state. Caspases are involved in the initiation and execution of certain forms of programmed cell death and are therefore linked to multiple neurodegenerative conditions. Several studies have confirmed that members of this group can be oxidized at their active-site cysteine through S-nitrosylation, resulting in enzyme inhibition [49][50][51]. However, there are other examples in which nitric oxide (NO) may activate these caspases [52]. Such discrepancies are likely due to the duration and dose of NO as well as other indirect effects of NO on other activation mechanisms [53].
Phosphatase and tensin homolog (PTEN) dephosphorylates phosphatidylinositol (3,4,5)-trisphosphate (PIP3) to phosphatidylinositol (4,5)-bisphosphate (PIP2), serving to antagonize the kinase activity of phosphatidylinositol 3-kinase. As part of this pathway, which includes the Akt cascade, PTEN activity is relevant to apoptosis. Numajiri and colleagues [54] reported that S-nitrosylation of PTEN at Cys-83 inhibited PTEN activity, resulting in increased AKT activity downstream and promoting cell survival. Interestingly, they also found that at higher levels of NO, AKT itself could be S-nitrosylated and therefore inhibited, resulting in a proapoptotic environment [54]. Finally, the active-site cysteine of PTEN, Cys-124, has also been found to be oxidized in the presence of high concentrations of hydrogen peroxide [55]. Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) is a glycolytic enzyme that also has other recently discovered roles, including participating in apoptosis. Cumming and Schubert [56] showed that GAPDH is sensitive to oxidative stress in affected brain regions in AD. They reported an increase in GAPDH intermolecular disulfide formation, involving the active-site Cys-149, that was reversed by addition of the cysteine-specific reducing agent dithiothreitol (DTT). Treatment of cultured neuronal and neuronal-like cells with Aβ also resulted in GAPDH oxidation, in addition to nuclear translocation and aggregation that may contribute to apoptosis [56]. As GAPDH's roles are more fully elucidated, the reach of its oxidation is likely to extend beyond what is presently known. See Table 1 for examples of cysteine-dependent enzymes, across the various enzyme categories, that have been found to be regulated by redox state. Cysteine-Dependent Enzymes and Their Link to Cancer Given the linkage between age and cancer, there are likely important connections between the redox regulation of the enzymes associated with neurodegeneration discussed above and tumor formation.
Indeed, there are many examples of cysteine-dependent enzymes playing important roles in various aspects of cancer progression, including the impact of cancer therapies on these enzymes. The following is a brief summary of examples of this overlap. Wang and colleagues [57], using MCF-7 human breast cancer cells that had become resistant to radiation, found that Prx2 is upregulated and may be a contributing factor to radiation resistance. They hypothesized that this may be due to the antioxidant function of Prx2 attenuating the effects of radiation-induced oxidative stress. Goplen and colleagues [58] found that PDI is highly expressed during glioma invasion and that treatment with bacitracin, or with a monoclonal antibody to PDI, inhibited tumor migration and invasion. Calpain-2 has been shown to play a role in calcium-dependent glioblastoma invasion, but not migration, which may be related to calpain-2's function in invadopodial dynamics mediated by its regulation of matrix metalloproteinase 2 [59,60]. DJ-1 appears to be upregulated in multiple forms of cancer and is considered a ras-dependent oncogene [61,62]. DJ-1 upregulation, as with Prx2, is likely due to its antioxidant properties and the protective effects it would convey upon tumor cells. Alterations of parkin are observed in multiple cancer types, and genetic or other causes of decreased parkin levels are linked to increased tumorigenesis. As recently reviewed, and as epidemiological studies suggest, parkin, along with DJ-1 and other genetically linked proteins, is under investigation with respect to increased risk of melanoma in PD [63][64][65][66][67]. SOD1, by virtue of its potent antioxidant activity, has been identified as a potential drug target to induce cell death in certain cancers. Somwar and colleagues [68], using a lung adenocarcinoma cell line, found that inhibition of SOD1 led to increased apoptosis in these cells.
Finally, Joshi and colleagues [69] treated mice with adriamycin, a chemotherapeutic agent, and observed increased oxidation of Prx1, a cysteine-dependent peroxiredoxin, in brain. From these data, two observations can be made. First, multiple cysteine-dependent enzymes that are sensitive to oxidative stress are linked to tumor formation or migration. Second, the oxidizing effects of chemotherapeutic agents such as adriamycin should be considered when evaluating the potential effects of these compounds, in terms of both their therapeutic and pathological effects. Summary Clearly, the redox regulation of cysteine-dependent enzymes is an important area of study. This is particularly evident in neurodegenerative conditions because of their strong association with increases in oxidative stress. Many of these same enzymes are also associated with tumorigenesis, invasion, or migration. The enzymes selected for this paper appear to be not only sensitive to oxidation, but also key players in the underlying pathologies and, in some cases, genetic causes of disease. It would appear that while nature has taken advantage of the reactivity of the sulfur group within cysteine to help regulate the response to oxidative stress, this also leaves these enzymes vulnerable to chronic conditions that promote prolonged exposure to an oxidizing environment. Thus, as our antioxidant defenses decline over time and cellular exposure to oxidizing conditions increases, whether through the metabolic activity of mitochondria or through exposure to oxidizing environmental agents, this subset of cysteine-dependent enzymes becomes increasingly inhibited. Such inhibition is expected to contribute to and promote neurodegeneration, with variable effects on cancer.
This paper has highlighted only some of the significant cysteine-dependent enzymes that have been shown to be related to neurodegenerative diseases; not all of the tremendous efforts of the many researchers who have contributed could be referenced here.
Residual Deep Monocular 3D Human Pose Estimation using CVAE synthetic data Estimating 3D human pose from a monocular RGB image is a challenging task in computer vision because annotating a large number of 3D pose ground-truth data is costly. To address the lack of 3D data, many methods have been proposed. In this paper, we propose to address this issue by synthesizing data. First, we exploit a Conditional Variational Autoencoder (CVAE) to generate 3D human skeleton data. In the CVAE network, we acquire abundant 3D pose samples from the predicted 2D pose and the existing 3D ground truth. Second, based on the generated 3D samples, we obtain the corresponding 2D poses by projection, thus augmenting the data for the 2D-to-3D network. Finally, we train the 3D pose residual estimation network on these synthetic data. Extensive experiments show that our approach achieves state-of-the-art accuracy on standard benchmark datasets. Introduction Human pose estimation is a hot-spot research topic in computer vision with wide applications, such as movie animation, virtual reality, human-computer interaction, intelligent monitoring, and athlete-assisted training. With the development of neural networks and deep learning, end-to-end prediction networks have been applied to human pose estimation and have made remarkable progress [1][2][3][4][5]. In particular, 3D human pose estimation from a single image has attracted extensive attention and achieved unexpected success [6][7][8][9]. Although these methods have achieved noteworthy accuracy, 3D human pose estimation remains a huge challenge, since 3D datasets are limited and collecting 3D annotations is costly and time-consuming. To deal with this problem, many recent works adopt a two-stage framework, which first predicts 2D joint points and then lifts them to 3D [10][11][12]. The 3D human pose estimation task can obtain extra features from the 2D pose detector, since 2D annotations are easier to obtain and more diverse.
However, there is still a big gap between results on the test set and the training set, due to data bias and the limited 3D data. To solve this problem, we use a CVAE to synthesize 3D joint samples. The corresponding 2D joint points are obtained by using the mapping relation between the 3D and 2D skeletons. We performed ablation experiments to demonstrate the effectiveness of our method. Related Works Two-step pose estimation. 3D coordinates can be obtained by direct regression from 2D images [13][14][15]. Due to the high non-linearity of 3D space and the large output space, the features that a single model must learn are too complex, and a satisfactory result is usually not achieved. Thanks to the maturity of 2D pose estimation, lifting 2D to 3D has achieved unexpected accuracy [16][17][18][19]. Two-step pose estimation has two branches: one is joint 2D and 3D training; the other directly feeds the 2D coordinates from a pre-trained 2D pose network into the 3D pose estimation network. Our work belongs to the second branch. Since HRNet can effectively extract appearance information [20][21], in this paper we use HRNet as the 2D pose detector to obtain 2D joint points as input for the next stage. HRNet maintains high-resolution representations by connecting high-resolution convolutions to low-resolution convolutions in parallel, and enhances the high-resolution representation by repeatedly performing multi-scale fusion across the parallel convolutions. Self-supervised/weakly supervised methods. Network training needs a large amount of data, otherwise it easily overfits, but 3D annotations are often difficult to obtain. To address this problem, self-supervised and weakly supervised methods were proposed [22]. Chen et al. [23] proposed a novel weakly supervised encoder-decoder framework to learn a geometry-aware 3D representation of human pose.
To improve the robustness of the representation geometry, a representation-consistency loss is introduced to constrain the learning process of the encoder-decoder. Our method also uses an encoder and a decoder, but the difference is that we use them to generate 3D skeleton samples. Recently, a multi-view self-supervised approach has been proposed [24]: 3D pose is obtained from 2D pose estimation and epipolar geometry to reduce the dependence on 3D ground truth, and considerable results are obtained with two views. Monocular 3D Human Pose Estimation. In real-world scenarios, there are many cases where only one view is available. In the early stage, [25] was the first work to apply deep learning to 3D pose estimation from a single image. It designs a deep convolutional network to directly regress 3D coordinates from RGB images, and combines the regressed keypoint coordinates with the detected keypoint bounding box by means of multi-task learning. For supervision, instead of using 3D coordinates directly, they use bone lengths and the keypoint bounding box. Pavlakos et al. [14] extended 2D heatmap regression to 3D, regressing a 3D heatmap. Considering the large range of z-axis depth, they adopted a coarse-to-fine structure to regress step by step, similar to the stacked hourglass structure in 2D pose estimation. Unlike these methods, we focus on data augmentation. Models and Methods As illustrated in Figure 1, our overall architecture consists of three parts: the CVAE data generator, 3D-to-2D projection, and the 3D pose estimation network. The CVAE data generator is an encoder-decoder network that generates 3D joint samples. The 3D-to-2D projection maps the generated 3D joint points to the corresponding 2D joint points. The 3D pose estimation network predicts 3D pose from 2D joint points. CVAE data generator CVAE is one of the most advanced generative models and has achieved remarkable results [26]. CVAE is widely used in computer vision.
It includes an encoder and a decoder. The encoder attempts to learn a hidden (probabilistic) representation of the data, and the decoder attempts to map that hidden representation back to the input space. We use the CVAE to generate N 3D joint samples: conditioned on the 2D joint points (p2D), the network reconstructs the 3D ground truth (p3D) by decoding the hidden representation. We optimize the CVAE network by back-propagation. The loss function consists of two parts: a reconstruction loss and the KL divergence between the prior distribution and the conditional data distribution. With corresponding weights α and β for the two losses, the loss takes the form L = α·L_recon + β·KL(q(z | p3D, p2D) ‖ p(z | p2D)). 3D-2D projection. Perspective projection is a method of drawing or rendering on a two-dimensional image or canvas in order to obtain a visual effect close to that of real three-dimensional objects. Here it maps the generated 3D joint points to the corresponding 2D joint points. The coordinates of each 3D joint point and each 2D joint point are expressed as (x, y, z) and (u, v), respectively, and under a pinhole camera with focal length f the projection takes the form u = f·x/z, v = f·y/z. 3D pose estimation network. After obtaining a large number of 2D and 3D joint points, the prediction network is trained to estimate 3D pose. The network consists of several blocks, each composed of fully connected layers with residual connections, as illustrated in Figure 2. The input of the first layer is the 2D joint points; the input of each subsequent layer is the concatenation of the 2D joints predicted by HRNet and the predicted 3D joint points (p3D). Datasets & Evaluation Metrics We conduct experiments on Human3.6M, the largest dataset for 3D human pose estimation. We use subjects 1 to 5 as the training set and subjects 9 and 11 as the validation set. We evaluate our model by the mean per joint position error (MPJPE). Results We performed extensive experiments to demonstrate the effectiveness of our method. The results report the average error and the per-action errors under protocols P1 and P2.
Table 1 and Table 2 show the comparison of our experimental results with other methods. Conclusions In this paper, we propose to use a CVAE to generate 3D skeleton data and perspective projection to obtain the corresponding 2D joint points, so as to augment the training data for the 3D prediction network. We thereby effectively reduce the limitations of the dataset. Extensive experiments show that the proposed method achieves close-to-state-of-the-art results on benchmark datasets.
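The 3D-to-2D projection step described above can be illustrated with a minimal sketch. The pinhole model with focal length f is an assumption here, since the paper does not give its camera parameters, and `project_joints` is a hypothetical helper name.

```python
# Minimal pinhole (perspective) projection of 3D joints to 2D.
# Assumption: camera at the origin looking along +z with focal length f;
# the paper does not specify its exact camera convention.
def project_joints(joints_3d, f=1.0):
    """Map each 3D joint (x, y, z) to a 2D point (u, v) = (f*x/z, f*y/z)."""
    return [(f * x / z, f * y / z) for (x, y, z) in joints_3d]

# A joint twice as far from the camera lands half as far from the image center:
print(project_joints([(1.0, 2.0, 2.0), (1.0, 2.0, 4.0)]))  # [(0.5, 1.0), (0.25, 0.5)]
```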
Magnitude of tuberculosis lymphadenitis, risk factors, and rifampicin resistance at Adama city, Ethiopia: a cross-sectional study Mycobacterium tuberculosis complex has an impact on public health and is responsible for over one million deaths per year. Substantial numbers of people infected with M. tuberculosis can develop tuberculosis lymphadenitis; however, studies in Adama, Ethiopia are limited. The aim of this study was to determine the magnitude of tuberculosis lymphadenitis, its predictors, and rifampicin-resistance gene-positive M. tuberculosis. A total of 291 patients with enlarged lymph nodes were recruited from May 2022 to August 30 at Adama Comprehensive Specialized Hospital Medical College (ACSHMC). GeneXpert, Ziehl–Neelsen staining, and cytology were used for the diagnosis of TB lymphadenitis from Fine Needle Aspirate (FNA) specimens. The rifampicin-resistance gene was detected using GeneXpert. For data entry and analysis, Epi Data version 3.0 and SPSS version 25 were used, respectively. A binary logistic regression model was used to identify predictors of TB lymphadenitis. A p < 0.05 with a 95% confidence interval (CI) was taken as the cut point to determine significant associations between dependent and independent variables. The prevalence of TB lymphadenitis using GeneXpert, Ziehl–Neelsen staining, and cytology was 138 (47.4%) (95% CI 41.70–53.10), 100 (34.4%) (95% CI 28.94–39.85), and 123 (42.3%) (95% CI 36.63–47.00), respectively. Nine (3.1%) participants were infected with rifampicin-resistance gene-positive M. tuberculosis. Out of the total M. tuberculosis detected by GeneXpert (n = 138), 9 (6.5%) were positive for the rifampicin-resistance gene. Participants with a chronic cough had twice the odds of developing TB lymphadenitis (AOR: 2.001, 95% CI 1.142–3.508). Close to half of patients with enlarged lymph nodes were positive for M. tuberculosis by the GeneXpert method in the study area.
Chronic cough was significantly associated with TB lymphadenitis. Rifampicin-resistance gene-positive M. tuberculosis was relatively prevalent among patients with enlarged lymph nodes in the study area. Tuberculosis is one of the leading causes of death, killing 2 million people each year [1]. TB lymphadenitis is the most common type of extrapulmonary tuberculosis, which occurs outside of the lungs [2]. According to the Stop TB Partnership, Ethiopia is among the 30 countries suffering from a high tuberculosis burden. In Ethiopia, about one-third of cases of tuberculosis are attributed to TB lymphadenitis [3]. Lymphadenitis refers to lymph nodes that are abnormal in size, number, or consistency, and its etiologic agents range from infectious processes to malignant disease. The cause of lymphadenitis is difficult to diagnose based on history and physical examination alone. Fine needle aspiration cytology (FNAC) is a widely accepted, cost-effective, and safe method for the diagnosis of TB lymphadenitis and other lymph node abnormalities [4]. TB lymphadenitis is a chronic, specific granulomatous inflammation of the lymph node with caseous necrosis, caused by infection with Mycobacterium tuberculosis or related bacteria. Granulomas composed of T lymphocytes and fibroblasts eventually develop central caseous necrosis and tend to merge, replacing the lymphoid tissue [5]. TB lymphadenitis (cervical) can be caused by the spread of M. tuberculosis from a lung infection [6]. Currently, lymphadenitis is a common pathological problem in the world. Many studies have been conducted to determine the magnitude and etiology of lymphadenitis. The pattern of the disease varies across different ethnic backgrounds and countries. Information regarding the magnitude and etiology of lymphadenitis in a specific geographic area is essential for better diagnosis, treatment, and control of the disease [7]. According to a study conducted in Ethiopia, 196 M.
tuberculosis isolates from enlarged lymph nodes belonged to different lineages [8]. In a review and meta-analysis conducted in Africa, a total of 6746 TB lymphadenitis cases were identified, the majority of them (70.6%) from Ethiopia. Over 77% of the identified TB lymphadenitis cases involved the cervical region, and 88% had not received anti-TB drugs [9]. In addition to TB lymphadenitis, there are various causes of lymph node pathology; the most common are malignant tumors, reactive hyperplasia, Hodgkin lymphoma, non-Hodgkin lymphoma, purulent abscess, and other chronic inflammation. Lymph nodes have a characteristic presentation depending on the etiology and can present as acute painful swelling due to infection or chronic painless swelling [7,10,11]. Even though Ethiopia is among the high-TB-burden countries, there are limited data on TB lymphadenitis from Adama, East Shoa zone, Oromia Regional State. In this study, we aimed to determine the magnitude of TB lymphadenitis among patients with enlarged lymph nodes at ACSHMC using GeneXpert, Ziehl–Neelsen (ZN) staining, and cytology. Study design and area An institution-based prospective cross-sectional study was conducted from May 2022 to August 2022. The study was conducted at ACSHMC in Adama, East Shoa zone, Oromia Regional State, Ethiopia. Adama city is located about 100 km southeast of the capital city of Ethiopia. Based on 2021/2022 TB data, Adama Comprehensive Specialized Hospital Medical College (ACSHMC) recorded a total of 1458 suspected cases of all forms of TB, of which 456 were confirmed as TB lymphadenitis by cytological diagnosis alone. GeneXpert and AFB microscopy are not in routine use at the study site for the diagnosis of the disease.
Study population All patients with enlarged lymph nodes who visited the pathology department of ACSHMC for FNAC examination during the study period were considered for the study. The required sample size was determined using the single population proportion formula, considering a prevalence reported from Bahirdar, Ethiopia (22.1%) [12] and a 95% confidence level. After adding 10% for the non-response rate, the total sample size was 291. We used a convenience sampling technique to recruit study participants. Inclusion criteria All patients who had enlarged lymph nodes and visited the pathology department at ACSHMC were included. Exclusion criteria Patients whose enlarged lymph node FNA sample was insufficient for laboratory testing and patients unwilling to participate were excluded. Study variables Dependent variable: TB lymphadenitis diagnosed by GeneXpert. Independent variables: socio-demographic and clinical characteristics. Data collection For the collection of demographic and clinical characteristics, we used an interviewer-administered structured questionnaire prepared after reviewing similar studies [12][13][14]. About 1.5–3 ml of Fine Needle Aspirate (FNA) specimen was collected from enlarged lymph nodes for diagnosis using GeneXpert [16], ZN staining [17], and cytology [18]. Data quality control The questionnaire was pre-tested on a population representing 5% of the sample size to check its consistency before the actual data collection. Each day after data collection, the collected data were checked and evaluated for completeness, accuracy, and clarity. The sample processing control contains non-infectious spores, in the form of a dry spore cake included in each cartridge, to verify adequate processing of TB. To control bias that could arise from the laboratory investigation, smear microscopy slides were read by three professionals before the final issuance of the result.
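The sample-size calculation described under Study population can be reproduced arithmetically. Z = 1.96 corresponds to the 95% confidence level and p = 0.221 to the Bahirdar prevalence; the 5% margin of error is an assumption, since the paper does not state it explicitly.

```python
import math

# Single population proportion formula: n0 = Z^2 * p * (1 - p) / d^2.
Z = 1.96    # 95% confidence level
p = 0.221   # prevalence reported from Bahirdar, Ethiopia
d = 0.05    # margin of error -- an assumption; the paper does not state it
n0 = Z**2 * p * (1 - p) / d**2   # ~264.5, i.e. 265 after rounding up
n = n0 * 1.10                    # add 10% for the non-response rate
print(math.ceil(n0), round(n))   # 265 291
```

With these inputs the final figure matches the paper's reported sample size of 291.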
Statistical analysis Data were entered into the computer using Epi Data version 3.1 and exported to SPSS version 25 (Statistical Package for Social Science) software for analysis. Bivariable and multivariable binary logistic regression were used to determine associations between dependent and independent variables. Initially, data were analyzed using bivariable logistic regression; variables with p < 0.25 were further analyzed by multivariable logistic regression. Statistical significance was considered at p < 0.05. Ethical consideration Ethical clearance was obtained from the nationally registered ethical institutional review board of Hawassa University medical college (Ref No. IRB\148\14). All participants were informed of the purpose, risks, and benefits of the study, and their participation was voluntary. Written informed consent was obtained from all participants. For minors, consent from the parents or legal guardians and assent from the minors were obtained. Identifiers of study participants were removed from all forms. All methods were carried out in accordance with the relevant guidelines and regulations, as stated in the Declaration of Helsinki. Socio-demographic characteristics A total of 291 study participants with enlarged lymph nodes were included in this study, of whom 121 (41.8%) were males. The mean age of study participants was 28 (SD ± 14.8) years. Most of the patients had no history of treatment for TB (Table 1). Factors associated with TB lymphadenitis Variables such as place of residence, marital status, educational level, consumption of raw milk, and chronic cough showed p < 0.25 in bivariable analysis and were selected for multivariable analysis. In multivariable analysis, TB lymphadenitis was significantly associated with chronic cough (Table 3).
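For illustration, a crude (unadjusted) odds ratio and its Wald 95% CI can be computed from a 2x2 table as sketched below. The counts are hypothetical: the AOR of 2.001 reported in Table 3 comes from multivariable logistic regression, which adjusts for other covariates and cannot be reproduced from a 2x2 table alone.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a/b = exposed with/without the outcome, c/d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts chosen so the crude OR equals 2.0:
or_, lo, hi = odds_ratio_ci(40, 30, 50, 75)
print(round(or_, 3), round(lo, 3), round(hi, 3))
```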
Discussion According to cytological studies, TB lymphadenitis is reported to be a common problem in Ethiopia [12,13,18]. However, these studies are mostly retrospective in nature and used cytology for the diagnosis of TB lymphadenitis [12,13,17]. The prevalence of TB lymphadenitis may vary depending on the method used for diagnosis and the clinical background of the patients. Many studies, including ours, revealed a high prevalence of TB lymphadenitis among patients with enlarged lymph nodes compared to other clinical conditions. In the current study, the overall prevalence of TB lymphadenitis using GeneXpert was 47.4%. Our finding is in line with the prevalence of TB lymphadenitis reported from Hawassa, Ethiopia (48.8%) [7] and Addis Ababa, Ethiopia (49.3%) [15]. However, a higher prevalence of TB lymphadenitis was reported from Gondar, Ethiopia (65.7%) [13], Jimma, Ethiopia (58.0%) [14], Butajira, Ethiopia (72.8%) [19], and Tanzania (69.5%) [20]. In contrast to our study, a low prevalence of TB lymphadenitis was reported from Nigeria (24.45%) [21] and Pakistan (44%) [22]. The variation might be due to socio-demographic factors and the laboratory method used: some of the studies used cytological methods whereas others used molecular methods, which might be responsible for the observed differences. The prevalence of TB lymphadenitis using the ZN staining method was 34.4%. This finding is high compared to a study conducted in Addis Ababa, Ethiopia (14.5%) that used the same laboratory method (ZN staining) [16]. A study conducted in Sudan also reported low AFB positivity (9%) in lymphadenitis [23].
According to the cytological examination (FNAC) in the present study, the prevalence of TB lymphadenitis was 42.3%. Reactive lymphadenitis was the second most frequent cause of enlarged lymph nodes, and lymphoma and chronic abscess were the third and fourth most frequent causes, respectively. The prevalence of TB lymphadenitis identified by cytological methods correlates with studies conducted in different parts of Ethiopia [7,12]. Another study from South Ethiopia showed a higher prevalence of TB lymphadenitis (68.7%) using cytological diagnosis [15]. Overall, the performance of GeneXpert in diagnosing TB lymphadenitis was superior (47.4%) compared to the cytological method (42.3%) and ZN staining (34.4%). Of the total study participants (N = 291), 29.2%, 35%, and 39.9% were positive for MTB by all three methods; by AFB and GeneXpert; and by GeneXpert and cytology, respectively. Bacteria other than MTB were detected in 11.3% of participants with enlarged lymph nodes; the predominant bacteria were Gram-positive cocci. This indicates a role of other bacteria in lymphadenitis, or it may represent superinfection, and needs further study. In contrast to a study conducted in Gondar [13], in the current study male participants were more affected by lymphadenitis (48.8%) than their female counterparts (46.5%). This could be due to differences in lifestyle between males and females in different localities. The majority of participants with enlarged lymph nodes were adults: 46% of participants within the age group of 25-34 years had enlarged lymph nodes, whereas those 15-24 years old accounted for 25%. This finding is inconsistent with other studies done in Ethiopia, where 25-34 year-olds are the commonest age group affected [14,15]. A review and meta-analysis conducted in Africa indicated a high frequency of enlarged lymph nodes among individuals within the 20-40 years age group [24].
Chronic cough was the only independent variable significantly associated with TB lymphadenitis. Patients who had a history of chronic cough had twice the odds of developing TB lymphadenitis compared to those with no history of cough. This finding is in line with the finding reported from Jimma, Ethiopia [14]; however, it is not comparable to a study conducted in Gondar, Ethiopia [13]. In this study, the cervical lymph node was found to be the most commonly affected site (54.6%). Similar findings were reported from northern Ethiopia (57.1%) [12], Hawassa, Ethiopia (57.1%) [7], and Gondar, Ethiopia (47.5%) [13]. In addition, a report from India indicated a high prevalence of lymphadenitis involving cervical lymph nodes [4]. The history of drinking raw milk among TB lymphadenitis cases was 21.0%. However, a history of drinking uncooked milk was not significantly associated with the occurrence of TB lymphadenitis. This finding is in line with studies conducted in Addis Ababa, Ethiopia [15]. In this study, 3.1% of participants were infected with rifampicin-resistance gene-positive M. tuberculosis, which is in line with studies conducted in Gondar, Ethiopia (3.9%) [24], India (2.9%) [25], and Kenya (3.7%) [26]. A study conducted in Hawassa, Ethiopia among patients suspected of pulmonary TB revealed a prevalence of rifampicin resistance of 1.24%, which is lower than in the current study [27]. Of the 138 M. tuberculosis detected in this study, 9 were positive for the rifampicin-resistance gene, giving a proportion of 6.5%. Additionally, several studies reported a high prevalence of rifampicin-resistance gene-positive M.
tuberculosis, such as studies from Nigeria (14.7%) 28, India (5.4%) 4, Addis Ababa, Ethiopia (9.9%) 29, and Adigrat, Ethiopia (9.1%) 30.

Limitation of the study

As this is a cross-sectional study, it cannot establish cause and effect. Participants might not have recalled some of the variables that asked about their previous experiences, which might have led to recall bias. Use of convenient
Toward the molecular understanding of the action mechanism of Ascophyllum nodosum extracts on plants

The importance of biostimulants, defined as plant growth-promoting agents that differ notably from fertilizers, is increasing steadily because of their potential contribution to a worldwide strategy for securing food production without burdening the environment. Based on folkloric evidence and ethnographic studies, seaweeds have been useful for diverse human activities through time, including medicine and agriculture. Currently, seaweed extracts, especially those derived from the common brown alga Ascophyllum nodosum, represent an interesting category of biostimulants. Although A. nodosum extracts (abbreviated ANEs) are readily used because of their capacity to improve plant growth and to mitigate abiotic and biotic stresses, fundamental insights into how these positive responses are accomplished are still fragmentary. Generally, the effects of ANEs on plants have been attributed to their hormonal content, their micronutrient value, and/or the presence of alga-specific polysaccharides, betaines, polyamines, and phenolic compounds that would, alone or in concert, bring about the observed phenotypic effects. However, only a few of these hypotheses have been validated at the molecular level. Transcriptomics and metabolomics are now emerging as tools to dissect the action mechanisms exerted by ANEs. Here, we provide an overview of the available in planta molecular data that shed light on the pathways modulated by ANEs that promote plant growth and render plants more resilient to diverse stresses, paving the way toward the elucidation of the modus operandi of these extracts.

Introduction

The historically excessive use and misuse of agrochemicals have resulted in environmental pollution, health concerns, and the development of resistant plant pathogens.
As a consequence of increased environmental awareness, the application of synthetic agents to guarantee optimal crop yields is now regarded as less favorable, and this view has been translated into policy adjustments. For instance, the European Union has a directive to limit nitrate application (91/676/EEC), and a new directive has been established to ban all persistent, bio-accumulative, or toxic pesticides by implementing a more integrated approach by 2020 (2009/128/EC). However, despite the known adverse environmental effects of inorganic fertilizers, the need to increase the efficiency of agricultural practices to meet the world's food demand propels the global application of fertilizers at an anticipated rate of 1.9% per year to reach up to 200 million tonnes by the end of 2020 (FAO 2017). Although organic farming has been proposed as an environmentally friendly alternative to industrial agriculture, the major disadvantage of this low-input production system is that the yields are significantly lower (an estimated 5 to 34%) than those of conventional agriculture, mainly due to a high biotic pressure combined with nutrient limitation (Seufert et al. 2012). Thus, any method that can improve plant nutrient capture efficiency in a reduced-input system and simultaneously ameliorate the resilience against abiotic and biotic stresses will improve the currently defective agro-environmental balance and positively contribute to crop productivity and agricultural sustainability. (Jonas De Saeger, Stan Van Praet and Danny Vereecke contributed equally to this work.)

One approach that could meet these requirements and that is steadily gaining interest is the implementation of biostimulants. In 2016, the global biostimulants market was valued at 1.79 billion USD and is projected to reach 3.29 billion USD by 2022 at a compound annual growth rate of 10.43% from 2017 to 2022 (https://www.marketsandmarkets.com/Market-Reports/biostimulant-market-1081.html).
However, the first report on "biogenic stimulators" that affect metabolic and energetic processes in humans, animals, and plants dates back to 1933, and since then the terminology and the meaning of this concept have evolved (du Jardin 2015; Yakhin et al. 2017). The European Biostimulant Industry Council (EBIC) states that "Plant biostimulants contain substance(s) and/or microorganisms whose function when applied to plants or the rhizosphere is to stimulate natural processes to enhance/benefit nutrient uptake, nutrient efficiency, tolerance to abiotic stress, and crop quality." Furthermore, according to EBIC, biostimulants are distinguished from traditional crop inputs based on the following characteristics: (i) they operate through mechanisms that are distinct from those of fertilizers, regardless of the presence of nutrients in the products; (ii) they differ from crop protection products because they act only on the plant's vigor and do not directly act against pests or diseases; and (iii) they complementarily stimulate crop production besides nutrition and protection (http://www.biostimulants.eu). To clearly distinguish biostimulants from the existing legislative product categories, the following definition of a biostimulant has been proposed: "a formulated product of biological origin that improves plant productivity as a consequence of the novel or emergent properties of the complex of constituents, and not as a sole consequence of the presence of known essential plant nutrients, plant growth regulators, or plant protective compounds." (Yakhin et al. 2017). The most commonly used biostimulants include microorganisms, humic acids, fulvic acids, protein hydrolysates, amino acids, and seaweed extracts (Calvo et al. 2014), with seaweed extracts as the fastest-growing biostimulant product on the market (Sharma et al. 2014). Seaweeds, well known for their applications in food and medicine (Dillehay et al.
2008), have been utilized for centuries in their unprocessed form as soil conditioners in agricultural settings and their benefits as sources of organic matter and nutrients have been valued for a long time (Craigie 2011). Currently, approximately 28.5 million tonnes of seaweed products are produced annually (FAO 2016), a small portion of which is processed to seaweed formulations applied as plant nutrient supplements and biostimulants. The majority of these seaweeds are commercially harvested in 35 countries, with China, Indonesia, the Philippines, Korea, and Japan as dominant players. In Europe, macroalgae are collected from natural habitats in France, Ireland, Norway, Portugal, and Spain, with small-scale cultivation in France (Sharma et al. 2014;Buschmann et al. 2017). Brown algae (Phaeophyta), including Fucus spp., Laminaria spp., Sargassum spp., Ecklonia spp., Durvillaea spp., and Turbinaria spp., are the most commonly used species for agriculture and for commercial biostimulant production, because they can reach high biomass levels and are widespread (Khan et al. 2009;Craigie 2011;Sharma et al. 2014;Bulgari et al. 2015;Yakhin et al. 2017). Unprocessed seaweeds and their extracts can influence plant growth indirectly by affecting the physical and chemical soil properties, by acting as chelators, and by modifying the soil microbiota, resulting in improved soil texture, water holding capacity and overall soil health (Khan et al. 2009;Craigie 2011;Calvo et al. 2014;De Pascale et al. 2017;Abbott et al. 2018). The direct benefits of seaweed applications on plants are increased germination rate; enhanced root growth; extra shoot biomass; improved nutrient use efficiency; early flowering; delayed senescence; increased chlorophyll, flavonoid, and nutrient contents; improved tolerance to abiotic (drought, salinity and freezing) and biotic (nematodes, fungi, viruses, bacteria and insects) stresses; superior fruit yield; and enhanced postharvest quality (Khan et al. 
2009;Craigie 2011;Quilty and Cattle 2011;Vera et al. 2011;Calvo et al. 2014;Sharma et al. 2014;Battacharyya et al. 2015;Bulgari et al. 2015;De Pascale et al. 2017;Van Oosten et al. 2017;Abbott et al. 2018). Among the brown algae, Ascophyllum nodosum (L.) Le Jolis, also known as rockweed, has attracted a lot of attention and is sustainably harvested along the North Atlantic coastline (Ugarte and Sharp 2012). Ascophyllum nodosum extracts (hereafter designated ANEs) are not only implemented in food and biotechnological applications but also used in agricultural practices. Indeed, because most of the commercially available alga-based products are ANEs, they are the best extracts to decipher the action mechanism of plant growth stimulation and stress mitigation. Currently, nearly 47 companies are engaged in producing commercial ANEs for agricultural applications (Van Oosten et al. 2017). Based on a wealth of physiological data gathered over close to 70 years of research, the biostimulant activity on a wide variety of plants and crops, including trees, cereals, fruits, vegetables, and ornamentals, herbaceous and woody species alike (Sharma et al. 2014;Battacharyya et al. 2015;Bulgari et al. 2015;Abbott et al. 2018), has been attributed to the different inherent biochemical characteristics of these algae. Nevertheless, the pathways triggered by the identified bioactive compounds are often unknown and, therefore, synergistic activities are predicted due to their low concentrations. Known bioactive compounds in ANEs include poly-and oligosaccharides absent in plants, including laminaran, fucan, and alginate; betaines; sterols; vitamins; amino acids; macro-and micronutrients; phytohormones, such as abscisic acid, cytokinins, and auxins; and unidentified compounds with hormone-like activities (Khan et al. 2009;Craigie 2011;Quilty and Cattle 2011;González et al. 2013;Calvo et al. 2014;Sharma et al. 2014;Yakhin et al. 2017). 
As the ANE composition is determined by location, season, and physiological status at harvest, and by the often proprietary production procedure, the diversity of these commercial seaweed extracts is consequently broad. Moreover, application rate, frequency, and timing vary with plant species, geographic location, and environmental conditions (Craigie 2011;Quilty and Cattle 2011;Sharma et al. 2014;Bulgari et al. 2015). Additionally, the information on the composition of commercial ANE biostimulants is based only on total solid and ash contents, which are insufficient quality parameters. Therefore, to predict their performance, the levels of key components should be provided as well (Goñi et al. 2016). Thus, to stimulate the adoption of ANEs in mainstream agricultural management practices, the consistency and magnitude of the ANE responses need to be normalized and it has to be specified which product will meet which specific need. Besides standardization of the extraction procedure (Sharma et al. 2014;Povero et al. 2016), a complementary approach to attain robustness of the biostimulant claim of ANEs is to unravel their action mechanism and to identify markers that can be employed to test their performance. As the positive growth effects of seaweed extracts or formulated seaweed products on higher plants are seemingly partly independent of their manurial value and of their micronutrient and phytohormone contents, endogenous in planta processes are thought to be altered upon treatment. In the following sections, we will give a chronological, experimentally detailed, and plant-centered overview of the molecular studies that have been published to get insight into the pathways that are involved in ANE-induced plant growth promotion and in resilience against abiotic and biotic stresses.
Overall, the methods used to unravel the molecular basis of these effects include the development of fast bioassays based on hormone-responsive promoters or genes fused to a reporter, such as β-glucuronidase (GUS), the expression analysis of marker genes specific for particular pathways, the use of mutants in specific pathways of interest, and genome-wide expression analysis in treated versus untreated plants. Although many of these methods have been established in the model plant Arabidopsis thaliana, they have also been implemented to analyze diverse ANE-induced responses in crops, including oilseed rape (Brassica napus), soybean (Glycine max), spinach (Spinacia oleracea), carrot (Daucus carota), tomato (Solanum lycopersicum), and cucumber (Cucumis sativus). We also provide an extensive table giving an overview of the differentially expressed genes, discussed in the reviewed papers, underlying diverse phenotypic responses upon ANE treatment (Table 1). To conclude, we allude to bottlenecks in the analysis of the molecular action mechanism of ANEs and end with perspectives on future research that could stimulate the development of these renewable biostimulants as efficient and sustainable agricultural products.

Molecular analysis of ANE-induced plant growth promotion in Arabidopsis thaliana

One method to assess the plant growth-stimulating performance of ANEs is to validate their activity in fast and reproducible bioassays that can easily be implemented for large screens of commercial products. In three assays on Arabidopsis thaliana, accession Columbia-0 (Col-0), the plant growth-promoting activity of aqueous and methanolic extracts of two ANEs, designated ANE1 and ANE2 (Acadian Seaplants Ltd., Dartmouth, NS, Canada) (Rayorath et al.
2008a) was evaluated by means of (i) measuring the root tip elongation of 5-day-old seedlings grown for 7 days on half-strength Murashige and Skoog (½MS) medium on vertical plates with 10 and 100 mg L−1 of the aqueous extracts and 2 g L−1 of the MeOH extract; (ii) determining the fresh weight (FW) increase of 7-day-old seedlings grown for 7 days in liquid ½MS cultures with 1 g L−1 of MeOH extract; and (iii) recording plant height and leaf number of 2-week-old plants grown on Jiffy-7 peat pellets for 4 weeks under greenhouse conditions and irrigated once per week with 1 g L−1 of the aqueous extracts. Additionally, auxin accumulation and signaling were assayed in 7-day-old seedlings of a transgenic Col-0 line that carried a DR5::GUS fusion (Ulmasov et al. 1997). The seedlings were grown in liquid ½MS cultures supplemented with 1% (w/v) sucrose and histochemically stained for GUS activity after a 24-h treatment with 2 g L−1 of the MeOH extracts. Both the aqueous and methanolic fractions of ANE1 and ANE2 stimulated in vitro primary root growth at all concentrations tested, but ANE2 performed better. In contrast, exogenous auxin (10 μM and 100 μM indole-3-acetic acid [IAA]) suppressed primary root growth, but induced lateral root formation. The MeOH fractions of ANE2, but especially of ANE1, increased the FW of the plantlets grown in liquid cultures. Similarly, the aqueous extracts of ANE2 stimulated plant height under greenhouse conditions, but those of ANE1 additionally increased the number of leaves. Finally, the MeOH extracts induced DR5::GUS expression in the root and hypocotyl tissues, but to a much lower extent than 25 μM IAA, suggesting that auxin signaling is moderately activated by ANE treatments. Based on these data, transgenic plants grown in liquid cultures were concluded to be useful as fast biosensors to assess growth promotion and to ensure uniform bioactivity in seaweed formulations and/or extract fractions (Rayorath et al. 2008a).
Furthermore, it was proposed that the development of other transgenic lines to study the gene expression of marker genes in particular hormone pathways and phenotype-genotype interactions would be valuable. A transgenic Arabidopsis Col-0 reporter line was also used to develop a bioassay for screening cytokinin-like activity in ANEs (Khan et al. 2011) that was faster than the traditional assays, such as the soybean or tobacco (Nicotiana tabacum) callus assay (Sanderson and Jameson 1986;Stirk and Van Staden 1997). Seven-day-old ARABIDOPSIS RESPONSE REGULATOR5 (ARR5)::GUS seedlings (D'Agostino et al. 2000) were grown in liquid ½MS cultures supplemented with 1, 3, or 5 mL L−1 of the commercial alkaline liquid extract Stimplex (Acadian Seaplants Ltd.). Additionally, plants were grown on Jiffy-7 pellets for 3 weeks under greenhouse conditions and sprayed with 1 mL of the 1, 3, and 5 mL L−1 Stimplex solution supplemented with 0.02% (v/v) Tween. In both assays, after 48 h of incubation, the plants treated with 3 and 5 mL L−1 extract exhibited an increased GUS activity in the root, but especially in the shoot, hinting at the activation of cytokinin signaling. Although the assay in liquid cultures was more sensitive, the ARR5 gene was also induced in the greenhouse assay, but less upon the ANE treatment than with 10 μM 6-benzylaminopurine (BAP) used as a positive control. Plant developmental parameters were not measured, hence the observed cytokinin response and the plant growth modulation could not be correlated. For a long time, the involvement of hormonal pathways in growth improvement of plants upon treatment with seaweed extracts has been postulated. Numerous reports indicate that seaweeds and their extracts contain plant hormones, including abscisic acid (ABA), gibberellins, brassinosteroids, ethylene, auxins, cytokinins, and even strigolactones, leading to the hypothesis that these algal hormones could directly steer growth of terrestrial plants.
In support of this assumption, the effectiveness of seaweed extracts depends on the amount applied, with a dose-response curve typical for phytohormones. However, the hormone concentration in the seaweed extracts is low and only relatively small amounts of extracts are applied to plants, questioning this proposition (Stirk and Van Staden 1996;Stirk et al. 2004;Craigie 2011). Although Rayorath et al. (2008a) and Khan et al. (2011) observed an increase in auxin and cytokinin signaling in ANE-treated plants, implying enhanced levels of these hormones, no information on the hormone composition of the ANEs used was provided (Rayorath et al. 2008a;Khan et al. 2011), thus precluding the conclusion that the gene activation is a direct consequence of the hormones in these ANEs. To take this caveat into account, Wally et al. (2013) combined phytohormone profiling, plant growth bioassays in wild-type and biosynthetic Arabidopsis phytohormone mutants, GUS expression in transgenic reporter lines, and reverse-transcription quantitative polymerase chain reaction (RT-qPCR) analysis of phytohormone marker genes. All the experiments were done with a commercial aqueous alkaline-extracted ANE powder solution (Acadian Seaplants Ltd.) and the hormone profiling was carried out on 12 commercially available seaweed (Ascophyllum, Ecklonia, Macrocystis, Durvilea, and Sargassum) extracts. The experiments revealed that the hormone composition and concentration varied significantly. Generally, the hormone levels were very low (pmol g−1 dry weight [DW]) and also the range of detected metabolites for the extracts not supplied by Acadian Seaplants Ltd. was limited (maximally 4/16 versus 8/16). It would have been informative to test the effect of these 12 extracts on the root development of Arabidopsis, because composition and activity might have been correlated. As the Canadian Atlantic ANEs all contained IAA, ABA, and 2-isopentenyl adenosine (2-iP), the focus was on these three hormone classes.
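The hormone-like dose-response behaviour noted above is commonly described with a four-parameter log-logistic model. The sketch below uses entirely hypothetical parameter values (baseline, maximum, EC50, Hill slope) and illustrates the standard model rather than any fit reported for ANEs:

```python
def log_logistic(dose, bottom, top, ec50, hill):
    """Four-parameter log-logistic dose-response; ec50 is the half-maximal dose."""
    if dose == 0:
        return float(bottom)
    return bottom + (top - bottom) / (1 + (ec50 / dose) ** hill)

# Hypothetical parameters: baseline response 10, maximum 100 (e.g. % growth
# stimulation), EC50 of 100 mg/L, Hill slope 1 -- the response rises with
# dose and saturates, as expected for hormone-like activity.
for d in (0, 10, 100, 1000):
    print(d, round(log_logistic(d, 10, 100, 100, 1.0), 1))
```

At the EC50 the response sits exactly halfway between baseline and maximum, which is the defining property used when such curves are fitted to bioassay data.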
The effect on plant development of the aqueous ANE powder solution was evaluated in a root assay comparable with that of Rayorath et al. (2008a): 4-day-old seedlings were grown on vertical plates on ½MS supplemented or not with 100 mg L−1 ANE. The root length was measured after 3, 5, and 7 days of treatment and the lateral root development after 7 days. Besides the analysis of wild-type Col-0 plants, the responses of abi4-1 seedlings, which are insensitive to ABA and cytokinin, and of the quadruple cytokinin biosynthesis mutant ipt1,3,5,7 were also assessed. Additionally, GUS activity was histochemically visualized in 4-day-old seedlings of transgenic DR5::GUS and ARR5::GUS reporter lines of Col-0 and Wassilewskija-0 (Ws-0), respectively, grown on ½MS and treated with 100 mg L−1 ANE for 48 h or 5 days. The ANE treatment of Col-0 plants inhibited the primary root elongation and reduced the number and length of the lateral roots when compared with control plants, illustrating that the ANE application had a significant impact on the outcome of the treatment (Wally et al. 2013). The mutants, however, responded differently when compared with the wild type: the primary root of abi4-1 seedlings was insensitive to the ANE treatment, whereas the lateral root initiation, but not the outgrowth, was inhibited. ANE treatment of the ipt1,3,5,7 mutant still resulted in the inhibition of the primary root elongation, but without obvious effect on the lateral root initiation or elongation. The partial responsiveness of the mutants suggests that some components in the ANE itself cause particular aspects of the root phenotype and indicates that ABA and cytokinins are involved. Curiously, the conclusion was not drawn that the increased endogenous cytokinin and ABA levels might be responsible for the root phenotype in ANE-treated Col-0.
Nevertheless, consistent with the root phenotype in wild-type plants, the GUS activity in the DR5::GUS reporter line was reduced in the roots (and shoots) upon the ANE treatment and much fewer lateral root initiation sites were visualized, hinting at an auxin signaling downregulation. Additionally, the ANE treatment strongly and persistently induced the GUS activity in the roots (and shoots) of the ARR5::GUS transgenic line, indicating a significant upregulation of the cytokinin response. As further validation of the hypothesis that endogenous hormone levels are at the basis of the observed root phenotypes, phytohormone profiling and RT-qPCR analysis of selected genes of ANE-treated plants were carried out (Wally et al. 2013). Unfortunately, the experimental setup differed significantly from the previous assays, namely 14-day-old plants treated for 24, 96, and 144 h with ANE were harvested and shoot, instead of root, tissue was analyzed. The phytohormone profiling showed that the IAA levels were lower in the ANE-treated than in the control shoots up to 96 h after the treatment, but were comparable after 144 h. Surprisingly, no cytokinin bases were detected in any of the samples, but zeatin (Z) and 2-iP precursors and total cytokinin content (especially trans-zeatin [tZ] and cis-zeatin [cZ]) were transiently higher in ANE-treated shoots until 96 h of treatment, whereafter the levels were comparable with those of the controls. Finally, ABA and its catabolites were higher in ANE-treated shoots, but the differences with the control were not significant (except for phaseic acid and dihydrophaseic acid at 144 h). Although these results suggest that the ANE treatment indeed affects endogenous hormone levels, the hypothesis would have been better supported had roots been analyzed as well, so as to correlate the observed plant responses and the ANE composition (Wally et al. 2013).
Additionally, the hormone profiling of the abi4-1 and the ipt1,3,5,7 mutants might have provided strong supportive data on the statement that hormone levels in ANEs are too low to cause a direct response in plants. In line with the histochemical results, the RT-qPCR analysis showed that the ARR5 expression was upregulated in shoots upon the ANE treatment and that the SENESCENCE ASSOCIATED GENE13 (SAG13) expression was downregulated, both in support of an ANE-enhanced cytokinin response. Although an up to threefold induction was recorded for the cytokinin biosynthesis ATP/ADP ISOPENTENYLTRANSFERASE3 (IPT3), IPT4, and IPT5 genes in response to the ANE treatment, their expression was considered too low to account for the cytokinin signaling activation. Additionally, neither the expression of the tRNA IPT2 and IPT9 genes nor that of the cytokinin-activating LONELY GUY1 (LOG1), LOG7, and LOG8 genes was significantly altered upon the ANE treatment, further implying that cytokinin biosynthesis in the shoots probably does not play a role in the enhanced cytokinin response. In contrast, the expression of the CYTOKININ OXIDASE4 (CKX4) gene, encoding a cytokinin degradation enzyme, was strongly repressed after ANE treatment, indicating that the modulation of the cytokinin homeostasis might be part of the action mechanism of this particular ANE. Indeed, the induction of cytokinin biosynthesis genes and the simultaneous downregulation of cytokinin degradation genes are inconsistent with cytokinin addition (Motte et al. 2013). Wally et al. (2013) concluded that the very modest upregulation of IPT gene expression could explain the root phenotype, but they did not consider the possibility that the ANE contained compounds that target CKX expression directly, leading to the modest accumulation of endogenous cytokinin levels.
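Relative expression values such as the up to threefold IPT induction discussed above are typically derived from RT-qPCR Ct values via the Livak 2^-ΔΔCt method. The sketch below uses hypothetical Ct values, not data from the study:

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt (Livak) method: the target gene's
    Ct is first normalized to a reference gene within each sample, then the
    treated sample is compared with the control."""
    d_treated = ct_target_treated - ct_ref_treated
    d_control = ct_target_control - ct_ref_control
    return 2 ** -(d_treated - d_control)

# Hypothetical Ct values: relative to the reference gene, the target
# amplifies ~1.58 cycles earlier in the treated sample, which corresponds
# to roughly a threefold induction (2^1.58 ≈ 3).
print(fold_change(22.42, 18.0, 24.0, 18.0))
```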
In addition to the analysis of the expression of marker genes of the cytokinin response, the expression of genes implicated in ABA and auxin metabolism was measured as well. The moderate accumulation of ABA and ABA catabolites in the shoot was supported by the ANE-induced expression of the ABA biosynthesis genes ABA2 and especially 9-CIS-EPOXYCAROTENOID DIOXYGENASE3 (NCED3), the ABA-responsive gene RD29a, and the ABA degradation gene CYP707A3, but could not explain the inhibition of the primary root growth and lateral root formation, because the data were obtained for shoots only and not for roots. For the RT-qPCR analysis of auxin-related genes, the TRYPTOPHAN AMINOTRANSFERASE1 and YUCCA genes, which represent the indole-3-pyruvic acid route, the main IAA biosynthesis pathway in Arabidopsis (Malka and Cheng 2017), were not selected (Wally et al. 2013); instead, TRYPTOPHAN SYNTHASE α1, involved in the synthesis of the IAA precursor tryptophan, was chosen, together with CYP79B2, CYP79B3, NITRILASE1 (NIT1), and NIT2, implicated in the indole-3-acetaldoxime pathway (Malka and Cheng 2017). The expression of all seven selected genes was repressed in the shoot upon ANE treatments. The reduced DR5::GUS expression and the decreased IAA level in the shoot suggest downregulation of the auxin pathway. Although the RT-qPCR data concur, they are inadequate to draw conclusions, because the key genes have not been analyzed. Additionally, given that IAA-amino acid conjugates were abundantly present in the used ANE extracts, it would have been interesting to test the expression of the IAA-amidohydrolase genes that hydrolyze the storage forms into the active auxin (Ludwig-Müller 2011). Altogether, in our opinion, the data presented by Wally et al. (2013) are not sufficient to infer that the hormone effects can be attributed entirely to endogenous hormone modulations in the plant. Furthermore, the opposing effects on root development recorded in the studies of Rayorath et al. (2008a) and Wally et al.
(2013) are difficult to interpret, because the plant developmental status varied strongly, hinting at different growth conditions (for instance, the root length of 7-day-old control plants was 15 mm vs 60 mm). Furthermore, only the MeOH fraction and not the aqueous extract of the ANE was tested for DR5::GUS activity (Rayorath et al. 2008a). Additionally, the composition of the ANEs used was not specified; therefore, the difference in the ANEs utilized may account for the discrepancy in the results. A genome-wide expression analysis with Affymetrix GeneChip ATH1 microarrays was carried out on shoot material of Arabidopsis Col-0 plants treated for 7 days with two different extracts of a commercial liquid ANE, namely ANE A and ANE B, a neutral and an alkaline aqueous extract, respectively (Goñi et al. 2016). The Col-0 plants were grown axenically on MS for 14 days, whereafter they were transplanted to pots with compost/vermiculite/perlite (5:1:1). One day later, the plants were treated with the ANEs with a foliar spray (0.2% v/v) and after 7 days the plant height was measured and the leaf number was counted. Samples were taken for microarray analysis. The chemical composition analysis of ANE A and ANE B revealed that the pH change during the extraction process significantly affected the concentration of all key components of the ANEs, including polyphenols, fucoidan, uronics (alginate), laminarin, and mannitol. The largest difference was the 4-fold higher polyphenol level in ANE B than in ANE A, but no information on the hormone levels was provided. Despite these differences in composition, the morphological changes induced with both ANEs consisted of early flowering, longer floral stalks (increased plant height), and increased leaf number, with a stronger response for ANE B than for ANE A.
Surprisingly, the genome-wide transcriptional analysis with ANE A revealed that 1011 genes (4.47% of the microarray) were differentially expressed, with 599 upregulated, 412 downregulated, and 849 unique to ANE A, whereas only 196 differential genes (0.87% of the microarray) were recorded with ANE B, of which 127 upregulated, 69 downregulated, and 34 unique to ANE B (2-fold change cutoff). Both treatments had 168 differentially expressed genes in common, potentially representing pathways implicated in the shared phenotypical responses. Although no comprehensive list of these genes was made available, based on the MapMan ontology, 32 genes were involved in metabolism (amino acid metabolism, transport, lipid metabolism, and secondary metabolism), 35 genes in development (cell wall, development, and photosynthesis), 14 genes in stress (redox and stress), 5 genes in hormone metabolism, and 82 genes in other pathways (miscellaneous enzymes, protein, and RNA) (Goñi et al. 2016). Metabolism-associated genes upregulated by ANE A and ANE B included transport of amino acids, calcium (CAX3), ammonium, and copper (COPT2), whereas upregulated development-associated genes comprised cell wall organization (ATCSLE1, UGE1, and PAE8), cell cycle/organization (AtPP2-A11 and FIB), and plant development (LEA14, LEA3, NAM, and TET3). Although no effect was seen on the expression of auxin biosynthesis genes, the expression of SMALL AUXIN UPREGULATED RNA59 (SAUR59), known to be responsive to auxin, was upregulated by both ANEs, implying a role for auxin signaling in the shoot response, in agreement with the results of Rayorath et al. (2008a). No data were presented on the differential expression of cytokinin-related genes. Finally, Goñi et al. (2016) looked for correlations between the ANE composition and gene expression, but direct relationships between the observed expression patterns and particular components in the ANEs were difficult to find.
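The bookkeeping behind such DEG counts reduces to a fold-change cutoff followed by set arithmetic on the resulting gene lists. A sketch with invented gene IDs standing in for the reported 1011- and 196-gene lists:

```python
def passes_cutoff(ratio, fold=2.0):
    """A gene is called differential if its treated/control expression ratio
    is at least `fold` (up) or at most 1/`fold` (down)."""
    return ratio >= fold or ratio <= 1.0 / fold

# Invented stand-ins for the DEG lists of the two extracts; the real lists
# contained 1011 (ANE A) and 196 (ANE B) genes.
deg_a = {"CAX3", "COPT2", "SAUR59", "COR47", "FIB"}
deg_b = {"CAX3", "COPT2", "SAUR59", "TET3"}

shared = deg_a & deg_b    # analogous to the 168 genes common to both ANEs
unique_a = deg_a - deg_b  # analogous to the 849 genes unique to ANE A
unique_b = deg_b - deg_a  # analogous to the 34 genes unique to ANE B

print(sorted(shared), sorted(unique_a), sorted(unique_b))
print(passes_cutoff(2.5), passes_cutoff(0.8))
```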
The upregulation of the cold-induced gene COR47 was attributed to mannitol present in the ANEs, but could just as well have been caused by, for example, a high sodium concentration in these particular ANEs. They concluded that "the ANE composition−biostimulant activity relationship is complex, and progress in unraveling this relationship will require more comprehensive experiments assessing the effect of the major and minor components of ANE biostimulants singly and in combination." (Goñi et al. 2016).

Molecular analysis of ANE-induced plant growth promotion in crop plants

The effect of ANEs on germination and seedling vigor was tested in several greenhouse bioassays on barley (Hordeum vulgare) 'AC Sterling' and 'Himalaya' and a derived gibberellic acid (GA3)-responsive dwarf mutant grd2 (Rayorath et al. 2008b). Four fractions were prepared from an alkaline ANE (Acadian Seaplants Ltd): an aqueous solution and a MeOH fraction that was further subfractionated first with chloroform and then with ethyl acetate (EA); all fractions were tested at three concentrations (100 mg L−1, 500 mg L−1, and 1 g L−1). For the evaluation of the ANE effect on seed emergence, seeds were placed in sterile vermiculite and irrigated with the different concentrations of the four fractions. Emergence was recorded every 12 h for 96 h after the first seed emerged. Additionally, seedling growth was measured by determining the shoot and root lengths and their DW 14 days after the ANE treatment. Finally, the impact of ANEs on the mobilization of food reserves from the endosperm to support embryo growth and differentiation was estimated by means of a starch zone-clearing assay for the α-amylase activity. In this process, the α-amylase activity is stimulated by GA3 and repressed by ABA, thus allowing, by inclusion of the grd2 mutant, the assessment of whether the ANE fractions contain compounds with GA-like effects.
Both the aqueous and the organic subfractions of the ANE strongly stimulated seed emergence, shoot length, and shoot DW at 1 g L−1, which are very important for seedling establishment, growth, and development in barley, but they did not affect the root development (Rayorath et al. 2008b). To assess which type of component present in the ANE was responsible for these activities, the α-amylase assay was performed under different conditions. For both the wild-type barley and the grd2 mutant, the organic fractions, but especially the aqueous solution, induced the α-amylase activity, suggesting that the active component in the ANE had GA-like functions. In support of this finding, this capacity was lost or strongly reduced in the organic fractions when activated charcoal was added or when the ANEs were autoclaved, respectively, hinting at an organic and thermolabile nature of the active component. When the α-amylase activity was stimulated by exogenous GA3, this effect could be completely nullified upon addition of ABA. However, the combination of ANEs with ABA still induced the α-amylase activity, albeit to a lesser extent than ANE alone. As liquid chromatography-tandem mass spectroscopy did not reveal any detectable GA3 in the ANEs, these data indicate that the ANEs contained compounds with GA-like effects that acted in a GA-independent manner. The effect of a soluble alkaline ANE powder (Acadian Seaplants Ltd) on the yield and nutritional quality of spinach (Spinacia oleracea) 'Unipack 12' was studied by assaying diverse parameters of 6-day-old seedlings grown on ½MS supplemented with 1% (w/v) sucrose and treated with 100 or 500 mg L−1 ANE (Fan et al. 2013). After 21 days of treatment, the effect on growth was assessed by measuring FW, DW, dry matter content (DMC), total soluble protein, and leaf pigments.
Total antioxidant capacity, total phenolics, total flavonoids, and phenylalanine ammonia-lyase (PAL) and chalcone isomerase (CHI) activities were scored as well in the spinach shoots. Finally, leaf tissues of 30-day-old plants were analyzed by semi-qRT-PCR to evaluate the expression of genes involved in antioxidant activities. Only treatment with the lowest ANE concentration resulted in an increase in the plant biomass (FW, DW, and DMC), in the contents of total protein, chlorophyll, phenolics, and flavonoids, and in the antioxidant activity. Whereas the PAL activity was not affected by the ANE treatment, the CHI activity was induced. Of the antioxidant genes tested, the expression of the sucrose phosphate synthase (SPS), plastid glutamine synthetase (GS2), dehydroascorbate reductase (DHAR), and stromal ascorbate peroxidase (sAPX) genes remained unchanged. The increased biomass and total protein content might be associated with the enhanced cytosolic GS1 expression involved in nitrogen assimilation. In contrast, the augmented chlorophyll content could be correlated with an increase in betaine content in the plants caused by the ANE-induced expression of betaine aldehyde dehydrogenase (BADH) and choline monooxygenase (CMO) that could act additively to the betaine and cytokinin-like ANE components. Finally, the increased total phenolic and flavonoid contents and enhanced antioxidant capacity might be related to the high CHI activity and upregulated expression of glutathione reductase (GR), thylakoid-bound APX (tAPX), and monodehydroascorbate reductase (MDHAR). Thus, ANE application at the early growth stage of spinach induces particular physiological responses, probably through the phenylpropanoid and flavonoid pathways, that not only stimulate plant growth, but also contribute to an enhanced nutritional quality (Fan et al. 2013).
The molecular mechanism of the growth-stimulating effect of AZAL5, an aqueous solution prepared by microrupture under acidic conditions from freshly harvested A. nodosum plants, was studied by means of a transcriptomics approach on rapeseed (Brassica napus) 'Capitol', grown hydroponically under greenhouse conditions (Jannin et al. 2013). One-week-old plants were treated with 67 mg L−1 AZAL5 (day 0) for 30 days (refreshed every 2 days) and different parameters of root and shoot tissues were measured after 1, 3, and 30 days. The determination of the elemental and hormone composition of AZAL5 revealed that C, H, and O were the main elements, whereas Ca, K, Mg, Na, and S, but not N, were present as well. Although the contribution of AZAL5 to the mineral supply of the hydroponics solution was negligible, 30 days of AZAL5 treatment resulted in a significant increase in the total DW that was attributed more to the root DW than to that of the shoot. A transcriptomics analysis on a rapeseed Gene Expression Microarray 4 × 44 K containing 31,561 genes, of which 60% were unidentified (5-fold change cutoff), did not uncover differential gene expression after 1 day of treatment. After 3 days of treatment, 724 genes were differentially expressed in the shoot and 298 genes in the root, and after 30 days of treatment, 612 differential genes were recorded in the shoot and 439 genes in the root. Classification of the differential genes with the DFCI Gene Index annotation tool (http://compbio.dfci.harvard.edu/tgi/tgipage.html) revealed that almost the entire plant metabolism was modified by the AZAL5 treatment. General cell metabolism, carbon metabolism and photosynthesis, stress responses, and nitrogen and sulfur metabolism were affected, whereas a few differential genes were allocated to metabolic pathways involved in fatty acids, phytohormones, senescence, plant development, and ion transport.
To correlate the differential gene expression with physiological changes in the AZAL5-treated plants leading to growth stimulation, mineral and ion analyses were done (total N, total S, nitrate, and sulfate) and the nitrate reductase activity, chlorophyll concentration, and net photosynthetic rate were determined. After 30 days of AZAL5 treatment, the N content was significantly increased in shoots and roots, coinciding with an enhanced nitrate uptake, resulting from the improved growth (same DW increase rate) and an upregulation of BnNRT1.1 and especially BnNRT2.1 expression. Nitrate reductase activity was also induced by AZAL5, but only in the shoots. Similarly, the S content and sulfate level significantly increased in shoots and roots, but the increase rates were much higher than those of DW, hinting at the AZAL5-induced activation of S uptake. In agreement with these findings, the sulfate transporter genes BnSULTR1.1, BnSULTR1.2, BnSULTR4.1, and BnSULTR4.2 and a serine acetyltransferase gene were all expressed at higher levels in both tissues. Additionally, genes encoding an ATP sulfurylase and glutathione S-transferases of the Tau and Phi classes were upregulated only in the shoot. AZAL5 increased the chlorophyll content, but not the net photosynthetic rate, in line with the downregulation of genes involved in photosynthetic pathways in the shoot. Fluorescence confocal and transmission electron microscopic analyses revealed that the AZAL5 treatment augmented the number of chloroplasts and starch granules, the latter of which had an increased size, implying that the dark reactions of photosynthesis leading to C assimilation and starch synthesis were enhanced. This conclusion was supported by the increased expression of ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco) and carbonic anhydrase. Together, all the data indicate that, in response to the AZAL5 treatment, the N taken up by the rapeseed roots is assimilated directly for growth and is not stored.
In contrast, the AZAL5-stimulated S uptake exceeded the growth demand, resulting not only in assimilation, but also in sulfate accumulation (Jannin et al. 2013). Finally, although Jannin et al. (2013) focused on photosynthesis and nitrogen/sulfur metabolism, they also carried out a hormone profiling of the AZAL5 solution and the AZAL5-treated plants. In the AZAL5 solution, low levels of IAA, ABA, and 2-iP were detected. The IAA level was 20-fold lower than that of the Canadian ANEs, whereas the amounts of ABA, 2-iP, and isopentenyl adenosine (iPR) were threefold higher, threefold lower, and 30-fold lower, respectively (Wally et al. 2013). Hormone profiling of the shoots and roots at 1, 3, and 30 days after AZAL5 treatment revealed that the IAA and ABA contents did not differ from the controls. These findings are in agreement with the downregulation of a putative auxin response factor gene and a putative 9-cis-epoxycarotenoid dioxygenase and neoxanthin cleavage enzyme-like protein gene implicated in ABA biosynthesis in shoots. However, the unaltered auxin levels in the roots were not consistent with the large increase in the root biomass and the upregulation of a putative adventitious rooting-related oxygenase and an auxin-induced protein in root tissues, hinting at an increased auxin response. In contrast to the data obtained for Arabidopsis (Wally et al. 2013), the bioactive cytokinin bases Z and 2-iP were found in rapeseed shoot tissues, but their biosynthetic precursors occurred at over 100-fold lower concentrations (Jannin et al. 2013). The Z content in AZAL5-treated shoots was higher only after 3 days, whereas the trans-zeatin riboside (tZR) content was lower at all three time points measured, and the amounts of cis-ZR (cZR), 2-iP, and iPR did not differ.
Interestingly, at the gene expression level, the cytokinin perception seemed to be upregulated, whereas a putative cytokinin degradation CKX gene and several senescence-associated proteins were downregulated, implying an enhanced cytokinin response in the shoot, consistent with the increased chlorophyll content and chloroplast number and the improved growth. In roots, the Z levels were comparable with those of control plants, but the ZR and cZR levels were lower upon the AZAL5 treatment. The iPR level was higher after 1 day of AZAL5 treatment, remained unchanged after 3 days, and was lower after 30 days than that of the control. However, the 2-iP levels were extremely enhanced upon the AZAL5 treatment at all three time points (up to almost 600% after 3 days of treatment), but none of the differentially expressed genes pointed toward an increased cytokinin response, in agreement with the root system expansion upon the AZAL5 treatment. As a continuation, the impact of AZAL5 on biofortification in hydroponically grown rapeseed was analyzed in the same experimental setup by measuring the level of micronutrients in root and shoot tissues, especially Cu, Fe, Mg, and Zn, combined with a microarray transcriptomics and RT-qPCR analysis (Billard et al. 2014). Accordingly, after 30 days of treatment with AZAL5, the significantly increased total DW was attributed mainly to the root and less to the shoot DW and the number of chloroplasts was higher in the treated than in the untreated shoots. Nutrient analysis revealed that the relative amounts of macronutrients (N, K, S, P, and Mg) and micronutrients (Fe, Na, Mn, B, Si, Zn, and Cu) increased upon AZAL5 treatment. The uptake of Si, P, and N (designated group II elements) was in accordance with the growth rate, whereas that of Na, Mn, Cu, and S (group I nutrients) exceeded that required for growth, thus leading to accumulation. 
The concentrations of K, Fe, and Zn (group III elements) did not change for the whole plant, but differed for shoots and roots, suggesting that the AZAL5 treatment affected the root-to-shoot translocation within the plants. In contrast to the earlier study, the microarray analysis revealed 200 differentially expressed genes as early as 1 day after the AZAL5 treatment, increasing to 1630 genes after 3 days and 1717 genes after 30 days, hinting at a stronger impact in this experiment (Billard et al. 2014). In agreement with the increased N and S contents in AZAL5-treated plants, the transporter genes BnNRT1.1, BnNRT2.1, BnSULTR1.1, and BnSULTR1.2 were upregulated. Similarly, the expression of the Cu2+ transporter gene COPT2 was upregulated both in root and shoot tissues, especially after 3 days of AZAL5 treatment, corresponding with the increased Cu concentrations. Furthermore, the downregulation of COPT2 after 30 days of treatment illustrates the establishment of a negative feedback to prevent Cu2+ toxicity. The expression of the Fe2+ transporter gene IRT1 did not change in response to AZAL5 in roots or shoots at any time point, supporting a steady-state root uptake. Despite the increased Mg2+ concentrations, no differential expression was recorded for the Mg2+ transporter gene MRS-10. The transient upregulation of the less specific efflux transporter NRAMP3 in roots only after 1 day of AZAL5 treatment might be the reason for the enhanced root-to-shoot translocation of Fe2+ and possibly Zn2+. As the hormone and nutrient levels in AZAL5 were too low to account for the observed effects on rapeseed, Billard et al. (2014) concluded that macromolecules, such as the polysaccharides laminaran or fucoidan, or a synergistic activity of various compounds might trigger the responses. In that context, it is interesting to note that the transcriptional modification induced by the humic acid extract HA7 was 50% in common with that triggered by AZAL5 in rapeseed (Billard et al.
2014), whereas only a 13% overlap in the differentially expressed genes was found when the effect of two ANEs was compared on the Arabidopsis gene expression (Goñi et al. 2016).

Molecular analysis of ANE-induced freezing stress tolerance in Arabidopsis and tobacco

Although the positive impact of the application of seaweed extracts to alleviate the detrimental effects associated with diverse abiotic stresses has been documented for a long time in many plant species (Van Oosten et al. 2017), only a few studies aimed to unravel the molecular mechanisms underlying this effect. To get insight into the ANE-induced freezing tolerance, Rayirath et al. (2009) and later Nair et al. (2012) used an in vivo peat pellet-freezing assay with Arabidopsis Col-0 plants. An aqueous solution and diverse organic fractions (MeOH fraction and sequential subfractions with hexane, chloroform, and EA) of a powdered alkaline ANE (Acadian; Acadian Seaplants Ltd) were utilized. Three-week-old plants grown on Jiffy-7 pellets under greenhouse conditions were irrigated with 20 mL of the aqueous solution and organic extracts (1 g L−1) and 48 h later the temperature was lowered by 1°C per day until the desired low temperature was reached. Two days after the plants had been returned to the regular temperature regime (22°C/18°C day/night), the freezing damage was scored by means of diverse approaches. In the initial study (Rayirath et al. 2009), the degree of chlorosis and leaf damage was visually evaluated, the chlorophyll content and electrolyte leakage were measured, the macroscopic tissue damage was visualized by Trypan Blue staining, and membrane integrity was assessed with Nile Red staining via fluorescence microscopy.
Treatment with the ANE, but especially with the lipophilic component-containing EA fraction, induced systemic physiological responses that provided a considerable protection against freezing damage at the whole plant level: viability at −4.5°C was increased by 40-60%, the damaged tissue area was reduced by 30-40%, the plasma membrane integrity and tissue organization were maintained, and the temperature that caused 50% electrolyte leakage was lowered by 3°C. A two-step RT-PCR revealed that the threefold higher chlorophyll content in the ANE-treated plants correlated with a reduced expression of two chlorophyllase genes (AtCLH1 and AtCLH2), involved in chlorophyll degradation. Additionally, by means of an RT-qPCR analysis, a twofold upregulation was detected of the transcription factor DREB1A/CBF3 and its target cold response genes COR15A, encoding a chloroplast stromal protein with cryoprotective activity, and COR78/RD29A, a key regulator of drought, salinity, and low temperature responses. This signal transduction cascade could possibly actively induce downstream genes implicated in freezing tolerance. The follow-up study revealed an important role for osmoprotectants, such as proline and soluble sugars, in the improved freezing tolerance of ANE-treated plants (Nair et al. 2012). Application of the ANE, and particularly of its lipophilic fraction, resulted in the accumulation of proline and soluble sugars, but could not protect the proline biosynthesis-deficient mutant p5cs-1 and the sugar accumulation-defective mutant sfr4 against freezing damage. Metabolite analysis with two-dimensional nuclear magnetic resonance confirmed the increase in sugars and sugar alcohols in plants treated with the lipophilic ANE fraction prior to freezing and, additionally, indicated an accumulation of unsaturated fatty acids, possibly related to the altered membrane fluidity that enhances membrane integrity and cellular function during freezing.
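The "temperature causing 50% electrolyte leakage" quoted above is typically estimated by interpolating a leakage-versus-temperature curve. A minimal sketch of that calculation, with invented temperatures and leakage values (not data from the cited studies), could look like this:

```python
# Minimal sketch of estimating the temperature at which 50% electrolyte
# leakage occurs (EL50) by linear interpolation between measured points.
# All temperatures (°C) and relative leakage values (%) are invented.

def el50(temps, leakage, target=50.0):
    """Interpolate the temperature giving `target` % electrolyte leakage.

    `temps` must be ordered from warmest to coldest, with `leakage`
    increasing as the temperature drops.
    """
    for (t1, l1), (t2, l2) in zip(zip(temps, leakage), zip(temps[1:], leakage[1:])):
        if l1 <= target <= l2:
            # linear interpolation between the two bracketing points
            return t1 + (target - l1) * (t2 - t1) / (l2 - l1)
    raise ValueError("target leakage not bracketed by the measurements")

temps   = [-2.0, -4.0, -6.0, -8.0]   # freezing temperatures tested
control = [20.0, 45.0, 70.0, 90.0]   # % leakage, untreated plants
treated = [10.0, 25.0, 55.0, 80.0]   # % leakage, ANE-treated plants

shift = el50(temps, treated) - el50(temps, control)  # ANE-induced EL50 shift (°C)
```

A negative shift, as in this toy example, corresponds to the kind of EL50 lowering reported for the ANE-treated plants.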
The subsequent assessment of global transcriptional changes (≥1.5-fold) elicited by the lipophilic ANE fraction by means of the GeneChip ATH1 Affymetrix microarrays revealed that approximately 5% (1113 genes) of the genome was affected in plants harvested during freezing stress and only 1.65% (398 genes) after the freezing treatment, with a small overlap between both sets of transcripts. Overall, the trend in the different functional up- and downregulated classes during and after freezing was opposite, suggesting a specific ANE mode of action during freezing stress. The accumulation of proline was supported by the upregulation of the proline biosynthesis genes P5CS1 and P5CS2 and the downregulation of the proline degradation gene ProDH. Furthermore, the increased levels of soluble sugars seemed to be achieved by several mechanisms, including the upregulation of polysaccharide degradation genes (such as SEX1 and SEX4/DSP4 and MUR4, involved in starch and galactose degradation, respectively) and of soluble carbohydrate (glucose, fructose, and raffinose/stachyose) biosynthesis genes (such as GOLS2 and GOLS3), and the downregulation of pathways involved in sucrose degradation (At1g12240). In agreement with the metabolome data, the lipid metabolism was also among the major pathways affected by the ANE treatment during freezing, as illustrated by the upregulation of the digalactosyldiacylglycerol synthase-encoding gene DGD1 involved in galactolipid biosynthesis. Additionally, 40 differentially expressed genes related to low temperature stress tolerance, osmotic stress, biotic stress, and the balance control of diverse hormones were induced, namely genes coding for salicylic acid (SA) biosynthesis (At1g18870), spermine/spermidine biosynthesis (At5g15950), and cytokinin conjugation (UGT73B2, UGT76C1/2, At2g43820, and At1g24100), whereas the GA2ox1 gene involved in gibberellin inactivation was repressed. Zamani-Babgohari et al.
(2019) tested the effect of ANEs in mitigating freezing stress in Nicotiana tabacum L. (tobacco) cv. Bright Yellow-2 (BY-2) suspension cells. BY-2 cells grown at 27°C in the presence of the alkaline-extracted ANE Acadian (Acadian Seaplants Ltd), at concentrations of 0, 0.01, 0.05, and 0.1 g L−1, were subjected to 0, −3, or −5°C conditions for 24 h and then allowed to recover. Importantly, when no freezing stress condition was imposed, no positive effect on the biomass was observed. In contrast, at the highest concentration (0.1 g L−1), the biomass was even significantly reduced. However, under freezing stress the DW of ANE-treated cells had increased after 3 days of recovery, although, despite extensive variations in the measurements, the growth attained was noted to be up to 5 times smaller than in the control experiment without freezing stress. Furthermore, for the −5°C condition, the cell viability was estimated to be 77% for cultures treated with 0.1 g L−1 ANE, whereas it was less than 20% in the control group. In cells treated with 0.1 g L−1 ANE, the ion leakage was 37% after recovery at day 7, and 53 and 65% when treated with 0.05 and 0.01 g L−1, respectively. qRT-PCR analysis showed a lower expression of Activating protein 2, betaine aldehyde dehydrogenase (BADH), glutathione S-transferase, and fucosyltransferase in ANE-treated cells compared with the control, indicating that these cells experienced less stress than under the control condition. In contrast, the expression of galactinol synthase 2 was upregulated under freezing stress in the ANE-treated groups, indicative of an enhanced defense against stress, whereas DGD1 was upregulated only after 6 h of recovery in the control plants and pyrroline 5-carboxylate synthase (P5CS), involved in proline biosynthesis, was induced 2 and 6 h after cold stress in ANE-treated and control plants, respectively.
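The up- and downregulation levels reported in qRT-PCR experiments like these are conventionally derived from threshold-cycle (Ct) values with the 2^(−ΔΔCt) method of Livak and Schmittgen, normalizing the target gene to a reference gene in treated versus control samples. A minimal sketch of that calculation, with invented Ct values:

```python
# The 2^(-ΔΔCt) method for relative gene expression from qRT-PCR
# threshold-cycle (Ct) values. All Ct values below are invented examples,
# not data from the cited studies.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression of a target gene, treated vs control,
    normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: the target amplifies 3 cycles earlier after treatment while the
# reference gene is unchanged, corresponding to a 2^3 = 8-fold induction.
fc = fold_change(ct_target_treated=22.0, ct_ref_treated=18.0,
                 ct_target_control=25.0, ct_ref_control=18.0)
```

The same arithmetic underlies the fold-induction values quoted throughout the qRT-PCR studies reviewed in this section.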
The acetyl-CoA carboxylase gene was induced early in cultures under ANE treatment (after 4 h), but it was also upregulated after two additional hours of recovery in the control group. For the sake of comparison (cf. Rayirath et al. 2009 and Nair et al. 2012), it would have been interesting to investigate whether the lipophilic fraction of the ANE might have been efficient in mitigating cold stress in this experimental setup as well. Altogether, these studies illustrated that the ANE-induced physiological changes lead to protection against freezing damage and provided a foundation for the underlying molecular basis, but without functional analyses, it is difficult to draw strong conclusions from these data. Additionally, although both the aqueous and lipophilic fractions clearly contain components responsible for these changes, the identity of the elicitors and the subsequent signal transduction response still await elucidation.

Molecular analysis of ANE-induced drought stress tolerance in Arabidopsis and crops

To investigate the molecular basis of the positive impact of ANE applications on the photosynthetic performance during drought stress, Santaniello et al. (2017) grew Arabidopsis Col-0 plants in a hydroponics system. After 20 days, the plants were treated for 5 days with 3 g L−1 of ANE (soluble acidic extract powder provided by Algea, Kristiansund, Norway), whereafter they were removed from the hydroponic solution and placed on filter paper for 4 days to induce water stress. Several physiological parameters were measured during the dehydration period, including the relative foliar water content, gas exchange, and chlorophyll fluorescence. At the same time, the expression of 14 relevant marker genes involved in these processes was determined by real-time qPCR. Whereas 90% of the untreated plants died within 4 days of treatment, almost all ANE-treated plants survived, maintained a 90% relative water content, and exhibited an improved water use efficiency.
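Relative water content, one of the physiological parameters tracked in such dehydration experiments, is conventionally calculated from the fresh, fully turgid, and dry weights of a leaf sample. A minimal sketch of the standard formula, with invented sample weights:

```python
# Relative water content (RWC) as conventionally defined:
#   RWC (%) = (FW - DW) / (TW - DW) * 100
# where FW = fresh weight, TW = turgid (fully rehydrated) weight,
# and DW = dry weight. The sample weights (g) below are invented.

def rwc(fw, tw, dw):
    """Relative water content of a leaf sample, in percent."""
    if not (dw < fw <= tw):
        raise ValueError("expected DW < FW <= TW")
    return (fw - dw) / (tw - dw) * 100

# e.g. a well-hydrated leaf vs a strongly dehydrated one
hydrated_rwc   = rwc(fw=0.46, tw=0.50, dw=0.10)   # ≈ 90 %
dehydrated_rwc = rwc(fw=0.22, tw=0.50, dw=0.10)   # ≈ 30 %
```

Subtracting the dry weight in both numerator and denominator makes the measure independent of leaf size, which is why it is preferred over raw fresh-weight comparisons in drought assays.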
Interestingly, even prior to the water stress application, the ANE treatment resulted in a partial stomatal closure, decreasing the stomatal conductance by 55% and the transpiration rate by 53%. An ANE pretreatment also caused an overall reduced expression of NCED3 (At3g14440), involved in ABA biosynthesis, and of MYB60 (At1g08810), a transcription factor involved in stomatal regulation, and an increased expression of the ABA-responsive genes RAB18 (At5g66400) and RD29A (At5g52310). Altogether, these data indicate that an ANE pretreatment primed the plants for an improved survival. Indeed, 2 to 3 days after the water stress imposition, the expression of NCED3, RAB18, and RD29A was highly induced in the absence of ANE. In contrast, due to the ANE pretreatment, the expression of the four marker genes remained largely unchanged. Whereas the photosynthetic CO2 uptake was not modified by the ANE pretreatment, the intercellular CO2 concentration was reduced. Upon water stress, the CO2 assimilation rate and the mesophyll CO2 conductance in untreated plants decreased sharply from 3 days onward, together with a significant increase in the intercellular CO2 concentration. These processes were accompanied by a decrease in the expression of the photosynthesis-related genes RBCS1A (At1g67090) and RCA (At2g39730) and of PIP1;2 (At2g45960) and βCA1 (At3g01500), which are implicated in the regulation of CO2 diffusion in the mesophyll. These physiological and molecular responses were strongly attenuated in the ANE-treated plants, suggesting that the drought-induced damage to the photosynthetic apparatus was prevented. In line with these findings, the ANE pretreatment resulted in the maintenance of a nearly optimal potential efficiency of the photosystem II (PSII) photochemistry (Fv/Fm) and an enhanced nonphotochemical quenching.
Additionally, an increased expression of PsbS (At1g44575) and VDE (At1g08550) in the ANE-treated plants during dehydration hinted at an efficient energy dissipation, and the enhanced DRF (At5g42800) and SOD (At1g08830) expression at the activation of the antioxidant defense system that prevented oxidative damage to PSII. Although the modification of molecular pathways associated with improved drought tolerance upon ANE treatment has been demonstrated (Santaniello et al. 2017), as long as the ANE metabolites triggering these changes are not identified, the exact mechanisms involved remain elusive. The microarray analysis carried out by Goñi et al. (2016) to unravel the underlying molecular basis of ANE-induced plant growth stimulation in Arabidopsis had revealed an upregulation of the glutaredoxin family genes (At1g0320 and GRXC2) and the cold-regulated gene COR15A. In agreement with the priming effect uncovered by Santaniello et al. (2017), ANE A and ANE B probably affected the enzymatic antioxidant system in plants, preparing them for future abiotic stress conditions. ANE treatment-improved drought tolerance was also reported in soybean (Shukla et al. 2017). Germinated soybean 'Savana' plants grown in an environmental chamber were pretreated for 3 weeks with 7 mL L−1 Acadian (two applications of 100 mL in the first 2 weeks and a final application until soil saturation in the third week), whereafter the irrigation was stopped. Three stages were analyzed: before stress (22 h after soil saturation), during stress (75 h after soil saturation), and during recovery (89 h after soil saturation, when the plants were irrigated and the analysis was done 8 h later). The ANE application resulted in a reduced wilting during the drought stress and in an enhanced recovery ability. Additionally, both under the drought conditions and during recovery, the ANE-treated plants exhibited a 46% higher stomatal conductance and a 20-27% higher reactive oxygen scavenging activity than control plants.
Real-time qPCR of drought-associated marker genes supported a role for ABA in the improved stomatal conductance. Indeed, the expression of the GmCYP707A1a and GmCYP707A3b genes, both involved in ABA catabolism, was induced during drought stress and in the recovery phase, respectively. Furthermore, the expression of the ABA-inducible GmDREB1B and the BURP domain protein-encoding GmRD22 had increased especially during drought stress, whereas the expression of the ABA-independent stress-responsive gene GmRD20 remained unaltered. Similarly, the induced expression of the ABA-responsive fibrillin gene FIB1a, both during and after drought imposition, was in line with the improved photoprotection, and the increased expression of the aquaporin gene GmPIP1b during the rehydration phase in the ANE-treated plants was correlated with the maintenance of internal water movement and the consequently improved recovery. Besides the modified expression of these ABA-related genes, the ANE treatment also induced the expression of the glutathione S-transferase gene GmGST, the molecular chaperone GmBIP, and the antiquitin-like GmTP55, all implicated in reactive oxygen species detoxification. In contrast to the findings in Arabidopsis (Santaniello et al. 2017), no evidence for a priming effect of the ANE pretreatment was obtained in soybean (Shukla et al. 2017). Of note, in both studies the nature of the extracts used was different (soluble extract powder prepared at an acidic pH (Algea) vs alkaline-extracted Acadian), as was the experimental setup (hydroponics vs soil) and the ANE concentration (3 g L−1 vs 0.5 g L−1), making comparisons problematic. ANE-mediated drought stress alleviation in tomato plants has been studied with three different ANEs: ANE A was extracted at neutral pH, whereas ANE B and ANE C were alkaline extracts (Goñi et al. 2018).
All assessed components of their chemical composition (such as solids, ash, sulfate, uronic acid, fucose, polyphenol, laminarin, and mannitol) varied significantly. The amount of unidentified organic components was below 20% for ANE A and ANE B, but close to 30% for ANE C. For the experimental setup, 35-day-old tomato plants cv. Moneymaker were used. The first sampling of leaf tissue was done before the first ANE application (T0); 24 h after spraying with 0.33% ANE, drought stress was induced by withholding water for 7 days, whereafter the leaf tissue was sampled (T1). For the 2-week recovery phase, plants were rewatered and 24 h later the ANE treatment was applied for the second time, followed by the third sampling after 48 h (T2). The final leaf sample was taken at the end of the recovery stage (T3). At the end of the growth period, the effect of the ANEs on the alleviation of drought stress was assessed. ANE A and ANE C increased the FW and DW by 25 to 30%, but plants treated with ANE B were almost identical in size to the untreated stressed control plants. Lipid peroxidation was assessed with malondialdehyde (MDA), a lipid peroxidation marker. Under drought stress, the MDA content was approximately 30% lower in plants treated with ANE B and ANE C than that in the control, whereas this decrease was slightly smaller, but still significantly different from the control, with ANE A. Over the different time points, the chlorophyll content was the highest with ANE A at T1 and T3, whereas with ANE C the chlorophyll level was the same as that of the control and even decreased by 7.5% at T2. Additionally, the amounts of proline, glucose, and sucrose also significantly increased under drought stress (T1) with the ANE A treatment and, in the post-drought stress period (T2), the proline level in the plants treated with ANE A and ANE C was higher than that in the control.
Finally, molecular data were obtained on the impact of the ANE treatment on plant dehydrins, which are important players in the adaptation to abiotic stress, using a polyclonal serum raised against the K segment of the dehydrins and a qRT-PCR analysis of the dehydrin tas14 gene. With the serum, eight different polypeptide bands ranging from 15 kDa to 38 kDa were recognized and only the ANE A treatment increased the 32, 18, and 15 kDa dehydrin levels in drought-stressed plants. The expression of the dehydrin tas14 gene, on the other hand, was upregulated by all ANEs, but whereas the ANE A treatment resulted in an 8-fold increase, plants treated with ANE B and ANE C only exhibited a little over 2-fold induction. Altogether, these studies illustrate the difficulties of aligning phenotypical data with the composition of ANEs, which is aggravated by the fact that a relatively large amount of components, which may or may not play a role in the observed phenotypes, cannot be characterized. Furthermore, even though some ANEs may show a very similar phenotype, such as drought stress alleviation, the specifics of the stress-related parameters and, hence, the underlying mechanisms, may differ. Likewise, although some of the observed stress-related parameters may have similar values, the phenotypical outcome may be very different.

Molecular analysis of ANE-induced salinity stress tolerance in Arabidopsis

Based on undisclosed results of a microarray analysis of Arabidopsis Col-0 plants subjected to 125 mM NaCl in the presence or absence of the organic fraction of an alkaline ANE, genes not previously linked to salinity stress tolerance, of which the expression was downregulated in the presence of ANEs, were functionally analyzed by means of knockout mutants. As such, At1g62760, encoding a putative pectin methylesterase inhibitor, was identified as a novel negative regulator of salt tolerance (Jithesh et al. 2012).
In a subsequent study, 2-week-old Arabidopsis Col-0 plants subjected to 100 and 150 mM NaCl for 24 h were treated with a 1 g L−1 equivalent of a methanolic ANE fraction (MEA) (Jithesh et al. 2018). As compared with the control, the plants treated with 150 mM NaCl and MEA had a larger leaf area (+37.3%), an increased plant height (+33%), more leaves (+33%), and a higher biomass (+57%), whereas the increases were slightly lower in the plants treated with 100 mM NaCl and MEA. The methanolic fraction was further subfractionated into water-soluble, chloroform, and ethyl acetate (EA) fractions. Again, when compared with the control, plants treated with 150 mM NaCl and EA had a larger leaf area (+62%), an increased plant height (+48%), a higher number of leaves (+45%), and an increased biomass (+52%). As EA mitigated the NaCl stress better than any other subfraction, a microarray analysis was conducted with plants treated with this extract. In the EA treatment, 184 and 257 genes were upregulated and 91 and 262 genes were downregulated on day 1 and day 5, respectively. On day 1, the transcripts for the late embryogenesis abundant 3 family (LEA3, At1g02820) and the transcription factor CIRCADIAN CLOCK ASSOCIATED 1 (CCA1, At2g46830) were the most strongly induced. The finding that LEA3 and LEA1 (At5g06760) were induced upon treatment is in agreement with the results of Goñi et al. (2016). CCA1, and by extension the circadian rhythm mechanism in plants, has been linked before to abiotic stress (Grundy et al. 2015). Additionally, dehydration-responsive or DRE-binding proteins were also induced, as well as the glutathione S-transferase-encoding gene At5g172200. Downregulated transcripts included cellulose synthase (At1g55850), the auxin-responsive gene At2g23170, the pectin methylesterase inhibitor protein At1g62760, and several heat shock proteins. Also on day 5, transcripts of LEA1 and the LEA2 group were induced.
Additionally, the genes AtHVA22b (At5g62490) and Di21 (At4g15910), involved in ABA signaling, were upregulated. Furthermore, several lipid transfer proteins (LTPs), which are known to play a role in (a)biotic stress(es), were also induced (Xu et al. 2018). Downregulated genes included WAK1, pectin esterases (At1g53840 and At3g10720), and pyruvate dehydrogenase (PDH, At3g30755), the latter playing a role in proline accumulation under osmotic stress. The putative role of microRNAs (miRNAs) in the regulation of ANE-triggered changes during salinity stress has also been studied. miRNAs silence genes post-transcriptionally by targeting mRNA degradation or by interfering with translation, and their expression is often induced by various abiotic stresses (Shriram et al. 2016). Arabidopsis Col-0 plants were grown on Jiffy-7 peat pellets for 3 weeks prior to the application of four conditions (20 mL per plant, once a week for 3 weeks): water (irrigated control), ANE treatment, NaCl-supplemented ANE treatment, and NaCl treatment. The ANE used was the commercially available Acadian Marine Plant Extract Powder (a KOH extract) prepared at 0.1% (w/v). The salinity stress was achieved with 150 mM NaCl. The ANE-treated plants under saline stress had a higher FW and DW than plants subjected to saline stress without ANEs, illustrating an improved stress tolerance. At 6 and 12 h post treatment, small RNAs were isolated and sequenced via the Illumina platform, revealing that 106 miRNAs were differentially expressed in at least one of the treatments and/or time points. Overall, for several miRNAs, the ANE treatment seemingly attenuated the salinity stress-triggered induction level. Putative in silico predicted targets of the miRNAs modulated by the ANE treatment or by the ANE-mediated salinity tolerance included transcription factors and other genes involved in very diverse biological processes.
For 12 salinity stress-modulated miRNAs, the expression of 14 of their putative target genes could (partially) explain particular processes implicated in this stress mitigation. For instance, the ANE treatment in the presence of NaCl enhanced the expression of ath-miR396a-p, which downregulates the AtGRF7 transcription factor, which, in turn, resulted in an increased expression of AtDREB2a and AtRD29, thus contributing to salinity tolerance enhancement. In contrast, the expression of ath-miR169-5p was transiently repressed by the combined treatment of ANE and NaCl, increasing the expression of its target transcription factors AtNFYA1 and AtNFYA2, with an improved stress tolerance as a result. Interestingly, ANE-treated plants had a lower Na+ and a higher P content in the presence of NaCl than plants treated with NaCl alone, implying that the ANE treatment can prevent the phosphate uptake impairment typically associated with drought and salinity. The NaCl-induced expression of ath-miR399a, ath-miR399b, ath-miR399c-3p, and ath-miR399c-5p, which are involved in the regulation of phosphate starvation responses, was strongly attenuated by the ANE treatment, leading to an increased expression of the target genes AtUBC24 and AtWAK2. Similarly, the high expression of at-miR827 in the presence of NaCl was reduced by the ANE treatment, also increasing the expression of its target AtNLA gene under these circumstances. These studies demonstrated that the alleviation of salt stress when different ANEs are applied results from the contribution of a multitude of different stress-related genes (Jithesh et al. 2018) and involves phosphate homeostasis.

Molecular analysis of ANE-induced biotic stress resilience in Arabidopsis and crops

For most physiological effects induced by seaweed extracts, the bioactive molecules responsible for the initiation of the responses are unknown.
In contrast, some of the elicitors that trigger biotic stress tolerance in a wide variety of plants have been identified (Vera et al. 2011). Indeed, especially the major cell wall polysaccharides of brown algae, such as alginates and fucans, the principal storage polysaccharide laminarin, and their derived oligosaccharides have been shown to induce an oxidative burst and defense signaling pathways mediated by SA and jasmonic acid (JA)/ethylene (Eth) in plants. In turn, these responses result in the accumulation of antimicrobial pathogenesis-related proteins, defense enzymes, and phenolics that culminate in enhanced protection against a broad range of pathogens. Nonetheless, the specific receptors of these elicitors remain to be determined and, to our knowledge, genome-wide analyses to get a holistic view on the molecular basis of ANE-induced biotic stress resistance have not been reported yet. The effect of ANE treatments was analyzed on disease development imposed by the hemi-biotrophic Pseudomonas syringae pv. tomato DC300 (Pst DC300), the causative agent of bacterial speck, and on the expression of defense markers in Arabidopsis (Subramanian et al. 2011). Wild-type Col-0 plants and mutants in the SA and JA/Eth defense pathways were grown in Jiffy-7 peat pellets for 3 weeks in a growth chamber and irrigated with different ANE solutions (25 mL of 1 g L−1 solutions). In total, three different extracts were used: an aqueous solution and two organic subfractions (sequential fractionation of the MeOH fraction with chloroform and EA). After 48 h of ANE treatment, fully expanded leaves were pressure inoculated with the hemi-biotrophic Pst DC300. All ANE treatments restricted the Pst DC300-induced symptoms to minor chlorosis at the inoculation site and significantly reduced disease severity; the strongest reduction was observed with the EA extract (35 vs 57% of the control disease severity).
The bacterial titers in the ANE-treated plants were lower than those of the untreated controls. Nevertheless, a direct antimicrobial effect was ruled out, because the ANEs themselves stimulated bacterial proliferation under culture conditions. Pst DC300 resistance involves SA-mediated defense pathways (Uppalapati et al. 2007), but analysis of the responses of ANE-treated NahG transgenic plants and ics1 mutants, which do not accumulate or produce SA, respectively, revealed the same disease reduction level as in wild-type plants. The ANE treatment seemingly did not affect the expression of the SA-inducible PR1 gene, as analyzed in 7-day-old PR1::GUS plants grown in vitro and treated with the ANEs for 48 h in liquid medium. RT-qPCR revealed that the expression of PR1 and, more strongly, that of ICS1 were repressed in ANE-treated compared with untreated plants, a downregulation speculatively attributed to laminarin (Mercier et al. 2001). Whereas the Pst DC300 infection did not result in an upregulation of these genes in the control plants, in infected ANE-treated plants the ICS1 expression was moderately and that of PR1 strongly induced, especially with the aqueous ANE treatment. Based on these data, SA-mediated systemic acquired resistance was considered not to be involved in the ANE-induced protection of Arabidopsis against Pst DC300. Concerning the JA/Eth pathway, the ANE treatment of jar1 mutant plants, defective in the formation of the biologically active jasmonoyl-isoleucine (JA-Ile) conjugate, did not reduce the disease severity or the bacterial titers compared with control plants. GUS staining of ANE-treated pAOS::GUS plants and RT-qPCR showed no differential expression of this allene oxide synthase gene implicated in JA-dependent signaling. In contrast, the expression of the JA-responsive defensin PDF1-2 was activated, especially by treatment with the chloroform subfraction, and, when combined with Pst DC300 infection, the PDF1-2 expression was increased even further.
Together, these data suggested that the lipophilic ANE components primed the JA-mediated defense responses and that these defense mechanisms were essential for the ANE-induced systemic resistance. Additionally, simultaneously with the root treatment, shoots received an extra 2 mL spraying with the aqueous ANE, and 48 h later the leaves were inoculated either with mycelial plugs of the necrotrophic white mold-causing Sclerotinia sclerotiorum or with its pathogenicity factor oxalic acid. The ANE treatment increased the resistance against S. sclerotiorum, as evidenced by the delayed appearance and reduced size of the lesions formed, most probably by suppression of the oxalic acid-induced toxicity. Although no further molecular data were provided, the presence of sterols and fatty acids in the ANE was hypothesized to activate nonspecific lipid transfer proteins in the plasma membrane involved in JA-mediated disease signaling cascades (Subramanian et al. 2011). A putative priming effect of biotic stress tolerance by ANEs was also detected in the microarray analyses of Goñi et al. (2016). Indeed, the ANE treatment of Arabidopsis activated PR1, PR5, and WRKY transcription factor-encoding genes, indicating that the SA pathway is enhanced, whereas the upregulation of two putative 2-oxoglutarate-dependent dioxygenases (At5g43450 and At2g25450) and ethylene-responsive transcription factors (ERF2 and ERF72) hinted at JA/Eth pathway activation. The protective effect of ANE pretreatments against the fungal pathogens Alternaria radicina and Botrytis cinerea (Jayaraj et al. 2008), causing black rot and gray mold, respectively, was also analyzed in carrot, either as such or combined with a single fungicide treatment (2 g L−1 chlorothalonil-50%).
Eight-week-old carrot "High Carotene Mass" plants grown in a greenhouse were sprayed with a 0.2% aqueous ANE solution (Acadian Seaplants Ltd) until runoff; after 12 h, samples were taken, and then daily until 4 days after treatment, for the analysis of the expression and activity of defense-related enzymes. Northern blot analyses revealed that the expression of the genes PR1, PR5, the SA receptor NPR-1, lipid transfer protein (LTP), chitinase (U52848), chalcone synthase-2 (CHS-2), and PAL-1 was upregulated from 12 h after treatment until 72 h, whereafter the expression decreased. In agreement with the expression data and following similar kinetics, the chitinase and PAL activities were higher in ANE-treated plants than in control plants, as were the peroxidase, polyphenoloxidase, β-1,3-glucanase, and lipoxygenase (LOX) activities. In line with the enhanced activities of PAL, peroxidase, and polyphenoloxidase, the total phenolic content of the ANE-treated plants had increased. The improved LOX activity was correlated with a clear H2O2 accumulation in response to the ANE treatment. Interestingly, similar molecular and biochemical responses were obtained by treatment with 100 μM SA, but overall the effect of the ANE treatment was stronger and was sustained longer in time. For the analysis of the biotic stress resilience, 6 h after the ANE treatment, the carrot plants were inoculated with conidial suspensions of the two fungal pathogens. The ANE treatment was repeated 10 and 20 days after infection, and disease development was analyzed after 10 and 25 days. Disease development for both pathogens was already reduced by a single ANE treatment, but additional ANE applications resulted in an even lower disease incidence. Importantly, the ANE treatment provided better protection than treatments with SA or fungicides, but the combined application of a single ANE and a single fungicide spray gave the best disease suppression.
Additionally, the ANE-treated plants had the highest biomass, but their nutrient composition was unaltered compared with the other treatments. Thus, the ANE-induced resilience of carrots to two fungal pathogens was seemingly accomplished by SA-mediated systemic acquired resistance, possibly triggered by the oligosaccharide fraction and other unidentified elicitors of the ANE (Jayaraj et al. 2008). The effect of pretreatment with the commercial Stimplex was examined on disease development in cucumber after inoculation with either Alternaria cucumerinum, Didymella applanata, Fusarium oxysporum, or Botrytis cinerea, the causative agents of Alternaria blight, gummy stem blight, Fusarium root and stem rot, and Botrytis blight, respectively (Jayaraman et al. 2011). Cucumber var. sativus plants were grown for 30-40 days in a greenhouse before they were either sprayed, drenched, or sprayed and drenched with 0.5 or 1% ANE solutions. After 12 h, and then daily for 4 days, leaf samples were taken for transcript and enzyme activity analyses. As assessed by Northern blot analysis, the expression of chitinase, LOX, glucanase, peroxidase, and PAL genes was significantly upregulated by all ANE treatments, but the strongest induction at all time points tested was obtained by the combined spraying and drenching. In agreement with the expression data, the chitinase, LOX, glucanase, peroxidase, and PAL activities were the highest in the sprayed and drenched plants, as was the polyphenoloxidase activity. Additionally, in correlation with the increased PAL activity, the total phenolic content in sprayed and drenched plants was higher than that in control plants. For the effect on symptom development, the initial ANE treatment was repeated twice with 10-day intervals. Then, 6 h after the last ANE treatment, the plants were inoculated with conidial suspensions of the fungi.
Additionally, spraying with ANE was combined with a single fungicide treatment (2 g L−1 chlorothalonil-50%) applied 10 days after pathogen infection. For all fungal pathogens, the best disease reduction was obtained with the combined spraying and drenching treatment with 0.5% ANE (70% for Alternaria, 47% for Didymella, 46% for Botrytis, and 88% for Fusarium), but an additional application of the fungicide provided an even better result (75% for Alternaria, 60% for Didymella, 55% for Botrytis). In the case of Fusarium, the fungicide alone already prevented disease development altogether, but the combination with the ANE treatment resulted in increased shoot and root biomasses. Thus, comparable results were obtained for both carrot and cucumber. With the same experimental setup (and astonishingly in nearly the exact same wording), Abkhoo and Sabbagh (2016) analyzed the impact of pretreatment with the commercial ANE Marmarine (International Ferti Technology Corporation, Amman, Jordan) on disease induction by Phytophthora melonis, a damping-off pathogen. Cucumber 'Negin' F1 plants grown for 21 days in a greenhouse were treated by spraying and drenching with 30 mL of Marmarine (0.5 or 1%). Every 24 h for four consecutive days, leaves were sampled for expression analysis and roots for enzyme activity determination. As revealed by qRT-PCR, the expression of the pathogen-induced Cupi4, LOX, PAL, and galactinol synthase GolS genes was activated by all ANE treatments, but most strongly in sprayed and drenched plants. Overall, the expression peaked between 48 and 72 h after Marmarine treatment. In accordance with Jayaraman et al. (2011), the peroxidase, polyphenoloxidase, and β-1,3-glucanase activities, as well as the total phenolics content, were also higher in the ANE-treated plants.
For the pathogenicity tests, after the initial ANE drench treatment, two additional Marmarine treatments were applied with 5-day intervals, and 6 h later the plants were inoculated with zoospore suspensions of P. melonis. Additionally, the combined ANE and fungicide treatment was evaluated by drenching the plants 6 days after pathogen inoculation with 2 g L−1 metalaxyl G5%. Similar to the findings of Jayaraman et al. (2011), the combined spraying and drenching with 0.5% Marmarine resulted in the highest disease reduction (68%) and, although a single fungicide treatment reduced disease incidence even further (close to 80%), the combination with the Marmarine treatment yielded an increased root biomass.

Concluding remarks and perspectives

As an entry point to gain insight into the impact of ANEs on plant growth and stress tolerance, different approaches have been combined, including the evaluation of morphological modifications (such as root and shoot growth or FW and DW), physiological responses (photosynthetic parameters), biochemical analyses (enzyme activities, hormone profiling, and phenolic composition), responsiveness of mutant lines, and/or gene-specific or genome-wide determination of mRNA levels. Overall, it has become increasingly clear that ANEs affect the endogenous balance of plant hormones by modulating hormonal homeostasis, regulate the transcription of a few relevant transporters to alter nutrient uptake and assimilation, stimulate and protect photosynthesis, and dampen stress-induced responses. Despite the progress made, the exact molecular basis of the improved growth and stress adaptation induced by ANE treatment proves difficult to unravel, partly because of the polygenic response implicated in such intricate and dynamic processes and of the complexity of discriminating between direct and secondary effects. For instance, the genome-wide transcriptomic analyses of ANE-induced plant growth promotion (Nair et al. 2012; Jannin et al. 2013; Goñi et al.
2016) illustrated that ANE treatments broadly redirect gene expression in plants, but at the same time that the differential gene sets show very little overlap. Additionally, the achievement of a general model of the ANE action mechanism has been impeded by (i) the diversity of the experimental setups used, such as the type and duration of the treatments and the age and growth conditions of the plants, (ii) the use of different ANE products at different concentrations, (iii) the often incomplete compositional information, (iv) the poor reproducibility due to seasonal fluctuations in algal composition, (v) the occurrence of species-dependent effects, and (vi) the complex relationship between ANE composition and biostimulant activity. Thus, understanding the ANE mode of action is imperative, not only from a fundamental point of view but also from an applied viewpoint, to scientifically support the biostimulant claim for future registration of particular ANE products. Indeed, the increased interest in seaweeds as biostimulants impels the establishment of a regulatory framework. To extend the current knowledge on the mechanisms of ANE action in different phenotypic responses, future research should focus on implementing fractionation techniques in new comprehensive experiments that assess the effect of the major and minor components of the ANE biostimulants singly and in combination, and transcriptome-wide and proteomic studies of ANE treatments should be directed toward the identification of marker genes for the beneficial responses that can be used in biostimulant product development (Calvo et al. 2014). Additionally, by focusing on the very early events during plant-microbe interactions, the putative receptors (and co-receptors) of, and the signaling pathways triggered by, particular bioactive ANE components should be identified to comprehend the simultaneous activation of plant growth and defense against pathogens in ANE-treated plants (González et al. 2013).
Moreover, the potential involvement of posttranslational modifications should be explored to understand the observed effects of ANEs (Billard et al. 2014). Furthermore, the importance of cell elongation and cell proliferation in ANE-stimulated shoot and root growth should be dissected in depth by means of available phenotyping initiatives (Dhondt et al. 2013). Finally, the current lack of functional data should be alleviated with the use of mutants. All in all, despite significant efforts, the road toward the proposition of a comprehensive model of the molecular mode of action of ANE-induced responses is still long, but it is imperative for the increased credibility of ANEs as robust crop inputs.
Identification and validation of targets of swertiamarin on idiopathic pulmonary fibrosis through bioinformatics and molecular docking-based approach Background Swertiamarin is the main hepatoprotective component of Swertia patens and has anti-inflammatory and antioxidation effects. Our previous study showed that it was a potent inhibitor of idiopathic pulmonary fibrosis (IPF) and can regulate the expressions of α-smooth muscle actin (α-SMA) and epithelial cadherin (E-cadherin), two markers of the TGF-β/Smad (transforming growth factor beta/suppressor of mothers against decapentaplegic family) signaling pathway. However, its targets still need to be investigated. The main purpose of this study is to identify the targets of swertiamarin. Methods GEO2R was used to analyze the differentially expressed genes (DEGs) of the GSE10667, GSE110147, and GSE71351 datasets from the Gene Expression Omnibus (GEO) database. The DEGs were then enriched with Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis for their biological functions and annotated terms. The protein-protein interaction (PPI) network was constructed to identify hub genes. The identified hub genes were predicted for their binding to swertiamarin by molecular docking (MD) and validated by experiments. Results 76 upregulated and 27 downregulated DEGs were screened out. The DEGs were enriched in the biological function of cellular component (CC) and 7 cancer-related signaling pathways. Three hub genes, i.e., LOX (lysyl oxidase), COL5A2 (collagen type V alpha 2 chain), and CTGF (connective tissue growth factor), were selected, virtually tested for their interactions with swertiamarin by MD, and validated by in vitro experiments. Conclusion LOX, COL5A2, and CTGF were identified as the targets of swertiamarin on IPF. Supplementary Information The online version contains supplementary material available at 10.1186/s12906-023-04171-w.
Introduction

IPF is a chronic progressive disorder and shares many common clinical, pathological, and immune characteristics with pulmonary fibrosis [1], one of the post-sequelae of COVID-19 [2]. IPF can significantly lower patients' life quality and life expectancy [3] because of the irreversible decline in lung functions. Some environmental factors, genetic susceptibilities [4], and oxidative stress [5] were reported to increase the risk of getting IPF. Oxidative stress is believed to play an important role in the initiation of inflammation and the damage to DNA [6], which first occurs in the development of IPF. Various signaling pathways, e.g., hedgehog [7], TGF-β/Smad [8], and Wnt/β-catenin (Wingless and Int-1/beta catenin) [9], etc., are involved in the initiation and development of IPF. Marker proteins of these signaling pathways, for instance, TGFBRI (transforming growth factor-beta type I receptor) [10] and CTGF [11], etc., were reported to be the targets of anti-IPF drugs. Our previous study of screening novel anti-IPF drugs from traditional Chinese medicines with machine learning found that swertiamarin, a secoiridoid glycoside with high anti-oxidation and anti-inflammatory effects [12], has a strong effect on arresting the development of IPF [13], and identified that α-SMA and E-cadherin, two marker proteins of the TGF-β/Smad signaling pathway, were significantly downregulated and upregulated, respectively, by swertiamarin. However, its targets still need to be investigated. For this new anti-IPF lead, it is tough and time- and cost-consuming work to identify its targets with wet-lab experiments. Fortunately, bioinformatics provides us with powerful tools to exploit genomic, transcriptomic, and proteomic data [14] to gain insights into the anti-IPF mechanisms and to identify targets for further validation. Molecular docking (MD) is another effective in silico approach to accurately predict the stability of a ligand-receptor complex and understand the activity of the ligand. MD
showed remarkable advantages [15] over traditional experiment paradigms in avoiding large amounts of intensive experiments. In this study, bioinformatics was first used to analyze the microarray datasets from the GEO database to obtain hub genes. MD was then applied to decide whether these hub genes were the targets of swertiamarin by predicting the interactions between swertiamarin and the corresponding proteins of these genes. Finally, the screened targets were experimentally validated.

Microarray data

GSE10667 series [16] on the GPL4133 platform (Agilent-014850 Whole Human Genome Microarray 4 × 44 K G4112F), GSE110147 series [17] on the GPL6244 platform (Affymetrix Human Gene 1.0 ST Array transcript), and GSE71351 series [18] on the GPL10558 platform (Illumina HumanHT-12 V4.0 expression bead chip) were downloaded from the GEO database (https://www.ncbi.nlm.nih.gov/geo/). The probes in each series were replaced by official gene symbols according to the platform files. For the GSE10667 dataset, the samples with the titles of control and UIP (usual interstitial pneumonia) were set as the control and IPF groups, respectively. The samples of the GSE110147 dataset with the titles of normal control and idiopathic pulmonary fibrosis patient were selected as the control and IPF groups, respectively. In the GSE71351 dataset, the samples of normal lung fibroblasts were set as the control group and those of rapidly/slowly progressing lung fibroblasts were set as the IPF group. The detailed information on these groups is shown in Table 1. The above three datasets are freely accessible in the GEO database and do not include any other experimental data of the authors.
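The probe-to-symbol step described above is essentially a join between the expression matrix and the platform annotation table, averaging the values of probes that map to the same gene. A minimal pure-Python sketch of that collapse (the probe IDs, symbols, and values below are illustrative placeholders, not taken from the actual GPL4133/GPL6244/GPL10558 files):

```python
# Sketch: collapse probe-level expression values to gene symbols using a
# platform annotation table. All IDs and values here are made up for
# illustration; the real mapping comes from the GEO platform files.
from collections import defaultdict

platform = {            # probe ID -> official gene symbol (from the GPL file)
    "P_001": "LOX",
    "P_002": "LOX",     # several probes can map to the same gene
    "P_003": "CTGF",
    "P_004": "COL5A2",
}

expression = {          # probe ID -> normalized expression value in one sample
    "P_001": 7.9,
    "P_002": 8.1,
    "P_003": 6.4,
    "P_004": 9.2,
}

def collapse_to_symbols(expr, annot):
    """Average the values of all probes annotated with the same gene symbol."""
    values = defaultdict(list)
    for probe, value in expr.items():
        symbol = annot.get(probe)
        if symbol:                      # drop probes without an official symbol
            values[symbol].append(value)
    return {sym: sum(v) / len(v) for sym, v in values.items()}

gene_level = collapse_to_symbols(expression, platform)
print(gene_level)   # gene-symbol-level expression values
```

In practice this is done per sample across the whole matrix (e.g., with pandas), but the logic per gene is the same averaging shown here.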
Identification of differentially expressed genes

To avoid the batch effect, the GSE10667, GSE110147, and GSE71351 datasets were separately analyzed for DEGs between the control and IPF groups with GEO2R (https://www.ncbi.nlm.nih.gov/geo/geo2r). The genes with |logFC| ≥ 0.5 and P ≤ 0.05 were identified as DEGs. The Venn Diagram tool (http://bioinformatics.psb.ugent.be/webtools/Venn/) was used to select the overlapping genes that simultaneously exist in the DEGs of the GSE10667, GSE110147, and GSE71351 datasets for GO and KEGG enrichment analysis.

GO and KEGG function enrichment analyses

The overlapping DEGs, including upregulated and downregulated genes, were used for GO and KEGG [19] enrichment analyses with DAVID [20] (https://david.ncifcrf.gov/). For the GO analysis, the result was filtered with P ≤ 0.05 and Count ≥ 10, and the resultant genes were investigated for their classified physiological functions, i.e., cellular component (CC), and their annotated terms. For the KEGG analysis, the genes with P ≤ 0.05 and Count ≥ 4 were considered significantly enriched in the corresponding biological pathways.

PPI network and hub genes identification

The DEGs, including upregulated and downregulated genes, were used to construct PPI networks with STRING [21]. The PPI pairs were extracted with a combined score of 0.4. Nodes with higher degrees of connectivity are considered more important to the stability of the PPI network. The PPI networks were plotted with Cytoscape 3.9.1 [23]. The hub genes were calculated with the CytoHubba [22] plugin of Cytoscape, and the Closeness algorithm was used to rank the nodes, from which the top 3 hub genes were selected for validation.
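The selection pipeline above (threshold DEGs, intersect the three datasets, then rank PPI nodes by closeness centrality) can be sketched in pure Python. The gene names, (logFC, P) values, and PPI edges below are toy placeholders; in the study itself these steps are performed by GEO2R, the Venn tool, STRING, and CytoHubba:

```python
# Sketch of the three selection steps with toy data (not the real GEO/STRING output).
from collections import deque

def select_degs(stats, min_abs_logfc=0.5, max_p=0.05):
    """Keep genes with |logFC| >= 0.5 and P <= 0.05, as stated in the text."""
    return {g for g, (logfc, p) in stats.items()
            if abs(logfc) >= min_abs_logfc and p <= max_p}

# One (logFC, P) table per dataset -- illustrative numbers only.
gse10667  = {"LOX": (1.2, 0.01), "CTGF": (0.8, 0.03), "COL5A2": (0.9, 0.02), "ACTB": (0.1, 0.9)}
gse110147 = {"LOX": (0.7, 0.04), "CTGF": (0.6, 0.01), "COL5A2": (1.1, 0.01)}
gse71351  = {"LOX": (0.9, 0.02), "CTGF": (0.7, 0.02), "COL5A2": (0.6, 0.03), "GAPDH": (0.6, 0.2)}

# Overlap of the three DEG sets (the role of the Venn diagram tool).
overlap = select_degs(gse10667) & select_degs(gse110147) & select_degs(gse71351)

def closeness(adj, node):
    """Closeness centrality = (n - 1) / sum of shortest-path lengths (unweighted BFS)."""
    dist, queue = {node: 0}, deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

# Toy undirected PPI network over the overlapping genes (edges are placeholders).
ppi = {"LOX": {"CTGF", "COL5A2"}, "CTGF": {"LOX"}, "COL5A2": {"LOX"}}
hubs = sorted(overlap, key=lambda g: closeness(ppi, g), reverse=True)
print(hubs)  # genes ranked by closeness; hub candidates come first
```

With this toy network, the best-connected node ranks first, mirroring how CytoHubba's Closeness ranking surfaces hub candidates from a real STRING network.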
Molecular docking

To decide whether the selected hub genes were potential targets of swertiamarin, MD was used to predict the interactions between swertiamarin and the proteins of the selected genes. The 3D crystal structure of human LOX (PDB ID 5ZE3) [24], with a resolution of 2.40 Å, was downloaded from the PDB database (https://rcsb.org/structure/5ZE3). The 3D structures of CTGF (https://alphafold.ebi.ac.uk/entry/P29279) and COL5A2 (https://alphafold.ebi.ac.uk/entry/P05997) were obtained from AlphaFold. The docking pocket of LOX was defined by its catalytic domain [24]. The docking pocket of CTGF was defined as its heparin-binding region (https://www.uniprot.org/uniprotkb/P29279/entry). The VWFC domain of the COL5A2 protein (https://www.uniprot.org/uniprotkb/P05997/entry) was used as its docking pocket. AutoDock Vina 1.2.3 [25] was used to perform the MD of swertiamarin into LOX, CTGF, and COL5A2. The structural files of LOX, CTGF, and COL5A2 in PDB format, and of swertiamarin in SD format, were converted into PDBQT format with OpenBabel 2.4.1 [26] for docking; the other docking parameters were set as default.

Western blot testing

The western blot experiments were carried out with three experiment groups, i.e., the control, the IPF model, and the test groups (detailed information is shown in Fig.
5). The A549 cells from the above three groups were lysed with RIPA (radioimmunoprecipitation assay) lysis buffer and vortexed at 4 ℃. After the lysates were collected, total protein was loaded and separated by electrophoresis on 10% SDS-PAGE (sodium dodecyl sulfate-polyacrylamide gel electrophoresis) and transferred onto polyvinylidene difluoride membranes (Millipore, USA). The membranes were blocked with 5% defatted milk for 1.5 h at 20 ℃ and then incubated with primary antibodies at 4 ℃ overnight. After being washed with 10% TBST (mixture of tris-buffered saline and polysorbate 20), the membranes were incubated with HRP-conjugated (horseradish peroxidase-conjugated) secondary antibodies of anti-mouse IgG (immunoglobulin G) (1:5000) and anti-rabbit IgG (1:5000) for 2 h at 20 ℃. The bound antibodies were visualized using an enhanced chemiluminescence kit (Millipore, USA).

Statistical analysis

The GEO analyses were carried out with GEO2R. The statistical analysis of the expressions of the hub genes in the western blot experiments was performed by Pearson's correlation analysis in the Python environment. The confidence interval was set as 0.95 for the DEG and western blot analyses.

GO and KEGG enrichment analysis

The GO enrichment aims to analyze the main functions, e.g., cellular component, biological process, molecular function, etc., of the DEGs. The results (Table 2) showed that the DEGs were enriched in cellular components including the plasma membrane, integral component of membrane, extracellular region, extracellular space, extracellular exosome, and Golgi apparatus, which were consistent with the biological process of IPF. The results of KEGG analysis indicated that these DEGs were mainly enriched in 7 pathways including Hsa05202, Hsa04020, Hsa04115, Hsa04068, Hsa04978, Hsa05218, and Hsa05205, which are related to cancer and tissue development.

[Fig. 2 The DEGs and the biological functions of GO-enriched genes. A, the intersecting upregulated genes; there are 76 overlapping upregulated genes (supplement 1). B, the intersecting downregulated genes; there are 27 overlapping downregulated genes (supplement 2). C, the biological functions of DEGs enriched with GO analysis; the GO-enriched genes were selected with Count ≥ 18 and P ≤ 0.05.]

Molecular docking

The docking score for swertiamarin and LOX was −9.344 kcal/mol and the interactions are shown in Fig. 4A. Besides 5 carbon-hydrogen bonds, 6 conventional hydrogen bonds (between swertiamarin and residues Ser609, Thr546, Ser544, Ser486, and Asn487 of LOX, with bond lengths of 3.11, 2.80, 2.52, 3.38, 3.22, and 3.23 Å, respectively) and 3 alkyl bonds (between swertiamarin and residues Pro548, Val720, and Arg612 of LOX, with bond lengths of 4.10, 4.83, and 3.68 Å, respectively) were found in the swertiamarin-LOX complex, which indicated that swertiamarin has strong interactions with LOX and that the swertiamarin-LOX complex is very stable. The docking score for swertiamarin and CTGF was −8.125 kcal/mol. The swertiamarin-CTGF interactions (Fig.
4B) showed 5 conventional hydrogen bonds (between swertiamarin and Glu269, Ser271, Met215, and Lys211, with bond lengths of 2.95, 3.13, 3.57, 2.94, and 3.18 Å, respectively) and 6 alkyl bonds (between swertiamarin and Met215, Leu236, Leu270, Ile217, and Lys21, with bond lengths of 4.86, 4.66, 5.42, 4.21, 4.83, and 3.81 Å, respectively). The vinyl of swertiamarin formed 3 alkyl bonds with residues Leu270, Ile217, and Leu236, which suggested that this vinyl is important to the stability of the complex. The residue Ser271 provided three strong conventional hydrogen bonds (bond lengths < 3 Å) with swertiamarin, which suggested that Ser271 and the hydroxyls from the glucosyl of swertiamarin are the key groups keeping the swertiamarin-CTGF complex stable.

The docking score for swertiamarin and COL5A2 was −12.681 kcal/mol, which indicated that the swertiamarin-COL5A2 complex is more stable than the swertiamarin-LOX and swertiamarin-CTGF complexes. The swertiamarin-COL5A2 interactions (Fig. 4C) showed the same result. More and stronger conventional hydrogen bonds were formed: six of the nine conventional hydrogen bonds are shorter than 3 Å and three are close to 3 Å (3.26 Å between swertiamarin and Cys1336, 3.07 Å between swertiamarin and Ser1338, and 3.06 Å between swertiamarin and Ser1342). Therefore, the glucosyl is very important to the anti-IPF capability of swertiamarin.

Western blot

Experimental results showed that TGF-β1 and swertiamarin have no toxicity to the A549 cells (Supplement 4). Figure 5 shows that, after being treated with swertiamarin for 24 h, the expressions of LOX and COL5A2 were significantly downregulated, and the expression of CTGF was slightly downregulated.
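AutoDock Vina reports affinities in a fixed-width result table on standard output. As a minimal illustration of how docking scores like those quoted above could be extracted programmatically, the sketch below parses a hypothetical log excerpt (the table layout is Vina's standard one; the score value reused here is the LOX result from the text, and the second mode is invented):

```python
import re

def best_affinity(vina_log: str) -> float:
    """Return the affinity (kcal/mol) of the top-ranked mode in an
    AutoDock Vina result table."""
    for line in vina_log.splitlines():
        # Result rows look like: "   1       -9.344          0          0"
        m = re.match(r"\s*1\s+(-?\d+\.\d+)", line)
        if m:
            return float(m.group(1))
    raise ValueError("no docking modes found in log")

# Hypothetical excerpt of a Vina log, using the LOX score reported above.
log = """\
mode |   affinity | dist from best mode
     | (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
   1       -9.344          0          0
   2       -8.902      1.771      2.512
"""
score = best_affinity(log)
```

Parsing the log rather than re-scoring keeps the sketch independent of having Vina installed.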
Discussion

Target identification is of central importance to the understanding of the anti-IPF mechanism of swertiamarin. However, solving this problem with wet-lab experiments usually means expensive and slow processes, whereas computation-aided approaches provide efficient complements. Here we used a bioinformatics- and MD-based approach to screen the anti-IPF targets of swertiamarin with public databases. In this study, the GSE10667, GSE110147, and GSE71351 datasets were used to analyze the DEGs. It is important to clarify that the GSE10667 dataset contains samples with interstitial pneumonia, which made the results more reliable but also made it more difficult to obtain overlapping DEGs with high log|FC| values, because gene expression in the early stage of IPF is taken into account. Even with lower log|FC| values, the hub genes were still successfully screened out. The KEGG analysis suggested that the selected DEGs were related to tissue and cancer development. This result is consistent with the fact that IPF shares several pathogenetic similarities, e.g., DNA methylation [27], with lung cancer [28], and that patients with IPF are at high risk of developing lung cancer [29].
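The DEG screening described above (keeping only genes that pass log|FC| and significance cutoffs in every dataset) can be sketched as follows; the gene names, fold changes, p-values, and cutoffs here are invented for illustration and are not the actual GEO values:

```python
def overlapping_degs(datasets, lfc_cutoff=1.0, p_cutoff=0.05):
    """Return genes that pass |log2FC| and p-value cutoffs in every dataset.

    `datasets` is a list of dicts mapping gene -> (log2FC, p_value).
    """
    per_set = []
    for d in datasets:
        hits = {g for g, (lfc, p) in d.items()
                if abs(lfc) >= lfc_cutoff and p <= p_cutoff}
        per_set.append(hits)
    return set.intersection(*per_set)

# Toy example with invented values (not the actual GEO data).
gse_a = {"LOX": (2.1, 0.001), "CTGF": (1.4, 0.03), "ACTB": (0.1, 0.9)}
gse_b = {"LOX": (1.8, 0.004), "CTGF": (0.6, 0.04), "ACTB": (0.0, 0.8)}
shared = overlapping_degs([gse_a, gse_b])
```

With these toy values only LOX survives the intersection, mirroring how a stringent cutoff shrinks the overlapping DEG set.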
Three hub genes (LOX, COL5A2, and CTGF) were screened out through the above analysis. LOX is a cuproenzyme, also known as protein-lysine 6-oxidase, encoded by the human LOX gene [30]. LOX catalyzes the conversion of lysine into highly reactive aldehydes that cross-link collagen and elastin in ECM proteins [31], contributing to the ECM's stiffness. The stiffness of the ECM increases fibroblast proliferation and contraction [32]. Aberrant expression and activity of LOX were associated with IPF [33] and led to the development of the IPF microenvironment. Therefore, LOX is a key participant in ECM remodeling [34]. In this study, the PPI analysis of LOX showed that LOX has strong interactions with TGF-β1, TGFBR1, TGFBR2 (TGF-β receptor 2), and SMAD2/3 (suppressor of mothers against decapentaplegic family members 2 and 3) (Supplement 5), which are the main marker proteins of the TGF-β/Smad signaling pathway. Therefore, LOX is a potential target for IPF [35], and inhibition of its activity would alleviate IPF [33], which was supported by the MD model. The interactions between LOX and swertiamarin indicated a stable ligand-receptor complex (Fig. 4A), which is consistent with the western blot results showing downregulation of LOX expression.
CTGF, also known as CCN2 (cellular communication network factor 2), is a matricellular protein of the CCN family of ECM-associated heparin-binding proteins [36]. CTGF was reported to be associated with wound healing (the key initial process of IPF) and all fibrotic pathology [37]. In IPF tissues, CTGF expression is upregulated by TGF-βs [38], SMAD2 [39], and other physiological and pathological factors. The upregulation of CTGF expression would further exacerbate ECM accumulation and aggravate the development of IPF [37]. The CTGF-involved PPI analysis (Supplement 6) showed that CTGF interacts with TGF-β1, TGF-β3, and TGFBR2 (one of the key targets of exogenous factors that promote the expression of TGF-βs). The MD model also suggested that the swertiamarin-CTGF complex is highly stable and that CTGF should be a target of swertiamarin in IPF. Western blot results showed that swertiamarin slightly but not significantly downregulated the expression of CTGF. Therefore, we concluded that swertiamarin may only inhibit the activity of CTGF rather than downregulate its expression.

COL5A2 is a protein encoded by the COL5A2 gene [40] and is responsible for the formation of other collagen fibrils in tissues of the body. COL5A2 was reported to be involved in the development of pathological scarring [41]. COL5A2 is modulated by TGF-βs [42] and is highly related to human systemic sclerosis [43] and IPF [44]. In this study, the COL5A2 protein showed strong interactions with ITGB1 (integrin beta-1, a human cell-surface receptor that functions as a collagen receptor and modulates migration across basement membranes [45]) and ADAMTS14 (ADAM metallopeptidase with thrombospondin type 1 motif 14, an enzyme that cleaves the amino-propeptide of fibrillar collagens) (Supplement 7). The DEG analysis showed that the COL5A2 gene was significantly upregulated in IPF patients. The MD model suggested that swertiamarin could modulate the activity of COL5A2, and the western blot showed that the expression of COL5A2 was downregulated by swertiamarin.

Fig. 5 The expressions (A) and the statistical results (B) of COL5A2, LOX, and CTGF in the western blot analysis. *Represents P < 0.05. The A549 cells (control group) were pretreated with 10 ng/ml of TGF-β1 to build the in vitro IPF model, and then the cells were treated with 1.5 µmol/l of swertiamarin (test group) for 24 h. The samples for determining the expressions of COL5A2, LOX, CTGF, and GAPDH (glyceraldehyde 3-phosphate dehydrogenase) were derived from the same batch of experiments. The gels were processed in parallel.

Conclusion

Our bioinformatics analysis identified 103 DEGs that co-exist in the GSE10667, GSE110147, and GSE71351 datasets. The GO and KEGG functional analyses showed that these DEGs were mainly related to cellular processes associated with cancer and tissue development. The MD models and experimental results supported these three genes/proteins as targets of swertiamarin, although the experimental data did not show significant downregulation of CTGF. It is very important to further experimentally investigate the effects of swertiamarin on the activities of LOX, CTGF, and COL5A2, and especially to validate its effect on the expression of CTGF.

Fig. 4 The 3D interactions between swertiamarin and LOX (A), CTGF (B), and COL5A2 (C), respectively. Green lines, conventional hydrogen bonds. Pink lines, alkyl bonds. Light blue lines, conventional hydrogen bonds. The numbers beside the bonds represent the bond lengths. The interactions were plotted by the Discovery Studio 2019 client.

Table 1 Statistics of the three microarray databases
Table 2 Significantly enriched GO terms and KEGG pathways of DEGs
Table 3 The top ten hub genes ranked with scores
Analysis of the Effect of Intestinal Ischemia and Reperfusion on the Rat Neutrophils Proteome

Intestinal ischemia and reperfusion injury is a model system of the possible consequences of severe trauma and surgery, which may result in tissue dysfunction and organ failure. Neutrophils contribute to the injuries preceded by ischemia and reperfusion. However, the mechanisms by which intestinal ischemia and reperfusion stimulate and activate circulating neutrophils are still not clear. In this work, we used a proteomics approach to explore the underlying regulated mechanisms in Wistar rat neutrophils after ischemia and reperfusion. We isolated neutrophils from three different biological groups: control, sham laparotomy, and intestinal ischemia/reperfusion. In the workflow, we included iTRAQ-labeling quantification and peptide fractionation using HILIC prior to LC-MS/MS analysis. From the proteomic analysis, we identified 2,045 proteins in total, which were grouped into five different clusters based on their regulation trend between the experimental groups. A total of 417 proteins were found to be significantly regulated in at least one of the analyzed conditions. Interestingly, the enzyme prediction analysis revealed that ischemia/reperfusion significantly reduced the relative abundance of most of the antioxidant and pro-survival molecules, causing more tissue damage and ROS production, whereas some of the significantly up regulated enzymes were involved in cytoskeletal rearrangement, adhesion, and migration. Cluster-based KEGG pathway analysis revealed high motility, phagocytosis, directional migration, and activation of the cytoskeletal machinery in neutrophils after ischemia and reperfusion. Increased ROS production and decreased phagocytosis were experimentally validated by microscopy assays.
Taken together, our findings provide a characterization of the rat neutrophil response to intestinal ischemia and reperfusion and the possible mechanisms involved in the tissue injury by neutrophils after intestinal ischemia and reperfusion.
Keywords: ischemia reperfusion, neutrophils, proteomics, systemic inflammatory response, LC-MS/MS

INTRODUCTION

The intestine is the organ most sensitive to ischemia and reperfusion (IR) injury. This injury can result from various clinical situations, such as intestinal obstruction, acute mesenteric ischemia, incarcerated hernia, small intestine transplantation, neonatal necrotizing enterocolitis, trauma, and shock, leading the patient to relentless clinical syndromes and even death (Mojzis et al., 2001; Mallick et al., 2004; Guneli et al., 2007). It is now clear that reperfusion following ischemia leads to significantly greater mucosal intestinal injury than ischemia alone (Crissinger and Granger, 1989), whereas development of the systemic inflammatory response syndrome (SIRS) and multiple organ failure (MOF) can be the final consequences of IR (Ceppa et al., 2003). Among the polymorphonuclear leukocytes (PMNs), neutrophils are the first line of defense against bacterial and fungal infections (Kaufmann, 2008). However, a large number of studies showed that IR injury is mainly due to PMN and endothelial cell (EC) interactions in reperfused tissues (Massberg et al., 1998; Kumar et al., 2009; Kvietys and Granger, 2012). Normally, the multistep process of neutrophil recruitment to the site of infection requires three types of adhesion receptors: integrins, selectins, and adhesion receptors of the immunoglobulin superfamily (Rao et al., 2007). It has been shown that P- and E-selectins are overexpressed in a mouse model after intestinal ischemia/reperfusion and that inhibiting P-selectin decreased neutrophil rolling and adhesion, reducing the injury (Riaz et al., 2002).
Another IR model showed decreased neutrophil rolling and adherence upon blocking of P- and L-selectins (Kubes et al., 1995). The up-regulation of adhesion molecules on the endothelial surface was observed following ischemia/reperfusion injury (IRI), which can result in diapedesis of neutrophils, further contributing to muscle dysfunction (Hierholzer et al., 1999). Similarly, in the small intestine, an increase in the expression of inflammatory mediators such as tumor necrosis factor-α (TNF-α), cyclooxygenase-2 (COX-2), and intercellular adhesion molecule-1 (ICAM-1) has been observed after IR, together with an increase in neutrophil infiltration (Watanabe et al., 2014). Another study demonstrated systemic serum level elevation of the CC chemokines along with XC chemokines in a model of intestinal ischemia, hence leading to greater PMN activation and tissue injury (Jawa et al., 2013). Each of these signals interacts with specific receptors expressed on the plasma membrane of PMNs, with an overlapping array of signal transduction pathways leading to functional responses such as rearrangement of the actin cytoskeleton (Luerman et al., 2010). A gradient of chemoattractant signals arising from the dying tissues helps in the recruitment of PMNs (McDonald et al., 2010), which secrete a large number of factors such as reactive oxygen species, chemokines, cytokines, lipid mediators, and proteases (Rodrigues and Granger, 2010). However, the molecular mechanisms, enzymes, and pathways by which PMNs participate in the IR injury are not fully understood. The current understanding of the interconnections among the many signal transduction pathways that regulate neutrophil activation is incomplete. Proteomics research has improved the understanding of neutrophil biology in the past, and a few publications describe the effect of the inflammatory response on the PMN proteome (Morris et al., 2008).
One proteomic study has addressed the response to intestinal ischemia and reperfusion; however, it was limited to molecules expressed by the intestinal epithelium, and there is a lack of understanding regarding the PMN response to IR. The main objective of this study is to explore the effect of IR on the PMNs of rats using mass spectrometry-based proteomics (Hurst et al., 1998). We analyzed the neutrophil proteome from three different biological conditions, including control, sham laparotomy, and intestinal ischemia/reperfusion. Proteomic analysis of neutrophils after ischemia and reperfusion revealed that neutrophils down regulate the expression of different antioxidant and pro-survival molecules. Significantly up regulated oxidoreductases and down regulated transferases can interfere with the integrin signaling pathway, lipid metabolism, and reactive oxygen species (ROS) generation, leading to local and remote tissue injury. Furthermore, our analysis shows the regulation of different proteins and pathways required for neutrophil adhesion, directional migration, and phagocytosis after intestinal ischemia and reperfusion. Down regulation of important enzymes from the LTB4 synthesis pathway opens some questions that need further analysis. Functional assays revealing increased ROS production and decreased phagocytosis after ischemia/reperfusion are coherent with the proteomics findings. We anticipate that these findings will provide a reliable basis for further in-depth analysis of neutrophil biology.

Protein Identification and Relative Abundance Profile

For the large-scale proteomics analysis of rat neutrophils, samples were collected from three experimental groups (Figure 1), including control, laparotomy, and intestinal ischemia/reperfusion groups. Proteins from each experimental group, with five biological replicates, were iTRAQ labeled for relative quantification analysis.
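Relative quantification from iTRAQ reporter ions boils down to comparing channel intensities against a reference channel; a minimal sketch with invented intensities (the channel labels below are illustrative, not the actual iTRAQ tag masses):

```python
import math

def log2_ratios(intensities, control_channel):
    """Compute log2(channel / control) for iTRAQ reporter-ion intensities.

    `intensities` maps a channel label to its reporter-ion intensity.
    """
    ref = intensities[control_channel]
    return {ch: math.log2(v / ref)
            for ch, v in intensities.items() if ch != control_channel}

# Invented reporter intensities for one peptide (values are illustrative).
peptide = {"control": 1000.0, "laparotomy": 2000.0, "IR": 500.0}
ratios = log2_ratios(peptide, "control")
```

A log2 ratio of +1 means a two-fold increase over control, and −1 a two-fold decrease, which is the scale on which regulation trends across the three groups are compared.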
To find the significant changes in protein abundance among the three groups, unique peptides with high-confidence identification and their respective iTRAQ reporter ion intensity values were analyzed using R. For a detailed description of the methods, see the Supplementary Methods section.

FIGURE 1 | Experimental design. Surgical procedures were followed by isolation of neutrophils from the three conditions. Protein extraction was performed using the FASP protocol. After HILIC fractionations, LTQ Orbitrap MS analysis was performed and the data obtained were analyzed using bioinformatics tools. Another set of isolated neutrophils was used to determine the phagocytosis rate and ROS production by optical microscopy.

Mass spectrometry analysis resulted in the identification of 2,045 proteins (Supplementary Table S1). After the data analysis, proteins were grouped into clusters according to their abundance profile among the three conditions. To assign the proteins according to their abundance to the best number of clusters, two validation indices, the Xie-Beni index (Xie and Beni, 1991) and minimal centroid distance (Schwammle and Jensen, 2010), were applied, and proteins were assigned to five different clusters based on their regulation trend (Figure 2, Supplementary Table S2). Expression changes are shown as z scores (i.e., relative abundances normalized by the mean and standardized by dividing by the standard deviation). The colors correspond to the so-called membership values (values in the range [0, 1]), representing the degree to which a protein belongs to the nearest cluster. Only proteins with memberships above 0.5 are shown. A subset of 417 proteins was significantly regulated (Limma and rank products q-value < 0.05) in at least one of the analyzed conditions within the five clusters. A standard PCA analysis was performed to check the similarity among the different samples.
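The z-score standardization and membership filtering described above can be sketched in a few lines; the abundance profile and membership values below are invented for illustration:

```python
import statistics

def z_scores(values):
    """Standardize: subtract the mean, divide by the (population) std dev."""
    mu = statistics.fmean(values)
    sd = statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

def confident_members(memberships, cutoff=0.5):
    """Keep proteins whose fuzzy-cluster membership exceeds the cutoff."""
    return {p for p, m in memberships.items() if m > cutoff}

# Toy abundance profile across control / LAP / IR (invented numbers).
profile = [10.0, 12.0, 14.0]
z = z_scores(profile)

# Invented membership values; only SOD passes the 0.5 cutoff here.
members = confident_members({"SOD": 0.83, "MPO": 0.42})
```

After standardization every profile has mean 0, so clustering compares regulation *trends* rather than absolute abundances, which is exactly what the five clusters encode.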
GO slim Analysis and Enzyme Activity Prediction for the Total Rat Neutrophil Proteome

Protein classification was performed by GO slim analysis using ProteinCenter (Thermo Scientific) as a platform. The resultant cellular component, biological process, and molecular function terms for the five clusters are shown in Supplementary Figure S1. Molecular function analysis revealed that about 33% of the neutrophil proteome has predicted enzymatic activity. It is composed of 18% oxidoreductases (EC:1), 27% transferases (EC:2), 38% hydrolases (EC:3), 3% lyases (EC:4), 5% isomerases (EC:5), and 9% ligases (EC:6) as predicted activity, as shown in Figure 3A. The overall distribution of the enzymes across the five clusters is illustrated in Figure 3B, whereas Table 1 presents the proteins significantly regulated in at least two conditions among the 5 clusters.

Major Functional Classes of the Neutrophil Proteome Regulated by IR

For the functional classification of the significantly regulated identified proteins, KEGG and WikiPathways analyses were performed, and the enriched pathways are listed in Table 2. Most of the enriched pathways found were immune-related, indicating the effect of intestinal ischemia and reperfusion on neutrophil function. Phagocytosis was found significantly down regulated in IR, as shown in Table 2.

Verification of ROS Production and Phagocytosis

To validate these findings we performed ROS production and phagocytosis assays by incubating neutrophils from the three groups with Saccharomyces cerevisiae yeast cells. Phagocytic activity was significantly decreased in the IR group (p-value < 0.05) compared to control and LAP (Figure 4). Only about 23.90% of cells phagocytosed in the IR group, while control and LAP presented 50 and 40.7% phagocytosis rates, respectively.
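The EC-class breakdown reported above (top-level classes EC:1–EC:6) is just a tally over the first field of each EC number. A sketch with a handful of EC annotations (the four numbers below happen to correspond to SOD, NMT, cathepsin D, and dynamin, but the list is illustrative, not the actual dataset):

```python
from collections import Counter

EC_CLASSES = {1: "oxidoreductase", 2: "transferase", 3: "hydrolase",
              4: "lyase", 5: "isomerase", 6: "ligase"}

def ec_distribution(ec_numbers):
    """Percentage of enzymes in each top-level EC class."""
    top = Counter(int(ec.split(".")[0]) for ec in ec_numbers)
    total = sum(top.values())
    return {EC_CLASSES[c]: round(100 * n / total, 1)
            for c, n in sorted(top.items())}

# Hypothetical EC annotations, not the actual neutrophil proteome.
dist = ec_distribution(["1.15.1.1", "2.3.1.97", "3.4.23.5", "3.6.5.5"])
```

Applied to the full set of UniProt EC annotations, the same tally yields the class percentages quoted for Figure 3A.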
DISCUSSION

Speculating about correlations between cluster profiles and physiological conditions, cluster 1 suggests possible markers that would correlate with injury severity, since protein abundances progressively increase with the severity of the surgical event. Clusters 2 and 5 suggest proteins that change the sense of regulation when the condition changes from mild to more intense surgical procedures; in a similar way, cluster 3 detected proteins that respond similarly to any intensity of trauma, and proteins regulated only by intestinal ischemia and reperfusion were grouped in cluster 4. The five clusters in which our quantified proteins were grouped did not enrich for specific cellular localizations. According to Supplementary Figure S1A, most of the significantly regulated proteins from clusters 1, 3, and 4 were annotated to the cytoplasm, extracellular region, and nucleus. Cytosol and organelle lumen had proteins from cluster 1, whereas ribosomal proteins were exclusively annotated to cluster 1. Mostly membrane proteins were grouped in cluster 4, which is down regulated in IR as compared to the control and laparotomy groups. Proteins related to metabolic processes from clusters 1 and 4 showed enrichment (Supplementary Figure S1B), whereas the highest number of proteins related to metabolic processes belongs to cluster 1, which is continuously up regulated in laparotomy and IR as compared to the control group. The Gene Ontology (GO) term distribution for the molecular functions (Supplementary Figure S1C) of all the regulated proteins from clusters 1, 3, and 4 showed annotation to catalytic activity and binding, whereas cluster 1 presented a high enrichment of proteins for RNA binding and structural molecule activity. Proteins from clusters 1 and 4 showed the highest diversity in molecular functions among the clusters. After identifying a large number of proteins, the data were analyzed in UniProt to predict the enzyme activity of the identified neutrophil proteome.
It is clear from the literature that different enzymes play an important role in neutrophils. The enzyme activities encountered vary for each cluster (Figure 3B).

Frontiers in Molecular Biosciences | www.frontiersin.org

FIGURE 3 | Enzyme activity prediction. (A) A pie chart shows the predicted enzyme activity of six classes of enzymes for the total rat neutrophil proteome. (B) A bar chart shows the predicted enzyme activity for regulated proteins from all 5 clusters. EC 1 represents Oxidoreductases, EC 2-Transferases, EC 3-Hydrolases, EC 4-Lyases, EC 5-Isomerases, and EC 6-Ligases. A detail of the predicted enzymatic activities of these proteins is listed in Table 1. The small images above each cluster show the general cluster pattern. For details on this see Figure 2.

Predicted Enzyme Activity for Cluster 1 Proteins

These enzymes are found progressively up regulated in neutrophils from the control to the laparotomy to the ischemia/reperfusion group, suggesting that the intensity of regulation may be proportional to the severity of the inflammatory process.

Oxidoreductases

The quantitative analysis revealed significantly up regulated oxidoreductases in cluster 1, among which we found superoxide dismutase (SOD). Normally during inflammation, SOD regulates the concentration of ROS and reactive nitrogen species; however, high levels of extracellular SOD activity resulted in a reduced innate immune response of neutrophils (Break et al., 2012). An increased level of DHFR was observed in our quantitative analysis, whereas in peripheral blood leucocytes, mainly neutrophils, of cancer patients, higher expression of DHFR has been associated with leukocytosis (Iqbal et al., 2000). Other significantly up regulated oxidoreductases of cluster 1 include prostaglandin reductase 1 (PtGR-1) and fatty acyl-CoA reductase 1 (Table 1).
PtGR-1 is a nitroalkene reductase, and its overexpression in HEK293T cells promoted inactivation of nitroalkenes, inhibition of heme oxygenase HO-1, and finally abrogated tissue protection (Vitturi et al., 2013). It has been shown that inducing HO-1 prior to IRI results in a significant decrease of intestinal tissue injury (Wasserberg et al., 2007).

Transferases

N-myristoyltransferase (NMT1) catalyzes the protein myristoylation process that is essential for leukocyte growth and development, and was found elevated in activated neutrophils along with an increase in lifespan (Kumar et al., 2011). The majority of the significantly regulated transferases are oligosaccharyltransferases involved in N-linked protein glycosylation. Since pro-inflammatory stimuli modify glycan profiles on cell surfaces, the overexpression of such enzymes can lead to an increased adhesion of activated neutrophils to endothelial cells (Sriramarao et al., 1993). S-adenosylmethionine (SAM) acts as a donor for methyltransferases involved in the methylation of DNA, RNA, proteins, and lipids. SAM has important implications in antigen-induced immune responses (Wu et al., 2005).

Hydrolases

Most of the regulated hydrolases are annotated as DNA helicases, plus one RNA helicase. Other statistically regulated hydrolases relevant to inflammation were Caspase-1, involved in the inflammasome pathway, and Cathepsin D. Increased activity of cathepsin D in myocardial ischemic neutrophils was found to be associated with increased ROS production (Miriyala et al., 2013). Both Caspase-1 and Cathepsin D were found significantly up regulated in this study, indicating high inflammasome pathway activation and ROS production.

Predicted Enzyme Activity for Cluster 4 Proteins

Another group of transferases, oxidoreductases, and hydrolases presents predominant regulation in neutrophils from intestinal ischemia/reperfusion in cluster 4, as shown in Figure 3B.

Oxidoreductases

There are some important oxidoreductases regulated in this cluster.
Arachidonate 15-lipoxygenase was found significantly down regulated. It is known that arachidonate 12/15-lipoxygenases (12/15-LOX) form hydroperoxy eicosatetraenoic acids (HPETEs) from arachidonic acid, which subsequently produce eicosanoids. Leukocytes highly express 15-LOX, but little is known about its role in neutrophils (Nadel et al., 1991). Xanthine oxidase (Xdh) and thioredoxin reductase 1 (TXNRD1) are cellular defense enzymes against oxidative stress. Significant down regulation of these antioxidants after IR can be an important step in modulating the neutrophil response to oxidative stress (Vorbach et al., 2003; Biterova et al., 2005). The most extensively studied primary granule enzyme of neutrophils is myeloperoxidase (MPO), found significantly regulated in cluster 4 by our quantitative analysis (Table 1). Recently, an increase in MPO along with ICAM-1 and P-selectin expression in neutrophils has been found in a model of small intestinal ischemia in rats (Gan et al., 2015).

Transferases

Most of the down regulated transferases function in phospholipid metabolism, carbohydrate degradation (glycolysis), and nucleotide metabolism. The tyrosine-protein kinases Lyn/Fgr belong to the Src family of kinases that negatively regulate the integrin-signaling pathway. Neutrophils deficient in these kinases had an enhanced respiratory burst, secondary granule release, and a hyper-adhesive phenotype due to reduced mobilization of SHP-1 (Pereira and Lowell, 2003). Many metabolic enzymes, including hexokinase, phosphofructokinase, and pyruvate kinase, were found significantly down regulated in cluster 4 in this study, changes that have also been previously observed during neutrophil activation (Huang et al., 2002). Pyruvate kinase down regulation can lead to partial inhibition of glycolysis at the last step and enhance the synthesis of lipids and nucleic acids; however, neutrophils deficient in PK also showed immunodeficiency (Burge et al., 1976).
At the first step of glycolysis, hexokinase (HK3) accounts for 70-80% of the activity in granulocytes, while its down regulation impairs neutrophil differentiation (Federzoni et al., 2012). Although significantly regulated, the transposition of these results to functional inferences needs validation due to the presence of multiple proteoforms of such enzymes.

Hydrolases

Some of the important hydrolases were found significantly down regulated, such as phosphatidylinositol 3,4,5-trisphosphate 5-phosphatase (SHIP; P97573). Neutrophils with loss of SHIP showed defective cell migration, loss of polarity upon cell adhesion, and increased adhesion due to Akt activation and higher PtdIns(3,4,5)P3 (Mondal et al., 2012). Leukotrienes (LTs) are implicated in a wide variety of inflammatory disorders and are produced in the arachidonic acid cascade in immune cells (Kutmon et al., 2015). Two bifunctional hydrolases of the AA cascade were found regulated in cluster 4. One is leukotriene A-4 hydrolase (LTA4H), involved in leukotriene B4 biosynthesis from LTA4 (Liu and Yokomizo, 2015) and in aminopeptidase activity through the breakdown and clearance of Proline-Glycine-Proline (PGP), a neutrophil chemoattractant (Snelgrove et al., 2010). The other bifunctional hydrolase found down regulated, gamma-glutamyltransferase 5 (GGT5), participates in glutathione metabolism and in leukotriene D4 biosynthesis from LTC4; down regulation of these enzymes can lead to accumulation of leukotriene products (Liu and Yokomizo, 2015) along with PGP and affect neutrophil biology, with influx of neutrophils into the tissue and air spaces (Paige et al., 2014). Another interesting hydrolase is Dynamin-2/Dynamin GTPase (DNM), which is involved in microtubule production and in the binding and hydrolysis of GTP.
Recently, a study showed that inhibition of dynamin impaired membrane fusion/fission events and resulted in the production of highly adhesive cellular secretory protrusions called cytonemes, which help neutrophils establish long-range contacts with other cells or bacteria after adhesion. Down regulation of dynamin can be an important step in cytoneme production, leading to increased adhesion (Galkina et al., 2005). An overall evaluation of the enzymes regulated in cluster 1 suggests that neutrophils would progressively increase ROS production and self-protection against ROS (e.g., SOD, DHFR, cathepsin D), decrease tissue protection (e.g., PTGR1), increase neutrophil lifespan (e.g., NMT1), and promote adhesion (oligosaccharyltransferases). Such progression appears to be proportional to the injury severity. The regulated enzymes in cluster 4 also suggest an intense effect of IR on the oxidative stress machinery (including XO, TXNRD1, MPO, Lyn, Fgr, and GGT5) and on leukotriene metabolism (LTA4H and GGT5). To validate the results regarding ROS production, we performed an NBT test, which allows the visualization of blue formazan crystals inside cells that produced ROS. We compared formazan crystal formation in the control, LAP, and IR surgical groups. Yellow, water-soluble NBT is reduced to blue formazan crystals by the ROS produced by activated neutrophils (Baehner and Nathan, 1968). The exposure of neutrophils to intestinal IR induced a significantly higher (p < 0.05) amount of formazan (Figure 4), used as a marker of NADPH oxidase activity: 46.5% of the cells showed extensive formazan formation, while LAP and control showed 16.9% and 14.4% of cells, respectively, probably reflecting baseline production.
These results support the hypothesis that neutrophils contribute to the ischemic oxidative stress shown previously (Jaeschke et al., 1992; Arumugam et al., 2004) and may be related to the regulation of antioxidant molecules after IR found in cluster 4, as well as the regulation of SOD, DHFR, and cathepsin D found in cluster 1.
Major Functional Classes of the Neutrophil Proteome
For the functional classification of the significantly regulated identified proteins, the KEGG pathways and WikiPathways databases were used as a reference knowledge base to understand various signaling mechanisms and pathways (Zhang and Wiemann, 2009; Kutmon et al., 2015). Differentially regulated proteins were mapped to the Rattus norvegicus genome as the reference set for enrichment analysis using the online tool WebGestalt (Wang et al., 2013). Most of the enriched pathways are immune-related, indicating the effect of intestinal ischemia and reperfusion on neutrophil function. Five proteins from cluster 1 were found to be involved in antigen processing and presentation: Hspa8 (heat shock 70 kDa protein 8), Hspa5 (heat shock protein 5), Hsp90aa1/Hsp90ab1 [heat shock protein 90, alpha (cytosolic), class A member 1/class B member 1], and Pdia3 (protein disulfide isomerase family A, member 3). Antigen processing and presentation is a well-known phenomenon performed by antigen-presenting cells, including dendritic cells, B cells, macrophages, and thymic epithelial cells, which signal to antigen-specific T cells in order to generate effective adaptive immune responses (Roche and Furuta, 2015). It has been found recently that mouse neutrophils express MHC class II and co-stimulatory molecules such as CD80 and CD86, and can prime antigen-specific T cells in an MHC class II-dependent manner (Abi Abdallah et al., 2011).
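The WebGestalt enrichment used above is, at its core, an over-representation test. A minimal sketch of the underlying one-sided hypergeometric test, computed from first principles with purely hypothetical counts (these numbers are not taken from the study), is:

```python
from math import comb

def enrichment_p(n_genome, n_pathway, n_regulated, n_hits):
    """One-sided hypergeometric p-value: probability of observing at least
    n_hits pathway members among n_regulated regulated proteins, given
    n_pathway pathway members in a reference set of n_genome genes."""
    total = comb(n_genome, n_regulated)
    upper = min(n_pathway, n_regulated)
    return sum(
        comb(n_pathway, k) * comb(n_genome - n_pathway, n_regulated - k)
        for k in range(n_hits, upper + 1)
    ) / total

# Hypothetical example: 5 of 120 regulated proteins fall in a 90-gene
# pathway, against a 20,000-gene reference set
p = enrichment_p(20000, 90, 120, 5)
```

Tools such as WebGestalt apply a test of this kind once per pathway and then correct the resulting p-values for multiple testing.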
Interestingly, the up regulation of the identified proteins in our data set suggests that intestinal ischemia and reperfusion increase antigen processing and presentation in neutrophils. The TNF-alpha/NF-kB signaling pathway was significantly enriched, with eight proteins from cluster 1. TNF-alpha is a multifunctional proinflammatory cytokine that can induce a wide range of apoptosis and cell survival genes, as well as inflammation- and immunity-related genes, through the NF-kB signaling pathway (Barnes and Karin, 1997; Blackwell and Christman, 1997). It has been shown that up-regulation of the TNF-alpha/NF-kB signaling pathway after ischemia and reperfusion induces inflammation and tissue injury by activated neutrophils (Kin et al., 2006). The up-regulation of eight proteins in this pathway expands previous results showing up-regulation of this pathway in activated neutrophils after intestinal ischemia and reperfusion, causing further injury to bystander tissues through ROS production (Tang et al., 2011). Three significantly up-regulated proteins from cluster 1 were grouped in the significantly enriched IL-2 signaling pathway: Gnb2l1 [guanine nucleotide binding protein (G protein), beta polypeptide 2 like 1], Hsp90aa1 [heat shock protein 90, alpha (cytosolic), class A member 1], and Stat3 (signal transducer and activator of transcription 3). Jablons et al. (1990) demonstrated that in vivo administration of IL-2 in humans with advanced cancers suppresses FcγR (CD16) expression and chemotaxis in neutrophils, whereas another study, performed in vitro to check the direct effect of IL-2 on neutrophil functions, reported opposite results (Girard et al., 1996). Our results are in accordance with the in vivo analysis, showing up-regulated IL-2 signaling pathway proteins concomitantly with down regulation of FcγR signaling proteins after IR.
Down regulation of Fc gamma R-mediated phagocytosis was found in cluster 4 (Table 2) and is further discussed below. Fc gamma R-mediated phagocytosis, regulation of the actin cytoskeleton, and chemokine signaling pathways were found down regulated in IR, as described by the cluster 4 profile. All the down regulated proteins in these pathways could lead to dramatically slower cell migration, improper polarization, impaired transendothelial migration (TEM), and reduced phagocytosis. Studies have reported that the WASF requirement for the initial spike in actin polymerization correlates with directional sensing. Zhang et al. (2006) observed reduced adhesion followed by reduced migration toward fMLP, along with reduced TEM, in WASF-deficient neutrophils, associated with defective β2-integrin clustering. In addition, WASF may play multiple roles in chemotaxis (Kumar et al., 2012). Disruption of SHIP1/INPP5D, a primary inositol phosphatase, causes accumulation of a PtdIns(3,4,5)P3 probe and F-actin polymerization across the cell membrane in neutrophils; as a result, the neutrophils become flattened and display irregular polarization and less cell migration (Nishio et al., 2007). We observed down regulation (cluster 4) of leukotriene synthesis pathways, represented by important enzymes such as 15-lipoxygenase, 5-lipoxygenase-activating protein (FLAP), and LTA4 hydrolase. Leukotrienes (LTs) are inflammatory mediators causing, for example, phagocyte chemotaxis and increased vascular permeability. LTB4 is a potent chemotactic agent produced by almost all types of immune cells, especially neutrophils, and its overproduction leads to pathological conditions such as lung edema (Pace et al., 2004) and inflammatory bowel disease (Singh et al., 2003). Few studies have examined the effect of down regulation of 5-LO; however, one study reported increased neutrophil infiltration and TNF expression within the myocardial infarction area of 5-LO-deficient mice.
However, inhibition of 5-lipoxygenase did not affect IR-related injury in wild-type mice (Adamek et al., 2007). We also found down regulation of important enzymes of the LTB4 synthesis pathway in PMNs after intestinal IR. Similarly, platelets can naturally inhibit LTB4 synthesis in neutrophils through their spontaneous interactions with these cells. The inhibitory factor may be adenosine, as identified in ligand-operated interactions of platelets with neutrophils, but the platelet-derived product responsible for down regulation of neutrophil lipid mediator release and generation remains to be identified. This could have a significant impact on the homeostatic process of inflammation (Chabannes et al., 2003). It is clear that intestinal obstruction (Sagar et al., 1995) and ischemia (Meddah et al., 2001) cause mucosal injury with a subsequent increase of mucosal permeability and bacterial translocation. Therefore, the up regulation of antimicrobial proteins in cluster 5, such as CAMP and Lcn2, observed after IR could be related to protection against bacterial infection and modulation of oxidative stress (Chakraborty et al., 2012). To validate these findings, we performed a phagocytosis assay by incubating neutrophils from the three groups with Saccharomyces cerevisiae yeast cells. Phagocytic activity was significantly decreased in the IR group (p < 0.05) compared with control and LAP (Figure 4). Only about 23.9% of cells phagocytosed in the IR group, while control and LAP presented 50.0% and 40.7% phagocytosis rates, respectively. The mechanisms behind the decreased neutrophil phagocytosis have not yet been explained, but they might be related to the down regulation (cluster 4) of SHIP and of the five proteins that belong to the Fc gamma R-mediated phagocytosis pathway (Detmers et al., 1987; Ravetch and Kinet, 1991). As a result of oxidative stress, most tissues engage various anti-oxidative defense mechanisms.
In our study, we found various antioxidant proteins in neutrophils that were significantly regulated after IR, including glutathione reductase and glutathione S-transferase (GST). GST and its isoenzymes increase in activity in response to oxidative stress due to the lipid peroxidation resulting from superoxide production (Vasieva, 2011). Glutathione S-transferase P1 (GSTP1) is a detoxification enzyme and regulator of cell signaling in response to growth factors, hypoxia, stress, and other stimuli in human hepatocellular carcinoma (HCC) (Kou et al., 2013), but its role in neutrophils is not clear. Down regulation of the response to reactive oxygen species was observed in cluster 3 after IR in neutrophils. The major role of PMNs in inflammatory and immune responses has long been thought to be phagocytosis and killing of bacteria via the generation of ROS and the release of lytic enzymes stored in granules (Root and Cohen, 1981). ROS in particular are potentially toxic yet essential molecules in bacterial killing. PMNs are thought to be exposed to ROS produced by themselves and by other inflammatory cells, and to suffer the resulting damage, such as DNA cleavage, protein modifications, and lipid peroxidation. ROS-mediated damage to intracellular molecules is considered to be limited by cellular antioxidant enzymes such as SODs (superoxide dismutases), glutathione S-transferase (Hurst et al., 1998), and GPx (glutathione peroxidase) (Arai et al., 1999). Down regulation of glutathione S-transferase and peroxidase could limit the role of PMNs in inflammatory and immune responses, such as phagocytosis and bacterial killing, as confirmed by our functional assays and as previously reported in the literature (Hattori et al., 2005).
CONCLUDING REMARKS
In conclusion, our proteomic approach revealed that intestinal ischemia/reperfusion causes the down regulation of important antioxidants together with the up regulation of enzymes involved in ROS production.
This could result in the reactive oxygen species accumulation observed by many researchers and confirmed herein. From the enzyme classification, cluster-based KEGG pathway analysis, and phagocytosis assays, we observed changes in neutrophil motility, phagocytosis, directional migration, and cytoskeletal machinery activation after ischemia and reperfusion. Moreover, the regulated pathways and enzymes suggest an influence of IR on carbohydrate and lipid metabolism and a possible correlation with bacterial translocation, but such findings need further study. Collectively, our MS-based quantitative proteomic analysis illustrates the significance of comparative proteomic strategies applied to neutrophils in different surgical groups, showing results supported by functional assays and consistent with the literature, and lays the basis for further in-depth studies in neutrophil biology.
ETHICS STATEMENT
This study was carried out in accordance with the recommendations of the Ethics Committee at the Faculty of Medicine, University of São Paulo. The protocol was approved by the Ethics Committee at the Faculty of Medicine, University of São Paulo, registered under protocol number 8186.
AUTHOR CONTRIBUTIONS
WF and PR: study idea and design, data analysis, manuscript writing. MT and SA: sample preparation, proteomics analyses, manuscript writing. BF: surgical assays. IL, KB, and MC: functional assays, manuscript writing. SS and VS: data analysis, manuscript writing.
An improved gray wolf optimization to solve the multi-objective tugboat scheduling problem
With the continuous prosperity of maritime transportation on a global scale and the resulting escalation in port trade volume, tugboats assume a pivotal role as essential auxiliary tools influencing the ingress and egress of vessels at ports. As a result, the optimization of port tugboat scheduling becomes of paramount importance, as it contributes to heightened efficiency of ship movements, cost savings in port operations, and the promotion of sustainable development within maritime transportation. However, a majority of current tugboat scheduling models focus solely on the maximum operational time; alternatively, the formulated objective functions often deviate from real-world scenarios. Furthermore, prevailing scheduling methods exhibit shortcomings, including inadequate solution accuracy and incompatibility with integer programming. Consequently, this paper introduces a novel multi-objective tugboat scheduling model to align more effectively with practical considerations. We propose a novel optimization algorithm, the Improved Grey Wolf Optimization (IGWO), for solving the tugboat scheduling model. The algorithm enhances convergence performance by optimizing the convergence parameter and the individual update rule, making it particularly suited to integer programming problems. In the experimental section, several instances of different scales are designed according to real port conditions, and simulation experiments comparing several intelligent algorithms verify the effectiveness of IGWO. The algorithm is then applied to the comprehensive port area of Huanghua Port to obtain an optimal scheduling scheme for this port area; finally, a sensitivity analysis yields management suggestions for reducing tugboat operating costs.
Introduction
Against the backdrop of open world ports and economic development, international trade competition has become increasingly fierce. The expansion of port scale and continuously growing operational demands have become major trends [1]. The demand for tugboats during ship arrivals and departures at ports is increasing. At the same time, sustainable development and low-carbon environmental issues in maritime transportation have become hot topics of concern [2,3]. Tugboats are crucial components in maintaining port operations; they are costly and subject to increasing wear and tear, while port resources are limited [4]. As a result, optimizing tugboat scheduling has become a crucial means of reducing port operating costs [5]. By optimizing the tugboat scheduling process reasonably, a significant amount of tugboat resources can be saved, accelerating the efficiency of port vessel operations and reducing carbon emissions to a certain extent, thereby promoting energy conservation and environmental protection within ports.
Literature review
The tugboat scheduling problem (Tug-SP) originated from the job-shop scheduling problem (JSP). Traditional tugboat scheduling models mainly rely on the experience of scheduling personnel, supplemented by some integer programming methods [6][7][8]. Such models perform well for small-scale scheduling, but with the continuous growth of port operational demands and scale, traditional methods have become inadequate, necessitating more scientific and effective approaches to the tugboat scheduling problem. Current research focuses on establishing tugboat scheduling models that reflect practical situations and employing suitable metaheuristic algorithms to solve them.
Existing research on tugboat scheduling can be broadly categorized into two phases, as shown in Table 1. In the first phase, researchers mainly concentrated on the initial exploration of intelligent tugboat scheduling, placing greater emphasis on model construction while neglecting the actual needs of ports. For example, researchers such as Liu, Xu, and Kang described the Tug-SP as a multiprocessor task scheduling problem and optimized the port tugboat scheduling problem using different intelligent algorithms, taking the tugboats' maximum running time as the objective [9][10][11], and Wang et al. investigated the tugboat scheduling problem considering a multi-service model with multiple waypoints, with the optimization objective of minimizing cost, solved it with an adaptive large neighborhood search algorithm, and showed it to be applicable to large-scale problems [12]. This work significantly advanced the study of Tug-SP models and represented a substantial improvement over traditional scheduling models. However, it also has limitations because, in practical port operations, tugboat efficiency is reflected by more than just maximum operating time. Port managers need to consider scheduling from various perspectives, including cost, efficiency, and environmental protection; a single objective function often fails to capture the complexity of real-world situations. The second phase of research takes into account other indicators reflecting the efficiency of the model, and the solution algorithms have gradually become more intelligent, but the applied algorithms still have certain shortcomings in solving problems such as the Tug-SP. Goli et al.
studied the non-permutation flow-shop scheduling problem with the optimization objective of simultaneously minimizing the completion time and the total energy consumed, and proposed the Multi-objective Ant Lion Optimizer (MOALO), the Multi-objective Keshtel Algorithm (MOKA), and the Multi-objective Keshtel and Social Engineering Optimizer (MOKSEA) for solving it; experiments proved that these algorithms perform well on the JSP [13]. Tirkolaee et al. moved beyond the JSP and used different heuristic algorithms to solve the Municipal Solid Waste (MSW) and Location Allocation Routing Problem (LARP), respectively, considering multiple metrics such as cost and time, and achieved good results [14,15]. Moharam et al. proposed a Discrete Chimp Optimization Algorithm (DChOA) for the delay/loss (TL) penalty scheduling problem in discrete optimization, which is more advantageous than many other algorithms for solving the TL problem [16]. Zhong et al. established a dual-objective tugboat scheduling model with maximum completion time and total fuel consumption as the optimization objectives and chose the NSGA-II algorithm for its solution, achieving good results on an example of Guangzhou Port [17]. Owing to the current deficiencies of tugboat scheduling algorithms in solving high-dimensional problems and their inadequate convergence accuracy, we turned to a remarkable intelligent algorithm that has emerged in the scheduling field in recent years: the Grey Wolf Optimization (GWO) algorithm, proposed in 2014 by Mirjalili of Griffith University, Australia [18]. The advantages of this algorithm lie in its simple structure, few parameter settings, fast convergence speed, strong global optimization capability, and applicability to high-dimensional problems. As a result, GWO has been widely applied in fields such as job-shop scheduling, engineering manufacturing, image information, and energy optimization [19][20][21][22].
However, GWO also has limitations. It encounters conflicts when solving integer programming problems and is prone to getting stuck in local optima in the later stages of the search. Many scholars have proposed improvements concerning control parameters, individual updates, and the incorporation of other strategies. Jordehi established an intelligent home energy management scheduling model and used GWO to solve it under integer encoding; by incorporating a penalty mechanism, the algorithm's convergence accuracy was improved, mitigating to some extent the conflicts of GWO in solving integer problems [23]. Liang et al. targeted the JSP and introduced a reverse learning strategy to increase population diversity, combining it with simulated annealing to compensate for GWO's tendency to fall into local optima and thereby enhancing the algorithm's performance [24]. Zheng et al. proposed an optimal-worst reverse learning strategy for optimizing scheduling algorithms, enabling timely escape from local optima [25]. Furthermore, Zuo et al. built upon the optimal-worst reverse learning strategy and proposed a trend-optimizing reverse learning strategy for the Differential Evolution algorithm, further enhancing the algorithm's ability to escape from local optima [26].
Objectives and contributions
This paper investigates the problem of harbor tugboat scheduling. Effective tugboat scheduling can save harbor costs and improve harbor navigation efficiency. A multi-objective tugboat scheduling model is constructed taking into account tugboat operation time, port operation costs, and port navigation efficiency. The Improved Grey Wolf Optimization algorithm is used to solve the problem, and the resulting optimal scheduling plan provides an effective decision-making reference and optimization direction for port managers.
The main contributions of this paper are as follows:
• This paper addresses the tugboat scheduling problem, which is widespread in practice but has received relatively little research attention.
• A new multi-objective tugboat scheduling model is proposed, with optimization objectives that are more in line with realistic scenarios, including tugboat running time, tugboat fuel cost, and tugboat power overflow, so as to better simulate the real situation.
• To resolve the integer-coding conflicts and the tendency to fall into local optima in GWO, the GWO algorithm is improved in this paper. The improvements include a convergence parameter based on a cosine mode, a dynamic weight adjustment mechanism, and a trend-optimizing reverse learning strategy to address the current shortcomings of GWO.
• In the experimental part, the Taguchi method was used to determine the optimal parameter combination of the algorithm. Simulation comparisons are conducted on multiple instances of different scales, evaluating IGWO against CPLEX, Improved Particle Swarm Optimization (IPSO) [27], the Discrete Chimp Optimization Algorithm (DChOA), the Adaptive Large Neighborhood Search algorithm (ALNS), and GWO; the results show that IGWO has better solution performance on the Tug-SP.
The rest of this paper is organized as follows: Section 2 presents the tugboat scheduling model, including a detailed model description, parameter settings, and value ranges. Section 3 introduces the Improved Grey Wolf Optimization algorithm (IGWO), including the principles of GWO, the rationale for and effects of the proposed improvements, and the process of applying IGWO to solve the Tug-SP. Section 4 presents numerous simulation experiments in which multiple algorithms are applied and compared to validate the performance of IGWO.
Section 5 further validates the effect and improvement directions of IGWO, obtains the optimal scheduling plan so as to provide a reference for port scheduling, and designs a sensitivity analysis scheme to examine the effect of cost on the tugboat scheduling model. Section 6 optimizes the tugboat scheduling plan for the integrated port area of Huanghua Port and tests the sensitivity of the unit usage cost of the two types of tugboats in this port area, providing further adjustment experience for managers. Section 7 sets out the conclusions and outlook.
Problem description
Tugboat berthing and unberthing tasks can be divided into three stages: the approach stage, the assist stage, and the return stage. The approach and return stages are considered the processes in which a tugboat operates without any load, while the assist stage is considered the operational process. Taking the example of two tugboats assisting in berthing, the specific process is illustrated in Fig 1. The assumptions for constructing the model are as follows:
1. Task scheduling: Prior to the start of tugboat operations, the port receives the daily vessel operational requirements and specific berthing/unberthing details. Tasks are assigned to each vessel in chronological order, taking into account the available number of tugboats, their power, and other relevant information. Tasks that are close in time (with similar start and end times) can be combined, utilizing shared tugboats.
2. Tugboat operations: After completing the assist stage of a task, the tugboat randomly returns to a base and waits for the next task (which may not be the immediately following one). Returning to the nearest base may not be advantageous for achieving the globally optimal solution.
3. Tugboat bases: Initially, the tugboats are randomly distributed among the tugboat bases. The time required for tugboats to enter or exit the bases is negligible.
4. Tugboat power: Each tugboat has a fixed propulsion power. Tugboats with higher power have higher empty cruising speeds and fuel consumption; their cost of empty cruising per unit distance and cost of assistance per unit time are also higher.
5. Navigation rules: The port has a one-way channel, and vessels cannot sail in parallel.
6. Task time windows: Based on the task scheduling information, the time interval for each vessel to complete berthing/unberthing can be determined.
Parameter settings
The parameter settings for the tugboat scheduling problem, along with their descriptions, are presented in Table 2. Table 2 is a two-tier table, in which the first and third columns give the parameters of the model and the remaining two columns provide the corresponding explanations. In addition, the individual tugboats in the model are of a similar type but differ in performance in terms of power, speed, and fuel consumption. The parameter L1^k_ij is related to the current position of the tugboat and the position of the target ship entering the harbor, and L2^k_ij is related to the current position of the tugboat and the berth where the ship is docked.
Additionally, the control of fuel costs for tugboats is considered.When tugboats assist vessels in berthing and unberthing operations, the main costs are based on power of tugboats and the duration of task execution.Therefore, in this study, the cost during the assistance phase is measured based on the assistance cost of different tugboats.The remaining phases involve unloaded travel and do not involve assistance phase, which can be measured by the cost per unit distance traveled by the tugboats.The cost of tugboat operations (during the assistance phase) is influenced by two factors.On the one hand, it depends on the power of the operating tugboat, with higher power resulting in higher costs.On the other hand, it is determined by the duration of the operation, which is governed by the berthing and unberthing schedule of the target vessel.The second objective function is shown in Eq (2). Furthermore, the overflow power of the tugboats is calculated.For tugboat-assisted berthing and unberthing tasks, tugboats should arrive within the time window for the target vessel to avoid delays in cargo handling and occupation of the designated berth, resulting in demurrage fees.Early arrival has less impact on the port, therefore, the waiting cost is not considered in this study.The third objective function is shown in Eq (3). In summary, considering the optimization objectives of minimizing the maximum (4). 
The constraints of the model are represented by Eqs (5) to (14). Eq (5) constrains the number of tugs allocated to each task. Eq (6) fixes the initial positions of the tugs before the first task. Eq (7) states that the tugs completing a task must meet the minimum power requirement of that task. Eq (8) requires the tugs assigned to the first task to arrive at the starting point of the task before it begins. Eq (9) requires that, after completing the previous task, the tugs return to the docking base and reach the starting point of the current task in time. Eq (10) requires the tugs to return to a specific base before proceeding to the next task. Eq (11) requires the number of tugs at a base to meet the number of tugs needed to complete the tasks. Eq (12) states that a tug can dock at only one base at a given time. Eq (13) implies that, in the absence of any task at the current stage, a tug remains at its previous base. Eq (14) restricts the decision variables to binary (0-1) values.
Principles of the gray wolf algorithm
The GWO algorithm is characterized by a simple structure, few parameters to adjust, and ease of implementation; its adaptively adjustable convergence factor and information feedback mechanism achieve a balance between local exploitation and global search, and GWO performs well in terms of both solution accuracy and convergence speed. The solution of multi-objective problems is often limited by the complexity of the solution space, while the global optimization capability and high-dimensional performance of GWO can compensate for this limitation to a certain extent, so that, after parameter adjustment, GWO exhibits good robustness and applicability for multi-objective problems.
(1) Social hierarchy
GWO performs optimization by simulating the predation behavior of grey wolf packs. Grey wolf packs have a strict social hierarchy, divided into four levels as shown in Fig 2. The specific meanings of the levels in the figure are as follows:
Level 1: Alpha wolf (α). The leader of the pack, responsible for leading the pack to hunt prey; it corresponds to the optimal solution in the optimization algorithm.
Level 2: Beta wolf (β). Assists the alpha wolf; the sub-optimal solution in the optimization algorithm.
Level 3: Delta wolf (δ). Obeys the orders of alpha and beta and is responsible for reconnaissance, lookout, etc. Alpha and beta wolves whose fitness deteriorates are demoted to delta.
Level 4: Omega wolf (ω). The lowest-ranking wolves, which follow the guidance of the three higher levels and correspond to the remaining candidate solutions.
(2) Hunting process
GWO mimics the predatory behavior of a grey wolf pack, tracking, encircling, chasing, and attacking prey to achieve optimal search. The hunting process of grey wolves is implemented based on Eqs (15) and (16):
D = |C · X_P(t) - X(t)| (15)
X(t+1) = X_P(t) - A · D (16)
where t is the current iteration count, X_P(t) is the position vector of the prey, and X(t) is the position vector of the current grey wolf. A and C are cooperative coefficient vectors that vary according to Eqs (17) and (18):
A = 2a · r1 - a (17)
C = 2 · r2 (18)
During the hunting process of the wolf pack, a linearly decreases from 2 to 0, and r1 and r2 are random vectors in the range [0, 1]. Clearly, as the iterations progress, a gradually decreases to 0, causing the grey wolves to close in on the prey, while r1 and r2 provide the wolves with opportunities to escape from local optima.
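As a concrete illustration, the canonical position update defined by Eqs (15)-(18) can be sketched as follows. In the full algorithm each wolf learns from the alpha, beta, and delta wolves and averages the three candidate positions; variable names here are ours, and this is a minimal sketch rather than the paper's implementation:

```python
import numpy as np

def gwo_step(wolves, fitness, a, rng):
    """One iteration of the canonical GWO update (Eqs 15-18).
    wolves:  (n, d) array of wolf positions
    fitness: callable on one position vector; lower is better
    a:       convergence parameter, decreasing from 2 to 0 over the run"""
    scores = np.array([fitness(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(scores)[:3]]  # three best wolves
    new = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        cands = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A = 2 * a * r1 - a            # Eq (17)
            C = 2 * r2                    # Eq (18)
            D = np.abs(C * leader - x)    # Eq (15)
            cands.append(leader - A * D)  # Eq (16)
        new[i] = np.mean(cands, axis=0)   # equal weights in the original GWO
    return new
```

Running this step repeatedly with a shrinking `a` drives the pack toward the best positions found so far, for example on a simple sphere function.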
Convergence parameters based on cosine variation

The magnitude of the cooperative coefficient vector A directly affects the global and local optimization capabilities of the algorithm. Eq (17) shows that A is controlled by the convergence parameter a, which decreases linearly from 2 to 0 over the iterations. However, this linear variation does not fully reflect the actual course of the search. Hou et al. improved the grey wolf optimization algorithm by introducing a non-linear convergence parameter [28]. Building on this theory, this study introduces a convergence parameter expression based on cosine variation, where a_initial and a_final denote the initial and final values of the convergence parameter a, and n is the decreasing index, 0 ≤ n ≤ 1. The convergence parameter varies over time as shown in Fig 3. From the change in the absolute value of the slope in the figure, the descent pattern of this parameter is slow-fast-slow: it decreases slowly in the initial stage, enabling a wide-ranging global search; rapidly in the middle stage, allowing faster encirclement of the prey and positioning near the optimal solution; and slowly in the final stage, enhancing the algorithm's local search capability and improving its accuracy.

Dynamic weight correction and update mechanism

In the original Grey Wolf Optimization (GWO) algorithm, each individual learns from the alpha, beta, and delta wolves to the same degree, which hinders the ability of new-generation individuals to improve by learning from the alpha wolf. To address this limitation, Meng et al.
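The exact cosine expression is not reproduced legibly in the extracted text; the sketch below assumes a common cosine form consistent with the described slow-fast-slow decay from a_initial to a_final, shaped by the exponent n:

```python
# Assumed cosine-varying convergence parameter (reconstruction, not the
# paper's exact formula): a follows a cosine half-period from a_initial
# down to a_final, giving a slow-fast-slow descent; n shapes the curve.
import math

def cosine_a(t, T, a_initial=2.0, a_final=0.0, n=1.0):
    frac = (1 + math.cos(math.pi * t / T)) / 2   # goes from 1 down to 0
    return a_final + (a_initial - a_final) * frac ** n

T = 100
values = [cosine_a(t, T) for t in range(T + 1)]
```

The curve starts at 2, ends at 0, and its steepest drop is near the midpoint, matching the slow-fast-slow profile described above.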
proposed a nonlinear dynamic learning weight mechanism [29]. However, it still could not fully exploit the learning capability of individuals. In practice, the excellence of the alpha, beta, and delta wolves is ranked α > β > δ. Therefore, a dynamic weight mechanism based on step-length Euclidean distance is introduced to adjust the learning weights dynamically according to the differences among the alpha, beta, and delta wolves. The specific formulas are shown in Eqs (18) and (19).

The tugboat scheduling problem is typically addressed using real-number encoding. However, the updating process in the Grey Wolf Optimization (GWO) algorithm often produces non-integer solutions. Moreover, the solution individuals in the tugboat scheduling problem are subject to constraints, such as the non-repetition of tugboats within the same task and the minimum power requirement for each task, so applying the GWO update can easily produce infeasible solutions. To address this issue, a cross-repair mechanism is proposed in combination with the dynamic weight adjustment mechanism above to correct non-integer solutions. The specific steps of the dynamic weight correction and updating mechanism are as follows:

1. Collect the positions of the current wolf and of the contemporary alpha, beta, and delta wolves in the population, together with their respective cooperative coefficient vectors A and C.

2. Calculate the distance vectors between the current wolf and each of the alpha, beta, and delta wolves, and perform a preliminary individual update based on the dynamic weight mechanism using Eqs (18) and (19).

3. Round the preliminarily updated individuals to integer solutions (which are likely still infeasible).

4. Obtain the set of eligible tugboats based on the minimum power requirement for each task.

5.
Take the intersection of the infeasible integer solution obtained in step 3 with the set of eligible tugboats from step 4, and select the eligible individual closest to the infeasible integer solution to complete the update of the current individual. If parts of the individual are missing, they are selected at random from the remaining tugboats.

6. Apply the above steps to every individual in the grey wolf population, thereby completing the overall update of the population.

Taking 5 tasks and 8 tugboats as an example, Fig 4 demonstrates the cross-repair updating process. An individual is composed of the tugboats required for all tasks. The current individual is cross-fused with the parameters A and C and with the contemporary alpha, beta, and delta wolves of the current generation. After rounding, the next generation's grey wolf position vectors X1, X2, and X3 are obtained. The vectors are then centred and adjusted, and their intersection with the set of eligible tugboats is taken (any remaining missing parts of the individual are selected at random). After repeated individual updates, the final result is a new generation of grey wolf individuals.

The dynamic weight correction and updating mechanism lets new-generation individuals learn adaptively, with greater learning weight given to the better-ranked wolves.

Elite inverse learning strategy

The inverse (opposition-based) learning strategy, introduced by Tizhoosh in 2005 [30], enhances the global optimization capability of many algorithms and thereby improves their search efficiency. Building upon the theory of elite inverse learning, Li et al. incorporated a reverse learning mechanism for the best and worst individuals [31], as calculated in Eq (20).
where x_ij represents the current individual, lb and ub represent the lower and upper bounds of the current individual, respectively, and x_r represents the reverse solution.

Ordinary elite inverse learning strategies apply reverse learning only to the best individuals in the population. However, the worst individuals also contribute to diversity. Considering the characteristics of the Grey Wolf Optimization (GWO) algorithm, reverse learning is therefore applied to both the best and the worst individuals, and their respective inverse solutions are calculated using Eq (23). These reverse solutions are sorted by fitness, and the lower half with the lowest fitness values is selected as new solutions. The algorithm randomly replaces individuals in the wolf pack with these new solutions, enhancing its ability to escape local optima.

IGWO solving process

Based on the above theoretical methods, this paper adopts the new IGWO algorithm to solve the tugboat scheduling model. The specific solution process, shown in Fig 7, is divided into four stages:

1. The first stage: obtain the relevant information on the port tasks and enter the task data, including the number of tugboats required for each task, the required working time, and the required tugboat power; then the data on the port's existing tugboats, including their number, power, idle speed, idle cost, and navigational-aid cost, the latter two of which are jointly set by local regulations, the maritime authority, and the port company; and finally the data on the tugboat bases, including their number, capacity, and locations.

2.
The second stage: each task is matched with a certain number and power of tugboats, which restricts tugboat use; according to the ordering of tasks and the model constraints, the set of currently available tugs is obtained. Finally, the initial grey wolf population is generated at random from the currently available tugboat set, which ensures that the initial solutions contain no infeasible individuals.

3. The third stage: while the grey wolf algorithm runs, the fitness of each individual in the population must be calculated; the closer an individual is to the optimum of the objective function, the greater its fitness. All individuals are sorted by fitness, and the three with the greatest fitness are selected as this generation's lead wolves α, β, and δ. All individuals then learn from the current lead wolves to update themselves, using the dynamic weight correction and updating mechanism described above.

4. The fourth stage: the update in the third stage drives the convergence of the grey wolf algorithm, but the algorithm is still prone to getting stuck in local optima. Therefore, the elite inverse learning strategy is adopted in the fourth stage to update the grey wolf population a second time. Although this strategy helps the algorithm jump out of local optima, it tends to produce infeasible solutions after updating, so these are corrected last. The specific method is to take the intersection of the available tugboat set and the current grey wolf individuals to generate a feasible new solution; if the length of the new solution is insufficient, tugs are selected at random from the available set to make up the difference.
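The fourth stage combines the reverse solution of Eq (20), x_r = lb + ub - x, with an intersection-based repair. A hedged sketch of that pairing follows; the tug ids, bounds, and data layout are illustrative assumptions, not the paper's implementation:

```python
# Stage-4 sketch: compute an individual's reverse solution (Eq 20), then
# repair it by intersecting with the available-tug set and filling any
# gap at random. All values below are illustrative.
import random

def opposite(x, lb, ub):
    # Eq (20): reflect each component of x within its [lb, ub] interval
    return [l + u - xi for xi, l, u in zip(x, lb, ub)]

def repair(candidate, available, k_needed, rng=None):
    rng = rng or random.Random(1)
    kept, seen = [], set()
    for tug in candidate:                 # intersection with the available set
        if tug in available and tug not in seen:
            kept.append(tug)
            seen.add(tug)
    missing = k_needed - len(kept)
    if missing > 0:                       # top up from the remaining tugs
        pool = [t for t in available if t not in seen]
        kept.extend(rng.sample(pool, missing))
    return kept

lb, ub = [1, 1, 1], [8, 8, 8]             # tug ids run from 1 to 8
x = [2, 7, 5]                             # current assignment for one task
x_r = opposite(x, lb, ub)                 # reverse solution
feasible = repair(x_r, available={1, 2, 6, 7}, k_needed=3)
```

Here the reverse solution contains one unavailable tug, so the repair keeps the two valid entries and draws the third from the remaining available tugs.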
Tugboat scheduling case study simulation

To validate the effectiveness of the IGWO algorithm in solving the tugboat scheduling problem, simulation experiments are conducted comparing different instances and multiple intelligent algorithms. The code for each experiment is implemented in Python 3.7 (with the CPLEX library called through Python functions). The simulations run on a laptop with an 11th Gen Intel(R) Core(TM) i5-11400H @ 2.70 GHz processor, 16 GB RAM, and 64-bit Windows.

Experimental parameter settings

To assess the performance of IGWO in harbor tugboat scheduling, we generate tugboat scheduling cases of different scales, controlled by the number of tasks and tugboats. In addition to the scale-controlling parameters in Table 2, several parameter ranges are common to all cases; these are used to define constraint conditions and to calculate the various cost expenditures during scheduling. The ranges are determined with reference to data on China's port charging standards [32], taking into account the model proposed in this study and the actual conditions of the one-way channel in the harbor. The parameter ranges are shown in Table 3: columns 1 and 4 list the parameters, columns 2 and 5 their ranges, and columns 3 and 6 their units. The parameters have been explained earlier in the text and are not repeated here.
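Instance generation as described above amounts to drawing each case's parameters from the common ranges; a hedged sketch is below, where the range values are placeholders since Table 3's numbers are not reproduced in the text:

```python
# Sketch of random instance generation: task parameters are drawn
# uniformly from common ranges (placeholder values, not Table 3's).
import random

ranges = {"min_power": (2000.0, 5000.0),   # kW, illustrative
          "work_time": (1.0, 4.0)}         # hours, illustrative

def draw_instance(n_tasks, rng=None):
    rng = rng or random.Random(5)
    return [{k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
            for _ in range(n_tasks)]

inst = draw_instance(3)
```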
Feasible solution setup

Given the numerous constraints in the model, the generation of feasible solutions must be set up explicitly. The constraints fall into three aspects: each task's demand for tugboats, the operational limitations of the tugboats themselves, and the capacity limitations of the tugboat bases. The feasible-solution setup for the multi-objective tugboat scheduling model is shown in Algorithm 1: it first obtains the task-vessel and tugboat data from the port, generates the set of available tugboats from these data, and determines the number of tugboats required for each task. On this basis, a single feasible solution is generated, and the population of feasible solutions is generated according to the population size parameter N_P.

Algorithm parameter optimization

The adjustable parameters of the IGWO algorithm are the population size N_P, the number of iterations N_t, and the convergence parameter a. Since the convergence parameter a has already been adjusted to follow the cosine law, only the first two parameters remain to be tuned. Different parameter combinations affect the algorithm's solving performance, so the Taguchi method is used to determine the best combination. For each of the two parameters, three levels are tested; the parameter combinations are shown in Table 4.
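The core of Algorithm 1's feasible-solution generation can be sketched as follows; the tug powers and task requirements below are illustrative values, not the paper's data:

```python
# Sketch of Algorithm 1: for each task, sample the required number of
# distinct tugboats from those whose power meets the task's minimum.
import random

tug_power = {1: 2000, 2: 2600, 3: 3200, 4: 3200, 5: 4000}   # kW, illustrative
tasks = [
    {"need": 2, "min_power": 2500},   # task 1: two tugs of >= 2500 kW each
    {"need": 1, "min_power": 3500},   # task 2: one tug of >= 3500 kW
]

def feasible_solution(tasks, tug_power, rng=None):
    rng = rng or random.Random(3)
    sol = []
    for task in tasks:
        eligible = [t for t, p in tug_power.items() if p >= task["min_power"]]
        sol.append(rng.sample(eligible, task["need"]))   # distinct tugs per task
    return sol

sol = feasible_solution(tasks, tug_power)
```

Sampling only from each task's eligible set guarantees the initial population contains no infeasible individuals, matching the second stage of the IGWO process.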
Next, the results of each parameter combination were tested at three different scales; the scales of the test cases are shown in Table 5. The test results were measured using the signal-to-noise ratio (SNR), calculated as shown in Eq (24), where y_ij represents the test result of each run in a group. Orthogonal experiments were carried out over the three levels of each parameter, with each combination tested 10 times; the results are shown in Table 6. The SNR reaches its minimum in the 5th group of experiments, so this paper sets the algorithm parameters to a population size of N_P = 200 and an iteration count of N_t = 1000.

Small-scale simulation experiments

To verify the applicability of the IGWO algorithm to small-scale tugboat scheduling problems, 15 sets of different small-scale cases were designed and solved with the IGWO algorithm. A comparison experiment was conducted against the mathematical solver CPLEX, as shown in Table 7. The first column of the table lists the 15 generated instances. The second column gives the number of tasks in each instance; the number of target vessels is slightly larger, since each task requires 1-5 tugboats, and on average the number of target vessels is 2.5 times the number of tasks. The third column gives the total number of available tugboats for each instance. The fourth and fifth columns show the results of CPLEX and IGWO, respectively, and the last column presents the deviation between the two solutions.
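Eq (24) is not reproduced legibly in the extracted text; for a cost objective, a standard Taguchi "smaller-the-better" SNR is a plausible reconstruction, sketched below under that assumption:

```python
# Assumed Taguchi "smaller-the-better" SNR (reconstruction, since the
# paper's Eq (24) is unreadable in the extraction):
#   SNR = -10 * log10( (1/n) * sum(y_i^2) )
import math

def snr_smaller_better(results):
    n = len(results)
    return -10 * math.log10(sum(y * y for y in results) / n)

snr = snr_smaller_better([10.0, 10.0, 10.0])
```

With identical results of 10.0 across the 10 repeats of a group, the SNR evaluates to -20 dB; lower costs push the SNR upward.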
Among the small-scale cases, IGWO achieved the same optimal value as CPLEX for the first 6 instances, with a deviation of 0.00%. For instances 7-12, the solutions of the two methods were very close, with the deviation kept within 0.9%. In the last 3 instances, CPLEX ran out of memory and failed to return a solution, while IGWO's solving process was unaffected. Most real-world ports are small in scale, and the above examples show that IGWO performs well on small-scale tugboat scheduling problems; the IGWO proposed in this paper therefore has real practical significance.

Medium to large-scale simulation experiments

We next examine the performance of IGWO on medium- and large-scale instances. Because CPLEX cannot solve larger problems, three other improved intelligent algorithms and the original grey wolf optimization algorithm were selected for comparison. Ten groups of medium- and large-scale instances were designed, and the algorithms were analysed on several indicators, as shown in Table 8. Table 8 consists of the following columns: instance number, task quantity, total tugboats, and solution metrics including the best value (Bes.), average value (Avg.), worst value (Wor.), and solution time (Time). Columns 5 to 8 give each intelligent algorithm's results on the model, and column 9 gives the gap in the optimal value (Gap) between IGWO and the other algorithms. Gap is calculated using Formula (25); a larger Gap indicates a greater advantage of IGWO over the other algorithms.
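Formula (25) is likewise not reproduced in the extracted text; a conventional definition consistent with "a larger Gap indicates the superiority of IGWO" is the relative improvement of IGWO's best value over a rival's best value (both being minimised costs), sketched here as an assumption:

```python
# Assumed Gap definition (Formula 25 is unreadable in the extraction):
# relative improvement of IGWO's best cost over a rival's, in percent.
def gap_percent(rival_best, igwo_best):
    return 100.0 * (rival_best - igwo_best) / rival_best

g = gap_percent(rival_best=1100.0, igwo_best=1000.0)
```

For a rival cost of 1100 against an IGWO cost of 1000, the Gap is about 9.09%; identical costs give a Gap of 0.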
Comparing the simulation results of the algorithms across scales: the IPSO algorithm completes a fine-grained search in the later stages through its nonlinear inertia-weighting strategy, but it is limited by the fact that each individual learns from only a single global-best individual, which hampers search in high-dimensional solution spaces. The DChOA algorithm improves its parameters and operators, yet its solving performance in high-dimensional solution spaces is not ideal: as instance size and thus solution-space dimensionality grow, its accuracy falls short and premature convergence becomes more likely. The ALNS algorithm, thanks to its improved neighbourhood-search operators, performs better than the preceding algorithms, but its convergence is slower. The original GWO algorithm, without adjustment of the convergence parameter a or the learning strategy, easily falls into local optima, resulting in lower solution accuracy. For the IGWO algorithm, the experimental results show that in instances 2-5, where the instances are relatively small, IGWO's solution accuracy leads the other algorithms by 10%-14%, and it maintains a 2%-10% lead on the remaining 7 instances. In addition, IGWO's solution time is significantly shorter than the other algorithms', dominating at all scales. This indicates that IGWO is more stable and outperforms all compared algorithms, including the original GWO, in overall performance. This is due to the tuning of the convergence parameter, the dynamic learning strategy that better balances the algorithm's global and local convergence capabilities, and the introduction of the inverse learning strategy,
which enables the algorithm to jump out of local optima in time.

Optimal solution tracking

Section 3 verified IGWO's solution accuracy and stability. To further assess the algorithm's performance and apply it to practical port operations, this section presents a redesigned set of four instances spanning a wide range of scales, with the algorithm parameters unchanged from Section 3. The aim is to track the optimal values and corresponding scheduling solutions of each intelligent algorithm, obtain their iteration curves, analyse directions for improving IGWO, and ultimately provide a reference for practical port operations based on the optimal scheduling solution obtained by IGWO.

Validation of optimal solutions

As shown in Fig 8, the IGWO algorithm outperforms the other comparative algorithms in solution accuracy but still has limitations: further improvements could focus on enhancing its ability to escape local optima in the later stages of the run.

Optimal scheduling solution

To apply the algorithm to practical port operations, case (a) was selected as an example. The Gantt chart in Fig 9 shows the optimal scheduling plan generated by IGWO, giving a clear view of the assigned tugboat numbers and the sequence in which they perform tasks; it serves as a reference for tugboat scheduling in port operations. For example, the tugboats assigned to Task 1 are Tugboats 1, 14, 15, and 17, and the task sequence of Tugboat 12 is Task 2, Task 7, Task 18, Task 19. It can also be observed that Tugboat 1 is assigned tasks frequently, while Tugboats 3-8 remain idle; this reflects the existing tugboat capacities and task requirements. Further optimization could address this by adding constraints on continuous tugboat operation to the model.
Sensitivity analysis

This section examines the sensitivity of the objective function to the tugboat's unloaded (idle) travel cost and its navigation assistance (pilotage) cost, using the optimal value (Bes.) as the evaluation criterion. The simulation experiments continue to use the four instances introduced in Section 4. A sensitivity coefficient is defined with a range of [0, 2] and a step size of 50%, and is used to test the impact of the tugboat's unloaded travel cost (Ct_i) and navigation assistance cost (Co_i) on each set of cases. The sensitivity of Ct_i and Co_i in the four sets of cases is shown in Fig 10: panel (a) plots the optimal value against the sensitivity coefficient of the idle cost, and panel (b) plots it against the sensitivity coefficient of the navigation assistance cost.

In Fig 10, sensitivity is directly proportional to the slope of the curve. From Fig 10(a), it can be observed that in small-scale instances, changes in Ct_i do not significantly affect the optimal value; only in larger-scale cases is the impact relatively large. Conversely, in Fig 10(b), Co_i causes significant changes to the optimal value across instances of all scales. In conclusion, the tugboat scheduling model established in this paper is significantly influenced by the navigation assistance cost and, in large-scale scenarios, also by the unloaded travel cost. This suggests that port managers should strive to minimize the navigation assistance cost while also considering the unloaded travel cost, especially in large-scale ports, in order to control costs and improve operational efficiency.
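The sensitivity sweep described above can be sketched as follows; the linear toy objective merely stands in for a full IGWO re-solve, and its coefficients are illustrative:

```python
# Sketch of the sensitivity sweep: scale one cost term by a coefficient
# in [0, 2] with a 50% step and record the optimal value. The stand-in
# objective (fixed cost + scaled idle cost + scaled assist cost) is
# illustrative only.
def best_value(ct_scale, co_scale):
    return 500.0 + ct_scale * 120.0 + co_scale * 380.0

coeffs = [0.0, 0.5, 1.0, 1.5, 2.0]                     # [0, 2] in 50% steps
idle_curve = [best_value(c, 1.0) for c in coeffs]      # vary Ct_i only
assist_curve = [best_value(1.0, c) for c in coeffs]    # vary Co_i only
```

With the larger assist-cost coefficient, the assist curve is steeper than the idle curve, which is the pattern the paper reports for Co_i versus Ct_i.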
Huanghua port example study

To validate the effectiveness of the algorithm and the model, a case study is conducted on the integrated harbor area of Huanghua Port in China, which is about 9.5 nautical miles long and currently operates 11 berths, 1 tugboat base, and 2 task starting points. Data from May 1, 2022 are selected for the study; the tugboat configuration of the port area and the incoming-ship data for that day are given in S2 Table. The day's tugboat scheduling is optimized using the IGWO algorithm with the same parameters as in the previous section, yielding the optimal operation scheme shown in Table 9. The 21 vessels of that day are divided into 8 tasks, and the optimal order in which the tugboats perform each task can be read directly from the table. In addition, the same method was used to test the sensitivity of the tugboat idle cost Ct_i and the navigation assistance cost Co_i with respect to the optimal value (Bes.), as shown in Table 10, where the first column gives the sensitivity coefficient (the proportional change in the two factors) and the second and third columns give the extent to which changes in Ct_i and Co_i, respectively, affect the optimal value. In this example, the effect of changes in Ct_i is small, while the effect of Co_i is large. Therefore, where conditions permit, port managers should try to reduce the tugboat navigation assistance cost Co_i, which can more efficiently reduce port operating costs and improve the efficiency of port navigation.
Conclusion and future work

This paper proposes a novel tugboat scheduling model based on the different power levels of tugboats and vessels. Multiple objective functions are established, covering in-port operation time, the various tugboat costs, and overflow power. The study explores the application of the Improved Grey Wolf Optimization (IGWO) algorithm to the tugboat scheduling problem, and the algorithm's performance is verified through simulation experiments at various scales. The results show that IGWO outperforms several comparative algorithms; finally, the method is applied to the integrated port area of Huanghua Port to obtain an optimal scheduling scheme with useful management insights.

The proposed tugboat scheduling model simulates the real-world scenario of tugboats assisting ships entering and leaving port. Through IGWO optimization, it generates efficient tugboat scheduling schemes that save fuel and manpower for ports while also reducing carbon emissions to some extent, offering new insights and methods for the sustainable development of maritime transportation.

This paper establishes a new tugboat scheduling model based on task scheduling theory, but both the model and the algorithm can be further refined. On the model side, the objective function can be adapted to the actual needs of the port, and constraints on continuous tugboat operation and inter-port scheduling can be added. On the algorithm side, deep reinforcement learning is worth noting, but such algorithms place high demands on the environment, and whether they are applicable to the tugboat scheduling problem remains to be studied.
Fig 8. Iteration curves of the compared algorithms, where panels (a), (b), (c), and (d) correspond to instances with task-tugboat ratios of 20-32, 40-48, 60-72, and 80-96, respectively. The iteration curves show that both the HNGA and GWO algorithms suffer from premature convergence to local optima, and that although IPSO improves the inertia weight, its performance on moderate-scale problems is poor.

S1 Appendix. Port billing schemes. The S1 Appendix describes the rules and regulations related to port billing in China. (DOCX)

S1 Table. Tugboat scheduling simulation dataset. (XLSX)

Table 2. Parameters of the tugboat scheduling model. Sets: I, set of tugboats, i ∈ I = {1, 2, ..., n}; J, set of tasks, j ∈ J = {1, 2, ..., m}; K, set of bases, k ∈ K = {1, 2, ..., l}. Parameters: the location of tugboat i at base k after completing the previous task; st_j, time of arrival of the vessel for task j at the starting point; et_j, time of arrival of the vessel for task j at the ending point; v_i, empty cruising speed of tugboat i. Decision variables: x_ij ∈ {0, 1}, 1 if tugboat i is assigned to task j, otherwise 0; y^k_ji ∈ {0, 1}, 1 if tugboat i is at base k after completing task j, otherwise 0. https://doi.org/10.1371/journal.pone.0296966.t002

S2 Table. Data of Huanghua Port Comprehensive Port Area. The S2 Table contains three sub-tables representing the basic data of the port area, the ships on a particular day, and the pre-processed ship data. (XLSX)
Resilience and learning from insurance firms: Dataset on British long-term insurance market performance

Abstract

This data article reveals data on the performance of the UK long-term insurance market over more than a decade. The data were acquired from the Association of British Insurers (ABI) and contain important trading results (i.e. premiums generated) across different types of long-term insurance. They also reveal the outgoings (i.e. claims incurred) and the total individual business in force at year end, measured by the number of policies. The data relate specifically to life and annuities, individual pensions, occupational pensions, income protection and other insurance business. The dataset reveals important information on long-term insurance products within the UK insurance market and could serve as a high-quality resource for longitudinal analysis in the field. The data can thus provide crucial insights into the resilience of the UK long-term insurance market during the global financial crisis and can be compared across eras, sectors and countries to reveal hidden business resilience factors, competitiveness and survival strategies.
Specifications table

Subject area: Insurance, risk management, business continuity and resilience
More specific subject area: Insurance and risk management
Type of data: Tables
How data was acquired: Survey of all UK insurance companies by the Association of British Insurers (ABI)

Value of the data

The trading-result data for UK insurers allow comparison with insurance companies in other countries over the same period.
The data allow examination of insurance business resilience and business continuity during the global financial crisis.
The data allow linking of strategic innovation and insurance business performance using premiums generated, claims incurred and the number of insurance policies in force.
The data can reveal transferrable lessons from the insurance sector and increase awareness of resilience as practice.

Data

The dataset of this article contains contextual information about the United Kingdom (UK) long-term insurance market over more than a decade (2005-2016). The purpose of collecting the data was to examine the performance of the UK insurance market during the global financial crisis, with the aim of exploring business resilience factors within the insurance sector. Data were acquired by the ABI through a survey of all insurance companies in the UK. The data reveal the net written premiums by line of business, namely life and annuities, individual pensions, occupational pensions, income protection and other business (Tables 1-3); total claims incurred (Table 4); and the total individual business in force at year end, i.e. the number of policies (Table 5). The data allow for examination of insurance business resilience and business continuity during the global financial crisis.
Previous studies that used industry data for insurance and risk-management analysis can be found in Refs. [1-15]. Details of other research on the subject and the relevant literature can be found in the cited references. There are several lessons other businesses can learn from the insurance-sector dataset, especially in terms of business continuity, business resilience, strategic innovation, and sustainable survival during the global financial crisis. Moreover, the UK long-term insurance market data allow for comparisons with companies' performance in other sectors. The data have been simplified in this article to inform future research in insurance and risk management, and they are relevant for researchers interested in the interaction among insurance firms' resilience, competitiveness, innovation and sustainable growth.

Experimental design, materials, and methods

The data concerning the UK long-term insurance industry's performance between 2007 and 2016 were acquired through a survey by the Association of British Insurers (ABI). The data are classified into three main categories: income, outgoings, and in-force. Income relates to premiums generated by the long-term insurance industry across four classes of business: (a) life and annuities, (b) individual pensions, (c) occupational pensions, and (d) income protection and other business. The outgoings category contains data on total claims incurred, while in-force relates to the total number of policies at the year end. The data shown in Tables 1-5 include net written premiums (total, regular and single premiums), total claims incurred, and the total number of policies in force during the period 2006 to 2016. Fig. 1 summarizes the UK long-term insurance total premiums (£m).
The summaries of UK long-term insurance total claims incurred (£m) and total number of policies in force at year end (000's) are shown in Figs. 2 and 3, respectively. Table 6 explains the categories of the business lines and related long-term insurance products. For future research, the data included in this article can be used to explore the nature and mechanisms of insurance sector performance, the value of enterprise risk management, business resilience, competitiveness and innovation within companies.
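As a sketch of the kind of resilience analysis this dataset supports, the snippet below computes year-over-year change in net written premiums and flags contraction years. The premium figures are hypothetical placeholders, not values from Tables 1-3:

```python
# Sketch: year-over-year change in net written premiums as a simple
# resilience indicator. The figures below are hypothetical placeholders;
# in practice the values would come from Tables 1-3 of the ABI dataset.
premiums = {  # net written premiums, £m, by year (hypothetical)
    2006: 150000, 2007: 160000, 2008: 140000,
    2009: 135000, 2010: 145000,
}

def yoy_change(series):
    """Return {year: percentage change vs the previous year}."""
    years = sorted(series)
    return {
        y: 100.0 * (series[y] - series[prev]) / series[prev]
        for prev, y in zip(years, years[1:])
    }

changes = yoy_change(premiums)
# Years in which premiums contracted (a crude crisis-impact signal)
contractions = [y for y, pct in changes.items() if pct < 0]
print(contractions)  # -> [2008, 2009]
```

The same calculation applies to claims incurred (Table 4) and policies in force (Table 5) for cross-category comparison.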
2018-12-18T14:04:07.396Z
2018-11-24T00:00:00.000
{ "year": 2018, "sha1": "e00943446c6e89389f499b3b8004264af8280aba", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.dib.2018.11.098", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e00943446c6e89389f499b3b8004264af8280aba", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [ "Medicine", "Business" ] }
4469329
pes2o/s2orc
v3-fos-license
Linc00152 promotes Cancer Cell Proliferation and Invasion and Predicts Poor Prognosis in Lung Adenocarcinoma Background: The long non-coding RNA Linc00152 stimulates tumor progression in cancer. However, its clinical significance and biological functions in lung adenocarcinoma remain unknown. We evaluated the expression of Linc00152 in lung adenocarcinoma and its possible correlation with clinicopathologic features and patient survival to reveal its biological effects in cancer progression and prognosis. Methods: Total RNA extraction was performed on 110 pairs of lung adenocarcinoma and adjacent normal tissue samples, and RT-qPCR was then conducted. Chi-square tests were used to assess the correlation between pathological parameters and Linc00152 mRNA levels. Kaplan-Meier and Cox proportional hazards analyses were used to analyze the overall survival (OS) and disease-free survival (DFS) rates. We also examined the functional effects of overexpression and knockdown of Linc00152 on cell proliferation, invasion and migration in vitro, as well as in in vivo nude mouse xenograft and metastasis models. Results: The Linc00152 expression levels were higher in lung adenocarcinoma samples than in the adjacent normal tissues. Linc00152 expression levels tightly correlated with lymph node metastasis station, remote metastasis and TNM staging. The Kaplan-Meier analysis suggested that high Linc00152 expression was associated with significantly poorer OS and DFS rates, and a multivariate analysis revealed that Linc00152 was an independent risk factor for both DFS and OS. Overexpression of Linc00152 in lung cancer cells stimulated proliferation, invasion and migration. Knockdown of Linc00152 inhibited cell growth, invasion and migration. Finally, Linc00152 knockdown inhibited lung tumor growth and tumor metastasis in nude mouse models.
Conclusions: Our study suggests that Linc00152 independently predicts poor prognosis and promotes tumor progression in lung adenocarcinoma. Linc00152 should be considered as a potential molecular target in future cancer pharmacology.

Introduction
Lung cancer (LC) is the leading cause of cancer-related mortality in the world and the top cause of cancer mortality each year in China [1]. The most common histological type of non-small cell lung carcinoma (NSCLC) is lung adenocarcinoma, accounting for 70% of NSCLC and nearly half of all lung carcinomas [2]. Identifying genes that participate in LC biology is critical to improving clinical practice. Non-coding RNAs larger than 200 nucleotides (long non-coding RNAs, lncRNAs) are reported to play critical roles in somatic malignancies [3]. Dysregulation of lncRNA expression is associated with lung tumorigenesis, recurrence and metastasis [4,5]. Exploration of potential lncRNAs related to the pathogenesis of lung adenocarcinoma would be illuminating for future therapeutic strategies in lung cancer [6]. Linc00152 is a newly identified lncRNA and has been reported to be involved in tumor initiation and progression [7]. Our previous study revealed that Linc00152 promotes renal clear cell carcinoma progression and predicts poor prognosis [8]. Other studies have reported similar findings in gastric cancer [9] and hepatic carcinoma [10], suggesting that Linc00152 might serve as a powerful oncogenic lncRNA in somatic malignancies. However, the correlation between Linc00152 expression and tumorigenesis of lung cancer remains elusive. In this study, we aimed to detect the expression of Linc00152 in lung adenocarcinoma and its possible correlation with clinicopathologic features and prognosis, and to investigate the biological functions of Linc00152 in lung adenocarcinoma.
Patient population
A total of 110 patients who underwent surgical resection of primary lung adenocarcinoma at Fudan University Shanghai Cancer Center from 2009 to 2012 were retrospectively analyzed. No patients had received preoperative therapy. The resected cancer tissue and adjacent normal tissue (ANT) samples were frozen in liquid nitrogen immediately and stored at -80 °C until RNA extraction. The related data, including age, gender, tumor location, size, histologic stage, lymph node status, remote metastasis and recurrence, were collected from the medical record system. All patients were staged based on the TNM classification system. Patient follow-ups were performed every month during the first year after surgery and every 3 months thereafter until August 31, 2016. All patients had complete follow-up information. The Clinical Research Ethics Committee of Fudan University Shanghai Cancer Center approved the study. Written informed consent was obtained from all of the participants for the use of their tissues in the current study.

Cell lines and culture conditions
The human lung cancer cell lines H1299, H1975, A549 and H1650 were purchased from the Fudan University IBS cell bank, Shanghai, China. All cell lines were cultured in RPMI-1640 or high-glucose DMEM (Gibco, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (FBS) (Gibco, Carlsbad, CA, USA) and 1% penicillin and streptomycin (Sigma, St. Louis, MO, USA) at 37 °C in a humidified atmosphere with 5% CO2. Stably Linc00152-overexpressing H1299 and H1975 cells as well as stable Linc00152-knockdown A549 and H1650 cells were maintained with 1 µg/ml puromycin (Sigma, USA).

Plasmids and lentivirus constructions
The full-length Linc00152 sequence was amplified by PCR from cDNA of NIH: H293 cells and then subcloned into the pcDNA3.1(+) vector (Transheep, Shanghai, China). The pHBLV-IRES-puro lentivirus was constructed by Hansheng (Shanghai, China).
The information on shLinc00152 was listed in the previous article [8].

Proliferation assays
Cells were seeded in 96- or 6-well plates 24 h prior to the experiment. After infection with the indicated lentivirus, proliferation was measured using the CCK-8 (Dojindo, Japan) and colony-forming assays. The CCK-8 assay counts living cells at different time points. Approximately 3.5×10³ infected cells in 100 µl were incubated in triplicate in 96-well plates. The CCK-8 reagent (10 µl) was added to each well and incubated at 37 °C for 2 h every 24 h for 4 consecutive days. Then, we measured the optical density at 450 nm using an automatic microplate reader (Synergy4; BioTek, Winooski, VT, USA). For the colony-forming assay, 800 infected cells were seeded in 6-well plates and incubated in the corresponding culture medium for 14 days. The cells were then fixed with ethanol and stained with crystal violet solution.

Cell invasion detection
A transwell assay was used to assess cell invasion with a transwell system from Corning Co. Ltd., USA. Twenty-four hours after transfection, nearly 4.0×10⁴ cells in 100 µl of serum-free medium were added to each upper chamber. Medium containing 10% fetal bovine serum was applied to the lower chamber as a chemo-attractant. After a 24 h incubation at 37 °C, the non-invading cells remaining on the upper surface of the chamber were removed with cotton-tipped swabs. The cells that migrated and adhered to the lower surface of the filter were fixed with ethanol, stained with 0.5% crystal violet, photographed at 200×, and counted at 400× magnification (BX51, Olympus, Japan).

Cell migration detection
A wound-healing assay was used to assess cell motility. Transfected cells were plated at equal density in 6-well plates and grown to 90% confluence. Wounds were then scratched with a sterile pipette tip, the cells were washed twice with PBS and serum-free culture medium was added.
The wound-closing process was observed for 48 h, and images were captured every 24 h at 100× magnification (BX51, Olympus, Japan).

Total RNA isolation and RT-qPCR
Total RNA was extracted from the tumorous and adjacent normal tissues using Trizol (Invitrogen, Carlsbad, CA, USA) following the manufacturer's protocol. Reverse transcription (RT) and quantitative polymerase chain reaction (qPCR) kits were used to evaluate the Linc00152 expression levels in the tissue samples. The RT and qPCR reactions were conducted as previously described [11,12]. Relative gene expression was calculated using the comparative cycle threshold (2^−ΔΔCT) method. Actin was used as an endogenous control to normalize the data. The Linc00152 primers used were designed in a previous study [8]. The Linc00152 mRNA levels were defined as high if they were above the median value [8,12].

In vivo nude mouse xenograft and tumor metastasis models
The Shanghai Medical Experimental Animal Care Commission approved the animal experiments. Male BALB/c-nu mice (4-5 weeks of age, 18-20 g) were maintained under specific pathogen-free conditions at the Fudan University Experimental Animal Department. All experimental procedures involving animals were undertaken in accordance with the institute's guidelines. For the xenograft and tumor metastasis models, 1×10⁶ Lenti-shNC or Lenti-shLinc00152 stably infected A549 cells were injected subcutaneously or into the tail veins of mice (n=3 per group), respectively, and the mice were observed for 28 days. All the mice were then anaesthetized and sacrificed. The livers were excised, measured and fixed, followed by H&E sectioning.

Statistical analyses
Each experiment was repeated three times, and the data are presented as the mean with error bars indicating the standard deviation. All statistical analyses were performed using SPSS 20.0 (IBM, SPSS, Chicago, IL, USA).
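The comparative cycle threshold calculation described above can be sketched as follows. The Ct values are illustrative only, not data from the study; actin serves as the endogenous control, as in the text:

```python
# Minimal sketch of the comparative cycle threshold (2^-ddCt) method.
# Ct values below are illustrative, not measurements from the study.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of the target gene in a sample vs. a control,
    normalized to a reference gene (here, actin)."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # dCt in tumor
    d_ct_control = ct_target_control - ct_ref_control   # dCt in normal
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: Linc00152 amplifies 2 cycles earlier in tumor tissue
# (lower Ct = more transcript), actin unchanged -> 4-fold upregulation.
fold = relative_expression(24.0, 18.0, 26.0, 18.0)
print(fold)  # -> 4.0
```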
Student's t-test and one-way ANOVA were used for two-group and multiple-group comparisons, respectively. Correlations among the clinical parameters and Linc00152 expression levels were examined using the chi-square or Fisher's exact probability tests. DFS and OS curves were calculated by the Kaplan-Meier method and analyzed with the log-rank test. The DFS rates were calculated from the date of surgery to the date of disease progression (local and/or distal tumor recurrence) or to the date of death. The OS rate was defined as the length of time between diagnosis and death or last follow-up. Variables with p<0.05 in the univariate analyses were entered into the multivariate analysis on the basis of the Cox proportional hazards model. All p values were 2-sided, and statistical significance was established at p<0.05.

Linc00152 was upregulated in lung adenocarcinoma
We first detected the Linc00152 levels using RT-qPCR in 110 patients with lung adenocarcinoma to confirm whether its level would be higher in malignant lesions than in ANT. We observed significantly higher Linc00152 mRNA levels in the 110 malignant lesions than in the ANTs (p<0.05, Fig. 1A-B), suggesting that Linc00152 can be detected in lung adenocarcinoma and that its mRNA level is abnormally upregulated.

Linc00152 upregulation correlated with clinicopathologic characteristics and patient survival in lung adenocarcinoma
Second, we analyzed the correlation between Linc00152 levels and the clinicopathologic status of lung adenocarcinoma (Table 1). The Linc00152 mRNA levels were upregulated in tumors with a higher tumor burden, as defined by further lymph node metastasis station (p=0.005), remote metastasis (p=0.002) and more advanced TNM staging (p=0.004, Fig. 1C, Table 1). However, no significant correlation was found between the Linc00152 levels and age, tumor grade or T stage alone.
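The Kaplan-Meier (product-limit) estimator used for the DFS and OS curves can be sketched in a few lines. The follow-up times and event indicators below are toy values for illustration, not patient data from this cohort:

```python
# Minimal product-limit (Kaplan-Meier) estimator, as used for the
# DFS/OS curves in the text. Times/events below are illustrative only.

def kaplan_meier(times, events):
    """times: follow-up months; events: 1 = event (e.g. death/relapse),
    0 = censored. Returns [(time, survival probability)] at event times."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        n_at_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= n_at_t  # drop both events and censored at this time
        i += n_at_t
    return curve

# Toy cohort: events at months 10 and 30, censoring at 20 and 40.
curve = kaplan_meier([10, 20, 30, 40], [1, 0, 1, 0])
print(curve)  # -> [(10, 0.75), (30, 0.375)]
```

In practice, curves for the high- and low-expression groups would be compared with a log-rank test, as the study did in SPSS.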
Next, we conducted a Kaplan-Meier analysis using the log-rank test to explore the potential influence of Linc00152 expression on patient survival. Linc00152 mRNA levels were divided into low (n=55) and high (n=55) groups based on the median value [11,13,14]. The total median follow-up time for the patients who were still alive at the endpoint for analysis was 76 months. The median follow-up time for the patients who were still alive at the endpoint was 37.56 months in the group with high Linc00152 expression and 30.96 months in the group with low expression. The results showed that patients with high Linc00152 levels (n=55) had significantly shorter DFS (p=0.017; Fig. 1D) and OS rates (p=0.021; Fig. 1E) than those with low levels (n=55). A univariate Cox analysis showed that T stage, TNM stage, N station and Linc00152 levels correlated with the DFS and OS rates (Tables 2-3). A multivariate analysis using the Cox proportional hazards model demonstrated that the Linc00152 level was an independent risk factor for DFS (p<0.001, Table 2) and OS (Table 3). Notably, T stage, TNM staging and Linc00152 were independent risk factors for both DFS and OS (Tables 2-3). These results identified that upregulated Linc00152 levels in lung adenocarcinoma predicted poor survival and might serve as a prognostic biomarker for the disease.

Identification of the efficiency of Linc00152 overexpression and knockdown in lung cancer cells
To identify the biological functions of Linc00152 in lung cancer cells, we first detected the baseline mRNA levels of Linc00152 in 5 lung cancer cell lines by RT-qPCR. The RT-qPCR showed that Linc00152 expression was lower in the H1299 and H1975 cells but higher in the A549 and H1650 cells (Fig. 2A). Next, we transfected pcDNA3.1 or pcDNA3.1-Linc00152 constructs into H1299 and H1975 cells for 48 h. We also infected A549 and H1650 cells with shNC and shLinc00152 shRNAs for 48 h.
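The median-split dichotomization described above can be sketched as below. The expression values and sample labels are illustrative, not study data, and the tie-handling rule (values above the median labelled "high") is an assumption; the paper does not state how values equal to the median were assigned:

```python
# Sketch of the median-split used to dichotomize Linc00152 expression
# into high/low groups. Values and sample IDs below are illustrative.
import statistics

def median_split(expr):
    """Label each sample 'high' if its value exceeds the cohort median,
    otherwise 'low' (tie rule is an assumption, not from the paper)."""
    med = statistics.median(expr.values())
    return {s: ("high" if v > med else "low") for s, v in expr.items()}

expr = {"P1": 0.8, "P2": 2.3, "P3": 1.1, "P4": 4.0}
labels = median_split(expr)
print(labels)  # P2 and P4 -> "high"; P1 and P3 -> "low"
```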
The subsequent RT-qPCR showed that the Linc00152 mRNA levels were significantly increased by Linc00152 overexpression in the H1299 and H1975 cells (Fig. 2B) but decreased by shRNA in the A549 and H1650 cells (Fig. 2C).

Linc00152 promotes lung tumor cell proliferation
To observe the influence of Linc00152 on tumor cell proliferation, we conducted a panel of experiments. First, the CCK-8 assay results suggested that Linc00152 overexpression significantly increased the number of living H1299 and H1975 cells compared with the controls (p<0.01, Fig. 3A). In contrast, knockdown of Linc00152 significantly reduced the number of living A549 and H1650 cells compared with controls (p<0.01, Fig. 3B). Next, the colony-forming assay showed more colonies in the Linc00152-overexpressing H1299 and H1975 cells than in the controls (p<0.05, Fig. 3C) but fewer colonies in the Linc00152-knockdown A549 and H1650 cells compared with controls (p<0.05, Fig. 3D). Together, these results suggested that Linc00152 promoted lung cancer cell proliferation in vitro.

Linc00152 enhanced tumor cell migration and invasion in vitro
To determine whether Linc00152 promoted tumor cell invasion and migration in lung adenocarcinoma, we conducted transwell and wound-healing assays. Linc00152 overexpression in both H1299 and H1975 cells resulted in significantly increased cellular mobility (Fig. 4A, p<0.05). In contrast, knockdown of Linc00152 in A549 and H1650 cells reduced cell migration compared with controls (Fig. 4B, p<0.05). Consistently, Linc00152 overexpression in both H1299 and H1975 cells resulted in significantly increased cellular invasiveness (Fig. 4C, p<0.05). Meanwhile, knockdown of Linc00152 in A549 and H1650 cells reduced cell invasion compared with controls (Fig. 4D, p<0.05). Collectively, these data suggest that Linc00152 induces tumor cell migration and invasion in lung cancer cells.
Linc00152 promoted tumor growth and metastasis in nude mouse models
Finally, we investigated tumor growth and metastasis by knockdown of Linc00152 in nude mice. Knockdown of Linc00152 decelerated tumor growth in the nude mouse models compared with controls (Fig. 5A, p<0.05). The final tumor volumes and weights were significantly lower in the shLinc00152 group compared with controls (Fig. 5B, p<0.05). As for the metastasis models, although there were no conspicuously visible lesions in the lung and liver samplings in either group (Fig. 5C), the subsequent H&E slides showed that metastatic cancer lesions had formed microscopically in the controls but not in the Linc00152-knockdown group (Fig. 5D). Taken together, these data suggest that Linc00152 promotes tumor growth and metastasis in nude mouse models.

Discussion
Our study was the first to report the mRNA level changes and biological functions of Linc00152 in lung adenocarcinoma tissues and cell lines. We found that Linc00152 was abnormally highly expressed in lung tumor tissues compared with ANT. High Linc00152 expression was correlated with increased lung cancer malignancy and poorer prognoses. The in vitro experiments revealed that Linc00152 promoted lung cancer cell proliferation, migration and invasion. Moreover, we identified that knockdown of Linc00152 decelerated tumor growth and reduced the incidence of remote metastasis in the in vivo nude mouse models. Linc00152 is a newly identified lncRNA located on 2p11.2 [15,16]. It has been reported to exert oncogenic functions in gastric cancer, breast cancer and hepatic cancer [7,9,15,17]. We also previously found that Linc00152 predicts poor prognosis and facilitates tumor progression in renal clear cell carcinoma, suggesting that Linc00152 is a general oncogene in somatic malignancies [8].
In our study, we observed abnormally upregulated Linc00152 expression in lung cancer tissues, and a high level of Linc00152 predicted a poor prognosis in lung adenocarcinoma, reducing both DFS and OS. Moreover, the multivariate analysis found that Linc00152 was an independent risk factor, suggesting that Linc00152 might be a novel prognostic indicator of lung adenocarcinoma. Future studies could explore the serum level of Linc00152 in patients with lung adenocarcinoma to further establish its clinical value in the monitoring of tumor progression and prognosis. Previous studies have provided evidence that Linc00152 promotes tumor cell proliferation to prompt gastric cancer progression [18]. Consistently, we also found that overexpression of Linc00152 could stimulate tumor cell proliferation in vitro and facilitate tumor growth in nude mouse models in vivo. More importantly, the statistical analysis showed that Linc00152 was correlated with lymph node metastasis station, remote metastasis and TNM staging. Notably, the expression of Linc00152 was not correlated with T stage alone in our study, suggesting that the tight correlation between Linc00152 expression and TNM staging in lung adenocarcinoma might be mainly due to its contribution to tumor invasion and metastasis. These clinical analyses were also supported by the in vitro and in vivo experiments in our study, as overexpression of Linc00152 promoted tumor cell invasion and migration while knockdown of Linc00152 suppressed tumor cell invasion as well as metastasis in nude mouse models, suggesting that Linc00152 might contribute mainly to tumor metastasis in lung adenocarcinoma. Future studies could focus on the potential mechanisms by which Linc00152 drives tumor invasion and metastasis in lung cancer.

Conclusions
Our study suggests that Linc00152 predicts poor prognoses and promotes tumor progression in lung adenocarcinoma.
Linc00152 might be applied as a predictor of patient risk and prognosis.
2018-03-30T19:33:40.993Z
2017-07-05T00:00:00.000
{ "year": 2017, "sha1": "fb8aaaf7de44e458584ea924b43d8e5d2313efba", "oa_license": "CCBYNC", "oa_url": "https://www.jcancer.org/v08p2042.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fb8aaaf7de44e458584ea924b43d8e5d2313efba", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3041670
pes2o/s2orc
v3-fos-license
Important factors to consider when treating children with chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME): perspectives of health professionals from specialist services. Background Paediatric Chronic Fatigue Syndrome (CFS)/Myalgic Encephalomyelitis (ME) is relatively common and disabling. Improving treatment requires the development of Patient Reported Outcome Measures (PROMs) that enable clinicians and researchers to collect patient-centred evidence on outcomes. Health professionals are well placed to provide clinical insight into the condition, its treatment and possible outcomes. This study aimed to understand the perspectives of specialist paediatric CFS/ME health professionals and identify outcomes that are clinically important. Methods Focus groups and interviews were held with 15 health professionals involved in the care of children with CFS/ME from the four largest specialist paediatric CFS/ME services in the NHS in England. A range of clinical disciplines were included and experience in paediatric CFS/ME ranged from 2 months to 25 years. Ten participants (67%) were female. Focus groups and interviews were recorded, transcribed verbatim and data were analysed using thematic analysis. Results All health professionals identified the impact of CFS/ME across multiple aspects of health. Health professionals described four areas used to assess the severity of the illness and outcome in children: 1) symptoms; 2) physical function; 3) participation (school, activities and social life); and 4) emotional wellbeing. They also described the complexity of the condition, contextual factors and considerations for treatment to help children to cope with the condition. Conclusions Clinically important outcomes in paediatric CFS/ME involve a range of aspects of health. Health professionals consider increases in physical function, alongside maintained school functioning and wider participation, as important outcomes of treatment.
The results are similar to those described by children in a recent study and will be combined to develop a new child-specific PROM that has strong clinical utility and patient relevance.

Background
Chronic Fatigue Syndrome/Myalgic Encephalomyelitis (CFS/ME) is a complex condition that includes a range of symptoms such as debilitating fatigue, sleep disturbances, pain, cognitive dysfunction, headaches, dizziness and sweats. Symptoms vary between individuals and fluctuate in intensity and severity [1]. Paediatric CFS/ME is relatively common, with a prevalence of between 0.4% and 2.4% [2-5] in population studies and between 0.06% and 0.1% [6,7] in studies based in hospital settings. It is increasingly recognised as an important disabling condition [1,8,9]. Physical activity is usually limited and loss of schooling occurs, ranging from low attendance to extended periods of absence [10-12]. There is little research on effective treatments [13]. NICE recommends that children with CFS/ME should be offered either Cognitive Behavioural Therapy (CBT), Graded Exercise Therapy (GET) or activity management [1]. In a condition with no objective measures of outcome, it is important to collect subjective measures of health-related quality of life (HRQoL) directly from patients. Improving the evidence base requires the development of questionnaires or Patient Reported Outcome Measures (PROMs) that enable clinicians and researchers to collect patient-centred evidence on HRQoL outcomes for these treatment approaches. PROMs can also be used in shared decision-making [14-16] to prompt discussion between patients and health professionals, alert professionals to the patient's concerns about their HRQoL and clarify the patient's priorities for care [17]. The utility of PROMs depends on the relevance and acceptability of the PROM to both patients and clinicians. There is currently a lack of well-developed PROMs in CFS/ME [18,19].
We have recently described a child-derived conceptual model which describes the impact and outcomes of CFS/ME and will be the basis of a new child-specific CFS/ME PROM [20]. Clinicians need to be involved in the development of a PROM alongside children if it is to be used clinically and in trials [21-23]. Clinicians are aware of how outcomes manifest across a wide range of people and contexts [24]. Optimal content validity in PROM development includes both perspectives where appropriate, rather than prioritising one over the other [24]. Children with CFS/ME receive specialist care from a range of clinical disciplines: paediatricians, nurses, physiotherapists and psychologists who have accumulated knowledge and experience of the condition. However, little is known about how specialist CFS/ME clinicians view the condition. Professionals working with adults with CFS/ME have described the complexity of the condition and the onus on the person with CFS/ME to manage their illness [25]. A recent study found that professionals working with children with CFS/ME in general paediatric clinics 'work with uncertainty' and use previous experiences to inform the labels they give to children [26]. However, it did not explore management of the condition or outcomes of importance. This study aimed to understand the perspectives of specialist paediatric CFS/ME health professionals and identify the outcomes that are clinically important.

Methods
The study sought to explore the views and experiences of health professionals who work in specialist paediatric CFS/ME services in England and have regular contact with children with CFS/ME. Qualitative methods were used.

Setting
We purposefully sampled specialist CFS/ME paediatric services within the NHS. The four largest specialist services were recruited, based in the following UK regions: South West, London, East of England and the North East. The health professionals treated children (<19 years old).
Two of the clinicians only treated teenagers. The services provided activity management, CBT and GET as treatment approaches. Participants Within the sampled specialist CFS/ME settings, we planned to sample health professionals from a range of professional backgrounds, who had regular contact with children with CFS/ME. Lead clinicians from the specialist services were approached by email and asked to cascade the information to eligible multidisciplinary clinical team members. While we sought to be purposeful in participant selection, our sampling relied on access via a gatekeeper (the lead clinician) and therefore, our final sample was a convenience sample of those clinicians who responded. Data collection Data were collected using a mixture of focus groups, paired and individual interviews, as determined by participant preference and practicality. Where possible, focus groups were conducted at the health professionals' hospital premises to facilitate group interaction and breadth of discussion [27]. However, if focus groups were not practically feasible or at participants' request, interviews were conducted face to face in private rooms within the hospital or by telephone. Interviews were either individual (with a single participant) or paired (with two participants). Paired or dyadic interviews sit somewhere between a group and individual interview, allowing relatively detailed contributions from individual participants while also giving opportunities for comments from one participant to prompt responses from the other [28]. We acknowledge that these different forms of data collection can generate different forms of qualitative data. In the context of this study, each were appropriate for generating data relevant to our study aim and were offered flexibly as a practical strategy for facilitating participant engagement. One author (RP) facilitated the focus groups and conducted all interviews. 
A flexible topic guide was developed following discussions with all authors to enable participants to talk about their views and to raise issues of importance, but also to provide some consistency between the different groups and interviews (Appendix A). The guide covered: 1) the service context within which the professional(s) worked; 2) current use of PROMs and 3) views about the aspects of health that are important in their assessment of outcomes and shared decision-making with children with CFS/ME. The topic guide was revised as the study progressed to reflect emerging issues raised during the focus groups and interviews. Data analysis Analysis was led by RP. Preliminary analysis was conducted alongside data collection, to enable data gathered earlier on to inform subsequent data collection. Data were transcribed verbatim and checked for accuracy. Notes made after each focus group and interview were considered alongside relevant transcripts. Data were analysed thematically [29], incorporating a mixture of deductive and inductive coding, to enable development of both anticipated and emergent themes. Transcripts were read line-by-line for content and meaning, and provisional codes were applied to relevant sections of text. Coding was undertaken using the software package NVivo 10 [30]. This process led to the development of an initial coding framework. Other members of the research team (EC, AS, KH) read and independently coded a sub-set of the data to incorporate different perspectives and enhance interpretation. The coding framework was refined, with new codes added and existing codes merged or split. Through this process, broader categories and higher-level recurring themes were developed. Data within themes were examined for disconfirming and confirming perspectives. Finally, a narrative summary of the findings was written, integrating illustrative data and giving attention to the different perspectives represented. 
The research team included a range of disciplines, both clinical and methodological (qualitative methods, PROMs). In presenting findings, data have been anonymised to protect confidentiality.

Results
Eighteen health professionals were approached via email. Fifteen health professionals consented to participate. Three did not participate due to lack of time. A range of clinical disciplines were included (Table 1) and experience in paediatric CFS/ME ranged from 2 months to 25 years. Ten participants (67%) were female. Two focus groups (comprising 5 participants in one and 4 in the other), a paired interview, two individual face-to-face interviews and two telephone interviews took place. These were conducted at participants' place of work or over the telephone and lasted between 43 and 61 min (median 52 min). All health professionals talked about the complexity of paediatric CFS/ME and its impact across multiple aspects of health. They also described the impact of context and the strategies they adopted to help children to cope with the condition.

Important health outcomes
Health professionals described four areas that they currently use to assess the severity of illness and outcome in children with CFS/ME: 1) symptoms; 2) physical function; 3) participation (school, activities and social life); and 4) emotional wellbeing.

Symptoms
Health professionals described the wide range of symptoms that children with CFS/ME present with, which limit their ability to do things. "… it's a complex area. So there's the physical side of it, obviously, so the fatigue, the poor sleep, um, lots of children with stomach problems, um, lots of children with cognitive problems" (HP14) A reduction in symptoms, or a child's ability to manage them better, was felt to be an important outcome. Improving the quality of sleep was viewed by all participants as an essential outcome, as sleep impacts a child's ability to manage other areas. Other symptoms were not as prevalent (e.g.
loss of appetite), but were important to detect and treat. "…diminishing of symptoms, either a decrease in symptoms or ability to manage them better…" (HP10) "Anchoring their sleep is absolutely fundamental to getting better, so that's absolutely really important" (HP1) "…loss of appetite become a norm, but actually they haven't mentioned it because it's not predominant along with maybe fatigue, pain…" (HP12) Physical function Health professionals were additionally concerned about the impact of symptoms on a child's ability to do daily physical activities. They described different levels of physical function, with some children unable to do anything, walking on crutches or being physically exhausted after a full school day. In some cases, health professionals felt that reducing "payback", the increase in symptom severity that limits physical function after doing more activity, was important to improve and measure. "…they say they are feeling or unable to do anything, we tend to sort of, respond to that." (HP8) "We've had some who have not been able to walk, or walk on crutches." (HP13) "If they say, "Well, I still don't want to get out of bed, but it's finding it easier," that's quite a good mark of disease improvement. And the same is if you're doing a bit more physical activity without it being absolutely exhausting, and you're not getting payback the next day." (HP1) Participation: school, leisure activities and social life Social withdrawal was described as a key consequence of CFS/ME. Health professionals considered both school and social or leisure activities as important signs of participation. They described one overall health outcome domain of participation where school and leisure activities such as sport and hobbies provide children with the ability to participate in normal social structures, with a cumulative impact for improvement. "So your social participation with your friends might be when you go and play football."
(HP7) Most health professionals felt increasing school attendance suggested an improvement in health. However, some thought that school attendance could be a misleading outcome as it is often reduced during treatment and does not necessarily reflect a child's disability. Children with CFS/ME may put all their energy into school, leaving them exhausted. "…whether they're getting worse or not, or whether they're improving, is whether they are able to increase the amount they're going to school" (HP13) "…in a school situation, or a social situation, the young person will give 110% to appear normal, and will use up all their energy …But when they get home, …They're completely wiped out." (HP3) "…I perversely think sometimes less school attendance may be a sign of improvement." (HP12) Emotional wellbeing Emotional wellbeing emerged as a significant outcome to monitor and address, with some health professionals proposing that it is more important than school attendance. Health professionals described low mood and frustration in children, caused by being unable to do things. School stress and anxiety about returning to school were additional dimensions. They felt that this low mood, anxiety and stress can develop into formal anxiety and depression. "Emotional wellbeing is I think is what I'd say, in school or out of school." (HP7) "Yes; they're just frustrated that they can't do stuff, in the moment, that they want to do." (HP1) "some of them become clinically depressed as a result of their illness, and that needs to be recognised and treated." (HP15) Social anxiety was raised as a particular problem in children with this condition. Children were said to be anxious about being able to cope with their symptoms in social situations. Lack of understanding from others "It's often because they're worried, they have got so (..)
affected by CFS and also frightened by the symptoms they are experiencing,…so become frightened of the idea of trying to retain or re-attain normality" (HP8) Self esteem Health professionals described the 'loss' felt by children due to what they could no longer do as a consequence of having CFS/ME. Not being able to fulfil their goals had a role to play in how they saw themselves. Health professionals often provided examples of athletic children who were no longer able to perform their sports, or academic children who were unable to keep up with schooling, which impacted their self-esteem. "there are quite a few children we've had who been, um, swimmers and athletes, and that's been the thing that has given them their self-confidence and self-esteem, and their sense of, you know value, and they've had to stop doing that, and that's really difficult." (HP13) "…the ones particularly at school who feel like failures all the time, because they're not reaching their potential or their goals…" (HP1) Developmental differences in important outcomes The impact of the condition, and the outcomes health professionals identified as important to children, were felt to vary developmentally. Health professionals reported that younger children, compared to older children, were concerned with being 'normal' and 'back doing what their friends are doing'. "…it varies by developmental and time factors" (HP8) "They tend to want to be back at school seeing friends" (HP8) Teenagers face extra complexities such as the disruption of natural independence from parents as well as spending time with a boyfriend/girlfriend. Health professionals described more issues around school and academic achievement in adolescents; problems with memory and concentration impacted children's ability to keep up with school work. They talked about stress due to exams and falling behind. Health professionals felt there was higher susceptibility to anxiety and depression at this stage.
"…especially with teenagers, sort of, natural break where they're becoming more independent from their parents is being interrupted" (HP13) "We tend to get a lot with extra issues like anxiety and stress and worrying and low mood coming in then because they have stress for exams."(HP8) At a pivotal developmental stage where many young people are completing important exams, having CFS/ME can demotivate them and make them anxious about their future. Older children were said to have particular worries about the future and achieving their goals. "as they get older, there's so much more around, having to drop GCSEs or 'how am I going to manage my A Levels', or 'what am I going to do if I want to go to university, how is this going to impact on me' and those kind of real future worries" (HP11) Complexity, context and facilitators to coping Health professionals described the difficulty of treating children with CFS/ME due to the variability and fluctuation of the condition and environmental barriers preventing children from returning to normal. A number of strategies were employed to help children cope with the condition. Complexity and circularity in CFS/ME Health professionals talked about the complexity of the condition with symptoms varying between children. "I mean, in terms of sleep problems…either slept huge number of hours, which is like sort of 18 out of 24 h, or we've had one particular young person who slept two hours a night for about two years…So it's really variable, and that's really difficult to deal with." (HP14) Circularity was also described as a feature of the condition. Children experience a 'boom and bust' pattern with increased symptom severity (payback) following activity, which can lead to a downward spiral of reduced activity. They additionally described the circularity of low mood as a maintenance factor preventing improvement in CFS/ME.
Children can have low mood due to symptoms and a lack of participation and can then become more vigilant to symptoms. This can then lower their thresholds for participation, further lowering their mood in a negative cycle. "Um, trying to stop that boom and bust pattern. A lot of them are still really, really pushing themselves" (HP13) "Obviously, if they've got symptoms and that stops them participating, their emotional wellbeing deteriorates and that makes their symptoms worse, so you actually get a negative feedback loop." (HP15) "…when children are, or anyone is low or anxious, you become, quite um, hypervigillant to what is going on in your body" (HP11) Facilitators to help children cope Flexible strategies were required to treat the variable severity of symptoms and functional ability of individual children. "If the child is saying, "I'm in pain and actually it's only when I'm tired or at the end of the day," that's different to a child says, "Well I wake up and I am in pain all the time," and different strategies for management." (HP12) Considering the individual functional level and priorities of children when setting treatment goals was highlighted. Health professionals described how they could be working with an athletic child one minute and then a child who only wants to see their friends the next. Or a child that wants to return to extracurricular dance versus a child who just wants to be able to wash their hair. "…they might attend loads of school but all of their efforts are into attending school and they might see no friends and do nothing outside with the family." (HP11) "… she had an objective, and it was very small, but I think that's really important,… "I'd like to wash my hair," because getting upstairs where the shower is, such an effort." (HP13) Health professionals described how, in some cases, children appeared to be improving in terms of function whilst symptoms remained the same. 
Therefore, coping and the ability to do things despite symptoms were an important focus for health professionals. Health professionals concentrated on activity management and setting baselines to reduce boom and bust patterns and increase children's physical function and their ability to do things. They recognised the importance of school for academic, social and emotional development, and therefore the delicate balance between reducing school to improve functioning and maintaining a sense of normality and contact for children. "It's quite interesting because they will be coping and managing and doing more and more and more but still complaining that they feel absolutely exhausted. But the reality is they're not crashing anymore and their concentration is returning and all those other things." (HP10) "…if they are getting into school more and more but their wellness score is staying the same or going down, I would be concerned and talk with them about decreasing the amount of school they do." (HP6) "…you also look at what they're doing outside socially as well, so ask people about social contact, friends, do you go out, what do you do at weekends?" (HP12) All health professionals in this study worked in a multidisciplinary team and integrated psychological approaches in order to manage negative cycles of low mood. There was a need to give children back the value they had lost and rebuild their self-esteem, refocusing on realistic goals and focusing positively on what can be done. Several health professionals talked about providing hope for the future, particularly for older children, by looking at alternative pathways for them to achieve their goals. They often used the success of previous patients to encourage and support those at an earlier stage who don't see a future or even see themselves getting better.
"…CBT has a really important role to deal with some of the being negative, um, views about the illness, and about symptom management and moving people from 'I've got pain and it's difficult, therefore you know I can't do anything about it, therefore my pain is worse', trying to break those cycles as well." (HP12) "I'm much more interested in getting them to focus on what they can do." (HP15) "I think one of the most important things about our service is giving those young people hope" (HP13) Contextual factors Health professionals identified external environmental factors that can act as a barrier to children with CFS/ME returning to normality. These included understanding, attitudes and support from others (friends, school and family). Due to a lack of understanding from the community, children with CFS/ME can be faced with negative attitudes and comments. Many health professionals reported that some children can become socially withdrawn. "because actually, friends don't understand, and they don't want to be seen to be different. So actually avoid social contact, so isolation, um, becomes an issue." (HP12) All health professionals reported the profound impact CFS/ME has on the family. They felt this could affect the ability of families to follow clinical advice. CFS/ME has an impact on family activities and holidays, dependence on parents, parental tension regarding management of the illness, impact on siblings and the burden and cycles of guilt within the family. "Yeah, I've had a few, um, young people, um, expressing guilt that they feel that their illness is taking up a lot of their parents' time, and is affecting the family, so they can't go out and do the normal things that families would do, so going out on holidays or going out" (HP13) "Just sort of family dynamics, or family tension, or other sorts of external stresses, erm, that are causing them to struggle to follow our advice" (HP4) A lack of school support can hinder reintegration.
Some health professionals described how returning to school can be difficult for children due to the 'hustle and bustle'. Children can develop anxiety about their ability to cope. The level of support from schools affected children's desire to return to school. All health professionals reported schools to vary in experience of CFS/ME, attitudes and support. Schools often have a lack of understanding about the condition and unrealistic expectations of what the child can do. "…it's a big red flag for prognosis if you have an unsupportive school." (HP1) "schools really vary as to how sympathetic they are. Um, we have had a few young people who have developed sort of anxiety about going into school" (HP13) Working with schools Working with schools was a core part of treatment for all services involved in this study; educating schools, correcting unrealistic expectations and formulating reduced timetables. One health professional, who served 16-18 year olds only, actively encouraged children with CFS/ME to leave mainstream school for colleges, which he felt were much more able to handle people with disabilities. "…schools also have unrealistic expectations of what they can do as well. It's about educating them about what they can do" (HP15) "…we encourage them to leave mainstream school and go into colleges and further education, they are much better at handling people with needs" (HP15) Discussion Health professionals working in specialist paediatric CFS/ME services describe how CFS/ME affects multiple areas of health. The variability of the condition, shrouded by social misunderstanding, along with normal developmental challenges, needs to be taken into account when devising treatment programmes and understanding outcomes in children. Strengths and limitations To the authors' knowledge, this is the second qualitative study of health professionals working with children with CFS/ME but the first to focus on outcome.
We interviewed specialist clinicians from a range of disciplines and geographical locations. The findings can inform those setting up new specialist services in paediatric CFS/ME. The impact of CFS/ME across physical, social and psychological areas of health was consistent across focus groups and interviews. The use of focus groups allowed for more debate surrounding the most important outcomes and the expression of divergent and shared perspectives within a clinical team [27]. We explored interactions in the groups (conflicts and affirmations), and these were useful in confirming the problems with focusing on school attendance as the primary outcome domain, with some clinicians talking about the importance of emotional wellbeing in or out of school. It is possible that interviewing within a clinical team may have prevented junior members from speaking out, but we did not observe this happening. Where focus groups were not possible, individual interviews allowed more detailed examination of an individual's experience and perspective. Paired interviews sit between a group and individual interview, allowing some depth while still enabling expression of shared or divergent views between clinical colleagues [28]. The paired interview included two clinical psychologists who talked more in depth about the loss experienced by children and the importance of providing hope. However, participants were not rigid in their perspectives based on their profession; for example, a paediatrician talked about emotional wellbeing as the most important outcome. The findings from the different methods were similar in terms of content and range of issues discussed, even if there were variations in depth and richness of the accounts, and diversity in the roles and interactions of the researcher and participants [28,31]. Results in the context of previous literature This study extends the qualitative literature on clinical perspectives of treating paediatric CFS/ME.
A recent meta-synthesis of qualitative studies of health professionals treating adults with CFS/ME [32] identified several barriers to the diagnosis and management of CFS/ME, but did not explore important areas of health in treatment and assessment of outcome. A recent qualitative study on the perspectives of health professionals treating children with CFS/ME [26] explored how professionals conceptualise CFS/ME in diagnosis but did not explore what is important in treatment. In this study, clinicians discussed the need to balance treatment and outcomes across physical, social and psychological areas of a child's life. This is consistent with the literature on the impact of CFS/ME on children's function [10], schooling [4,12,33,34], social activities [35][36][37][38], family [39][40][41] and emotional function [37,38,[42][43][44][45][46][47]. The wide-ranging symptoms, physical function and individual and developmental differences between children were described by health professionals as important considerations for treatment. The complexity, co-morbid mood disorders and developmental issues add strength to an individualised approach. This is consistent with Knight et al. [48], who identified a high number of complex and interacting symptoms in children with CFS/ME and the need for multifaceted treatment. Health professionals in this study worked with children on shared realistic goals for treatment. Health professionals have successfully managed CFS/ME in adults by taking a collaborative approach to management [32]. Goal attainment has been found to be a significant predictor of quality of life improvement for people with CFS/ME [49]. Health professionals in this study described the difficulty of reducing a child's school attendance to improve function yet maintaining participation for the child.
School has been described as one of the most important outcomes for children with CFS/ME [20] and is a critical protective factor for many adverse outcomes among children and adolescents [50]. Schools were reported to vary in their understanding and support, acting as a barrier to children with CFS/ME returning to normality [51]. Working with schools was a key facilitator described by health professionals to reintegrate children and reduce social withdrawal. Health professionals in this study described negative cycles of low mood in children. This is consistent with the strongest finding in a recent review, with higher rates of psychiatric co-morbidity in children with CFS/ME compared to healthy controls or other illness groups [52]. CFS/ME families have been found to identify with the concept of vicious cycles arising as a consequence of the condition [39]. Patients are said to avoid activity due to the resulting symptoms, which then leads to more symptoms due to physical deconditioning [53,54]. In this study, setting baselines to reduce boom and bust, realistic individual goals and giving children hope for the future were key treatment priorities. Mackenzie and Wray [39] advocated the importance of reassuring patients and their carers that they will recover and go on to achieve academic qualifications. 'Symptoms', 'physical function', 'participation' and 'emotional wellbeing' described by health professionals in this study overlap with those talked about by children with CFS/ME in a recent study [20]. However, health professionals identified differences in important outcomes depending on age, such as relationships with boyfriend/girlfriend(s), independence from parents and emotional difficulties in adolescents. They also identified differences in outcomes depending on the severity of CFS/ME. This could be because the study with children interviewed participants who were slightly younger and mild to moderately affected.
This has implications for the development of a new PROM across a wide age range of children and adolescents. Differences in outcomes for those children who are severely affected by CFS/ME warrant further research. Conclusion Identifying ways to increase physical function yet decrease the impact of CFS/ME on school functioning and participation more widely is a priority for health professionals. Working with schools is key to this process. Health professionals use symptoms, physical function, social participation and emotional domains to clinically understand the impact of CFS/ME on children and changes from treatment. Most of these outcomes are not currently measured in PROMs used in paediatric CFS/ME [18] and should be included in a new paediatric CFS/ME PROM. This adds to previous research which captures the perspectives of children and their parents on the experience of symptoms, outcomes and disability in the construction of a new PROM for CFS/ME that has strong clinical utility and patient relevance.

How well do you think that the proposed model captures what really matters to children with CFS/ME?
How well does the model capture what you think is important and necessary for your clinical decision-making?
Do you think that there are any important outcomes that are missing from the model?
How do you think contextual factors impact the experience of CFS/ME?
To what extent would you use contextual factors to influence your clinical decision-making?
Part D: CFS/ME Paediatric Questionnaire.
In your opinion, how might fluctuating symptoms impact the completion of paediatric CFS/ME questionnaires?
When do you feel is the best time for children to complete a questionnaire?
What do you feel should be the recall period for paediatric questionnaires in CFS/ME?
How often do you feel questionnaires should be administered?
How long do you think questionnaires should be administered for in children with CFS/ME?
What do you think is the best method to administer questionnaires?
Prompts: During the interview the researcher may use prompts to explore certain aspects in more detail.
Pulmonary Adenoid Cystic Carcinoma Presenting Late With Intrapericardial Extension: Case Report Adenoid cystic carcinoma, also known as cylindroma, is one of the rare and unexplored clinical presentations of lung cancer, for which existing knowledge is scarce. This case report discusses a presentation of this tumor in the right lung, which subsequently extended to the left atrium through the right superior pulmonary vein. The extension of this rare tumor into the left atrium makes this case both uniquely distinctive and clinically relevant. The management strategy opted for in this case was a right posterolateral thoracotomy and right pneumonectomy with partial resection of the left atrium. The desired outcome of this report is to shed light on the unusual clinical pathophysiology, register its atypical extensions, and guide surgeons who may encounter this manifestation in the future. Introduction Lung cancers are known to have high mortality and morbidity rates globally. 1 They are broadly classified into small cell carcinoma (15% of the cases), which presents with neuroendocrine attributes and a high malignancy index, and non-small cell lung carcinoma (NSCLC; 85% of the cases), which has further pathologic subdivisions such as adenocarcinoma and squamous cell carcinoma. 2 Adenoid cystic carcinoma (ACC), previously known as cylindroma, is a subtype of adenocarcinoma of the lung, affecting the lungs and associated upper airways. This rare salivary gland-type malignant neoplasm accounts for 0.04% to 0.2% of all primary lung tumors, making it a diagnostic and therapeutic challenge. 3 Extension into the left atrium is even rarer. When this type of cancer invades the left atrium and right superior pulmonary vein, it is classified as locally advanced lung cancer. 4,5 To the best of the authors' knowledge, this is the first case to report the presentation of ACC extending into the left atrium from a research-deficient, low-middle-income country.
Because of the rarity of its presentation, a holistic understanding of the clinical and pathophysiologic features and treatment remains limited. Case Presentation A 33-year-old man with no known comorbidities presented to the hospital with hemoptysis, preceded by chronic cough and intermittent chest pain for 4 years. Past medical history revealed that he had been diagnosed with pulmonary tuberculosis 7 years ago, for which he underwent successful treatment. Physical examination was unremarkable. Informed consent was taken from the patient to publish these findings in the literature. An urgent computed tomography scan was done, which revealed a well-defined lobulated soft tissue density, with internal hypodensity indicating necrosis and coarse calcific specks involving the right hilar region extending up to the subcarinal location. It measured approximately 5.8 cm by 4.7 cm by 4.2 cm (craniocaudal × transverse × anteroposterior) in dimension and involved the lung hilum at the level of the third thoracic vertebra (Fig. 1). There was considerable compression of the right mainstem bronchus and bronchus intermedius and loss of fat planes with the right pulmonary artery causing splaying of its branches. The immunohistochemical examination of bronchioloalveolar lavage revealed positive cytokeratin AE1/AE3 and focally positive p63. The endobronchial biopsy report confirmed the diagnosis of ACC of the right lung. Surgical intervention was planned as the primary mode of treatment. The tumor site was approached through a right posterolateral thoracotomy and the right pleural cavity was entered. Exploration revealed that the tumor was densely adherent to the left atrium. On retrograde dissection, the following structures were sequentially divided: (1) inferior pulmonary vein, (2) the right mainstem bronchus, (3) the truncus anterior, and (4) the pulmonary artery. On entering the pericardium, the right and left atria and the pulmonary artery in that area were identified.
The superior pulmonary vein was involved with the tumor, so the intrapericardial dissection of the left atrium was done. The left atrium was occluded with a partial occlusion clamp just distal to the entry of the superior pulmonary vein into the left atrium. The left atrium was cut and sewn in two layers with 4-0 Prolene. The tumor was removed with the entire right lung (Fig. 2). This was followed subsequently by lymph node dissection. Histopathologic analysis of the excised tissue revealed lung tissue exhibiting a neoplastic lesion arranged in nodules and aggregates, and some areas revealed a solid pattern. The nodules and aggregates of the tumor revealed a predominantly cribriform pattern with cystic spaces containing basophilic material. These were lined by low cuboidal cells and surrounded by a myoepithelial layer. The cells contained eosinophilic cytoplasm with moderate to marked pleomorphic nuclei. Scattered mitotic activity was appreciated, and extensive areas of perineural invasion were identified. The tumor was 1.5 cm away from the bronchial resection margin and less than 0.1 cm away from the outer painted pleural surface. A single lymph node was identified, which was tumor-free and was revealed to have anthracotic pigment. Sections examined from the proximal vascular margin revealed fibrocollagenous tissue with vessels. There was no clear evidence of malignancy. Furthermore, the TNM staging described the tumor measuring 4.5 cm by 3.5 cm by 3 cm to be noninvasive (T2b) with no lymph node involvement (N = 0) and no metastasis (M = 0). There were no intraoperative complications and the postoperative 1-year duration was unremarkable. No radiotherapy or chemotherapy was given after the surgery. Discussion Primary ACC of the lung is a rare salivary gland-type malignant neoplasm that is distributed along the submucosa of the major airways.
3 ACC of the lungs arises from the tracheobronchial glands distributed in the airway submucosa, with a histological appearance similar to ACC arising in the salivary glands. 5 ACC also occurs at other sites such as the breast, skin, uterine cervix, upper aerodigestive tract, and lung. The literature suggests that its extension to the left atrium is very rare. This type of tumor is seen in both men and women with a ratio of one-to-one, usually occurring in young adults, and is found more often in nonsmokers. 3 Because it often presents with cough followed by hemoptysis and dyspnea, and in patients who have a positive history of tuberculosis, ACC is often misdiagnosed as asthma or bronchitis. 2 Often, it may also remain asymptomatic until detected by imaging for other purposes such as routine evaluations. 2 The literature suggests that lymphatic metastases are relatively uncommon; however, owing to the extensive spread along the major axis of the trachea at the time of diagnosis, residual tumors at the resection margin are not rare. 3,4 The role of radiotherapy is not well defined in the literature and is only reserved for patients with either incomplete resection margins or those with unresectable disease. 5 This type of tumor also generally does not respond to chemotherapy but may exhibit a partial response to targeted novel therapies. 3 In some case reports, several chemotherapeutic agents including seven cycles of weekly paclitaxel combined with cisplatin, two cycles of docetaxel, and, subsequently, gefitinib were tried, but there was no response. Surgical resection seems to be the mainstay of treatment in these tumors. 5 The strength of this report is the unique nature of the clinical pathophysiology and the atypical extensions. The scarcity of available literature is a limitation of the study, but it also makes this report a valuable resource for surgeons around the globe.
Conclusion ACC presents a unique diagnostic challenge to clinicians and surgeons, owing to both a dearth of literature on it and the similarity of symptoms to other more known and understood lung pathologies. Once suspected, computed tomography scan followed by biopsy and histopathological analysis are vital next steps. Furthermore, once diagnosed, it is extremely important to resect all extensions of the primary tumor, as with the left atrium in this report.
The Intracellular Distribution of the Small GTPase Rho5 and Its Dimeric Guanidine Nucleotide Exchange Factor Dck1/Lmo1 Determine Their Function in Oxidative Stress Response Rho5, the yeast homolog of human Rac1, is a small GTPase which regulates the cell response to nutrient and oxidative stress by inducing mitophagy and apoptosis. It is activated by a dimeric GEF composed of the subunits Dck1 and Lmo1. Upon stress, all three proteins rapidly translocate from the cell surface (Rho5) and a diffuse cytosolic distribution (Dck1 and Lmo1) to mitochondria, with translocation of the GTPase depending on both GEF subunits. We here show that the latter associate with mitochondria independent from each other and from Rho5. The trapping of Dck1-GFP or GFP-Lmo1 to the mitochondrial surface by a specific nanobody fused to the transmembrane domain (TMD) of Fis1 results in a loss of function, mimicking the phenotypes of the respective gene deletions, dck1 or lmo1. Direct fusion of Rho5 to Fis1TMD, i.e., permanent attachment to the mitochondria, also mimics the phenotypes of an rho5 deletion. Together, these data suggest that the GTPase needs to be activated at the plasma membrane prior to its translocation in order to fulfill its function in the oxidative stress response. This notion is substantiated by the observation that strains carrying fusions of Rho5 to the cell wall integrity sensor Mid2, confining the GTPase to the plasma membrane, retained their function. We propose a model in which Rho5 activated at the plasma membrane represses the oxidative stress response under standard growth conditions. This repression is relieved upon its GEF-mediated translocation to mitochondria, thus triggering mitophagy and apoptosis. Introduction Yeast cells have evolved to respond to drastic changes in their environment by appropriately changing their metabolism and/or gene expression. 
For this purpose, different signalling cascades have been installed which frequently involve the use of small GTPases as molecular switches. A subfamily of the latter is comprised of the Rho-type GTPases (for "Ras homology"), six of which have been described in S. cerevisiae, namely Rho1 to Rho5, in addition to Cdc42 [1,2]. They mediate diverse physiological adaptations, ranging from cell wall integrity (CWI) signalling (Rho1 [3]) to cell polarity establishment in budding and cell fusion during mating (Cdc42 [4]). The switch is turned on by binding of a GTP molecule, and off by its hydrolysis to GDP [5][6][7]. The interconversion between these states is aided by the action of guanine nucleotide exchange factors (GEFs), which facilitate the exit of GDP and its substitution by GTP due to the higher intracellular concentration of the latter, and GTPase activating proteins (GAPs), which stimulate the intrinsic hydrolytic activity, respectively [8,9]. After their lipid modification at a C-terminally conserved cysteine residue (CAAX-box), the GTPases associate with cellular membranes, where they exert their physiological functions [10]. Cytosolic trafficking in the inactive state can be aided by masking the lipid anchor through association with a GDP-dissociation inhibitor protein (GDI, Ref. [11]). Rho5 was the last member of the subfamily described in S. cerevisiae, then as a negative regulator of CWI signalling [12]. Later studies implicated its function in a variety of other signalling processes, suggesting that it may be a central hub in their coordination (reviewed in [13]). Briefly, the synthetic lethality of a hyperactive RHO5 allele with a ste50 deletion provided evidence that the GTPase works as a negative regulator in the high osmolarity glycerol (HOG) pathway, which could be viewed as having an opposite function to CWI signalling, as it responds to high rather than low medium osmolarity [14].
Moreover, the observed synthetic lethalities of rho5 ras2 and rho5 sch9 deletions suggest a repressing function of the GTPase in nutrient starvation [15]. In addition to its proposed role as a repressor of these signalling pathways, Rho5 was also found to regulate the response to oxidative stress, as judged from the hyper-resistance of its deletion mutants to hydrogen peroxide [16]. In fact, the GTPase was found to rapidly translocate from the plasma membrane to mitochondria upon exposure of the cells to this agent, a process dependent on the presence of its dimeric GEF (guanine nucleotide exchange factor), which is composed of two subunits encoded by DCK1 and LMO1 (Figure 1; ref. [17]). Figure 1. Proposed role of the intracellular distribution of Rho5 and its dimeric GEF Dck1/Lmo1 in the oxidative stress response. Rho5 is inactive in its GDP-bound and active in its GTP-bound state, which are interconverted by the help of a GTPase activating protein (GAP, Rgd2) and the dimeric GDP/GTP exchange factor (GEF, Dck1/Lmo1). Negative regulation (lines with bars) of the cell wall integrity pathway (CWI) and the high osmolarity glycerol pathway (HOG) is indicated for the active Rho5 associated with the plasma membrane by its lipid anchor (wavy line). A possible indirect effect of the CWI pathway on mitophagy and apoptosis (see discussion section) is symbolized by the dashed arrow. Dotted arrows show proposed routes of intracellular trafficking of the GTPase and its GEF subunits under different physiological conditions. Fusion constructs to confine Rho5 to either the plasma membrane through the CWI sensor Mid2, or to the mitochondrial outer membrane through the transmembrane domain (TMD) of Fis1, either directly or fused to a GFP nanobody (GFP-nb), constructed in this work are also indicated. Phenotypes regarding oxidative stress response are highlighted in blue print. In that work, a lack of either Rho5 or one of the GEF subunits was shown to drastically reduce mitophagy and apoptosis in S. cerevisiae, indicating that active Rho5 triggers these processes upon oxidative stress [17].
As the downstream MAP kinases in both the CWI and the HOG signalling pathway were shown to be required for mitophagy [18,19], it seems plausible that Rho5 at least partially acts by modulating their activity. A more direct role in mitophagy was also suggested from high-throughput screens, which identified Atg21 and Msp1, components of the mitophagy pathway and the outer mitochondrial membrane, respectively, as interaction partners of Rho5 [20]. Studies on the domain structure of Rho5 showed an unusual extension of 98 amino acids in the C-terminal half, which precedes the polybasic region (PBR) and the CAAX-box common in Rho-type GTPases [21]. All three regions were required for proper stress-induced translocation of yeast Rho5 [22]. Moreover, the trapping of GFP-tagged Rho5 to the mitochondrial surface in vivo by a specific nanobody rendered it non-functional with regard to oxidative stress, with phenotypes similar to the rho5 deletion. It was thus proposed that the GTPase needs to be activated at the plasma membrane in order to fulfill its role in mitophagy and apoptosis [22]. Translocation of homologs of Rho5 and the two GEF subunits to mitochondria was also observed under oxidative and nutrient stress in the more respiratory yeast K. lactis, in which Klrho5 deletions displayed pronounced morphological defects, in contrast to S. cerevisiae [23]. This is reminiscent of its human homolog Rac1, which, amongst a variety of cellular processes, regulates the dynamics of the actin cytoskeleton [24]. Like Rho5, Rac1 can be activated by the GEF DOCK180 in a complex with the ELMO protein, which are homologs of yeast Dck1 and Lmo1, respectively [25,26]. Malfunctions of Rac1 have been associated with serious diseases, including cancer and diabetes [27][28][29][30]. Consequently, yeast expression systems may provide excellent tools to study the molecular functions of the associated RAC1 alleles [23].
How mitochondrial translocation of yeast Rho5 is achieved, and, more importantly, how the physiological functions of Rho5 are related to its subcellular distribution, remains to be elucidated. In this work, we investigated whether the GEF subunits depend on each other and on Rho5 to translocate under oxidative stress. To gain some insight into the in vivo importance of this trafficking, the three proteins were confined to different subcellular microdomains, i.e., the plasma membrane and the mitochondrial surface. Phenotypic analyses suggest that Rho5 exerts important functions in oxidative stress response when still associated with the plasma membrane, rather than exclusively after its translocation to mitochondria. The GEF Subunits Dck1 and Lmo1 Translocate to Mitochondria Independent from Each Other or Rho5 While previous work revealed that oxidative stress-induced translocation of GFP-Rho5 to mitochondria depends on both subunits of the dimeric GEF Dck1/Lmo1 [17], whether the GEF subunits depend on the GTPase or on each other was not addressed. We thus checked the intracellular distribution of Dck1-GFP and Lmo1-GFP fusion proteins before and after the addition of hydrogen peroxide in different mutant backgrounds. Dck1- and Lmo1-GFP both displayed wild-type distributions when tested in a rho5 deletion background under standard growth conditions and upon oxidative stress (Figure 2a). Figure 2. Dck1 and Lmo1 translocate independent from each other. Strains expressing the GFP fusions of the indicated proteins from their native genomic loci were used to introduce a mitochondrial marker tagged with mCherry on a CEN/ARS plasmid (pJJH1408). Transformants were grown in selective minimal media. Representative images for each strain and condition are shown. When oxidative stress was applied by the addition of 4.4 mM hydrogen peroxide (+H2O2), fluorescence images were taken within less than 15 min with the indicated channels (GFP/mCherry), or using differential interference phase contrast (DIC).
The percentage of cells displaying colocalization of the two fluorescence markers at mitochondria (mit) is given together with the number of total cells inspected (n). The size bars in the DIC images correspond to 5 µm, which is applicable to all images in the same panel. The strains employed were Dck1-GFP rho5∆ = HCSO20; Lmo1-GFP rho5∆ = HCSO25; Dck1-GFP lmo1∆ = HCSO26; and Lmo1-GFP dck1∆ = HCSO33. It should be noted that the C-terminal Lmo1-GFP fusion did not complement the phenotypic defects of a lmo1 deletion, indicating that the tagged protein is not functional in vivo. However, a functional N-terminal GFP-Lmo1 fusion employed for the physiological studies in subsequent experiments was checked in the dck1 and rho5 deletion strains and confirmed the independent translocation. Interestingly, they were recruited to mitochondria independent from each other, as deletion of the gene encoding the other subunit (LMO1 or DCK1, respectively) had no effect on stress-induced translocation ( Figure 2b). Thus, stress signals presumably act on both Dck1 and Lmo1 to provoke their rapid translocation to mitochondria, for which neither the formation of the dimeric GEF, nor that of a trimeric complex with Rho5 is required. Trapping of Dck1 or Lmo1 to Mitochondria Impedes Rho5 Function in the Oxidative Stress Response Given their independent translocation, we wondered whether the GEF subunits activate Rho5 at the mitochondrial surface to trigger the cell's response to oxidative stress. To investigate this hypothesis, Dck1 and Lmo1 were trapped to the mitochondria, as previously exercised with GFP-Rho5, by using a "GFP binder" [22]. Therefore, the genes encoding the GEF subunits were tagged with GFP at their native loci, and combined by crossing and tetrad analyses with a strain carrying a specific GFP nanobody fused to the transmembrane domain of the mitochondrial outer membrane Fis1 protein. 
Since Lmo1-GFP constructs employed so far proved to be non-functional in subsequent physiological tests, i.e., they displayed a similar hyper-resistance towards hydrogen peroxide as the complete lmo1 deletion (data not shown), the C-terminal tag was substituted by GFP attached to the N-terminal end. Neither Dck1-GFP nor GFP-Lmo1 alone or in combination with the GFP-binder affected growth of the respective strains under standard conditions (Figure 3a). Importantly, strains carrying only Dck1-GFP or GFP-Lmo1 were as sensitive to hydrogen peroxide as the wild-type, showing that both were fully functional. However, if efficiently trapped to the mitochondrial surface by the GFP-binder, as demonstrated by fluorescence microscopy (Figure 3b), a marked hyper-resistance of the respective strains was observed, mimicking the complete deletion in the case of GFP-Lmo1, and approaching the growth of the deletion strain for Dck1-GFP (Figure 3a). Consistent with the previous data on the trapping of the GFP-tagged GTPase itself, this suggested that in order to fulfill its physiological role in oxidative stress response, Rho5 has to be activated prior to reaching the mitochondria.
Attachment of Rho5 or its GEF Subunits to the Plasma Membrane Does Not Impair Its Repressor Function in Oxidative Stress Response In light of the results described above, we attempted to trap the GTPase in a similar approach by appropriate constructs with the GFP nanobody attached to different plasma membrane proteins. However, this did not yield satisfactory results, as fluorescence microscopy showed that a substantial amount of GFP-Rho5 still translocated to mitochondria upon application of oxidative stress (unpublished results from our laboratory). Therefore, the RHO5 coding sequence was directly fused to the 3′ end of the gene encoding the cell wall integrity sensor Mid2 at its native locus. The sensor was previously shown to reside in the plasma membrane in specific microdomains with a fairly uniform distribution, its C-terminus is exposed to the cytosol, and it is not subject to rapid endocytosis [31,32]. Strains carrying the MID2-RHO5 fusion in conjunction with a rho5 deletion grew like wild-type under normal conditions and in the presence of hydrogen peroxide (Figure 4). This suggested that Rho5 at the plasma membrane suffices to trigger the appropriate response to oxidative stress under these conditions, i.e., that translocation of Rho5 and its GEF to mitochondria is not required. In contrast to previous findings on the pronounced hyper-sensitivity of strains with an activated RHO5 G12V allele towards oxidative stress [22], no pronounced difference as compared to the wild-type was found when the allele was fused with the MID2 coding sequence. Vice versa, a direct fusion of Rho5 with the transmembrane domain of Fis1, which confines the GTPase to the outer mitochondrial membrane, rendered the strains as hyper-resistant towards hydrogen peroxide as a rho5 deletion (Figure 4). This indicates a lack of GTPase function in the fusion protein, in accordance with previous data from trapping of Rho5 to mitochondria via a GFP nanobody [22]. Discussion Rho5 in S.
cerevisiae has been implicated, amongst other functions, in linking the oxidative stress response to mitochondrial turnover and apoptosis [16]. This notion is consistent with the rapid translocation of Rho5 and that of its dimeric GEF Dck1/Lmo1 to mitochondria upon exposure to hydrogen peroxide [17]. We here provided evidence that the translocation occurs for each individual GEF subunit, independent from each other and Rho5. Previous models proposed that the trimeric complex between the GTPase and its GEF could move as an entity from the plasma membrane to the mitochondrial surface [17,22], analogous to the complex suggested to be formed by the human homolog Rac1 with the DOCK180/ELMO [25]. The independent translocation of the GEF subunits observed here points to an alternative mechanism in which the GEF dimer could first be assembled at the mitochondrial surface and only then recruit Rho5. Such a sequence has been suggested for the Rab-GTPase Ypt7 involved in vesicle transport, which is recruited to and activated by its GEF associated with the target membrane [33]. In this context, human Rac1 also associates with various subcellular compartments, frequently mediated by its interaction with a plethora of specific GEFs (reviewed in [34]). It would therefore be interesting to determine the exact timing of the appearance of Dck1, Lmo1, and Rho5 at yeast mitochondria upon exposure to oxidative stress, or if they indeed travel together as a trimeric complex. Given that the translocation occurs within a matter of seconds [17], this may be challenging. More importantly, we here addressed the physiological function of the intracellular distribution of the dimeric GEF by trapping either subunit to the mitochondrial surface through a GFP-tag and the corresponding nanobody. This rendered the strains hyper-resistant to oxidative stress, mimicking the phenotype of dck1 and lmo1 deletions.
Similarly, confining Rho5 to the outer mitochondrial membrane by fusion with the transmembrane domain of Fis1 left the GTPase non-functional, showing the same hyper-resistance as a complete rho5 deletion. These findings are consistent with previous data obtained from the nanobody-mediated trapping of GFP-Rho5 to mitochondria and support the notion that the GTPase needs to be activated at the plasma membrane prior to its translocation [22]. In contrast, the fusion of Rho5 to the plasma membrane sensor Mid2 did not affect the cell response to oxidative stress under the conditions tested herein. It should be noted that Mid2 is known to accumulate in membrane microdomains, which would cause an increased local concentration of the fused GTPase [31], in analogy to its human homolog, where local clustering of Rac1 was proposed as a mechanism of activation [35]. How is the localization and activity of Rho5 related to the oxidative stress response? The activated plasma membrane-bound Rho5 is believed to repress various signalling pathways, including CWI [12] and HOG [16]. In this context, upon exposure to hydrogen peroxide the CWI pathway was shown to transmit a signal generated by its sensors to the downstream MAP kinase Slt2 [36]. The kinase phosphorylates Cnc1, a cyclin regulating the cyclin-dependent kinase Cdk8, which triggers the nuclear exit of Cnc1. The latter apparently serves two functions: (i) it represses stress responsive genes in the nucleus in association with the mediator complex; and (ii) upon its expulsion into the cytosol, it associates with mitochondria and leads to their fission, mitophagy, and cell death [37]. However, it should be noted that the latter effect was observed under conditions of nitrogen starvation rather than with oxidative stress. Nevertheless, a lack of active Rho5 at the plasma membrane would be expected to increase CWI signalling, resulting in phosphorylation and nuclear exit of Cnc1, which could trigger mitophagy and apoptosis. 
How this could be related to a recent report on the involvement of the CWI sensor Mtl1 in the induction of autophagy and mitophagy upon glucose starvation during diauxic shift [38] also remains to be investigated. In addition to these rather indirect actions of Rho5, we believe that its direct association with mitochondria also triggers oxidative stress-induced mitophagy and apoptosis. This requires activation of the GTPase prior to its translocation, as indicated by the null phenotypes of trapping either of the GEF subunits or Rho5 itself to the mitochondrial surface. On the other hand, a similar trapping of the activated GFP-Rho5 G12V variant restored sensitivity towards hydrogen peroxide to wild-type levels, whereas the untagged hyperactive GTPase was hyper-sensitive [22]. The fact that confining the hyper-active variant to the plasma membrane with the Mid2-Rho5 G12V fusion did not markedly increase sensitivity to oxidative stress as compared to the wild-type suggests that the intracellular trafficking of Rho5 also plays an important role. Strains and Growth Conditions Yeast strains employed and their genotypes are listed in Table 1. All strains derived from the HD56-5A and its isogenic diploid DHD5, which are closely related to the CEN.PK background [39,40]. The yeast cell culture and genetic techniques followed standard procedures [41]. Rich medium (YEPD) contained 1% yeast extract, 2% Bacto peptone (Difco Laboratories Inc., Detroit, MI, USA), and 2% glucose. Synthetic media were prepared as described in [41], with the omission of amino acids or bases as required for selection of plasmids or deletion markers and 2% glucose (SCD). For selection of the kanMX marker, 100 mg/L of G418 were added to the medium after sterilization. Growth curves were obtained in 100 µL cultures in 96 well plates, in SCD with or without hydrogen peroxide as indicated, and recorded in a Varioscan Lux plate reader (Thermo Scientific, Bremen, Germany) as described previously [22]. 
For tetrad analyses, diploid strains were grown to stationary phase in liquid YEPD, collected by centrifugation and dropped onto 1% potassium acetate plates for sporulation at 30 °C. After two to three days and microscopic inspection for ascus formation, a sample of each culture was resuspended in 100 µL of sterile water and 4 µL of Zymolyase 100T (10 mg/mL) was added, followed by 7-10 min incubation at room temperature. 15 µL of the suspension was streaked out onto a YEPD plate and spores were segregated using a Singer MSM400 micromanipulator (Singer Instruments, Somerset, UK). Plates were incubated for 4 days at 30 °C, and colony formation was documented by scanning. Scanned images were adjusted for brightness and contrast using Corel Photo Paint with the same settings for the entire plate prior to compilation of sections into the final figures. For manipulations in E. coli, strain DH5α was employed with standard media as described previously [32]. Construction of Plasmids, Deletion Mutants and Gene Tagging Wild-type genes from S. cerevisiae were obtained by PCR using appropriate oligonucleotides with restriction sites and genomic DNA of strain DHD5 or its derivatives as templates. Deletion strains and gene fusions with specific tags were obtained by one-step gene replacement techniques [42] by adding 45-50 bp of flanking sequences homologous to the target gene with appropriate oligonucleotides used for PCR amplification. For selection of in vivo recombinants, either the kanMX or the SkHIS3 cassette from the Longtine collection [43], SpHIS5 from pUG27, or KlLEU2 from pUG73 were employed [44]. GFP tags were amplified from pJJH1619 (GFP-kanMX) or pJJH1620 (GFP-SkHIS3); mCherry tags from pJJH1524 (mCherry-SkHIS3), described in [23].
As a mitochondrial marker for fluorescence microscopy, either a genomic IDP1-mCherry fusion [22] was introduced by crossing with the appropriate strains and tetrad analysis, or plasmid pJJH1408 [17] was employed, which encodes a fusion of the Cox4-mitochondrial signal sequence with mCherry. For integration at the leu2-3,112 locus, constructs were subcloned into the vector YI-plac128 [45], and the resulting plasmids were linearized by digestion with BstEII prior to transformation and selection for leucine prototrophy. Proper integration was confirmed by PCR. Specifically, pLAO12 (PFK2p-GB-FIS1 TMD ) was used to integrate the GFP-nanobody construct, and pJJH3024 (RHO5-FIS1 TMD ) and pJJH3096 (RHO5 G12V -FIS1 TMD ) for integration of the respective RHO5 alleles fused to the mitochondrial transmembrane domain coding sequence. All PCR-generated fragments were verified by Sanger sequencing (Seqlab, Göttingen, Germany). Maps and sequences of all plasmids and modifications of genomic loci are available upon request. Fluorescence Microscopy The setup used for the fluorescence microscopy consisted of a Zeiss Axioplan 2 (Carl Zeiss, Jena, Germany) equipped with a 100× alpha-Plan Fluor objective (NA 1.45) and differential-interference contrast. Sample handling and image processing have been described in detail in [17]. Conclusions In conclusion, we assume that the GTPase has a dual role with regard to the oxidative stress response: while at the plasma membrane it represses the expression of stress-responsive nuclear genes through the CWI/Cnc1 relay under standard growth conditions. When stressed by hydrogen peroxide, Rho5 dissociates from the plasma membrane and thus relieves repression, priming the cells for mitophagy and apoptosis. The association of Rho5 with its GEF at the mitochondrial surface would then further promote the path to death. 
Further experiments to test the fusions obtained herein for their effect on mitophagy and apoptosis under strong oxidative stress are therefore required and currently in progress.
2022-07-20T15:10:54.123Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "9965ad98fb758176282f1e3d1dbac870204b829f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/14/7896/pdf?version=1658127916", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "59f76d31328a3dc9ea59d034ff2f12d255ef5959", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
218869577
pes2o/s2orc
v3-fos-license
Interplay between Boolean rules and topology in the stability of Boolean networks Empirical evidence has revealed that biological regulatory systems are controlled by high-level coordination between topology and Boolean rules. In this study, we study the joint effects of topology and Boolean functions on the stability of Boolean networks. To elucidate these effects, we focus on i) the correlation between the sensitivity of Boolean variables and the degree, and ii) the coupling between canalizing inputs and degree. We find that negatively correlated sensitivity with respect to local degree enhances the stability of Boolean networks against external perturbations. We also demonstrate that the effects of canalizing inputs can be amplified when they coordinate with high-degree nodes. Numerical simulations confirm the accuracy of our analytical predictions at both the node and network levels. I. INTRODUCTION The random Boolean network proposed by Kauffman in 1969 [1] has been widely used in physics, biology, and computer science for modeling biological regulatory systems in an abstract manner [2][3][4][5][6]. Many functions in living systems can be modeled by Boolean networks, including genetic regulation [1,4,5], neural firing [7], and social activity [8]. The dynamical patterns of Boolean networks fall into two phases, namely stable and unstable (chaotic) phases. In a stable phase, most nodes rapidly reach a steady state and remain unchanged. In contrast, in an unstable phase most nodes change their states in a chaotic manner. It has been suggested that many biological regulatory systems ranging from genetic systems to neural systems tend to hover near the borderline between these two phases, achieving both stability and evolvability [9][10][11]. 
Empirical evidence supports the hypothesis that biological networks remain near criticality [9,12], especially for knockout experiments for single genes in Saccharomyces cerevisiae [13] and gene expression dynamics in macrophages [14]. Theoretical predictions regarding the dynamics of Boolean networks are based on stability against perturbations [15,16]. While damage caused by perturbations dies out quickly in a stable phase, it can spread through an entire system in an unstable phase. Pioneering research on the theoretical prediction of the stability of random Boolean networks has revealed that the mean degree k of a network and the mean bias p of Boolean functions typically determine the location of criticality [15,16]. Since then, many studies have attempted to assess the effects of structural and dynamical features, including scale-free structures [17? ], noise [18,19], multilevel interactions [20], asynchronous updates [21], continuous dynamics [22], veto functions [23], bipartite interactions [24], knockout [25,26], adaptive dynamics [27], and canalizing functions [28]. Recently, an exhaustive search of real-world networks revealed that biological regulatory circuits achieve stable and adaptive functionality based on the interplay between logical variables and causal structures, such as anti-correlated sensitivity to the local degrees of nodes and abundant canalizing functions [12]. There have been several attempts to study the role of the correlations between structural and dynamic properties in Boolean networks [29][30][31], but this role is still not fully understood, particularly for the correlations between node degrees and Boolean functions. In this study, we analyze the stability of a random Boolean network incorporating interplay between network topology and Boolean variables. (* min.byungjoon@gmail.com)
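The criticality condition from [15,16] referenced above is, in the annealed approximation, that damage spreads when the product of the mean degree K and the average sensitivity 2p(1-p) exceeds one. A minimal sketch of the resulting critical mean degree; the function name is ours, and the closed form K_c = 1/(2p(1-p)) is the standard annealed result rather than a formula quoted from this paper:

```python
# Annealed-approximation estimate of the critical mean degree K_c of a
# random Boolean network: damage spreads when K * 2p(1 - p) > 1, so
# K_c = 1 / (2p(1 - p)). (Standard result; names are illustrative.)

def critical_mean_degree(p: float) -> float:
    """Critical mean degree for Boolean functions with bias p."""
    sensitivity = 2.0 * p * (1.0 - p)  # chance one flipped input flips the output
    return 1.0 / sensitivity

# Unbiased functions (p = 0.5) give the classic K_c = 2;
# stronger bias pushes criticality to denser networks.
print(critical_mean_degree(0.5))  # -> 2.0
print(critical_mean_degree(0.8))  # approximately 3.125
```

This makes the trade-off explicit: the more biased the Boolean functions, the more connections a network can sustain before becoming chaotic.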
Specifically, we aim to assess the effects of the correlation between local node degrees and Boolean functions in terms of sensitivity [29,30] and canalizing inputs [28]. In this paper, we elucidate the role of the coupling between local topology and Boolean rules in promoting the stability of Boolean networks. Specifically, we demonstrate that sensitivity that is negatively correlated with degree enhances the stability of Boolean networks. Additionally, we find that coordination between high-degree nodes and canalizing inputs can enhance stability. Numerical simulations are conducted to verify our analytical predictions, revealing excellent agreement.

II. THEORY

The Boolean network considered in this paper consists of a set of nodes whose states are binary (i.e., on or off). The bias p_i of a Boolean function is assigned to each node, and Boolean variables for every combination (a total of 2^{k_i} combinations, where k_i is the in-degree of node i) of inputs are assigned according to the bias p_i. Starting from an initial state selected at random, the state of each node is updated synchronously according to its Boolean function and input signals. After a transient period, the dynamics of the Boolean network eventually arrives at a set of restricted patterns among a total of 2^N possible states, where N is the number of nodes. To simulate a perturbation, a small fraction of the nodes are randomly selected and flipped. To check the network's response to such perturbations, we define the stability of the network as its ability to eliminate damage. In a stable phase, nodes flipped by a perturbation quickly return to their initial states. However, in an unstable phase, the majority of the nodes in the system fall under the influence of perturbations and evolve to exhibit chaotic dynamics. To quantify the stability of a Boolean network, we measure the (normalized) Hamming distance H between the initial and final states following a perturbation.
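As a concrete reference, the model described above can be sketched as follows. This is a minimal illustration, not the authors' code; for simplicity it assumes a uniform in-degree k for every node, whereas the paper draws degrees from a distribution.

```python
import random

def make_rbn(n, k, p, rng):
    """Random Boolean network: each node receives k random inputs and a
    random truth table whose outputs are 1 with probability p (the bias)."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[1 if rng.random() < p else 0 for _ in range(2 ** k)]
              for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: every node reads its inputs and looks up its table."""
    new_state = []
    for inp, tab in zip(inputs, tables):
        idx = 0
        for j in inp:
            idx = (idx << 1) | state[j]  # encode the input combination as an index
        new_state.append(tab[idx])
    return new_state

rng = random.Random(42)
inputs, tables = make_rbn(n=50, k=2, p=0.3, rng=rng)
state = [rng.randint(0, 1) for _ in range(50)]
for _ in range(100):  # let the dynamics relax onto its restricted patterns
    state = step(state, inputs, tables)
```

Since the update rule is deterministic once the truth tables are fixed, damage spreading can be measured by evolving two copies of the state that differ in a small fraction of flipped bits.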
We define the state of the nodes as s = {s_1, s_2, · · · , s_N}, where s_i ∈ {0, 1}. The average Hamming distance between an initial (t_o) and final (t) state is defined as

H = (1/N) Σ_{i=1}^{N} |s_i(t) − s_i(t_o)|.   (1)

While H remains at zero in a stable phase in the thermodynamic limit N → ∞, it takes on non-zero values in unstable phases. Therefore, H represents the degree of network instability. For a given bias p_i of node i, the probability that node i changes its state when one of its inputs changes, which is referred to as the sensitivity, is defined as

q_i = 2 p_i (1 − p_i).

We define H_i as the probability that the state s_i of node i changes based on a change in one of its neighbors. For a locally tree-like network, we can derive the following self-consistency equations for the set of Hamming distances H_i for each node i [31,32]:

H_i = q_i [ 1 − Π_{j ∈ ∂i} (1 − H_j) ],   (2)

where ∂i represents the set of neighbors of node i. When iterating Eq. 2 from an initial value of H_i, H_i converges to a fixed point. We can then obtain the average Hamming distance for an entire network as follows:

H = (1/N) Σ_{i=1}^{N} H_i.   (3)

Note that Eq. 2 can be interpreted as a percolation process with an occupation probability of q_i [32][33][34]. By expanding H_i near H_i = 0, we get

H_i ≈ Σ_j q_i A_{ij} H_j = Σ_j Q_{ij} H_j,   (4)

where A_{ij} are the elements of the adjacency matrix and Q_{ij} = q_i A_{ij}. It should be noted that we neglect second- and higher-order terms. Next, the critical point can be identified by the condition that the principal eigenvalue Λ of the matrix Q equals unity:

Λ = 1.   (5)

By using Eqs. 2-5, we can calculate the stability and critical point for a fixed network structure. By ignoring the different H_i values for each node in an "annealed approximation", we can treat the H_i value of each node as the same value H_a. In this approximation, analysis at the single-node level is no longer possible, but we can easily compute the stability of a Boolean network by solving a single equation for H_a with given degree and sensitivity distributions.
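For concreteness, the fixed-point iteration of the node-level self-consistency relation H_i = q_i[1 − Π_{j∈∂i}(1 − H_j)], with q_i = 2p_i(1 − p_i), can be sketched as below. This is an illustrative implementation (not the authors' code) that assumes the topology is given as per-node input lists.

```python
def node_hamming(adj, q, tol=1e-10, max_iter=10000):
    """Iterate H_i = q_i * (1 - prod_{j in adj[i]} (1 - H_j)) to a fixed point.
    adj[i] lists the inputs of node i; q[i] = 2*p_i*(1-p_i) is its sensitivity."""
    n = len(adj)
    h = [0.5] * n  # start from a nonzero value so the nontrivial fixed point is reachable
    for _ in range(max_iter):
        new = []
        for i in range(n):
            prod = 1.0
            for j in adj[i]:
                prod *= 1.0 - h[j]
            new.append(q[i] * (1.0 - prod))
        converged = max(abs(a - b) for a, b in zip(new, h)) < tol
        h = new
        if converged:
            break
    return h, sum(h) / n  # per-node values and the network average (Eq. 3)

# Example: a ring where every node has two inputs and sensitivity q = 0.6,
# so q * k = 1.2 > 1 and a nonzero Hamming distance survives
adj = [[(i - 1) % 10, (i + 1) % 10] for i in range(10)]
h, h_avg = node_hamming(adj, [0.6] * 10)
```

On this homogeneous example the fixed point can be checked by hand: H = 0.6(2H − H²) gives H = 1/3.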
We obtain the following equation for a degree distribution P(k, k_o), where k and k_o are the in- and out-degrees, respectively:

H_a = Σ_{k,k_o} P(k, k_o) q(k) [ 1 − (1 − H_a)^k ],   (6)

where q(k) is the sensitivity distribution as a function of the in-degree. Assuming that k and k_o are uncorrelated, we get

H_a = Σ_k P(k) q(k) [ 1 − (1 − H_a)^k ] ≡ f(H_a).   (7)

By applying the linear stability criterion, the critical point can be identified by the condition f′(0) = 1, which yields

Σ_k k P(k) q(k) = 1.   (8)

Assuming that the sensitivity has no correlation with the degree, we recover the well-known prediction of the critical point ⟨k⟩_c = 1/[2p(1 − p)] [15,16].

III. RESULTS

We analyze the effects of the correlation between node degree and sensitivity using the general framework described above. First, we construct an Erdös-Rényi (ER) graph and assign the sensitivity

q_i = 2 p_i (1 − p_i),   (9)

where p_i is the bias. We consider three representative cases of the coupling between sensitivity and node degree: uncorrelated (UC), positively correlated (PC), and negatively correlated (NC). For the UC case, we assign the same p to each node to obtain uncorrelated coupling. For the PC case, we assign the bias p_i of node i linearly correlated to its degree k_i as p_i = C_P k_i. Here, C_P determines the average bias for a given mean degree as ⟨p⟩ = C_P ⟨k⟩. In contrast, for the NC case, p_i is assigned as p_i = −C_N k_i + 1/2, where C_N determines the average bias as ⟨p⟩ = −C_N ⟨k⟩ + 1/2. The factor of 1/2 ensures that the maximum value of the bias is 1/2. The exact linear relationships in the PC and NC cases cannot be sustained over all possible ranges of ⟨p⟩ because 0 ≤ p_i ≤ 1/2. However, the range of linear dependency is still sufficiently broad to examine the impact of the correlation. By substituting all of these parameters into Eq. 7, we can derive the self-consistency equations for H_a for the three coupling cases as follows:

UC: H_a = 2p(1 − p) [ 1 − e^{−⟨k⟩ H_a} ],
PC: H_a = 2C_P ⟨k⟩ [ 1 − C_P (1 + ⟨k⟩) ] − 2C_P ⟨k⟩ (1 − H_a) [ 1 − C_P (1 + ⟨k⟩(1 − H_a)) ] e^{−⟨k⟩ H_a},
NC: H_a = (1/2) [ 1 − e^{−⟨k⟩ H_a} ] − 2C_N² { ⟨k⟩ + ⟨k⟩² − [ ⟨k⟩(1 − H_a) + ⟨k⟩²(1 − H_a)² ] e^{−⟨k⟩ H_a} }.   (10)

By solving these self-consistency equations, we can obtain the average Hamming distances and identify the critical points.
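As an illustration, the UC case of the annealed self-consistency equation can be solved by direct iteration. The sketch below (an illustrative script, not from the paper) also recovers the classical threshold ⟨k⟩_c = 1/[2p(1 − p)]: below it the iteration collapses to H = 0, above it a nonzero Hamming distance survives.

```python
import math

def annealed_H(k_mean, p, tol=1e-12):
    """Solve H = q * (1 - exp(-<k> H)) with q = 2p(1-p): the annealed
    self-consistency equation for an ER network with uncorrelated bias."""
    q = 2 * p * (1 - p)
    h = 0.5  # start away from the trivial fixed point H = 0
    for _ in range(100000):
        new = q * (1 - math.exp(-k_mean * h))
        if abs(new - h) < tol:
            return new
        h = new
    return h

p = 0.3                    # q = 0.42, so <k>_c = 1/q ≈ 2.38
h_stable = annealed_H(2.0, p)   # below <k>_c: damage dies out, H ≈ 0
h_chaotic = annealed_H(4.0, p)  # above <k>_c: damage persists, H > 0
```

The same iteration applies to the PC and NC equations; only the right-hand side changes.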
We implemented numerical simulations on ER networks with ⟨k⟩ = ⟨k_out⟩ = 4 without any degree-degree correlation. We assigned the biases and the corresponding Boolean variables according to the process described above. From initial states selected at random, the state of each node is updated synchronously according to the Boolean variables. After a transient period, the dynamics arrives at a steady state. To simulate a perturbation, we forcibly flipped a fraction 0.01 of the nodes. When the system reached a steady state again following the perturbation, we measured the Hamming distance over all nodes. In Fig. 1, we compare the analytical predictions from Eq. 10 to the numerical simulations. The agreement between the theory and the simulations is excellent. We find that negatively correlated sensitivity to degree enhances stability compared with the UC and PC cases. For the NC case, the transition point of the mean bias ⟨p⟩_c is delayed and H is lower than in the other cases. In contrast, the PC case exhibits an enlarged chaotic region compared to the other cases, making it more vulnerable to perturbations. These results demonstrate that the correlation between sensitivity and degree can significantly affect the stability in terms of both the location of the critical point and the magnitude of the Hamming distance. To evaluate the impact of the interplay between node degree and sensitivity more clearly, we consider a transparent example with a bimodal degree distribution P(k) = (1/2)δ_{k,K_1} + (1/2)δ_{k,K_2}, where δ_{i,j} represents the Kronecker delta. We assign a bias drawn from a bimodal distribution Q(p) = (1/2)δ_{p,φ_1} + (1/2)δ_{p,φ_2}. Similar to the analysis above, we consider three types of correlated coupling: UC, PC, and NC. For the UC case, we assign a bias to each node at random, independent of the node degrees. Positively (negatively) correlated coupling is achieved by ensuring that higher (lower) degree nodes have greater bias values.
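The simulation protocol described above (relax, flip a fraction of the nodes, evolve both copies, compare) can be sketched as follows. This is an illustrative reimplementation with a uniform in-degree k rather than an ER graph, so it reproduces the protocol, not the paper's exact setup.

```python
import random

def hamming_after_perturbation(n, k, p, t_relax, t_run, flip_frac, seed):
    """Damage spreading in a random Boolean network: relax to a steady regime,
    flip a small fraction of nodes, evolve the original and damaged copies
    under the same deterministic rules, and return the normalized Hamming
    distance between them."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[1 if rng.random() < p else 0 for _ in range(2 ** k)]
              for _ in range(n)]

    def step(s):
        out = []
        for inp, tab in zip(inputs, tables):
            idx = 0
            for j in inp:
                idx = (idx << 1) | s[j]
            out.append(tab[idx])
        return out

    state = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(t_relax):            # transient period
        state = step(state)
    damaged = state[:]
    for i in rng.sample(range(n), max(1, int(flip_frac * n))):
        damaged[i] ^= 1                 # forcibly flip a fraction of nodes
    for _ in range(t_run):              # evolve both replicas identically
        state, damaged = step(state), step(damaged)
    return sum(a != b for a, b in zip(state, damaged)) / n
```

Averaging the returned value over many network realizations gives the H compared against Eq. 10 in Fig. 1.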
In our examples, we use K_1 = 10, K_2 = 6, φ_1 = 2⟨p⟩ − 0.1, and φ_2 = 0.1, where 0.1 ≤ φ_1 ≤ 0.5. Note that the range of ⟨p⟩ is 0.1 ≤ ⟨p⟩ ≤ 0.3. By annealing the probability H_i, we can derive the self-consistency equation for H_a as follows:

H_a = (1/2) q_1 [ 1 − (1 − H_a)^{K_1} ] + (1/2) q_2 [ 1 − (1 − H_a)^{K_2} ],

where q_1 = 2φ_1(1 − φ_1) and q_2 = 2φ_2(1 − φ_2) (for the PC case; for the NC case q_1 and q_2 are exchanged, and for the UC case both are replaced by their average). As shown in Fig. 2, negatively correlated coupling is more resilient to external perturbations than the other types of coupling. The average Hamming distance clearly highlights the effect of the correlation between sensitivity and local node degree. Specifically, a negative correlation between sensitivity and node degree enhances the global stability of Boolean networks [Fig. 2(a)]. For the NC case, the unbiased nodes (p ≈ 1/2) with high sensitivity have low degrees, so only a minority of incoming links are connected to them, leading to more stable Boolean dynamics. In contrast, for the PC case, damage can easily spread through an entire network because a substantial fraction of high-degree nodes have high sensitivity. Fig. 2(b) presents a phase diagram as a function of the mean node degree ⟨k⟩ and ⟨p⟩, where K_1 = ⟨k⟩ + 2 and K_2 = ⟨k⟩ − 2. An increasing value of ⟨p⟩_c for the NC coupling can be observed consistently over a wide range of parameter sets. In addition to the global stability of Boolean networks, we can also assess the stability of each node in a given network topology using Eq. 2.

[Fig. 3: Per-node Hamming distances H_i for the three couplings, with panels (c,d) showing the NC case, for a network with P(k) = (1/2)δ_{k,10} + (1/2)δ_{k,6} and N = 10^4, and a bimodal bias distribution Q(p) = (1/2)δ_{p,2⟨p⟩−0.1} + (1/2)δ_{p,0.1}. Panel (d) is an enlarged view of panel (c). The average Hamming distances for nodes with high degree (k = 10) and low degree (k = 6) are denoted by green and blue arrows, respectively; the average Hamming distance over all nodes is denoted by a filled circle.]

Fig. 3 reveals perfect agreement between the numerical results H_i^{sim} and the theoretical predictions H_i^{th} for the probability that a node i changes its state when a perturbation occurs.
We computed the average Hamming distances H_{k=10} and H_{k=6} for nodes with high degree (k = 10) and low degree (k = 6), as indicated by the green and blue arrows in Fig. 3, respectively. The average Hamming distance over all nodes is denoted by a filled circle. One can see two clearly separated groups of nodes with different stability values and Hamming distances H_i for the PC coupling. We can confirm that with PC coupling, damage can spread through high-degree nodes with high sensitivity, which are prone to instability. For the NC coupling, however, these two groups merge and perturbations terminate quickly. From the perspective of a percolation problem, NC coupling corresponds to the case where high-degree nodes have low occupation probabilities, leading to stable dynamics, analogous to degree-based removal in network percolation [35]. Finally, we consider the role of the interplay between local node degree and canalizing inputs, which are observed frequently in biological systems [12,28]. Canalizing functions have a single input that forces the corresponding output to a specific value, regardless of the values of the other inputs. The average Hamming distance H_i of node i with a fraction c_i of canalizing inputs (Eq. 11) is obtained by modifying Eq. 2 so that the product runs over ∂i/c, the set of inputs excluding canalizing inputs, with q_j = 2p_j(1 − p_j) [32]. By definition, canalizing functions lead to stable dynamics [28] because they effectively reduce the sensitivity of the non-canalizing inputs that share a node with canalizing inputs. However, the effects of canalizing inputs are not solely determined by the fraction of canalizing inputs. The topological locations of canalizing links also affect stability, which can be predicted using Eq. 11. We consider three different correlations between node degree and the locations of canalizing inputs, again denoted UC, PC, and NC. For the UC case, the canalizing inputs are distributed randomly.
PC and NC indicate that canalizing inputs tend to connect to nodes with high and low degrees, respectively. For the sake of simplicity, we consider a random network with a bimodal degree distribution defined as P(k) = (1/2)δ_{k,K_1} + (1/2)δ_{k,K_2}, where K_1 = 6 and K_2 = 2. Then, we assign canalizing inputs to 1/4 of the nodes. Therefore, for the PC case nodes with high degrees have many canalizing inputs, and for the NC case nodes with low degrees have many canalizing inputs, as shown in Fig. 4(a). As shown in Fig. 4(b), the correlation between canalizing inputs and local node degree can strongly affect global stability. Specifically, the PC coupling enhances the stability of Boolean networks. In contrast, the NC coupling decreases stability, leading to a smaller ⟨p⟩_c and larger H values. In this example, one can see that the correlation between local node degree and canalizing inputs alters the Boolean dynamics significantly. When a canalizing input becomes active, all other connections are ineffective. Therefore, for the PC case, a larger fraction of non-canalizing inputs lose their influence on the Boolean dynamics to the canalizing inputs. In the NC case, the effects of canalizing inputs are minimized because they only affect low-degree nodes.

IV. DISCUSSION

We analyzed the stability of random Boolean networks incorporating dynamic rules as well as the topological properties of each node. We find that the correlation between node degree and Boolean functions plays an important role in determining the Hamming distances and the critical point. Specifically, sensitivity negatively correlated with the degree of each node increases stability. We also find that a correlation between high-degree nodes and canalizing inputs can increase global stability. Our results reveal that analysis based on the naive mean-field approach may fail to predict the dynamical consequences in Boolean networks with intertwined structural and functional properties, which are often observed in real-world biological systems.
Further study is required to examine the effects of more complex features in Boolean networks, such as loops and feedback in network topologies [36,37], and hierarchical dynamics.
Learning from Non-Stationary Stream Data in Multiobjective Evolutionary Algorithm

Evolutionary algorithms (EAs) have been well acknowledged as a promising paradigm for solving optimisation problems with multiple conflicting objectives, in the sense that they are able to locate a set of diverse approximations of Pareto optimal solutions in a single run. EAs drive the search for approximated solutions by maintaining a diverse population of solutions and by recombining promising solutions selected from the population. Combining machine learning techniques has shown great potential, since the intrinsic structure of the Pareto optimal solutions of a multiobjective optimisation problem can be learned and used to guide effective recombination. However, existing multiobjective EAs (MOEAs) based on structure learning spend too many computational resources on learning. To address this problem, we propose to use an online learning scheme. Based on the fact that offspring along the evolution form a stream of dependent and non-stationary data (which implies that the intrinsic structure, if any, is temporal and scale-variant), an online agglomerative clustering algorithm is applied to adaptively discover the intrinsic structure of the Pareto optimal solution set and to guide effective offspring recombination. Experimental results have shown significant improvement over five state-of-the-art MOEAs on a set of well-known benchmark problems with complicated Pareto sets and complex Pareto fronts.

I. INTRODUCTION

In practice, a decision maker often needs to consider optimising multiple conflicting objectives. This type of optimisation problem is usually referred to as a multiobjective optimisation problem (MOP). Since the objectives of the problem usually conflict with each other, there does not exist a unique solution that can optimise all the objectives simultaneously. Therefore, a set of Pareto optimal solutions, named the Pareto set (PS), exists for an MOP [1].
A solution is considered to be 'Pareto optimal' if it is impossible to make any one objective better off without making at least another one worse off. Finding the PS often poses great challenges to computational capacity and algorithmic intelligence [2]. In the last three decades, extensive research on evolutionary algorithms (EAs) has shown that the EA paradigm is very powerful in handling MOPs, in the sense that a set of solutions that approximates the PS, named the approximated set, can be obtained in a single run without requiring much computational effort [3] [4]. EAs simulate the genetic evolution of a population of individuals to best fit their living environment [5]. To design an effective EA, effective recombination for fit offspring generation is key. Research has shown that a problem's domain knowledge, if any, can greatly improve the search efficiency if the knowledge is properly collected or learned during the search process [6]. For an m-objective optimisation problem, it has been proved that the distribution of the PS exhibits an (m − 1)-dimensional manifold structure under mild conditions [7]. This property is often referred to as the regularity property. From the point of view of EA design, an effective EA can be expected if the manifold structure can be discovered and applied for offspring generation. Some EAs have been developed that combine machine learning techniques for the discovery of the intrinsic manifold structure to aid the search for the PS. For example, in the regularity model-based estimation of distribution algorithm (RM-MEDA) [6], the local principal component analysis (local PCA) approach is applied at each generation. It uses the learned principal components to approximate the manifold structure. Some EAs have adopted other machine learning techniques to approximate the manifold structure [8]. All these algorithms apply the machine learning techniques at every generation.
These learning algorithms often need to visit all the data several times (iterations) until convergence. Thus, a considerable amount of computational resources is consumed on learning. To reduce the computational overhead, the multiobjective EA (MOEA) proposed by Zhang et al. [4] couples the population evolution and the model inference. In their MOEA, only one iteration of the learning algorithm is applied at each generation. This scheme provides an important development in saving computational resources. The evolution procedure can also be seen as a learning procedure; the intrinsic PS structure of an MOP is expected to be learned dynamically from the changing candidate solutions. However, there is a fundamental issue with this scheme. As is well known, one of the main assumptions in machine learning is that sample observations are effectively i.i.d. (independent and identically distributed) for the purposes of statistical inference. But, under the scheme in [4], this assumption is largely violated along the evolution procedure. First, solutions at adjacent generations have rather different qualities in terms of their respective objectives, which indicates that they might not be sampled from the same underlying distribution (i.e. these solutions are not identically distributed). Second, the generation of new solutions at the present generation depends on collective information from the previous generation, which indicates that solutions at adjacent generations are not independent. Looking deeply into the data (i.e. offspring created during the evolutionary search) we try to learn from, some special characteristics can be observed: 1) the structure to be discovered along the evolution is temporal and changes dynamically. In other words, these data are produced by a non-stationary process 1 ; 2) the structure determined by the data is scale-variant.
On a short time scale, the structure is pseudo-stationary, while on a long time scale, the structure has a sequential and converging property. That is, along the evolution process, the underlying structure is similar between adjacent generations, while the structure will finally converge to the PS's manifold structure of the considered optimisation problem. In this paper, we present the first-ever MOEA based on online machine learning 2 from a stream of non-stationary data. In our algorithm, a modified version of the online agglomerative clustering algorithm presented in [9] is developed to learn the PS's structure while addressing the above-mentioned characteristics. Obvious advantages of the proposed online clustering based evolutionary algorithm (OCEA) include 1) a perfect match between the search dynamics and the non-stationary structure learning, and 2) a significantly reduced computational cost of learning (data need to be visited only once in the context of online learning). To successfully implement the proposed algorithm, we need to address three main issues. First, how to modify the online agglomerative clustering in accordance with the evolution process to discover the underlying structure? Second, how to properly use the learned structure to create offspring effectively? Finally, how to select the fittest individuals to drive the search towards the PS? These issues will be discussed in the following sections. The rest of the paper is organised as follows. The background and previous work on multiobjective evolutionary algorithms are introduced in Section II. Section III presents the proposed algorithm in detail. Experimental studies are shown in Sections IV and V. The analysis of the effect of parameters on algorithmic performance is discussed in Section VI. Section VII concludes the paper. II.
BACKGROUND AND PREVIOUS WORK

A box-constrained continuous MOP can be stated as follows:

minimize F(x) = (f_1(x), · · · , f_m(x))^⊺, subject to x ∈ Ω,

where Ω = ∏_{i=1}^{n} [a_i, b_i] ⊆ R^n defines the decision (search) space; a_i and b_i are the lower and upper boundaries of variable x_i, respectively; x = (x_1, · · · , x_n)^⊺ is a vector of decision variables; F : Ω → R^m represents the mapping from the search space to the objective space, where the m objective functions f_i(x), i = 1, . . . , m, are to be considered. The set of all Pareto optimal solutions, denoted PS, is named the Pareto set. The set of the objective vectors of the Pareto optimal solutions is called the Pareto front, denoted PF. The goal of an MOEA for an MOP is to find a set of approximated solutions whose objective vectors (which constitute an approximated front) are as close to the PF as possible (i.e. the convergence requirement), and distributed along the PF as widely and evenly as possible (i.e. the diversity requirement). Great efforts have been made to deal with MOPs in the evolutionary computation community [3].

1 A process is stationary if and only if the joint distributions of the data at different times are the same. Specifically, let t = 1, · · · be the generations of the evolution and y_t be an n-dimensional solution. The sequence y_t is a stationary stochastic process if the joint probability distributions of (y_{t_1+h}, · · · , y_{t_N+h}) and (y_{t_1}, · · · , y_{t_N}) are the same for all h = 0, 1, · · · and an arbitrary selection of t_1, · · · , t_N. This is obviously not the case for the stream of offspring created during the evolution process.
2 In computer science, online machine learning methods learn patterns from data that are available in sequential order, as opposed to batch learning techniques, which generate the best predictor by learning on the entire training data set at once.
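For concreteness, the Pareto dominance relation and the induced Pareto-optimal subset of a finite set of objective vectors can be sketched as follows. This is a minimal illustration assuming minimisation of all objectives; the helper names are ours, not from the paper.

```python
def dominates(u, v):
    """u Pareto-dominates v (minimisation): u is no worse in every objective
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(points):
    """Return the subset of objective vectors not dominated by any other."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (3, 3) is dominated by (2, 2); (5, 5) is dominated by every other point
front = nondominated([(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)])
```

An MOEA's approximated set is, in objective space, exactly such a nondominated subset of all visited solutions.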
These developed approaches focus either on establishing a mechanism to balance convergence and diversity, or on developing effective recombination. MOEAs concerning the balance between convergence and diversity basically fall into three categories. In the first category, the Pareto dominance relationship is applied for promising solution selection. The non-dominated sorting developed by Deb et al. [10] is the most well-known method. Its primary use is to drive the search towards the PF, which favours convergence. It needs to incorporate other strategies, such as the crowding distance [10] and the K-nearest neighbour method [11], to preserve the population diversity. It has been found that dominance-based sorting methods are not able to provide enough comparability for many-objective (≥ 4 objectives) optimisation problems. Typical dominance-based MOEAs include NSGA-II [10], SPEA2 [11], PESA-II [12], NSGA-III [13], and others. In the second category, MOEAs based on performance metrics, such as hypervolume (HV), R2 and ∆_p, were developed. These performance metrics embed the convergence and diversity requirements together, so that they can be employed to directly guide the selection of solutions for a good balance of convergence and diversity. Representative MOEAs include SMS-EMOA [14], HypE [15], R2-IBEA [16] and DDE [17]. The computation of the performance metrics becomes much more difficult and time-consuming when dealing with many-objective optimisation problems. The third category is the decomposition-based MOEAs. In this category, a number of reference vectors in the objective space are used to decompose the problem into a set of single-objective subproblems [18], or several simple multiobjective subproblems [19]. Convergence is controlled by the objective values of the subproblems, while diversity is managed by computing the distances of the solutions to the reference vectors.
Representative decomposition-based MOEAs include MOEA/D [20], MOEA/D-DE [18], MOEA/D-STM [21], MOEA/D-M2M [19] and others. Regarding MOEAs focusing on effective recombination, they are almost all designed based on the regularity property of MOPs. The underlying assumption is that the manifold structure can be used to greatly improve the search efficiency, since high-quality offspring can be generated if the regularity structure is properly modelled and learned. The first work applying the regularity property to MOEA design, i.e., the aforementioned RM-MEDA, was proposed in 2008 [6], where the manifold structure is approximated by the first (m − 1) principal components. This work was later improved with the help of modelling the PF [22]. Various regularity-based MOEAs have been developed since then, such as a redundant-cluster-reducing RM-MEDA [8], an RM-MEDA with a local learning strategy [23], evolutionary multiobjective optimisation via manifold learning [24], and others. Moreover, in [4], a self-organising map method is incorporated within the evolution procedure to search for the manifold PS structure.

III. THE ALGORITHM

As discussed previously, existing regularity-based MOEAs usually spend a high computational cost on learning. To reduce the consumption of computational resources, we propose to adopt an online machine learning scheme. Offspring are considered as a stream of data, since they come in order along the evolution process and can only be accessed once or for a small number of generations. Moreover, it is observed that along the evolution process, the stream of solutions is dependent and non-stationary. Therefore, the application of an online learning algorithm is able to reduce the number of data visits and account for the non-stationary nature. This can significantly reduce the computational cost. Note that a finite mixture of Gaussian clusters can be used to approximate the distribution of a set of data points statistically well.
This motivates us to approximate the manifold structure by using an online clustering algorithm. The cluster statistics, including the number of clusters, cluster means and variance-covariances, will evolve over time. To model this non-stationary process, we propose to modify an online agglomerative clustering algorithm called AddC [9] and use it to dynamically estimate the cluster statistics. In the following, we first describe the online agglomerative clustering algorithm developed in [9] and discuss how it should be modified to adapt to the evolution process of MOEAs. The other details of the developed algorithm are then presented.

A. Online Agglomerative Clustering

AddC, presented by Guedalia et al. [9] in 1998, was developed for clustering a stream of non-stationary data. AddC's clustering procedure is shown in Alg. 1. In lines 1 to 2, an arriving new data point y is first assigned to the cluster that is closest to it. This step attempts to minimise the within-cluster variance. Afterwards, in lines 3 to 6, if there are fewer than K_max clusters, y is employed as a centroid to create a new cluster; otherwise, in lines 4 to 6, the two redundant clusters that are closest to each other are merged, and y is also treated as a centroid to create a new cluster replacing the redundant cluster (i.e. C_δ in line 6). The merging operation aims to maximise the distances between the centroids and to remove redundant clusters. The creation of new clusters is to account for temporal changes in the distribution of the data. In line 7, if there still exist data points to be clustered, the clustering operations are repeated. Otherwise, a post-process is conducted in line 8 to remove clusters with a negligible number (ǫ) of data points. The post-process is meant to eliminate outliers, if any.
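The AddC procedure just described can be sketched in Python as follows. This is an illustrative sketch, not the authors' implementation: distances are Euclidean, new clusters start with counter 1 rather than 0 for simplicity, and the outlier post-process is omitted.

```python
import math

def addc(stream, k_max):
    """Online agglomerative clustering in the spirit of AddC: each arriving
    point updates the nearest centroid and then seeds a new cluster; when the
    number of clusters exceeds k_max, the two closest clusters are merged."""
    centroids, counts = [], []
    for y in stream:
        if centroids:
            # assign y to the winner (closest centroid) and move that centroid
            j = min(range(len(centroids)),
                    key=lambda i: math.dist(y, centroids[i]))
            c = counts[j]
            centroids[j] = tuple((c * z + yi) / (c + 1)
                                 for z, yi in zip(centroids[j], y))
            counts[j] = c + 1
        centroids.append(tuple(y))  # y also seeds a new cluster
        counts.append(1)
        if len(centroids) > k_max:
            # merge the two closest clusters via a counter-weighted mean
            a, b = min(((i, j) for i in range(len(centroids))
                        for j in range(i + 1, len(centroids))),
                       key=lambda p: math.dist(centroids[p[0]], centroids[p[1]]))
            ca, cb = counts[a], counts[b]
            centroids[a] = tuple((ca * za + cb * zb) / (ca + cb)
                                 for za, zb in zip(centroids[a], centroids[b]))
            counts[a] = ca + cb
            del centroids[b], counts[b]
    return centroids, counts
```

Note that each point contributes twice to the counters (once to the winner, once as a seed), mirroring steps 2 and 6 of Alg. 1; this is what lets new clusters track temporal shifts in the stream.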
Algorithm 1 Online Agglomerative Clustering AddC
Require: an arriving new data point y, centroids z_k and counters c_k of the m existing clusters C_1, · · · , C_m, 1 ≤ k ≤ m, and the maximum number of clusters allowed, K_max.
Ensure: a new set of clusters.
1: The centroid closest to the data point y is defined as the winner, j = arg min_{1≤k≤m} ||y − z_k||.
2: Update the closest centroid and its counter, z_j ← (c_j z_j + y)/(c_j + 1), c_j ← c_j + 1, where c_j is the number of data points in C_j.
3: If there are fewer than K_max clusters, go to Step 6.
4: Otherwise, find the two closest clusters, (a, b) = arg min_{k≠l} ||z_k − z_l||.
5: Merge the two clusters: z_a ← (c_a z_a + c_b z_b)/(c_a + c_b), c_a ← c_a + c_b, and remove C_b.
6: Initialise a new cluster C_δ, z_δ = y and c_δ = 0.
7: If there still exist data points to be clustered, take a new point y and go to Step 1.
8: Post-process: ∀k, if c_k < ǫ, remove cluster C_k.

B. Algorithmic Framework

The framework of OCEA is presented in Alg. 2. In lines 1 to 2, an initial population P is generated, and an external archive A is initialised to be the same as P. In the first generation, each solution is considered as a cluster with itself as the centroid, z_i = x_i, and counter c_i = 1, i = 1, · · · , N. Afterwards, at each generation, an offspring y_i is generated around each solution x_i (lines 7 to 9). To generate y_i, a mating control parameter β ∈ [0, 1] is applied to balance exploration and exploitation. With probability β, the solution generation favours exploitation; that is, the reference (or parent) solutions are chosen from the cluster in which x_i is located. With probability 1 − β, the reference individuals are chosen from the global mating pool specified in line 5, which favours exploration. After recombination, the generated offspring y_i is used to update the external archive and the current population by environmental selection, as well as the clustering information (lines 9 and 11). The solution generation and the updating procedures for the population and clusters are described in the following subsections.

Algorithm 2 OCEA framework
Require: population size N, maximum number of evolutionary generations T, mating control parameter β.
Ensure: population P.
1: Initialization: P = {x_1, · · · , x_N} and an external archive A = P.
2: Take each x_i ∈ P as a cluster C_i with centroid z_i = x_i and counter c_i = 1.
3: for t ← 1 to T do
4: Set m = #clusters.
5: Construct a global mating pool M by randomly choosing a solution from each C_i, 1 ≤ i ≤ m.
6: for i ← 1 to N do
7: Construct a mating pool Q_i for each x_i as follows: Q_i = C_k if rand() < β, and Q_i = M otherwise, where C_k is the cluster in which x_i is located, and rand() is a random number generator in [0, 1].
8: Generate an offspring y_i = SOLGEN(x_i, Q_i) (Alg. 3).
9: Update A and the clusters by ESCO (Alg. 4).
10: end for
11: Set P = A and pass the clustering results of A to P.
12: end for

C. New Solution Generation

In this paper, the differential evolution (DE) and polynomial mutation (PM) operators are adopted to generate offspring, as presented in Alg. 3. The recombination operator takes the current solution x and its mating pool Q as input and outputs an offspring y. DE [26] is first used to generate a trial solution (line 2), and a repair mechanism is employed to correct any component that is outside the search boundary of that component (line 3). After repair, the PM [2] operator is applied to generate a new solution (line 4). The new solution is repaired again if necessary, and the final solution is returned (line 5). In Alg. 3, F and CR are the two control parameters of the DE operator; p_m and η_m are the parameters of the PM operator. If CR = 1, the DE operator in Alg. 3 is rotation invariant, which is advantageous for dealing with complicated PSs [18]. Therefore, DE is selected to generate new offspring in OCEA. Obviously, the use of other recombination operators is not precluded; e.g. we could use the recombination operators in [27].

Algorithm 3 Solution generation (SOLGEN) operator
Require: a current solution x and its mating pool Q
Ensure: a trial solution y
1: Choose randomly two distinct parent individuals x¹ and x² from Q.
2: Generate y′ = (y′_1, · · · , y′_n)^⊺ as follows: y′_j = x_j + F · (x¹_j − x²_j) if rand() < CR, and y′_j = x_j otherwise, for j = 1, · · · , n.

D. Updating on Population and Clusters

In Alg.
2, line 9, the function ESCO is applied to carry out environmental selection and cluster updating. OCEA adopts the environmental selection method proposed in SMS-EMOA [14], which is based on the hypervolume metric. The hypervolume metric is the only known unary metric that is Pareto compliant [28], and it has shown better performance than decomposition-based and Pareto-dominance-based environmental selection approaches [15]. Regarding cluster updating, we modify the online agglomerative clustering algorithm AddC (Alg. 1) so that it fits into the evolutionary search mechanism. The modified AddC is fused into OCEA to update/refine the clusters and adaptively learn the structure of the PS.

Alg. 4 presents the details of ESCO. For each new solution y, A is updated by the hypervolume-based environmental selection. Specifically, the fast non-dominated sorting approach proposed in NSGA-II [10] is applied to partition the external archive A ∪ {y} into L non-dominated fronts {B_1, ..., B_L}, where B_1 is the best front and B_L the worst (line 1). Line 2 checks whether L > 1, i.e. whether there is more than one front in A ∪ {y}. If so, the solution x* in B_L with the largest d(x, A ∪ {y}) value is removed, where d(x, A ∪ {y}) denotes the number of solutions in A ∪ {y} that dominate x. Otherwise, if L = 1, the solution x* that contributes least to the hypervolume, measured by Δ_φ(x, B_1) (lines 3 to 5, and 14), is excluded. The calculation of Δ_φ can be found in [14].

Algorithm 4 Environmental selection and cluster updating (ESCO)
Require: a new solution y, the external archive A, centroids z_k and counters c_k of the current clusters C_1, ..., C_m, 1 ≤ k ≤ m, and the maximum number of clusters allowed K_max.
Ensure: the external archive A and its cluster information.
1: Apply the fast non-dominated sorting approach on A ∪ {y} to obtain L fronts {B_1, ..., B_L}.
2: if L > 1 then
3: Determine the worst solution x* by d(x, A ∪ {y}).
Determine the worst solution x* by Δ_φ(x, B_1).
if C_k = ∅ then
10: Remove cluster C_k, set m = m − 1.
15: Set m = m + 1, construct a new cluster C_m, set c_m = 1 and z_m = y.
16: if m > K_max then
17: Find the two closest clusters.
18: Merge the two clusters.
22: end if

If y is kept in A after environmental selection, i.e., x* ≠ y, the online clustering operation is invoked. First, x* is removed from its cluster C*, and the cluster's centroid and counter are updated following the equations in line 12. This differs from AddC, where no data points are removed during the online clustering process. Then y is taken as a new centroid to construct a new cluster (line 15). If there are more than K_max clusters in A, the two clusters that are closest to each other are merged (lines 16 to 18) to complete the clustering operation.

E. Notes on OCEA

It is necessary to emphasize that:
• The evolution procedure of OCEA is also an online clustering procedure working on a stream of offspring, which are created and updated during the evolutionary process. We would expect the clustering structure to emerge gradually during evolution and to be well shaped at termination.

OCEA is compared with five well-known MOEAs: MOEA/D-DE, TMOEA/D, RM-MEDA, SMS-EMOA and NSGA-II. Among these algorithms, MOEA/D-DE decomposes the MOP into a set of single-objective problems with uniformly distributed weights; it might not be able to obtain approximated fronts with good diversity for MOPs with complex PFs. TMOEA/D transforms the objective functions into ones that are easier for MOEA/D to address, so that MOEA/D performs well on MOPs with complex PFs. RM-MEDA is developed based on the regularity property: it learns local principal components at each generation and uses them to approximate the manifold structure. SMS-EMOA uses the hypervolume metric as the selection criterion. NSGA-II, on the other hand, uses the Pareto dominance relationship among individuals and the crowding distance to carry out environmental selection. These comparison algorithms cover all the main streams of MOEAs in the literature.

A.
Test Instances and Performance Metrics

This paper focuses particularly on MOPs with complex PFs and complicated PS structures. The GLT test suite from [4] is used in the comparison experiments. The suite includes a variety of problems whose characteristics greatly challenge MOEAs, including disconnected PFs, convex PFs, nonlinear variable linkage, etc. Two commonly used performance metrics, the inverted generational distance (IGD) [6] and the hypervolume (HV) [30], are employed to measure algorithm performance. These two metrics measure both the convergence and the diversity of the final approximated fronts found by MOEAs. A lower IGD value and a larger HV value imply better performance.

B. Experimental Settings

It is well acknowledged that, for the GLT test instances, the DE and PM operators are more able to produce promising solutions than other operators [4]. Therefore, to make a fair comparison, the recombination operators in NSGA-II and SMS-EMOA are replaced by the DE and PM operators used in this paper. Furthermore, all parameters in the experiments are tuned through preliminary experiments for optimal performance on these test instances. All algorithms are implemented in Matlab and tested on the same computer. The parameter settings for these algorithms are as follows:
• Common parameters:
- population size: N = 100 for bi-objective and 105 for tri-objective instances;
- search space dimension: n = 10 for GLT1-GLT6;
- runs: each algorithm is run independently 33 times on each test instance;
- termination: maximum evolutionary generation T = 300.
Wilcoxon's rank sum test at a 5% significance level is also performed to test the significance of differences between the mean metric values obtained by each pair of algorithms on each instance.
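The DE-plus-PM solution generation described in Alg. 3 (and substituted into NSGA-II and SMS-EMOA for fairness) can be sketched as follows. This is a minimal illustration: the exact DE update (current solution plus a scaled parent difference, applied per component with probability CR) and the simplified, boundary-unaware PM formula are reconstructions, not the authors' code.

```python
import random

def solgen(x, q, lo, hi, F=0.5, CR=1.0, p_m=0.1, eta_m=20.0, rng=random):
    """Sketch of SOLGEN: DE step, boundary repair, polynomial mutation."""
    x1, x2 = rng.sample(q, 2)                      # two distinct parents from the mating pool
    y = [xj + F * (a - b) if rng.random() < CR else xj
         for xj, a, b in zip(x, x1, x2)]           # DE trial solution
    y = [min(max(v, lo), hi) for v in y]           # repair out-of-bound components
    out = []
    for v in y:                                    # polynomial mutation (simplified form)
        if rng.random() < p_m:
            u = rng.random()
            d = (2 * u) ** (1 / (eta_m + 1)) - 1 if u < 0.5 \
                else 1 - (2 * (1 - u)) ** (1 / (eta_m + 1))
            v = min(max(v + d * (hi - lo), lo), hi)
        out.append(v)
    return out
```

Note that with CR = 1 the DE step degenerates to y = x + F(x_1 − x_2), which is what makes the operator rotation invariant, as remarked in Section III.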
In the tables, "†", "§", and "≈" denote that the mean metric values obtained by OCEA are better than, worse than, or similar to those achieved by the comparison algorithm, respectively.

C. Comparison Study

To study the statistical performance of OCEA, Table I reports the IGD and HV metric values obtained by the six algorithms on the GLT test suite. Table I shows that OCEA performs the best overall on the GLT test suite. To observe the search efficiency of OCEA, Fig. 1 shows the evolution of the statistics of the IGD metric values obtained by the six algorithms on GLT1-GLT6. From the figure, it can be seen that for GLT1 and GLT3-GLT6, OCEA is the fastest to reach the lowest mean IGD metric values. For GLT2, OCEA is slower than RM-MEDA, similar to TMOEA/D and faster than the other algorithms; moreover, on GLT2, OCEA actually performs better than RM-MEDA at the early stage. From the evolution of the standard deviations of the metrics, it can also be observed that, within 300 generations, OCEA achieves robust performance on all the instances except GLT3. Fig. 1 indicates that, on average, OCEA converges fastest to the PFs and maintains the most diverse populations among the compared algorithms.

To reveal the search processes, Fig. 2 plots the evolution of the approximated fronts obtained by RM-MEDA, NSGA-II, MOEA/D-DE and OCEA on GLT4. The evolution plotted for each algorithm is representative: the final approximated front it yields has the median IGD metric value over the 33 independent runs. It can be seen from the figure that, at the 100th generation, the approximated front yielded by OCEA has reached the PF completely and almost covers the whole PF. After 300 generations, it attains an approximated front with excellent convergence and diversity.
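The IGD metric whose evolution these figures track can be computed as below; a minimal sketch assuming Euclidean distance and a pre-sampled set of reference points on the true PF.

```python
import math

def igd(reference_front, approx_front):
    """Mean distance from each reference point on the true PF
    to its nearest neighbour in the approximated front."""
    return sum(min(math.dist(r, a) for a in approx_front)
               for r in reference_front) / len(reference_front)
```

Lower is better: a front must be both close to the PF and spread along it to score well, which is why IGD captures convergence and diversity simultaneously.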
On the other hand, after 300 generations, the final approximated fronts obtained by RM-MEDA, NSGA-II and MOEA/D-DE still cannot cover the whole PF and are unevenly distributed. Fig. 2 shows that OCEA can indeed greatly improve the search efficiency.

To further investigate the effect of OCEA, Fig. 3 plots the final approximated fronts obtained by RM-MEDA and OCEA on GLT1-GLT6. All the final approximated fronts of each instance obtained by RM-MEDA and OCEA are plotted in Fig. 3(a) and 3(b), respectively. The final approximated front of each instance with the median IGD metric value (called the representative front) over 33 independent runs is plotted in Fig. 3(c) and 3(d) for RM-MEDA and OCEA, respectively. From Fig. 3(a) and 3(b), it can be seen that, over 33 independent runs, the final approximated fronts achieved by both RM-MEDA and OCEA can cover the whole PF of each instance; however, compared with RM-MEDA, OCEA performs more stably. From Fig. 3(c) and 3(d), it is observed that the representative fronts of GLT5-GLT6 yielded by RM-MEDA do not reach the PFs. For GLT1-GLT4, although the representative fronts yielded by RM-MEDA all reach the PFs, they do not cover the PFs completely. By contrast, the representative fronts obtained by OCEA all converge to the PFs and are well distributed over them. Fig. 3 implies that, for the GLT test instances, OCEA is stable and robust in terms of convergence and diversity.

In summary, we may conclude that OCEA shows excellent performance in dealing with MOPs with complicated PSs and complex PFs.

A. Performance on WFG test suite

To further assess the performance of OCEA, it is also applied to the WFG test suite [31] and compared with the five algorithms mentioned above. The WFG test instances are well known to have complex PFs and various complicated characteristics, such as nonseparability, multimodality, degeneracy and deceptiveness.
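For two objectives, the HV metric used in these comparisons (and inside SMS-EMOA's selection) can be computed with a simple sweep. This is a minimal minimisation sketch with an explicit reference point, not the paper's implementation.

```python
def hypervolume_2d(front, ref):
    """Area dominated by the front and bounded by the reference point
    (both objectives minimised)."""
    nd = []                                   # non-dominated points, f1 ascending
    for f1, f2 in sorted(set(front)):
        if not nd or f2 < nd[-1][1]:
            nd.append((f1, f2))
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in nd:                         # sum the rectangular slabs
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

Larger is better; dominated and duplicate points contribute nothing, so HV rewards both convergence and spread.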
In this section, 9 bi-objective WFG test instances with 30-dimensional decision variables are taken as the test bed. The maximum evolutionary generation is set to 450. After preliminary tuning of the parameters, part of the parameter settings of these algorithms is listed in Table II; the rest is the same as in Section IV-B. Again, 33 independent runs of these algorithms are carried out on each test instance. Table III shows that OCEA achieves 9 of the 18 best mean metric values, while the other five algorithms together obtain the remaining 9. Ranked from best to worst, OCEA comes first among these algorithms. From Table III, we may conclude that OCEA performs very well in solving the WFG test instances. It also indicates that OCEA is able to deal with MOPs with complex PFs and complicated problem characteristics.

VI. PARAMETER SENSITIVITY ANALYSIS

The sensitivity of OCEA to its parameters is analysed in this section. The GLT test suite is used for the analysis.

A. Maximum Number of Clusters

To test how K_max affects the performance of OCEA, K_max = {4, 5, 7, 10, 20} are chosen for the analysis. The remaining parameters are the same as those in Section IV-B. OCEA was run independently 22 times on each test instance with each K_max value. Fig. 4(a) shows the mean and standard deviation of the IGD metric values obtained by OCEA. From Fig. 4(a), it can be seen that for GLT2 and GLT5-GLT6, OCEA achieves similarly robust performance across the different K_max values. For GLT1 and GLT3-GLT4, however, different K_max values lead to relatively large performance differences; in particular, when K_max is large, the performance of OCEA deteriorates. In general, a small K_max yields good search results for OCEA on the GLT test instances, which implies that OCEA is not very sensitive to K_max on these instances. Therefore, K_max = 7 is chosen in Section IV to carry out the comparison.
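The merge triggered when the cluster count exceeds K_max (Alg. 4, lines 16 to 18) can be sketched as below; the counter-weighted centroid average is an assumption in the spirit of online agglomerative clustering, since the exact merge equations are not reproduced in the text.

```python
import math

def merge_closest(centroids, counters):
    """Merge, in place, the two clusters whose centroids are closest."""
    m = len(centroids)
    i, j = min(((a, b) for a in range(m) for b in range(a + 1, m)),
               key=lambda p: math.dist(centroids[p[0]], centroids[p[1]]))
    ci, cj = counters[i], counters[j]
    centroids[i] = tuple((ci * u + cj * v) / (ci + cj)   # counter-weighted average
                         for u, v in zip(centroids[i], centroids[j]))
    counters[i] = ci + cj
    del centroids[j], counters[j]
    return centroids, counters
```

Keeping the counters lets the merged centroid remain the mean of all points absorbed by both clusters, which matches the role of c_k in Alg. 1.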
It should be noted that the optimal K_max depends on the individual problem.

B. Clustering Effectiveness Analysis

The evolution procedure couples naturally with the online clustering procedure in OCEA. It is expected that the approximated set will exhibit a clustering structure once the evolution procedure has converged. To justify the effectiveness of the online clustering, Fig. 5 plots the clustering results in the first three dimensions of the search space on the GLT1-GLT6 test instances. In the figure, the solutions in different clusters are marked with different colours and symbols. It can be seen that the final approximated sets are clearly partitioned into 7 clusters (note that K_max is set to 7). This figure indicates that OCEA can indeed learn the clustering structure effectively.

C. Mating Restriction Probability

To test the sensitivity of OCEA's performance to the mating control parameter β, β = {0.5, 0.6, 0.7, 0.8, 0.9} are used for the analysis. The remaining parameters are the same as those in Section IV-B. Again, for each β value, OCEA was run independently 22 times on the test instances. Fig. 4(b) shows the statistics of the obtained IGD metric values. From Fig. 4(b), it is observable that for GLT5 and GLT6, different β values yield similar performance, but for GLT1-GLT4, OCEA performs very differently with different β values. Nevertheless, when β = 0.6, OCEA has relatively better performance on all the instances.

Table III: IGD and HV metric values of the final approximated fronts obtained by MOEA/D-DE, TMOEA/D, RM-MEDA, NSGA-II, SMS-EMOA and OCEA over 33 independent runs on the WFG test suite.

The observation in Fig. 4(b) indicates that OCEA is not very sensitive to the setting of β in solving the GLT test instances. Therefore, β = 0.6 is chosen in Section IV for the controlled comparison experiments.
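The β-controlled mating restriction analysed here (Alg. 2, lines 5 and 7) can be sketched as follows, with solutions represented by indices; the data layout and function names are illustrative assumptions.

```python
import random

def global_mating_pool(clusters, rng=random):
    """One randomly chosen member per cluster (Alg. 2, line 5)."""
    return [rng.choice(sorted(c)) for c in clusters]

def mating_pool(i, clusters, gpool, beta=0.6, rng=random):
    """With probability beta, mate within x_i's own cluster (exploitation);
    otherwise use the global pool (exploration). x_i itself is excluded."""
    own = next(c for c in clusters if i in c)
    pool = own if rng.random() < beta else gpool
    return [j for j in pool if j != i]
```

A smaller β pushes the search toward exploration across clusters; the sensitivity study above finds β = 0.6 a good compromise on the GLT instances.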
Again, it is necessary to point out that the optimal β setting depends on the problem characteristics.

D. Control Parameters of the Differential Evolution Operator

The effect of the DE parameters, F and CR, is evaluated in this section. Fig. 4(c) shows that the value of F has a crucial effect on the performance of OCEA for GLT1 and GLT3-GLT4, and that a large F value leads to good performance; for GLT2 and GLT5-GLT6, however, different F settings do not noticeably affect performance. Fig. 4(d) shows that the value of CR has a significant effect on OCEA for GLT1-GLT4, with small CR values being better, while OCEA performs rather stably on GLT5-GLT6 for different CR values. For F = {0.6, 0.8} (with CR = 1) and CR = {0.9, 1} (with F = 0.6), OCEA always finds good IGD metric values on all the GLT test instances. In general, Figs. 4(c) and 4(d) indicate that OCEA is not very sensitive to the F and CR settings.

VII. CONCLUSION

This paper presented a first-ever MOEA that incorporates online clustering to address the non-stationary nature of the evolutionary search. The underlying considerations are 1) to learn the manifold structure of the PS (i.e. the so-called regularity property of MOPs) through clustering, and 2) to adapt to the non-stationary search dynamics. The online agglomerative clustering approach developed in [9] is modified to accommodate the evolutionary search dynamics. The experimental study has shown that the online clustering handles the non-stationary search process well and is able to adaptively learn the clustering structure of the PS. The comparison against five well-known MOEAs has also shown that the structures learned adaptively by the online clustering can indeed improve the search efficiency (in terms of search speed) and effectiveness (in terms of the quality of the final approximated sets and fronts).
Future work includes 1) the development of intelligent recombination operators that fit well into the online learning mechanism; 2) the development and/or incorporation of other online learning strategies; and 3) the study of the developed framework for many-objective optimisation problems.
PRKACB is downregulated in non-small cell lung cancer and exogenous PRKACB inhibits proliferation and invasion of LTEP-A2 cells

Protein kinase cAMP-dependent catalytic β (PRKACB) is a member of the Ser/Thr protein kinase family and a key effector of cAMP/PKA-induced signal transduction, which is involved in numerous cellular processes, including cell proliferation, apoptosis, gene transcription, metabolism and differentiation. In the present study, the expression pattern of PRKACB in non-small cell lung cancer (NSCLC) and the effect of PRKACB upregulation on cell proliferation, apoptosis and invasion were investigated. PRKACB mRNA and protein expression was analyzed in the NSCLC tissues and corresponding normal tissues of 30 cases using quantitative RT-PCR and western blot analysis. A plasmid containing full-length PRKACB was transfected into LTEP-A2 cells to further investigate the effects of PRKACB overexpression on the proliferation, apoptosis and invasion of the transfected cells, which were examined using 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), colony formation, flow cytometry and Transwell assays. The results revealed that the NSCLC tissues exhibited much lower levels of PRKACB mRNA and protein than their corresponding normal tissues. The upregulation of PRKACB decreased the numbers of proliferative, colony-forming and invasive cells, while the apoptotic rates of the transfected cells were increased. These data indicate that PRKACB is downregulated in NSCLC tissues and that upregulation of PRKACB may be an effective way to prevent the progression of NSCLC.

Introduction

Lung cancer is the most commonly diagnosed type of cancer in males and the leading cause of cancer mortality in both genders in economically developed and developing countries (1). Non-small cell lung carcinoma (NSCLC) accounts for ~85% of all lung cancer cases (2).
Standard lung cancer treatment modalities include surgery, chemotherapy, targeted therapy and radiation therapy; however, not all patients benefit from routine therapy. The overall 5-year survival rate of lung cancer patients remains relatively low at ~15% (2). Therefore, the identification of useful biomarkers and the exploration of novel therapeutic targets are necessary and demanding tasks.

The protein kinase cAMP-dependent catalytic β (PRKACB) gene is located at chromosome site 1p31.1 and encodes the cAMP-dependent protein kinase A (PKA) catalytic subunit β. The PRKACB protein is a member of the Ser/Thr protein kinase family and a key effector of the cAMP/PKA-induced signaling pathway, which is involved in numerous cellular processes, including cell proliferation, apoptosis, gene transcription, metabolism and differentiation (3). Typically, PKA is an inactive holoenzyme consisting of two catalytic (C) subunits bound to a regulatory (R) subunit dimer. When four cAMP molecules bind the R subunits, the C subunits are released (4), and the free active catalytic subunits phosphorylate serine and threonine residues on specific substrate proteins, including C-Raf, RhoA, Src and CUTL1, that are involved in cellular proliferation, apoptosis, differentiation and invasion (5)(6)(7)(8). In the human enzyme, four different R subunits (RIα, RIβ, RIIα and RIIβ) and four different C subunits (Cα, Cβ, Cγ and PrKX) have been identified (3). In total, ten different splice variants encoded by the PRKACB gene have been found, and several of these are expressed in human brain, lymphoid and neuronal tissues (9)(10)(11). Multiple PRKACB subunits have also been observed in human prostate specimens, and the PRKACB variants appear to play varying roles in proliferation and differentiation during prostate cancer progression (12).
It has been demonstrated that transcription of PRKACB may be directly activated by c-MYC, which is associated with tumorigenesis through the promotion of cell proliferation (13). It has also been shown that a variant of PRKACB phosphorylates the p75 neurotrophin receptor (p75NTR) and regulates its localization to lipid rafts (14). PRKACB was identified as a candidate gene that is directly or indirectly involved in apoptosis in human mantle cell lymphoma (MCL) tumors (15). In addition, a novel interaction between PRKACB and the cell cycle and apoptosis regulatory protein-1 (CARP-1) was identified and confirmed by glutathione-S-transferase (GST) pull-down experiments in brain tissue (16). However, little is known regarding its expression and role in human NSCLC.

The present study aimed to assess the role of PRKACB in the development and progression of human NSCLC. The mRNA and protein expression patterns of PRKACB were first examined in NSCLC and corresponding normal tissues. Moreover, a plasmid vector containing full-length PRKACB was constructed and transfected into human lung adenocarcinoma LTEP-A2 cells to increase PRKACB expression. The effects of PRKACB upregulation on cell proliferation, clonogenicity, apoptosis and invasion were then investigated in the LTEP-A2 cells.

Materials and methods

Tissue samples and patients. NSCLC tissues (12 cases of lung squamous cell carcinoma and 18 cases of lung adenocarcinoma; 22 of these 30 cases presented with lymph node metastasis) and their corresponding normal tissues (30 cases) were collected from 30 patients who underwent surgery at the Department of Thoracic Surgery, The Fourth Affiliated Hospital of China Medical University, Shenyang, Liaoning, China, between 2008 and 2012. All tumor tissues were diagnosed histopathologically by at least two trained pathologists.
Written informed consent was obtained from all patients prior to surgery, and the study protocol was approved by the Institutional Review Board for the use of Human Subjects at China Medical University (Shenyang, China). None of the patients received pre-operative chemotherapy or radiation therapy. Surgically removed tumors and matched normal tissues were immediately frozen in liquid nitrogen and kept at -80˚C until the extraction of RNA and protein.

RNA extraction and real-time RT-PCR. Total RNA from the frozen tissues was isolated using TRIzol reagent (Takara Bio Inc., Dalian, Liaoning, China). Quantitative real-time polymerase chain reaction (qPCR) was conducted using SYBR Premix Ex Taq (Takara Bio Inc.) in a total volume of 20 µl on a 7300 Real-Time PCR System (Applied Biosystems, Foster City, CA, USA). The PCR conditions were: denaturation at 95˚C for 30 sec, followed by 40 cycles of denaturation at 95˚C for 5 sec and annealing at 60˚C for 31 sec. The sequences of the primer pairs are as follows: PRKACB forward, 5'-AGTGGTTTGCCACGACAGATTG-3' and reverse, 5'-TTGCTGGTACCAGAGCCTCTAA-3'; GAPDH forward, 5'-GCACCGTCAAGGCTGAGAAC-3' and reverse, 5'-TGGTGAAGACGCCAGTGGA-3'. GAPDH was used as the reference gene. The relative levels of gene expression were calculated using the 2^(-ΔCt) method (ΔCt = Ct_PRKACB − Ct_GAPDH), and the fold change of gene expression was calculated by the 2^(-ΔΔCt) method. All experiments were repeated in triplicate.

Western blot analysis. Total protein from the frozen tissues was extracted in a lysis buffer (Beyotime Biotechnology, Haimen, Jiangsu, China) and the protein content was determined using the bicinchoninic acid (BCA) assay (Beyotime Biotechnology). A total of 80 µg of protein was separated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred onto polyvinylidene fluoride (PVDF) membranes.
Subsequent to blocking with 5% bovine serum albumin (BSA), the membranes were incubated with PRKACB antibody (1:500; Santa Cruz) and GAPDH antibody (1:500; Santa Cruz) overnight at 4˚C. The membranes were then incubated for 2 h at 37˚C with goat anti-rabbit IgG (1:4000; Beijing Biosynthesis Biotechnology Co., Ltd., Beijing, China). Immunoreactive bands were visualized using the enhanced chemiluminescence (ECL) system (Beyotime Biotechnology), following the manufacturer's instructions. The DNR Imaging System (DNR Bio-Imaging Systems, Israel) was used to capture the specific bands, and the optical density of each band was measured using ImageJ software (NIH, Bethesda, MD, USA). The ratio between the integrated optical density (IOD) of PRKACB and GAPDH of the same sample was calculated as the relative content and expressed graphically.

Cell culture and transfection. Lung adenocarcinoma LTEP-A2 cells were obtained from the Shanghai Cell Bank (Shanghai, China). The cells were grown in RPMI-1640 supplemented with 10% fetal bovine serum (FBS; Hyclone, USA) and placed in an incubator with 5% CO2 at 37˚C. To increase PRKACB expression for subsequent experiments, the LTEP-A2 cells (60-70% confluence) were transfected with a plasmid containing full-length PRKACB (pEGFP-C1-PRKACB) or the vector control (pEGFP-C1; Takara Bio Inc.) for 48 h using Lipofectamine LTX with PLUS reagent (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's instructions. The experiments were repeated at least three times. The transfection efficiency in the experiments was >50%. Following 36-48 h of transfection, the cells with high PRKACB expression were confirmed by real-time RT-PCR and western blot analysis.

3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The MTT assay was used to evaluate the proliferation of the transfected cells.
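The 2^(-ΔCt) and 2^(-ΔΔCt) calculations described in the RT-PCR section above amount to the following arithmetic; the function names are illustrative, not from the study.

```python
def relative_expression(ct_target, ct_reference):
    """2^(-dCt) relative level, dCt = Ct(target) - Ct(reference gene)."""
    return 2.0 ** -(ct_target - ct_reference)

def fold_change(ct_target_tumor, ct_ref_tumor, ct_target_normal, ct_ref_normal):
    """2^(-ddCt) fold change of tumor versus matched normal tissue."""
    ddct = (ct_target_tumor - ct_ref_tumor) - (ct_target_normal - ct_ref_normal)
    return 2.0 ** -ddct
```

A fold change below 1 indicates lower target expression in the tumor sample, as reported for PRKACB in this study.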
The cells were detached and seeded into five 96-well plates (5×10^3 cells/100 µl/well) in parallel and transfected with PRKACB and the vector control. During the following 4 days, the absorbance of one indicated plate was examined each day, while the cells in the other plates were cultured continuously. A total of 20 µl MTT (5 mg/ml) was added to each well of the indicated plate and, 4 h later, the liquids were removed and 150 µl dimethyl sulphoxide (DMSO) was added. Following 10 min of agitation, the absorbance was measured using a microplate reader (TECAN, Männedorf, Switzerland) at 492 nm. The results were plotted as the mean ± SD of five determinations.

Colony formation assay. The cells were transfected with PRKACB and the vector for 24 h. Thereafter, 200 cells were plated into 6-cm cell culture dishes and incubated for 14 days. The plates were stained with Giemsa, and colonies with >50 cells were counted.

Statistical analysis. The SPSS for Windows version 17.0 statistical analysis software (SPSS, Inc., Chicago, IL, USA) was used for data processing. A paired-samples t-test was used to compare the differences between PRKACB expression in the NSCLC and corresponding normal tissues. One-way ANOVA was used to compare the differences in PRKACB expression in the transfected LTEP-A2 cells and controls. All data are presented as the mean ± SD. P<0.05 was considered to indicate a statistically significant difference.

Results

Expression of PRKACB mRNA and protein in human NSCLC tissues and their corresponding normal tissues. The PRKACB mRNA expression was first quantitatively determined in the clinical samples using real-time RT-PCR. Of the 30 patients, 25 (83.3%) demonstrated a lower expression level of PRKACB mRNA in the NSCLC tissues compared with the corresponding normal tissues (Fig. 1A).
In addition, the mean expression value of PRKACB mRNA in the NSCLC tissues (relative ratio of PRKACB/GAPDH; 0.007677±0.004608) was significantly lower than that in the normal tissues (0.031936±0.018996; P<0.05; Fig. 1B). Consistent with the mRNA level, the protein levels of PRKACB were downregulated in the NSCLC tissues compared with the normal tissues (0.350±0.124 vs. 0.964±0.245, respectively; P<0.05; Fig. 1C). The study also demonstrated that PRKACB protein expression was downregulated in lymph node metastasis tissues (data not shown).

PRKACB upregulation inhibits proliferation and clonogenicity in NSCLC cells. To elucidate the biological role of PRKACB during carcinogenesis, the physiological effects of PRKACB upregulation on cell proliferation and clonogenicity were examined using the LTEP-A2 cells. Fig. 2A shows the overexpression of PRKACB in the transfected cells. Three days after PRKACB transfection, the absorbance values in the PRKACB, vector and control groups were 0.93±0.08, 1.41±0.12 and 1.36±0.09, respectively (one-way ANOVA, P<0.05). The growth curves show that the cells transfected with pEGFP-C1-PRKACB grew more slowly than the empty-vector-transfected cells and the control cells, indicating that PRKACB inhibits proliferation in NSCLC cells (Fig. 2B). The colony formation efficiencies of the LTEP-A2 cells transfected with PRKACB and the vector control for 24 h are shown in Fig. 2C. These results show that the increased expression of PRKACB significantly inhibited the colony formation efficiency of the LTEP-A2 cells. Collectively, these data suggest that PRKACB may act as a negative regulator of cell growth and that its downregulation plays a significant role in NSCLC carcinogenesis.

Elevated apoptotic rate in PRKACB-transfected cells. PRKACB has been considered to prevent the overgrowth of cells by inducing apoptosis (15,16).
Therefore, apoptosis was examined following PRKACB transfection using the Annexin V-PE/7-AAD assay and flow cytometry. It was confirmed that PRKACB was upregulated in the transfected cells. The apoptotic rates of the LTEP-A2 cells in the PRKACB, vector and control groups were 24.43±3.42, 4.39±1.63 and 3.48±1.44%, respectively (one-way ANOVA, P<0.05; Fig. 3). The results showed that apoptosis was significantly induced in the PRKACB-overexpressing cells.

Effect of PRKACB upregulation on the invasive potential of transfected cells. It has been acknowledged that PKA may inhibit RhoA signaling, which has been implicated in the process of tumor cell invasion and metastasis (6). To determine whether PRKACB expression further affects the invasion of LTEP-A2 cells, the present study compared the invasive ability of the three cell groups. The numbers of invasive cells in the PRKACB, vector and control groups were 83.6±9.5, 156.9±13.7 and 154.2±12.9, respectively (one-way ANOVA, P<0.05; Fig. 4). These results show that the increased expression of PRKACB significantly inhibited the invasion of the LTEP-A2 cells, as demonstrated by the Matrigel invasion assay.

Discussion

The PRKACB gene is located at the 1p31.1 chromosome site and encodes PKA catalytic subunit β, a member of the Ser/Thr protein kinase family. As a key effector of the cAMP/PKA-induced signaling pathway, the free C subunits phosphorylate serine and threonine residues on specific substrate proteins and regulate a wide range of cellular processes. Previous studies have identified the loss of 1p31.1 in MCL patients and MCL cell lines. PRKACB has been identified as an apoptotic candidate gene, and it appears that decreased expression of PRKACB is implicated in human MCL (15). PRKACB tissue-specific expression has also been found in human brain, neuronal, lymphoid and prostate cancer tissues, and has been reported to correlate with cellular proliferation or differentiation processes (9)(10)(11)(12).
However, no previous studies have investigated the role of PRKACB in lung cancer. In the present study, the mRNA and protein levels of PRKACB were downregulated in the human NSCLC tissues compared with their corresponding normal tissues. These results suggest that PRKACB has a critical effect on the tumorigenesis and aggressiveness of NSCLC.

A recent study discovered a novel interaction between PRKACB and the cell cycle and apoptosis regulatory protein-1 (CARP-1), confirmed by GST pull-down experiments in brain tissue (16). It has also been demonstrated that PRKACB interacts with p75NTR and phosphorylates it at Ser304 (14). The most prominent biological function of p75NTR is the induction of cell death: it activates the JNK-p53-Bax apoptosis pathway and other proteins that regulate cell death, such as NRIF (17). PKA-mediated phosphorylation at Ser304 has been shown to promote the translocation of p75NTR to lipid rafts and to regulate the downstream signals of p75NTR, including the inactivation of RhoA, which has been implicated in the process of tumor cell invasion and metastasis. In addition, PKA may also directly inhibit RhoA signaling; when Ser188 is phosphorylated, RhoA becomes inactive, inducing characteristic morphological changes that cause cell rounding (6). These data suggest that decreased PRKACB is associated with cellular apoptosis, invasion and metastasis. With the aim of assessing the role of PRKACB in the development and progression of human NSCLC, the present study examined the effects of exogenously transfected PRKACB on the apoptosis and invasion of LTEP-A2 cells. Consistent with the aforementioned findings, the present study concluded that the upregulation of PRKACB increased the number of apoptotic cells and decreased the number of invasive cells. The results demonstrate the potential role of PRKACB in the development and progression of human NSCLC.
As previously described, PKA activates signaling pathways involved in numerous cellular processes, including cell proliferation, apoptosis and gene transcription (3). cAMP-mediated PKA activation has been shown to have anti-proliferative effects in a number of cell types, including thyroid papillary carcinoma, ovarian epithelial cancer, breast cancer and malignant glioma cells (18)(19)(20)(21)(22)(23)(24)(25)(26). These anti-proliferative effects are mainly associated with the negative regulation of the Ras-Raf-MEK-ERK signaling pathway, achieved by interfering with the activation of Raf-1 directly or via Ras (5,24,27). Several other mechanisms have been proposed to explain the anti-proliferative effects of activated PKA in various other cells and tissues, including a decrease in the expression level of cyclin D3 and an upregulation of p27kip1 (26). PKA is able to inhibit CUTL1-mediated proliferation and migration (8), as well as the LPA stimulation of SRF by promoting the dissolution of F-actin (19). In the present study, we further examined the effects of exogenously transfected PRKACB on the proliferation of LTEP-A2 cells. The observation that the upregulation of PRKACB decreases the proliferation of the LTEP-A2 cells is consistent with a negative role for PKA in the proliferation of these cells. Exogenously expressed PRKACB may effectively inhibit the progression of lung cancer. However, the possibility that an excess of free PRKACB subunits generates signals different from those produced by the cAMP/PKA signaling pathway cannot be excluded. It has also been previously shown that the activation of PKA has either proliferative or anti-apoptotic effects in cultured cells, and that these opposite responses may be due to the existence of cell type-specific targets of this signaling pathway (12,13). The present study demonstrated that PRKACB was downregulated in human NSCLC tissues.
Decreased PRKACB appears to be associated with cellular apoptosis, invasion and proliferation. However, the molecular mechanisms underlying these processes remain largely unknown. Increased PRKACB expression may effectively inhibit lung cancer, and the upregulation of PRKACB may provide a useful strategy for future NSCLC inhibitory therapies.
Clinical Value of PLR, MLR, and NWR in Neoadjuvant Chemotherapy for Locally Advanced Gastric Cancer Objective. The clinical value of platelet-to-lymphocyte ratio (PLR), monocyte-to-lymphocyte ratio (MLR), and neutrophil-to-white blood cell ratio (NWR) in predicting the prognosis of patients with locally advanced gastric cancer after neoadjuvant chemotherapy (NACT) was studied. Methods. A total of 131 patients with locally advanced gastric cancer treated with neoadjuvant chemotherapy in our hospital from May 2015 to June 2018 were selected as the study subjects. The relationship between the values of PLR, MLR, and NWR and the efficacy of neoadjuvant chemotherapy and clinical staging was analyzed; all patients were followed up for 3 years. Patients were divided into a death group and a survival group according to their survival. The predictive value of PLR, MLR, and NWR for patients' prognosis was analyzed, and the survival rates of patients with different PLR, MLR, and NWR values were compared. Results. The effective rate of neoadjuvant chemotherapy in patients with locally advanced gastric cancer was 62.60% (82/131), and the PLR, MLR, and NWR values in the effective group were lower than those in the ineffective group (P < 0.05). The AUC of combined PLR, MLR, and NWR in evaluating the efficacy of neoadjuvant chemotherapy was greater than that of PLR and NWR alone (P < 0.05). The PLR value of patients with stage IIIa, IIIb, and IIIc was greater than that of patients with stage II, the MLR value of patients with stage IIIb and IIIc was greater than that of patients with stage II and IIIa, and the NWR value of patients with stage IIIc was greater than that of patients with stage II, IIIa, and IIIb (P < 0.05). PLR, MLR, and NWR values were positively correlated with clinical stage (P < 0.05).
The PLR, MLR, and NWR values in the survival group were lower than those in the death group (P < 0.05). The AUC of combined PLR, MLR, and NWR in predicting the prognosis of patients was greater than that of MLR and NWR alone (P < 0.05). The survival rate of patients with PLR ≥ 162.11 (36.21%) was lower than that of patients with PLR < 162.11 (80.82%), the survival rate of patients with MLR ≥ 0.31 (42.86%) was lower than that of patients with MLR < 0.31 (74.67%), and the survival rate of patients with NWR ≥ 0.62 (45.00%) was lower than that of patients with NWR < 0.62 (74.65%) (P < 0.05). Conclusions. PLR, MLR, and NWR values are correlated with clinical stage, and their combined detection has value in evaluating the clinical efficacy of neoadjuvant chemotherapy and predicting the prognosis of patients with locally advanced gastric cancer. Introduction Gastric cancer is a common malignant tumor of the digestive system, with a high incidence in East Asia. Surgery is the most effective method for the treatment of gastric cancer. The 5-year survival rate after radical resection of early gastric cancer can reach over 90%, but early gastric cancer usually has no obvious symptoms, so it rarely prompts patients to seek care; by the time patients feel obvious symptoms, the lesions are already at an advanced stage [1]. Neoadjuvant chemotherapy (NACT) has become an important means for the treatment of intermediate and advanced cancer. The use of NACT in the treatment of locally advanced gastric cancer can effectively downstage the tumor, improve the rate of radical surgery for patients with advanced gastric cancer, improve the quality of life of patients, and improve their prognosis [2,3].
According to the diagnosis and treatment guidelines of the Collaborative Professional Committee of Clinical Oncology of the Chinese Anti-Cancer Society, clinical diagnosis and prognosis assessment of gastric cancer patients mainly rely on imaging, pathological, gastroscopy, and immunohistochemical examinations [4]. However, whether there are more convenient indicators to evaluate the prognosis of patients deserves further discussion. Inflammatory response is closely related to the occurrence and development of tumors. Inflammatory cells can regulate the tumor microenvironment by releasing a variety of cytokines, promote the proliferation of tumor cells, inhibit their apoptosis, promote the distant metastasis of tumors, and affect the prognosis of tumor patients [5]. According to relevant studies, white blood cells, platelets, lymphocytes, peripheral blood neutrophils, the platelet-to-lymphocyte ratio (PLR), and the neutrophil-to-white blood cell ratio (NWR) are closely related to tumor prognosis; PLR can reflect the level of systemic inflammation in patients and can be used to evaluate the prognosis of gastric cancer patients [6]. Since the lymph node stage (pN) of the UICC clinical tumor stage (TNM) may show an obvious "stage deviation" phenomenon, which can directly affect the prognosis evaluation of gastric cancer, some studies have used the monocyte-to-lymphocyte ratio (MLR) to predict the prognosis of gastric cancer patients, with a certain predictive value [7]. The purpose of this study was to investigate the clinical value of PLR, MLR, and NWR in predicting the prognosis of patients with locally advanced gastric cancer treated with neoadjuvant chemotherapy, which is reported as follows. General Information. A total of 131 patients with locally advanced gastric cancer who received neoadjuvant chemotherapy in the hospital from May 2015 to June 2018 were selected as the study subjects, including 79 males and 52 females.
The age ranged from 41 to 65 years, with an average of 52.17 ± 6.09 years. This study was approved by the hospital ethics committee. Exclusion Criteria. The exclusion criteria were as follows: (1) patients with serious complications during hospitalization; (2) patients who had received chemotherapy before enrollment; (3) patients with incomplete clinicopathological data; (4) patients with serious infections or immune system diseases; (5) patients with other malignant tumors; (6) patients with severe organ dysfunction of the heart, liver, or kidney; and (7) patients with an estimated survival time of < 3 months. Treatment Methods. All patients were given neoadjuvant chemotherapy. The chemotherapy regimen was mFOLFOX6: oxaliplatin 85 mg/m 2 was given intravenously over 2 hours on the first day, fluorouracil 0.4 g/m 2 was given intravenously (after calcium tetrahydrofolate), and fluorouracil 2.4 g/m 2 was given by continuous intravenous infusion over 46 hours (via chemotherapy pump). The regimen was repeated every 2 weeks, and the lesions were evaluated by CT after 2-3 cycles. For patients with reduced lesions, surgery was performed after 2 weeks of rest. For patients with intraoperative ascites, the ascites was extracted for centrifugation and exfoliative cytology was performed. For those without ascites, the abdominal cavity was rinsed with normal saline and the washing fluid was then centrifuged. If no cancer cells were found, surgical resection was performed. The resection range included the whole stomach and left lobe of the liver, with the resection margin following the principle of radical tumor resection of the organ where the tumor was located. Infection prevention and nutritional support were provided in the perioperative period, and postoperative treatment was performed by an oncologist based on the original protocol for a total of 12 cycles.
For patients who experienced adverse reactions to chemotherapy, symptomatic treatment was given. Clinical Efficacy Evaluation. According to the "Response Evaluation Criteria in Solid Tumors (RECIST)" [10], patients were divided into complete remission, partial remission, stable disease, and disease progression. Partial remission means a reduction of the tumor by more than 50% (in the case of a single tumor, the product of the longest diameter of the tumor and its largest perpendicular diameter is reduced by more than 50%; in the case of multiple tumors, the sum of the areas of the multiple tumors is reduced by more than 50%). Stable disease means that the tumor area decreases by less than 50% or increases by less than 25%. Disease progression means that the tumor increases by more than 25% or new lesions appear. Complete remission and partial remission were defined as effective. According to this, the enrolled patients were divided into an effective group and an ineffective group. Prognosis Evaluation. The survival of patients was followed up at 3 months, 6 months, 1 year, 2 years, and 3 years by telephone or outpatient follow-up. Patients were divided into a death group and a survival group according to 3-year survival after treatment. Detection of PLR, MLR, and NWR Values. Fasting venous blood was collected from patients before treatment; platelet, lymphocyte, neutrophil, white blood cell, and monocyte counts were measured with a Sysmex XE-2100 hematology analyzer (Sysmex Corporation, Japan); and the PLR, MLR, and NWR values were calculated. Observation Indicators. (1) The PLR, MLR, and NWR levels of the effective group and the ineffective group were compared, and the value of PLR, MLR, and NWR levels in evaluating clinical efficacy was analyzed. (2) The patients were staged according to the TNM staging system for gastric cancer promulgated by the International Union Against Cancer (UICC). Statistical Methods.
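The three indices described above are simple ratios computed from the pre-treatment complete blood count; a minimal sketch (the counts shown are hypothetical, in the same units, e.g. 10^9 cells/L):

```python
def blood_ratios(platelets, lymphocytes, neutrophils, wbc, monocytes):
    """Compute PLR, MLR, and NWR from complete blood count values."""
    return {
        "PLR": platelets / lymphocytes,   # platelet-to-lymphocyte ratio
        "MLR": monocytes / lymphocytes,   # monocyte-to-lymphocyte ratio
        "NWR": neutrophils / wbc,         # neutrophil-to-white blood cell ratio
    }

# Hypothetical pre-treatment values for one patient
r = blood_ratios(platelets=250, lymphocytes=1.5,
                 neutrophils=4.5, wbc=7.2, monocytes=0.5)
```

With these example counts the patient would fall above the PLR cut-off of 162.11 and above the MLR cut-off of 0.31 but just above the NWR cut-off of 0.62 as well, illustrating how a single blood draw yields all three markers.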
All data in this study were entered into an Excel table independently by two people and were analyzed and processed with the statistical software SPSS 24.0. Measurement data were expressed as mean ± SD (x̄ ± s). Data consistent with normal distribution and homogeneity of variance were analyzed by t-test, and one-way ANOVA was used for comparisons between multiple groups. Count data were described by n and %, the chi-square test was used for comparison between groups, and the rank sum test was used for comparison of ranked data. ROC curves were used to analyze the value of PLR, MLR, and NWR levels in evaluating the clinical efficacy of neoadjuvant chemotherapy in patients with locally advanced gastric cancer. GraphPad Prism 5 was used for image and survival curve analysis; the log-rank χ2 test was used to compare the survival rate between the two groups. All tests were two-sided, and P < 0.05 was considered statistically significant. The PLR value of patients with stage IIIa, IIIb, and IIIc was higher than that of patients with stage II, the MLR value of patients with stage IIIb and IIIc was greater than that of patients with stage II and IIIa, and the NWR value of patients with stage IIIc was greater than that of patients with stage II, IIIa, and IIIb (P < 0.05). (Note: comparison of PLR, MLR, and NWR values between the two groups; * indicates P < 0.05.) 3.6. Predictive Value of PLR, MLR, and NWR Values for Prognosis. The AUC of combined PLR, MLR, and NWR in predicting the prognosis of patients was greater than that of MLR and NWR alone (P < 0.05), as shown in Table 2 and Figure 6. Discussion Locally advanced gastric cancer refers to gastric cancer that invades only the liver, pancreas, and other surrounding organs or has local lymph node metastasis limited to the periphery of the tumor, but without distant lymph node metastasis.
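For a single marker, the ROC analysis described above reduces to a rank comparison between outcome groups: the AUC equals the probability that a randomly chosen poor-outcome patient has a higher marker value than a randomly chosen good-outcome patient (equivalent to the Mann-Whitney statistic). A sketch with made-up values follows; combining markers, as the study does, would additionally require a model such as logistic regression to produce a single combined score:

```python
def roc_auc(scores, labels):
    """AUC as the probability that a positive case outscores a
    negative case; ties between a positive and a negative count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical PLR values; label 1 = poor outcome, 0 = good outcome
auc = roc_auc([180, 150, 170, 120, 200, 130], [1, 0, 1, 0, 1, 0])
```

An AUC of 0.5 indicates no discrimination and 1.0 perfect separation; comparing AUCs of single versus combined markers is what underlies the study's claim that the combination discriminates better.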
In the past, surgical resection was generally abandoned and only gastric jejunostomy was used for conservative treatment, but the therapeutic effect was poor [11]. Relevant studies have pointed out that the 3-year survival rate of palliative resection for localized advanced gastric cancer is low and the prognosis of patients is poor [12]. Therefore, how to convert inoperable or potentially resectable gastric cancer into operable resection through appropriate preoperative treatment is the most urgent problem in the treatment of gastric cancer. Neoadjuvant chemotherapy can not only shrink the primary tumor and lymph nodes but also eliminate potential metastatic lesions, thereby reducing the stage, prolonging the survival period of patients, and improving the quality of life of patients [13]. Studies have found that preoperative neoadjuvant chemotherapy and postoperative combined organ resection can significantly improve the survival rate of patients with tumor invading surrounding organs and distant metastasis [14,15]. In this study, neoadjuvant chemotherapy was applied in the clinical treatment of locally advanced gastric cancer, and it was found that the effective rate of neoadjuvant chemotherapy was 62.60%, which was not much different from relevant studies [16], but much higher than palliative treatment efficacy. Therefore, it may be feasible to apply this therapy to locally advanced gastric cancer. In this study, for patients with stable efficacy or disease progression confirmed by CT reexamination after neoadjuvant chemotherapy treatment, considering that tumor cells may be resistant to neoadjuvant chemotherapy drugs, chemotherapy regimen should be changed in the follow-up treatment to improve clinical benefit rate and prognosis of patients. Therefore, for patients receiving neoadjuvant chemotherapy, CT examination should be conducted regularly before and during treatment, and treatment plan should be adjusted timely according to the patient's situation. 
In recent years, studies have found that inflammatory response and tissue carcinogenesis share similar molecular targets and signaling pathways [17]. Inflammation is involved in the construction of the tumor microenvironment by changing tumor tissue homeostasis. In recent years, more and more scholars have found that abnormal levels of inflammatory cells and immunomodulatory molecules exist in various tumor microenvironments, which can affect tumor progression and metastasis [18]. Inflammation can lead to abnormal immune function of the body, resulting in a decrease in the number of lymphocytes. Relevant reports point out that the production of a large number of inhibitory immune cells, neutrophils, etc. can promote inflammatory responses, mediate tumor cell proliferation and angiogenesis, and lead to further tumor infiltration or metastasis [19,20]. Neutrophils can promote tumor growth and metastasis by upregulating the expression of related proteases and cytokines. In addition, neutrophils can promote the growth of tumor cells by reshaping the extracellular matrix and releasing reactive oxygen species. Effective chemotherapy can eliminate the influence of the tumor on the body to a certain extent. At present, changes in tumor size are often used to evaluate clinical treatment effect, and changes in the tumor microenvironment are also closely related to tumor growth and reproduction. Therefore, this study suggests that the tumor environment before treatment may also be related to the therapeutic effect of neoadjuvant chemotherapy.
This study found that PLR, MLR, and NWR values in the effective group were lower than those in the ineffective group, indicating that the inflammatory state of the body was related to the clinical treatment effect, which is mainly related to the aggravated inflammatory response, and can promote further tumor metastasis, and then affect the clinical treatment [21,22]. Further analysis in this study found that the AUC of combined PLR, MLR, and NWR in evaluating the efficacy of neoadjuvant chemotherapy was greater than that of PLR and NWR alone, and the AUC value was greater than 0.8, indicating that the combined detection has evaluation value for the efficacy of neoadjuvant chemotherapy, so it may be applied in the treatment evaluation of locally advanced gastric cancer. Tumor-associated inflammatory cells can release a series of inflammatory mediators, cytokines, and enzymes, resulting in changes in vascular permeability, which can aggravate local inflammatory responses and can cause oxidative damage and changes in the tumor microenvironment by releasing inflammatory mediators, thus promoting the proliferation and metastasis of tumor cells [23]. Relevant studies have pointed out that changes in local inflammatory state of the body may be related to tumor progression [24]. The results of this study showed that PLR, MLR, and NWR values were positively correlated with clinical stage, indicating that the increase of PLR, MLR, and NWR values may be related to tumor progression. It is currently believed that neutrophils and platelets in the tumor microenvironment are involved in the occurrence and development of tumors and play an important role in tumor-related inflammation and immunity. Relevant reports pointed out that the increase of peripheral blood neutrophils is related to the hematopoietic cytokines produced by tumors [25]. 
Platelets can promote the growth and metastasis of cancer cells by promoting angiogenesis and producing adhesion molecules, reduce the damage to cancer cells from immune attack and mechanical injury, and help cancer cells escape immune surveillance, thereby promoting tumor progression or metastasis. With tumor progression and metastasis, neutrophil and platelet counts in patients increase, and the body's inflammatory response and antitumor immunity become abnormal, which promotes tumor cell infiltration and metastasis. Relevant reports indicate that changing the preoperative inflammatory state and immune state can effectively improve the long-term prognosis of patients with malignant tumors [26]. Cancer cells can induce platelet aggregation, and tissue factor secreted by cancer cells can also promote platelet production and activation. Neutrophils can promote angiogenesis and tissue infiltration by secreting vascular endothelial factor and matrix proteases, thereby promoting tumor occurrence, invasion, and metastasis, and an increase in neutrophil count directly raises the body's NWR value. NLR can reflect the body's tumor inflammation and immune status. Relevant studies have pointed out that a high level of NLR promotes tumor cell proliferation and metastasis, resulting in poor prognosis [27,28]. In this study, the PLR, MLR, and NWR values in the survival group were lower than those in the death group, and the survival rates of patients with PLR ≥ 162.11, MLR ≥ 0.31, and NWR ≥ 0.62 were lower than those with PLR < 162.11, MLR < 0.31, and NWR < 0.62, respectively, suggesting that the prognosis of patients was related to changes in the PLR, MLR, and NWR values. The reason is that when NLR and PLR increase, the body's effective defense is weakened, and the barrier against malignant tumor cells is destroyed, thus affecting the prognosis of patients [29].
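Cut-offs such as PLR ≥ 162.11 are commonly derived from the ROC curve, for example by maximizing the Youden index (sensitivity + specificity − 1); the paper does not state how its cut-offs were chosen, so the sketch below, with made-up values, is purely illustrative of that common approach:

```python
def youden_cutoff(scores, labels):
    """Return the threshold maximizing sensitivity + specificity - 1,
    classifying score >= threshold as 'poor outcome' (label 1)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        sens = sum(s >= t for s in pos) / len(pos)   # true positive rate
        spec = sum(s < t for s in neg) / len(neg)    # true negative rate
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Hypothetical marker values; label 1 = death, 0 = survival
cut = youden_cutoff([180, 150, 170, 120, 200, 130], [1, 0, 1, 0, 1, 0])
```

Patients are then dichotomized at the chosen threshold and their survival curves compared with the log-rank test, as in the study's high- versus low-PLR analysis.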
In addition, the results of this study showed that the AUC of combined PLR, MLR, and NWR values in evaluating the prognosis of patients was greater than that of MLR and NWR alone, indicating that combined detection has predictive value for the prognosis of patients. The patients with different PLR, MLR, and NWR values were followed up to the end point of the study, and survival curve analysis was performed. The results showed that patients with PLR ≥ 162.11, MLR ≥ 0.31, or NWR ≥ 0.62 had a lower survival rate over the 3-year follow-up. The results provide some guidance for predicting the prognosis of patients with locally advanced gastric cancer treated with neoadjuvant chemotherapy using PLR, MLR, and NWR. However, this study still needs a larger sample size and more in-depth research to support this conclusion. In conclusion, PLR, MLR, and NWR values are correlated with clinical stage, and combined detection has value in evaluating the clinical efficacy of neoadjuvant chemotherapy and predicting the prognosis of locally advanced gastric cancer patients. However, there are still shortcomings in this study. This is a single-center retrospective study, and the statistical results may be biased. Therefore, multicenter analysis is needed to further explore the relationship between PLR, MLR, and NWR values and this disease. Data Availability The labeled datasets used to support the findings of this study are available from the corresponding author upon request.
Drug adherence and psychological factors in patients with apparently treatment‐resistant hypertension: Yes but which ones? Abstract The aim of the study was to assess drug adherence, as well as association of psychological factors with both drug adherence and severity of hypertension in two subtypes of patients with apparently treatment‐resistant hypertension (ATRH): younger patients with uncomplicated hypertension (YURHTN) versus patients ≥60‐year‐old and/or with a history of cardio‐ or cerebrovascular complication (OCRHTN). Drug adherence was assessed in urine by targeted Liquid Chromatography‐Mass Spectrometry. The severity of hypertension was assessed by 24‐h ambulatory blood pressure adjusted for the number of antihypertensive drugs and for drug adherence. Psychological profile was assessed using five validated questionnaires. The proportion of totally non‐adherent patients was three times higher (24.1 vs. 7.1%, P = 0.026) in the YURHTN (n = 54) than in OCRHTN subgroup (n = 43). Independent predictors of drug adherence in YURHTN were ability to use adaptive strategies, male sex and family history of hypertension, accounting for 39% of variability in drug adherence. In the same subgroup, independent predictors of severity of hypertension were somatization and lower recourse to planification, accounting for 40% of variability in the severity of hypertension. In contrast, in the OCRHTN subgroup, independent predictors of drug adherence and severity of hypertension were limited to the number of yearly admissions to the emergency room and the total number of prescribed drugs. In conclusion, poor drug adherence and altered psychological profiles appear to play a major role in younger patients with ATRH devoid of cardiovascular complication. This subgroup should be prioritized for chemical detection of drug adherence and psychological evaluation.
KEYWORDS: drug adherence, psychological profile, resistant hypertension. INTRODUCTION Resistant hypertension has been defined as the failure to achieve an office blood pressure (BP) < 140/90 mm Hg and a 24-h ambulatory BP < 130/80 mm Hg on optimal doses of at least three antihypertensive medications from different classes (ideally one of which is a diuretic). 1 It is characterized by a higher prevalence of target organ damage 2 and a higher incidence of cardiovascular events 3 compared with other forms of hypertension. Many patients with seemingly treatment-resistant hypertension are in fact pseudo-resistant due to poor drug adherence. 4,5,6 However, whatever the approach used, drug adherence is difficult to assess and varies over time. 4,7 Therefore, the frontier between truly and pseudo-resistant hypertension is fluctuating, and patients with severe hypertension may shift from one group to the other and vice versa. Accordingly, many authors prefer to consider these patients in a single category, that is, "apparently treatment-resistant hypertension" (ATRH). 4,8-11
Beyond classic demographic and health-related characteristics, we have demonstrated that psychological factors, mostly related to somatization and expression of emotions, are strong, independent predictors of both drug adherence and severity of hypertension in ATRH but not in controlled hypertensive patients. 12,13 On the other hand, it is well known that drug resistance is also influenced by arterial stiffness and vascular damage. 2,3 The scope of the current study was to help identify those patients in whom psychological factors play a predominant role in the pathogenesis of ATRH, either directly or through the mediation of poor drug adherence, versus patients in whom drug resistance may primarily or secondarily result from vessel-related mechanical factors. In order to achieve this aim, we split our cohort of patients with ATRH into patients aged 60 or older and/or with a history of cardio- or cerebrovascular complication (OCRHTN) versus patients without these characteristics, that is, patients < 60-year-old with uncomplicated hypertension (YURHTN), and assessed drug adherence and predictors of both drug adherence and severity of hypertension in these two subgroups. 24-h ambulatory BP monitoring Twenty-four-hour ambulatory BP values were measured using an automated device. Full drug adherence, partial drug adherence and total non-adherence were defined as presence of all, part or none of the prescribed drugs in the urine, respectively. Drug adherence was defined as the percentage of prescribed antihypertensive drugs which were effectively detected in the urine. Psychological analysis In order to evaluate the psychological profile of hypertensive patients, five validated questionnaires were used: the Emotion Regulation Questionnaire (ERQ) 16-18; the Cognitive Emotion Regulation Questionnaire (CERQ) 19,20; the Toronto Alexithymia Scale (TAS-20) 21; the Brief Symptom Inventory (BSI) 22; and the Post Traumatic Diagnostic Scale (PDS).
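The adherence definitions above translate directly into a simple computation; the drug names and urine LC-MS findings below are hypothetical examples, not taken from the study:

```python
def adherence(prescribed, detected):
    """Percentage of prescribed antihypertensive drugs detected in urine,
    plus the categorical label used in the study."""
    found = [d for d in prescribed if d in detected]
    pct = 100.0 * len(found) / len(prescribed)
    if pct == 100.0:
        category = "full adherence"
    elif pct > 0.0:
        category = "partial adherence"
    else:
        category = "total non-adherence"
    return pct, category

# Hypothetical prescription versus drugs found by urine LC-MS
pct, cat = adherence(
    ["amlodipine", "valsartan", "hydrochlorothiazide"],
    {"amlodipine", "hydrochlorothiazide"},
)
```

In this hypothetical case two of three prescribed drugs are detected, giving roughly 67% adherence and a "partial adherence" classification.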
23 The CERQ was applied only in the Brussels cohort, because it was not available in an Italian version. More details on the questionnaires and their interpretation are provided in references 12,13. Statistical analysis Statistical analyses were performed using IBM SPSS Statistics. Characteristics of patients with OCRHTN versus YURHTN Between October 2017 and June 2021, a total of 97 patients were enrolled. After taking into account age and medical history, 43 were assigned to the OCRHTN group, and 54 to the YURHTN group. Compared to the OCRHTN group, younger patients without a history of cardiovascular disease, that is, those belonging to the YURHTN group, were almost 20 years younger (P < .001), were three times more often smokers (P = .038), tended to be more overweight (P = .074), had a higher office and 24-h ambulatory diastolic BP (P < .001 for both) despite prescription of a higher number of drugs (P = .026), and were more frequently admitted to the emergency room (P = .026) (Table 1). Drug adherence in patients with OCRHTN versus YURHTN Mean drug adherence was significantly lower in patients with YURHTN versus OCRHTN, with a mirror distribution between fully and partly adherent patients on one side and completely non-adherent patients on the other. In particular, the proportion of totally non-adherent patients was three-fold higher in the YURHTN group (P = .026) (Table 1). Correlations between drug adherence and demographic, health-related and psychological variables In the group of younger patients devoid of cardiovascular complication (YURHTN), drug adherence was correlated with male sex, among other variables (Table 2).
Correlations between severity of hypertension and demographic, health-related and psychological variables In the group of younger patients with uncomplicated hypertension (YURHTN subset), the severity of hypertension, defined as mean 24-h systolic BP adjusted for adherence level and the number of antihypertensive drugs prescribed, was correlated with male sex (r = .463, Table 3). By contrast, as reported for drug adherence, in older patients / patients with a history of cardio- or cerebrovascular complication (OCRHTN group), none of the analyzed psychological parameters were associated with the severity of hypertension, but only the number of visits to the emergency department per year (r = .560, P = .001) and the total number of drugs prescribed per day (r = .360, P = .023) (Table 3). Predictive analyses All demographic, health-related and psychological parameters significantly correlated with the variable of interest (drug adherence and severity of hypertension) were included in the models as potential predictors in predictive analyses. Regression analysis on the adherence level In the subset of younger ATRH patients without a history of vascular complication (YURHTN), among the thirteen variables initially included in the model to predict adherence level, three of them remained as independent predictors: the "adaptive strategies" of the CERQ, sex and history of hypertension in the family. This final model accounted for 39.2% (adjusted R2) of the variability in adherence level (Table 4). In the OCRHTN group, three variables (those which were significantly correlated) were initially included in the model to predict adherence level. Two of them finally remained as predictors of drug adherence: the number of visits to the emergency department per year as well as a history of cardiovascular events in the family, both accounting for 28.1% (adjusted R2) of the variability in adherence level (Table 4).
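The adjusted R2 figures reported above penalize the raw R2 for the number of predictors retained (n observations, p predictors). In the sketch below, the raw R2 of 0.427 is a back-calculated assumption chosen only to illustrate how it shrinks to roughly the 39.2% reported for the YURHTN adherence model; it is not a figure from the paper:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for a regression with n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Assumed raw R^2 with n = 54 YURHTN patients and p = 3 retained predictors
adj = adjusted_r2(0.427, 54, 3)  # ≈ 0.393
```

The penalty grows as predictors are added relative to the sample size, which is why stepwise models such as those above report the adjusted rather than the raw value.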
TABLE 3: Correlations between socio-demographic, health-related and psychological variables and severity of hypertension

Regression analyses on severity of hypertension (based on 24-h systolic ABPM)

In the subset of younger ATRH patients without a history of vascular complications (YURHTN), among the fifteen variables initially included in the model to predict the severity of hypertension, two remained as independent predictors: the "planification" scale of the CERQ and the somatization subscale of the BSI. This final model accounted for 39.0% (adjusted R2) of the variability in severity of hypertension (Table 4). In the OCRHTN group, two variables (those that were significantly correlated) were initially included in the model and remained as predictors of severity of hypertension: the yearly number of admissions to the emergency department and the total number of drugs prescribed, jointly accounting for 35.0% (adjusted R2) of the variability in severity of hypertension (Table 4).

In this older group, resistant hypertension was mostly related to increased arterial stiffness and vascular damage. Nevertheless, due to the cross-sectional nature of the study, we cannot exclude that a proportion of these patients were previously non-adherent and became adherent only with time or after a cardiovascular complication occurred, often in association with adoption of a healthier lifestyle including weight loss and/or smoking cessation. It is also possible that psychological disorders, whether or not secondary to an earlier trauma and subsequent PTSD, played a role in the pathogenesis of hypertension and the subsequent accumulation of vascular damage, eventually leading to drug resistance, but that these hidden features cannot easily be brought to light using simple self-report questionnaires. Besides its cross-sectional character, other limitations of our study include the lack of direct assessment of arterial damage and a relatively small sample size, limiting the ability to perform subgroup analyses.

CONFLICT OF INTEREST

None.
Accuracy and efficiency define Bxb1 integrase as the best of fifteen candidate serine recombinases for the integration of DNA into the human genome

Background: Phage-encoded serine integrases, such as φC31 integrase, are widely used for genome engineering. Fifteen such integrases have been described but their utility for genome engineering has not been compared in uniform assays.

Results: We have compared fifteen serine integrases for their utility for DNA manipulations in mammalian cells, after first demonstrating that all were functional in E. coli. Chromosomal recombination reporters were used to show that seven integrases were active on chromosomally integrated DNA in human fibroblasts and mouse embryonic stem cells. Five of the remaining eight enzymes were active on extra-chromosomal substrates, demonstrating that the ability to mediate extra-chromosomal recombination is no guide to the ability to mediate site-specific recombination on integrated DNA. All the integrases that were active on integrated DNA also promoted DNA integration reactions that were not mediated through conservative site-specific recombination or that damaged the recombination sites, but the extent of these aberrant reactions varied over at least an order of magnitude. Bxb1 integrase yielded approximately two-fold more recombinants and displayed about two-fold less damage to the recombination sites than the next best recombinase, φC31 integrase.

Conclusions: We conclude that the Bxb1 and φC31 integrases are the reagents of choice for genome engineering in vertebrate cells and that DNA damage repair is a major limitation upon the utility of this class of site-specific recombinase.

Background

Serine integrases are phage-encoded site-specific recombinases that promote conservative recombination reactions between short (40-60 bp) DNA substrates located on the phage (phage attachment site, attP) and bacterial (bacterial attachment site, attB) chromosomes [1].
The product of attP × attB recombination is an integrated prophage flanked by two new recombination sites, attL and attR, each containing half sites derived from attP and attB. In the absence of accessory factors the integrases mediate unidirectional recombination between attP and attB with greater than 80% efficiency. In the presence of a phage-encoded accessory protein, the recombination directionality factor (RDF), the attP × attB recombination is inhibited and the attL × attR recombination is stimulated [2,3]. In this way, integration (attP × attB) and excision (attL × attR) of the phage genome are under strict control and in tune with the phage life cycle. The unidirectional activity, short substrate sites and functional autonomy of these recombinases have made them a useful complement to the widely used reversible recombinases of the tyrosine recombinase family, such as Cre and Flp, for genome engineering (reviewed in [1]). In particular, the unidirectional activity has made them valuable for promoting DNA integration by recombinase-mediated cassette exchange reactions and for the development of iterative recombination approaches [4][5][6][7]. To date, serine integrases derived from phages φC31 [8], φBT1 [9], Bxb1 [10,11] and R4 [11,12] have been shown to be capable of promoting site-specific integration of DNA into mammalian genomes, while those of TP901-1 [13], A118, FC1 and φRV [14] have been shown to promote site-specific recombination in an extra-chromosomal environment in mammalian cells. With one exception, these studies have been carried out largely independently of one another, in different cell lines, in cells of different species and using different protocols. The exceptional study was that of Yamaguchi and colleagues [11], who compared the activities of the φC31, Bxb1, TP901-1 and R4 integrases in mediating site-specific recombination into a human artificial chromosome (HAC) isolated in hamster cells.
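The site nomenclature above — attP × attB recombination yielding hybrid attL and attR sites, each built from half sites of the two parents — can be illustrated with a toy string model. The sequences and crossover dinucleotide below are invented placeholders, not real attachment sites:

```python
# Toy model of serine-integrase recombination. Each att site is split at
# a central crossover dinucleotide into two half sites; attP x attB
# recombination swaps the half sites to give the hybrid attL and attR
# products. All sequences here are invented placeholders.

CROSSOVER = "TT"

def recombine(attP: str, attB: str) -> tuple:
    p_left, p_right = attP.split(CROSSOVER)   # P  | P' half sites
    b_left, b_right = attB.split(CROSSOVER)   # B  | B' half sites
    attL = b_left + CROSSOVER + p_right       # B half + P' half
    attR = p_left + CROSSOVER + b_right       # P half + B' half
    return attL, attR

attP = "GGGAAA" + CROSSOVER + "CCCAAA"
attB = "AAAGGG" + CROSSOVER + "AAACCC"
attL, attR = recombine(attP, attB)
print(attL, attR)  # AAAGGGTTCCCAAA GGGAAATTAAACCC
```

In this picture, the RDF-stimulated excision reaction (attL × attR) is simply the same half-site swap run in the opposite direction, regenerating attP and attB.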
This study exploited a promoter trap strategy and thus relied upon selection to assay recombination products. Importantly, however, the products were not analyzed at the level of DNA sequence. It was therefore neither possible to determine the total level of recombination promoted by these different enzymes nor to determine the fraction of recombination events that had proceeded by reciprocal and conservative site-specific recombination. The discovery that site-specific recombination mediated by the φC31 integrase is sometimes accompanied by DNA damage in vertebrate cells [15] posed the question of how far integrase-associated DNA damage limits the use of the serine integrases as genome engineering reagents. Damage of the type seen with the φC31 integrase has not been detected with the tyrosine recombinases and is likely to be a consequence of the DNA cleavage and strand exchange mechanism of the serine recombinases (reviewed in [16]). The recombination pathway begins with binding of integrase to the attachment sites, which are then brought together by protein:protein interactions to form a synaptic tetramer. The reaction then proceeds by the formation of concerted double strand breaks in both of the DNA substrates prior to subunit rotation and recombination. It seems likely that the damage that accompanies the activity of these recombinases in vertebrate cells arises as a consequence of these double strand breaks being recognized by the mammalian cell double strand break repair pathways. It should be noted that to date no damage has been observed to accompany the action of the serine integrases in bacteria; it may therefore be the eukaryotic chromatin environment or the nature of the mammalian repair pathways that leads to the damage seen in mammalian cells.
The frequency and extent of the damage seem likely to reflect the time spent by the integrases in the covalently linked, cleaved DNA complex, as this is the most likely target for the repair pathways. In total there are fifteen phage-encoded serine integrases for which both attachment sites are known. Nine of these fifteen integrases have been characterised in reactions in mammalian cells, E. coli or in vitro (φC31 [17], Bxb1 [18], φBT1 [19], φC1 [14,20], MR11 [21], TP901-1 [22], R4 [12], A118 [14] and φRV [14,23]), while six (TG1, φ370.1 [24], Wβ [25], BL3, SPBc and K38) have not yet been shown to be active outside their native hosts. In total there are ten integrases whose utility as tools for integrating DNA into mammalian genomes has not been investigated. We have therefore set out to rank the activities of all fifteen of these serine integrases, for which the sites are known, by the criteria of both accuracy and efficiency in two different mammalian cell lines: human HT1080 cells and mouse ES cells. These studies have provided us with a clear rank order for the utility of this important class of enzymes as tools for vertebrate genome engineering, with Bxb1 integrase mediating the most efficient and accurate site-specific recombination in this heterologous environment. The φC31 integrase comes a close second. For the remaining integrases, we demonstrate that DNA damage is an important factor in limiting the utility of members of this class of enzymes for promoting integration reactions in mammalian cells.

Results and discussion

Fifteen unidirectional phage integrases are active in E. coli

Fifteen unidirectional phage integrases have been described for which the attachment sites are known. We first set out to confirm that each integrase and its cognate attachment sites were active by expressing each integrase gene in E. coli in the presence of a reporter plasmid.
Fifteen reporter plasmids (conferring chloramphenicol resistance) were constructed in which the lacZα gene was flanked by the attB and attP sites for each of the respective integrases (Figure 1A). These plasmids conferred exclusively blue colonies when transformed into an E. coli strain containing the lacZΔM15 mutation and plated on selective agar in the presence of X-gal and IPTG (Figure 1B, top row). We next introduced genes encoding each of the integrases, tagged at the N-terminal end with a StrepII tag and at the C-terminal end with an SV40 large T antigen nuclear localization signal, into the E. coli expression plasmid pET21a (conferring ampicillin resistance). Initially we introduced the integrase expression plasmids into DH5α E. coli K12 strains containing a cognate reporter plasmid and scored the transformants as white or light blue, indicative of recombination, or blue, indicative of no recombination. In this assay the active integrases were those from phages φC31, Bxb1, TG1, TP901-1, A118, SPBc, Wβ, φBT1 and φ370.1 (Figure 1B, middle row). The transformants containing integrase genes that gave rise only to blue or light blue colonies were picked and restreaked to single colonies; white colonies were observed from strains expressing the BL3, FC1 and K38 integrases, whereas no white segregants were observed from strains containing the MR11, φRV and R4 int genes (Figure 1B, bottom row). As the expression of the integrases might be limiting in bacteria containing the MR11, φRV and R4 integrase genes, we repeated the recombination assay in E. coli BL21(DE3), a strain that should over-express the integrase genes on induction with IPTG. Competent BL21(DE3) cells containing the different reporter plasmids were prepared and the integrase-containing expression plasmids introduced by transformation, with selection for ampicillin and chloramphenicol resistance.
Expression of integrase was induced by addition of IPTG to logarithmically growing cells, and the cultures were further incubated overnight at 20°C. Plasmid DNA was extracted from 1 ml of each culture and used to transform plasmid-free DH5α, scoring for white and blue colonies. All of the plasmids extracted from the E. coli BL21(DE3) cells expressing the integrases gave exclusively white colonies, with the exception of strains that had contained the MR11 or φRV integrases, which yielded 50% and 25% unrecombined plasmid, respectively (Table 1). The control BL21(DE3) strains that contained the reporter plasmids and the empty expression plasmid (pET21a) remained stable, with no loss of the lacZα gene. Accurate site-specific recombination for all of the integrases was confirmed by PCR and sequencing of the attL sites recovered from plasmids present in the white colonies. Cells from the same cultures were used to assay the level of expression of the StrepII-tagged integrases by western blots (Table 1). A StrepII-tagged integrase of the expected molecular weight was detected from all the BL21(DE3) strains expressing an integrase except those containing the Wβ, Bxb1, BL3 or R4 integrase genes. The amount of protein present was determined by comparing the intensities of the bands from the western blotting with those from a series of standards (Table 1). While there was no simple relationship between the amounts of protein detected in the cultures and the level of recombination, this experiment demonstrated that all the integrases and their attachment sites were active in E. coli.

Table 1 notes: (1) the amount of StrepII-tagged integrase detected in a western blot from 1 ml of culture; (2) the percentage of plasmid extracted from the cultures that had undergone recombination.

Figure 1: Assaying integrase activity in E. coli. A. The reporter plasmid used to assay activity of the integrases in E. coli. This plasmid, derived from pACYC184, contains the lacZα gene encoding the LacZα peptide flanked by integrase attachment sites. The intact reporter plasmid confers β-galactosidase activity on a strain containing the ΔlacZM15 allele, and therefore the colonies appear blue on agar containing X-Gal and IPTG. Active integrase promotes site-specific recombination between the attP and attB sites, resulting in deletion of the lacZα gene, and the colonies appear white. B. The appearance of E. coli containing the reporter plasmid, with or without an integrase expression plasmid, in the presence of X-Gal and IPTG.

Assaying the unidirectional phage integrases in vertebrate cells

The activities of the integrases were then measured in vertebrate cells. In order to do this, we constructed three reporter plasmids that could be used to assay the ability of the different integrases to mediate either deletion or integration in mammalian cells. Each of the reporter plasmids included arrays containing attP or attB sites for all 15 integrases arranged head-to-tail. For the deletion assay we used a plasmid, called 'attP array CCAG HyTK attB array' (Figure 2A), in which a counter-selectable marker gene, HyTK (a fusion between a gene conferring resistance to hygromycin and a herpes simplex virus thymidine kinase gene that confers sensitivity to the nucleoside analogue gancyclovir), was placed between arrays of attB and attP sites in a head-to-tail orientation (Figures 2B and 2C). If integrase-mediated recombination occurs between its cognate attP and attB sites flanking the HyTK gene, the cells become resistant to gancyclovir. The assay for integration activity was based upon the use of two plasmids. The first of these, called 'attP array CCAG HyTK attP array', contained the docking attP sites flanking the counter-selectable marker (Figure 2A), and the second, called 'attB array CCAG neo attB array', contained the incoming attB sites flanking a gene encoding resistance to the antibiotic G418 (Figure 2D).
Cassette exchange between the incoming attB sites and the docking attP sites is expected to yield chromosomes containing attR and attL sites flanking the integrated neo gene, and the cells display both gancyclovir and G418 resistance. In both assays the integrases were expressed using plasmids (Figure 2E) in which each integrase gene had been codon optimized and tagged at the 5′ and 3′ ends with sequences encoding the StrepII tag and the nuclear localization signal, respectively. An internal ribosome entry site (IRES) followed by an antibiotic resistance gene was placed downstream of each integrase gene.

Figure 2: Assay system for integrases in mammalian cells. (A) The reporter plasmid design used to assay either site-specific deletion or integration promoted by serine integrases in vertebrate cells. In the deletion reporter construct, called attP array CCAG HyTK attB array, the counter-selectable gene CCAG HyTK was placed between an array of attB sites (B) and an array of attP sites (C). In the integration or recombinase-mediated cassette exchange constructs, the docking construct, attP array CCAG HyTK attP array, had the CCAG HyTK gene flanked by two arrays of attP sites, and the reporter construct, termed attB array CCAG neo attB array (D), contained the CCAG Neo gene conferring resistance to G418 flanked by arrays of attB sites. The integrase expression constructs are shown schematically in (E), each containing an int gene modified at the 5′ and 3′ ends to encode a StrepII tag and a nuclear localization signal, respectively, and placed downstream of a CCAG promoter and upstream of an internal ribosome entry site and a dominant selectable marker conferring either zeocin or xanthine resistance (ecogpt).
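The double-selection logic of this cassette-exchange design — loss of HyTK confers gancyclovir resistance, gain of neo confers G418 resistance — can be summarized as a small truth table. The function name below is ours, purely for illustration:

```python
# Selection logic for the recombinase-mediated cassette exchange assay:
# successful exchange deletes the HyTK counter-selectable marker (the
# TK moiety sensitizes cells to gancyclovir) and integrates the neo
# gene (conferring G418 resistance). Only correctly exchanged cells
# survive both drugs.

def survives_double_selection(has_hytk: bool, has_neo: bool) -> bool:
    gancyclovir_resistant = not has_hytk  # HyTK present -> killed by gancyclovir
    g418_resistant = has_neo              # neo present  -> survives G418
    return gancyclovir_resistant and g418_resistant

assert survives_double_selection(has_hytk=False, has_neo=True)       # exchanged
assert not survives_double_selection(has_hytk=True, has_neo=False)   # parental docking line
assert not survives_double_selection(has_hytk=False, has_neo=False)  # deletion without integration
```

The third case is why the two drugs are applied together: simple deletion of the docking cassette without integration of the incoming cassette survives gancyclovir but not G418.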
The use of arrays of attachment sites in the reporter plasmids had three merits. Firstly, it enabled a strategy that ensured that comparisons between the different integrases were not compromised by position effects arising from the target sites for the different integrases being integrated at different positions in the vertebrate genome. Secondly, it allowed us to determine whether there was any cross-reactivity between the different integrases and their attachment sites. Finally, this approach proved to be efficient, requiring only a small number of cell lines for all of the necessary assays. As a precaution, however, we wanted to exclude the possibility that placing the attachment sites within an array altered their activity, and so we compared the in vitro activity of one of the integrases, φC31 integrase, using substrates in which the attachment sites were either isolated or present in the arrays (Figure 3A-E). These experiments showed that the recombination efficiencies of the attachment sites were similar whether they were isolated or present in the arrays.

Identification of eight unidirectional phage integrases promoting site-specific deletion in mammalian cells

First, we wanted to determine which of the fifteen integrases were active on substrates integrated into vertebrate genomes. The most sensitive way to detect activity is to assay recombination between attB and attP sites in cis, because the proximity of the sites favours the kinetics of synapsis, the process by which integrases bring the substrates together in a tetramer prior to DNA cleavage. We therefore transfected the construct designed to assay deletion, attP array CCAG HyTK attB array, into human HT1080 cells by electroporation and selected for stable transfectants using hygromycin.
We screened 96 stably transfected clones for the integrity of the integrated DNA using PCR, checked for single-copy integrants by restriction enzyme analysis and filter hybridization, and then confirmed the integrity of the attP and attB arrays by sequencing. In this way we recovered four independent, stably transfected clones from two independent transfections, each containing a single copy of the integrated deletion reporter construct, attP array CCAG HyTK attB array. In order to compare the activities of the different integrases, we first transiently transfected 10^5 HT1080 cells containing the attP array CCAG HyTK attB array reporter with expression plasmids for each of the integrases using Lipofectamine, and assayed for recombination activity by selecting for resistance to gancyclovir, a drug which is selectively toxic to cells expressing the HyTK fusion. None of the integrases gave a significant increase in the number of gancyclovir-resistant cells compared to the empty expression vector (Additional file 1: Table S3), so we assayed pools of the resistant cells for the presence of the recombinant attR site using PCR. Recombination activity was detected in populations of cells transfected with the R4, φC31, φBT1, Bxb1, SPBc and Wβ integrase expression constructs (not shown). However, the low level of activity overall made it impossible to conclude anything about the integrases that did not yield attR in the PCR reactions, as these integrases may have been active but causing damage that removed the attR primer binding sites, may simply have been slow in promoting site-specific recombination, or may have been completely inactive. Moreover, the variability of the relative numbers of gancyclovir-resistant clones generated in different experiments made an accurate comparison between the active integrases impractical.
We therefore used electroporation to transfect linearized integrase expression constructs into each of the two independent cell lines containing the attP array CCAG HyTK attB array reporter used in the transient expression experiments, and selected for clones containing stably integrated integrase expression constructs (Additional file 1: Table S4). The yield of clones generated by the integrase expression constructs was at least an order of magnitude lower than that recovered following transfection with the empty expression vector, suggesting that all of the integrases were toxic to various degrees. The TG1 and φ370.1 integrases appeared particularly toxic by this criterion because they consistently gave us no stably transfected clones. We divided the clones that had been stably transfected with each of the integrases into two groups of approximately equal size. We used the first of these groups to estimate the ability of the integrases to promote site-specific recombination of the attP array CCAG HyTK attB array reporter construct within the first two weeks of exposure to integrase. To do this, we applied gancyclovir selection to these clones and then assayed individual gancyclovir-resistant clones for one of the recombinant products, attR, by PCR. The six integrases that had shown clearly detectable activity in the transient transfection experiment (R4, φC31, φBT1, Bxb1, SPBc and Wβ) also showed activity after stable expression of the integrase (Figure 4A). The remaining integrases showed no evidence of site-specific recombinase activity following gancyclovir selection. We wanted to determine how accurately the six active integrases were mediating site-specific recombination.
We therefore sequenced at least 7 PCR products containing recombinant attR sites derived from each of the six active integrases and showed that, with the notable exception of the R4 integrase, all exclusively yielded products consistent with conservative site-specific recombination (Table 2). In the case of the R4 integrase, only 2/9 PCR products contained intact attR sites, demonstrating that this enzyme is particularly damage-prone, at least in human cells. We wanted to know whether the complete failure to detect deletion activity after two weeks of growth in the presence of the TP901-1, FC1, φ370.1, K38, φRV, A118, BL3 and MR11 integrases was because these recombinases were slow or because they were damaging the target sites. We therefore applied hygromycin selection to the second group of clones that had been stably transfected with the integrase expression construct but not exposed to gancyclovir, then relaxed hygromycin selection and analysed the clones for any detectable recombination after a further two weeks by PCR. The results were clear: the TP901-1 integrase showed detectable recombination after further culture, indicating that it was indeed slow, but the remaining integrases showed no evidence of recombination (Additional file 1: Table S4). As before, we determined the sequence of the PCR products containing the predicted attR site generated by the TP901-1 integrase; 4 were intact attR sites and 1 was damaged, showing that the TP901-1 integrase is, like the R4 integrase, prone to site damage. We wanted to know whether the remaining 8 integrases that had failed to show productive recombination were attempting recombination but in fact damaging the attachment sites. We therefore sequenced their substrate attachment sites (attP and attB) in the integrated reporter plasmids in two of the hygromycin-resistant clones that had been transfected with the respective integrase expression constructs. None showed evidence of site damage.
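The counts quoted above (2/9 intact attR products for R4, 4/5 for TP901-1) translate directly into per-enzyme site-damage rates; a minimal sketch of the arithmetic:

```python
# Fraction of sequenced recombinant junctions showing damage, computed
# from counts of intact attR sites among all sequenced attR PCR products.

def damage_rate(intact: int, total: int) -> float:
    return 1.0 - intact / total

# R4 in HT1080 cells: 2 of 9 sequenced attR products were intact.
print(round(damage_rate(2, 9), 2))  # 0.78
# TP901-1: 4 of 5 sequenced attR products were intact.
print(round(damage_rate(4, 5), 2))  # 0.2
```

Note that these are rates among the products that still amplified with attR primers; events that destroyed a primer binding site would not be counted, so the true damage frequency could be higher.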
Thus we concluded that the R4, φC31, φBT1, Bxb1, SPBc, TP901-1 and Wβ integrases are active on substrates integrated into the genome of HT1080 cells, although the TP901-1 integrase promotes recombination slowly, and that the FC1, φK38, φRV, A118, BL3 and MR11 integrases are not detectably active on integrated substrates. We cannot make any statement about the TG1 and φ370.1 integrases because we were unable to recover clones expressing these integrases for experimental analysis. We attempted to use western blotting to assay for expression of the integrases using an antibody to the N-terminal StrepII tag, but we were unable to detect any signal with any of the integrases, suggesting that they are all expressed at low levels. Rank ordering of the deletion activities of the integrases in this experiment indicated the following order: Bxb1 = φC31 = φBT1 > R4 = Wβ > SPBc > TP901-1. The utility of the R4 integrase would seem to be limited by its liability to site damage. The purpose of this part of our project was to identify those enzymes that were active in vertebrate cells, and we did not investigate the background of gancyclovir-resistant, attR-negative clones. Although such clones were seen in the cells transfected with the empty vector and may arise from background silencing of the attP CCAG HyTK attB indicator gene or from loss of the chromosome carrying this gene, they occurred at a higher level in the clones that had been transfected with integrases (Additional file 1: Table S4). We cannot exclude the possibility that they arise as a result of recombinase-mediated target site damage, although the accurate recombination activities seen with five of the seven active integrases suggest that this is unlikely. The source of these background clones is therefore unclear.

Figure 4: Comparing the activity of fifteen serine recombinases in human HT1080 cells and in mouse ES cells. A. HT1080 cells containing a single integrated attP array CCAG HyTK attB array reporter construct were stably transfected with an integrase expression plasmid, selecting for antibiotic resistance. The transfected clones were then subjected to gancyclovir selection to identify those that had lost the CCAG HyTK marker gene, and then screened for recombinant attR sites to identify those that had undergone integrase-mediated site-specific deletion of CCAG HyTK. The two bars shown for each integrase reflect the results of two independent experiments (Additional file 1: Table S4). None of the other eight integrases yielded recombinant gancyclovir-resistant clones and they are not shown in this part of the figure. B. ES cells containing an attP array CCAG HyTK attB array reporter integrated at the ROSA26 locus were transiently co-transfected with an integrase expression plasmid and a linearized PGKneo construct to normalize for differences in the efficiency of transfection. Gancyclovir-resistant clones were screened for attR sites. The analyses were carried out twice on a single attP array CCAG HyTK attB array reporter cell line and the bars reflect the means of the normalized activities from the two experiments (Additional file 1: Table S5). C. HT1080 cells that had been transfected with the empty expression vector (CCAG iresZeo) or with the indicated integrase expression vector were transiently transfected with the attP array CCAG HyTK attB array reporter and scored for site-specific recombination by PCR. For the ϕ370.1 and TG1 integrases the integrase expression vector was co-transfected with the reporter construct; the remaining experiments were carried out with HT1080 cell lines that had been stably transfected with the respective integrase expression construct.
It is also clear that not all of the clones that were successfully transfected with a construct expressing an active integrase yielded recombinant products. Thus, even with the Bxb1 and φC31 integrases, about 30% of the clones that were resistant to the antibiotic used to select for the presence of the expression construct failed to yield clones that were resistant to gancyclovir and contained an attR site. One hypothesis was that these clones failed to express sufficient integrase to bind and synapse the substrates, processes that are dependent on the affinity of integrase for its attP and attB sites and on the expression level of integrase. We tried to test this idea using the StrepII epitope with which we had tagged all of the integrases but, as before, were unable to do so because the StrepII epitope tag was insufficiently sensitive and no signal was obtained in the western blots. However, we had specific polyclonal antibodies for the φC31 and φBT1 integrases that we expected would be more sensitive, and indeed they allowed us to measure the presence of the respective integrases by western blotting. The results of this analysis were consistent with the notion that, at least for some integrases, the failure of site-specific recombination was associated with inadequate or low levels of expression (Additional file 1: Figure S2). The success of the western blotting using the polyclonal antibodies also demonstrates that the previous failure to detect integrase expression in the human cells using the StrepII tag and antibody was due to the relatively low sensitivity of this system. The results obtained with the human HT1080 cells posed the question of whether they were generally true for vertebrate cells. Genome engineering of mouse embryonic stem cells (ES cells) is widely practiced, and so we used a deletion strategy to assay the activities of the integrases in mouse ES cells.
We used sequence targeting to introduce the attP array CCAG HyTK attB array deletion reporter cassette into the ROSA26 locus (Additional file 1: Figure S1) and then assayed for deletion by gancyclovir resistance following transient transfection with the set of integrase expression constructs described above. In order to avoid problems with toxicity that would compromise the practical significance of any results, we chose to use transient assays and an internal control. In this ES cell system there appeared to be a clearer difference between the numbers of gancyclovir-resistant clones seen with the empty vector and those seen with vectors expressing integrase than was observed in the HT1080 cells. The difference was not absolute, however, and identification of active integrases also required PCR analysis for the presence of a recombinant attR site. The results (Additional file 1: Table S5 and Figure 4B) were similar but not identical to those seen in the HT1080 cells: the R4, φC31, φBT1, Bxb1, SPBc, Wβ and TG1 integrases showed detectable activity but the TP901-1 integrase did not. The activity seen with the TG1 integrase was, however, weaker than that seen with the others, with only one out of eight clones containing a detectable attR site. Rank ordering of the deletion activities of the integrases in this experiment indicated the following order: Wβ > Bxb1 > φC31 > SPBc > R4 > φBT1. We analyzed the accuracy of the site-specific recombination mediating the deletion reaction in two gancyclovir-resistant clones generated by the Bxb1, R4, Wβ and SPBc integrases. These results demonstrated that the Bxb1, φBT1 and Wβ integrases all mediated site-specific recombination accurately, but that the R4 integrase was associated with a deletion in one of the two recombination products analyzed and that both of the SPBc products were deleted. In the case of the SPBc integrase, the deleted region spanned 171 bp in the attP array, extending as far as the φ370.1 attP site, raising the possibility of a lack of specificity in attP site recognition by the integrase. This, however, seems unlikely because the breakpoint in the φ370.1 attP site is not at the proposed recombination junction but 6 bp 3′ of the point of symmetry defining the pseudo-palindrome of this attachment site.

Table 2: Sequences of the recombinant sites detected following integrase-mediated site-specific recombination in human HT1080 or mouse ES cells. Underlined and in lowercase, in the case of one sequence from a deletion event occurring in ES cells following expression of the SPBc integrase, is the sequence of a flanking ϕ370.1 attP site discussed in the text.

The failure to detect recombinase activity mediated by the φ370.1, FC1, TG1, φRV, φK38, MR11, A118 and BL3 integrases in HT1080 cells using deletion substrates integrated into the vertebrate genome posed the question as to whether these integrases were active in HT1080 cells or whether they were simply prevented from acting by the fact that the target sites were integrated into the genome. We therefore carried out a series of transient transfection experiments in which the deletion substrate plasmid was transiently transfected into HT1080 cells that had been stably transfected with an expression plasmid for one of these integrases or, in the case of the φ370.1 and TG1 integrases (where such stably transfected cell lines did not exist), co-transfected with the respective expression plasmid, and then analyzed the extracted DNA for deleted plasmid after 72 hours by PCR. The recombinants were assayed using one of two PCR reactions for which the Bxb1 or Wβ integrases acted as positive controls.
The results (Figure 4C) demonstrate that the RV, φK38, MR11, A118 and BL3 integrases were in fact active in the HT1080 cells and thus we conclude that these integrases are unable to promote site-specific recombination when their substrates are integrated into the genome of the HT1080 cells but can do so when they are present extra-chromosomally. We can make no statement about the φ370, FC1 and TG1 integrases as we have no evidence to determine whether they are expressed.

Comparative integration activities of seven unidirectional phage integrases in mammalian cells assayed by recombinase-mediated cassette exchange

Site-specific integration, and in particular the exchange of marker cassettes, collectively termed recombinase-mediated cassette exchange, is an important technique for the precise introduction of DNA into cells, in particular for the comparative analysis of different genes integrated at the same docking site. The unidirectional serine integrases are ideally suited to this application as they require only simple attachment sites and no host accessory proteins. We wanted to determine which of the seven integrases that we had identified above as being active in promoting site-specific deletion in human cells also functioned in promoting site-specific cassette exchange. Our previous experiments had shown the merits of using cell lines that contained arrays of attachment sites and that were then stably transfected with different integrase-expressing plasmids as reagents for accurate comparisons of integrase activity. We first introduced an integration target, attP array CCAG HyTK attP array, expressing the hygromycin-thymidine kinase fusion flanked by arrays of attP sites, into human HT1080 cells by electroporation and, as before, selected structurally intact, single copy integrants.
We then transfected these cells with individual integrase expression plasmids for each of the seven integrases that had been shown to be active in the deletion assay and again selected for stable integrants. In order to compare the cassette exchange activities of the different integrases, we then transiently transfected two completely independent clones with the integration substrate, attB array CCAG Neo attB array (Figure 2). The work described above indicated that the integrases were toxic to varying degrees (Additional file 1: Table S4) and so we expected that there would be different numbers of resistant clones derived from the transfection experiments of clones expressing different integrases, despite similar amounts of plasmid being used in the transfection. We controlled for such differences by transfecting the integrase-expressing clones with a linearized CCAG Neo plasmid in a parallel experiment carried out at the same time as they were transfected with the circular attB array CCAG Neo attB array plasmid. We selected for G418 resistance in both cases and normalized the experimental transfections using the yield of clones generated following transfection with the linearized CCAG Neo. We then assayed for cassette exchange in the experimental clones by selecting the G418-resistant clones for gancyclovir resistance, to identify those clones that had lost the CCAG HyTK marker, and by PCR, in order to identify the attL and attR products of a site-specific recombination reaction. We carried out three such assays on each of two independent attP array CCAG HyTK attP array, integrase-expressing clones. The results (Additional file 1: Table S6, Figure 5A) of these experiments revealed consistent and significant differences between the seven integrases in terms of their abilities to mediate site-specific integration.
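The normalization step described above (colonies from the circular attB-array reporter transfection divided by colonies from the parallel linearized CCAG Neo control transfection) amounts to a simple ratio. A minimal sketch follows, with hypothetical colony counts rather than the values reported in Additional file 1: Table S6:

```python
# Sketch of the normalization used to compare cassette exchange activities.
# Colony counts below are hypothetical placeholders, not data from the paper.

def normalized_exchange_efficiency(reporter_colonies, control_colonies):
    """Ratio of G418-resistant colonies from the attB array CCAG Neo attB array
    transfection to colonies from the linearized CCAG Neo control transfection,
    which corrects for integrase toxicity and transfection efficiency."""
    return reporter_colonies / control_colonies

# Hypothetical counts for two integrase-expressing clones
counts = {
    "Bxb1":   {"reporter": 120, "control": 300},
    "phiC31": {"reporter": 90,  "control": 300},
}
for name, c in counts.items():
    eff = normalized_exchange_efficiency(c["reporter"], c["control"])
    print(f"{name}: normalized efficiency = {eff:.2f}")
```

The open bars in Figure 5A correspond to exactly this ratio.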
At one extreme were the Bxb1 and φC31 integrases, which promoted efficient and accurate site-specific integration, and at the other were the three integrases Wβ, SPBc and TP901-1, which generated clones that had the phenotypes expected of cassette exchange through site-specific recombination, i.e. they were gancyclovir-resistant and G418-resistant, but for which we could not obtain PCR products for attL or attR. We sequenced the attL and attR sites generated by site-specific integration in the Bxb1, φC31, R4 and φBT1 integrase-expressing clones. The Bxb1 and φC31 integrase products were as predicted, but in three cases the attR product generated by the R4 integrase was damaged and similarly one of the three attR sites generated by the φBT1 integrase was also damaged. The experimental results shown in Additional file 1: Table S6 could be interpreted to suggest that there is a high background of non-specific integration of the circular plasmids into the genome of the HT1080 cells in the absence of any integrase. Such an interpretation would only be valid, however, if the cells containing the empty expression vector were transfected with equal efficiency to those expressing any of the integrases. Given the toxicity of the integrases seen in the results shown in Additional file 1: Table S4 this is almost certainly not the case, and thus the apparently high background of non-specific integration simply reflects the fact that the cells that do not express integrase are more easily transfected than those that do. We wanted to understand how the gancyclovir-resistant, G418-resistant, attL⁻, attR⁻ clones had been generated by the Wβ, SPBc and TP901-1 integrases. First of all we wanted to confirm that these integrases were indeed active in the human cells, and in order to investigate this we assayed extra-chromosomal site-specific recombination following transient transfection.
We therefore took one of the clones expressing each of these integrases, and Bxb1 integrase as a control, transiently transfected the cells with the integration substrate plasmids either alone or together, and then confirmed site-specific recombination by PCR for each of the respective attR and attL sites (Figure 5B). We supposed that the gancyclovir-resistant, G418-resistant, attL⁻, attR⁻ clones generated by these recombinases arose from DNA damage caused by abortive recombination.

Figure 5 Comparing the activity of seven different serine recombinases in human HT1080 cells for their utility in recombinase-mediated cassette exchange. A. Cell lines containing a single integrated attP array CCAG HyTK attP array reporter construct and stably expressing the indicated integrase were transfected with the attB array CCAG Neo attB array integration reporter construct using lipofectamine. The experiment was carried out three times using two independent cell lines for each of the seven integrases. The number of G418-resistant clones generated by the cassette exchange reaction promoted by the different integrases was normalized by transfection using lipofectamine with a uniform amount of linearized CCAG Neo plasmid. Open bars correspond to the number of colonies generated with the reporter plasmid divided by the number of colonies generated with the linearized CCAG Neo (for the raw data see Additional file 1: Table S6). Between seven and ten colonies of each transfection were picked and assayed for site-specific recombination by PCR, and the total yield of colonies generated by site-specific recombination is represented by the filled bars. B. Cell lines that had been stably transfected with the indicated integrase expression construct were transiently transfected with the indicated integration reporter construct and after three days assayed for site-specific recombination by PCR. Two different reactions were used for the assay; one for the Bxb1 integrase and the other for the remaining integrases. The gel on the final panel shows the PCR reaction products obtained when the indicated reporter constructs were transfected into HT1080 cells that expressed no integrase and assayed for site-specific recombination by either of the two reactions.

We therefore characterized the structure of the remains of the docking site (the attP array CCAG HyTK attP array plasmid) and the integrated attB array CCAG Neo attB array plasmid in 10 clones derived by transfection of either the Wβ or TP901-1 integrase-expressing clones (Figure 6A). This analysis showed evidence of target site damage, with extensive deletion of the HyTK coding region and flanking sequences. In the case of the TP901-1 integrase, in six of the clones (numbers 3, 5, 7, 8, 9 and 10) the internal sequences (2, 3 and 4 in Figure 6A) were deleted but the flanking sequences (1 and 5 in Figure 6A) were intact, suggesting the possibility that the two attP sites had simply ligated together after deletion of the intervening DNA. We therefore analysed the residual attP sequences in three clones: two, numbers 4 and 9, showed evidence of deletion of the residual attP sequences (Figure 6B) while in a third, number 8, the residual attP sequence was intact. For both integrases the incoming attB array CCAG Neo attB array plasmid also showed evidence of DNA damage, but in this case the damage was associated with the DNA flanking the cognate attachment sites. Thus in eight out of ten of the Wβ-derived clones one or other of the attachment sites showed evidence of damage, and three out of the ten TP901-1-derived clones showed evidence of damage to one or other of the attachment sites.
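As an illustrative aside (the paper itself reports only the raw counts), the difference between the damage frequencies quoted above, 8/10 Wβ-derived clones versus 3/10 TP901-1-derived clones, can be compared with a pooled two-proportion z-test. The normal approximation is crude at n = 10, so this is a sketch, not an analysis drawn from the paper:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled estimate.
    Normal approximation only; rough for small sample sizes like n = 10."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, built from erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Damage counts reported in the text: Wβ 8/10 clones, TP901-1 3/10 clones
z, p = two_proportion_z(8, 10, 3, 10)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```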
The conclusion that we draw from these studies is that DNA damage limits the ability of the Wβ, SPBc and TP901-1 integrases to mediate site-specific recombination and cassette exchange in vertebrate cells and compromises the potential utility of the R4 and φBT1 integrases. However, we are able to rank the utility of the other four enzymes: Bxb1 > φC31 > R4 > φBT1.

Figure 6 Analysis of G418-resistant, gancyclovir-resistant, attR⁻, attL⁻ clones generated by the Wβ and TP901-1 integrases. A. Ten G418-resistant, gancyclovir-resistant, attR⁻, attL⁻ clones generated by one or other of these two integrases were analysed by PCR across the indicated sequences in each of the two reporter plasmids used in the transfection. The numbers below the plasmid maps indicate the PCR reactions assayed in the table. Primer sequences used for these assays are listed in Additional file 1: Table S7. B. Sequence analysis of damaged sites in the three indicated clones. The diagram shows the regions deleted in the remnant of the flanking arrays found in each of three clones.

As before, we also tried to assay the ability of these seven enzymes to mediate site-specific integration in mouse cells using transient expression of the integrase. None were detectably active, suggesting that, as in human cells, detectable integration with this configuration of attachment sites requires stable expression of the integrase.

Conclusions

The major conclusion following from this work is that although all of the fifteen unidirectional serine integrases for which both attachment sites have been identified are active in E. coli, only four of these (Bxb1, φC31, R4 and φBT1) are able to mediate accurate site-specific integration into genomic DNA in human cells, and they rank such that Bxb1 is marginally better than φC31 and both are better than the R4 and φBT1 integrases.
The three integrases from the phages Wβ, SPBc and TP901-1 are active in vertebrate cells, as judged by their ability to mediate site-specific deletion and by their ability to mediate integration reactions extra-chromosomally, but fail to complete successful site-specific integration when attempting an integration by cassette exchange because they damage one or other of the participating DNA molecules. Although there are differences (Figure 4) between the activities seen with the different integrases in human and mouse cells, the overall pattern of activities in the two cell types is such that it would seem prudent to adopt the Bxb1 integrase as the first choice in both. We have not investigated the causes of the different activities but they must reflect interactions between the respective integrases and host-encoded proteins. Our observations have three practical implications and pose one question. The practical implications are, first, that the Bxb1 integrase should be the first choice for any genome engineering in vertebrate cells that requires the use of a serine integrase; second, that screening for more serine integrases that can be used in vertebrate cells is likely to have a low success rate; and, third, that all site-specific integrants, particularly those generated by the R4 and φBT1 integrases, should be checked for the fidelity of the recombination reaction. The results obtained in the integration experiments with the Wβ, SPBc and TP901-1 integrases pose the question of how such damage arises. One possible mechanism is set out in Figure 7. In this schema, one or other of the attachment sites on the incoming donor plasmid, attB array CCAG Neo attB array, forms a synaptic complex with an attP site in the integrated docking cassette located on attP array CCAG HyTK attP array. The attP and attB sites are then cleaved but, because the two participating sites are embedded in chromatin, strand exchange and/or re-joining of the DNA backbone are inhibited.
Possibly the intermediate complex, in which integrase subunits are covalently attached to the cleaved DNA, is unstable and dissociates, or the process of strand exchange by subunit rotation is interrupted. The cleaved DNA covalently linked to integrase subunits leads to resection of the target attP site by DNA repair pathways, which in turn leads to loss of the counter-selectable HyTK gene (Figure 7A). The concomitant double strand break in the attB array CCAG Neo attB array plasmid enables this plasmid to integrate efficiently elsewhere in the genome (Figure 7B). However, not all of the gancyclovir-resistant, G418-resistant, attL⁻, attR⁻ clones recovered following transfection of the Wβ or TP901-1 integrase-expressing lines showed evidence of damage at one or other of the donor attB sites, and one assumes that these clones arise by a similar but more complicated mechanism in which the transfected cell takes up more than one of these plasmids and it is one of the plasmids that had not participated in the abortive attempt at site-specific integration that integrates into the host cell genome. If chromatin is inhibiting strand exchange by the serine integrases, one might also have reasonably expected chromatin to alter other activities of integrase, such as site-selection. In vitro, φC31 integrase only recombines attP × attB and is never active on other pairs of attachment sites, including attP with attP or attB with attB. This site-selectivity is explained by the fact that only integrase dimers bound to attP and to attB can tetramerise to form the synaptic complex. Within this complex, activation of DNA cleavage occurs. In all of our experiments we did not observe altered site-selectivity or DNA damage arising from some attempt at recombination of two attP sites or two attB sites.
The absence of any change in site-selectivity in eukaryotic chromatin implies that the proposed conformational differences between integrase bound to an attP site and to an attB site are robust enough to withstand any fortuitous protein-protein interactions arising with chromatin. The differences seen between the activities of the integrases in E. coli and in vertebrate cells on extra-chromosomal substrates on one hand and on substrates integrated into the genome on the other, and the explanation for the site damage seen in the integration reactions, both suggest that chromatin and other DNA binding proteins are important factors in limiting the activity of site-specific recombinases in vertebrate cells. It may therefore be of value in future experiments to determine how the activity of the integrases varies according to the position of the target sites in the genome.

Methods

Bacterial strains, plasmids and molecular biology

E. coli K12 DH5α was used as a general cloning host and propagated on LB medium containing appropriate supplements or antibiotics (ampicillin (100 μg/ml), chloramphenicol (50 μg/ml), X-gal (120 μg/ml), IPTG (40 μg/ml)). E. coli BL21(DE3) was used as a protein over-expression host. To construct the integrase expression plasmids, the integrase genes were amplified by PCR from either phage templates or from plasmids and inserted into the pET21a vector (Novagen) using either the In-Fusion cloning system (Clontech) or T4 ligase and compatible restriction sites. The native sequences were used as the templates for the amplification of all integrase genes except for φC31, A118, FC1 and φK38, for which templates were derived from synthetic, codon-optimised forms (GenScript). Each integrase gene was modified to include a StrepII tag at the 5′ end and a nuclear localisation sequence (NLS) at the 3′ end. Reporter plasmids containing cognate attP and attB sites for all the integrases were cloned into pACYC184.
PCR was used to amplify the lacZα gene using forward and reverse primers that contained the attB and attP sites (both approximately 50 bp in length) in head-to-tail orientation (Additional file 1: Table S1). All of the constructed plasmids were verified by sequencing (Dundee Sequencing Service). The sequences of all plasmids and PCR primers used in the bacterial work are listed in Additional file 1: Table S1 and Additional file 1: Table S2.

Recombination assays in E. coli

The activities of the cloned integrases in E. coli were assessed by two assays, which relied on different expression regimes for the integrases. The expression vector pET21a has a T7 RNA polymerase promoter to drive the transcription of the integrase genes and this was used to ensure expression of the integrase genes in the E. coli host BL21(DE3). In the absence of T7 RNA polymerase, as is the case in E. coli DH5α, expression of integrase is dependent on the recognition of the T7 promoter by the host RNA polymerase, which results in lower expression. DH5α cells containing the attB/attP reporter plasmids were transformed with the appropriate integrase expression plasmid and plated out on LB agar plates containing ampicillin (100 μg/ml), chloramphenicol (50 μg/ml), X-gal (120 μg/ml) and IPTG (40 μg/ml) and incubated overnight at 37°C. Recombination between the att sites deletes the lacZα gene from the reporter plasmid, leading to white colonies, and therefore recombinants (white colonies) were scored amongst a background of non-recombinants (blue colonies). The transformation plates derived from several integrases gave rise to only blue or light blue colonies. Restreaking of these colonies was performed to determine if white colonies (containing recombinant reporter plasmids) could segregate. BL21(DE3) cells containing the attB/attP reporter plasmids were transformed with the appropriate integrase expression plasmid and plated out on LB agar plates containing ampicillin and chloramphenicol.
After an overnight incubation a single colony was picked and grown in 5 ml 2YT containing ampicillin and chloramphenicol for 5 h at 37°C. The cells were induced with IPTG and grown overnight at 20°C, after which 1 ml of this culture was used to prepare plasmid and the pellet from 4 ml was used in western blotting to estimate the expression of integrase. Plasmids extracted from the overnight culture were used to transform DH5α cells, and colonies growing on plates containing chloramphenicol, X-gal and IPTG after overnight incubation at 37°C were examined. The proportion of recombinant versus non-recombinant plasmids was scored from the proportion of white versus blue transformants. PCR was carried out on 5 individual colonies for each integrase used in the assay to amplify and sequence the attL sites remaining in the recombinant plasmids. Cell pellets from the overnight cultures were resuspended in LB, incubated in SDS sample buffer and the proteins separated on a 4-12% SDS gel (Expedeon, UK). The integrase protein was detected by western blotting using a monoclonal antibody against the StrepII tag conjugated to horseradish peroxidase (IBA, Germany). The amount of protein present was determined by comparing the intensities of the bands from the western blotting to those from a dilution series of purified StrepII-tagged φC31 integrase. The intensity of the bands was determined using ImageJ (NIH) and the results were expressed as μg of total protein per ml culture.

In vitro recombination assay

Integrase recombination activity was assayed in vitro using substrates in which the attachment sites were present either isolated from other attachment sites or within the arrays containing the attachment sites for all 15 integrases of interest. The combinations of substrates were as follows: pRT702 (φC31 attP site) and pRT600 (φC31 attB site), pRT702 and pUC57_attB array, pUC57_attP array and pRT600, pUC57_attP array and pCCAG_attB array, pCCAG_attP array and pUC57_attB array.
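The band-intensity quantification described in the western blotting step above (comparing sample bands to a dilution series of purified StrepII-tagged φC31 integrase) reduces to fitting a standard curve and interpolating. A minimal sketch follows, in which all intensities, amounts and the loaded culture volume are hypothetical:

```python
# Sketch of standard-curve quantification from band intensities (e.g. ImageJ
# measurements). All numbers are hypothetical placeholders, not paper data.

def fit_line(xs, ys):
    """Least-squares fit of intensity = slope * amount + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Dilution series: known ng of purified integrase vs. measured band intensity
standards_ng = [5, 10, 20, 40]
intensities  = [1100, 2050, 4100, 8000]
slope, intercept = fit_line(standards_ng, intensities)

sample_intensity = 3000                      # intensity of the sample band
ng_loaded = (sample_intensity - intercept) / slope
ug_per_ml = ng_loaded / 1000 / 0.25          # if 0.25 ml of culture was loaded
print(f"{ng_loaded:.1f} ng loaded, {ug_per_ml:.4f} ug protein per ml culture")
```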
The substrates were incubated in recombination buffer (10 mM Tris pH 7.5, 100 mM NaCl, 5 mM DTT, 5 mM spermidine, 4.5% glycerol, 0.5 mg/ml BSA) and φC31 integrase (0-700 nM) for 1 h at 30°C [26]. After heat inactivation at 80°C for 10 min the reaction was digested with HindIII or BamHI and the recombinant molecules detected by gel electrophoresis. Gel images were analysed using ImageJ (NIH); the intensity of the bands was determined (after subtraction of baseline intensities) and used to quantify the depletion of substrates and appearance of products.

Human cell culture

Human HT1080 fibrosarcoma cells were grown as described previously except that RPMI 1640 rather than Dulbecco's modified Eagle's medium was used as the unsupplemented medium. G418, hygromycin and gancyclovir (Invitrogen) were used at 400 μg/ml, 100 μg/ml and 20 μM respectively for selection. Cells were transfected with DNA either by electroporation using a BTX ECM 630 at 400 V, 50 Ω and 250 μF or using Lipofectamine (Invitrogen) as recommended by the manufacturers. We used electroporation with 500 μg of linearized DNA and zeocin selection for the stable introduction of the integrase expression constructs and lipofection of 1.6 μg of closed circular DNA for the transient introduction of either substrate or integrase expression construct.

ES cell manipulation and analysis

Targeting vector construction and gene targeting

The insertion of the attP-Hygro-TK-attB array at the ROSA26 locus was performed by gene targeting in E14tg2a ES cells (Additional file 1: Figure S1). A targeting vector for the ROSA26 locus, pRosa26.10 [27], was adapted by the insertion of an AscI-NsiI-SacII polylinker, made by oligonucleotide annealing, into the AscI and SacII sites 3′ and 5′ of the homology arms, creating plasmid pRosa26-PL.
The final targeting vector, pRosa26-PHB, containing the attP-Hygro-TK-attB arrays, was constructed by inserting the arrays from pBSattPCCAGHyTKattB into pROSA26-PL via the 5′ AscI and the 3′ NsiI sites. The targeting vector was linearized by XhoI digestion and electroporated into 1 × 10⁷ E14tg2a cells at 500 V, 3 μF. Cells were plated on gelatin and recombinant clones were recovered by selection in hygromycin (75 μg/ml). Targeted clones were identified by long-range PCR screening using primers 5′-GGCACTACTGTGTTGGCGGA-3′ and 5′-GGCCAGCTTATCGATACCGT-3′ for the 5′ end, and 5′-AGCGAGGGCTCAGTTGGGCTGTTT-3′ and 5′-CTCAGTGGCTCAACAACACTTGGTCA-3′ for the 3′ end. Single copy integration events were confirmed by Southern blotting using an EcoRV digest and an internal hygromycin probe.

Transient assays of integrase-mediated deletion in ES cells

1 × 10⁶ attP-Hygro-TK-attB ES cells were electroporated with 5 μg of the integrase vectors and 200 ng of a control plasmid containing a functional neomycin cassette to control for transfection efficiency. Electroporation was performed using the Neon transfection system (Invitrogen) (3 × 1400 V, 10 ms). Cells were plated on 2 wells of a 6-well plate and selected in either 3 μM gancyclovir or 350 μg/ml G418. On the 8th day of selection resistant colonies were stained in methylene blue and counted. Deletion efficiencies were calculated from the number of gancyclovir-resistant colonies normalised against transfection efficiency by assessing the number of G418-resistant clones in the replica plating. All comparison transfection experiments were repeated twice using 2 independently targeted ES cell clones.

Additional file

Additional file 1: Table S1. Plasmids used to assay integrase activity in E. coli. Table S2. Primers used to construct expression plasmids and assay integrase activity in E. coli. Table S3. Assaying integrase activity in human HT1080 cells by deletion activity following transient transfection of integrase expression plasmid (the results of two transient transfection experiments in which the integrase shown in the first column was introduced into each of two cell lines containing a single copy of the deletion assay reporter plasmid located intact at a single site in the genome of HT1080 cells). Table S4. Assaying integrase activity in human HT1080 cells by deletion activity following genomic integration of integrase expression plasmid. Table S5. Assaying integrase activity in mouse ES cells by deletion activity following transient transfection of integrase expression plasmid. Table S6. Assaying site-specific integration activity of seven integrases active in human HT1080 cells. Table S7. Primers used to analyse re-arrangements associated with cassette exchange integrations, illustrated in Figure 6 of the main text. Figure S1. (A) Targeting the ROSA26 locus in mouse ES cells with a deletion reporter construct; (B) the targeted locus before deletion; (C) the targeted locus after deletion. Sequences of the Del-Rosa primers are in the main text. Figure S2. Western blot analysis of φC31 and φBT1 integrase expression in HT1080 clones that contain a single integrated copy of the attP array CCAG HyTK attB array reporter and have been stably transfected with an integrase expression plasmid but fail to delete the HyTK gene.
A case of paroxysmal atrioventricular block–induced cardiac arrest

Premature ventricular contractions (PVCs) have been shown to cause paroxysmal first-degree,1,2 second-degree,3,4 and complete5-7 atrioventricular (AV) block. We present a case of PVC-induced paroxysmal AV block (PAVB) resulting in cardiac arrest that was managed with implantation of a leadless pacemaker.

Introduction

Premature ventricular contractions (PVCs) have been shown to cause paroxysmal first-degree, 1,2 second-degree, 3,4 and complete 5-7 atrioventricular (AV) block. We present a case of PVC-induced paroxysmal AV block (PAVB) resulting in cardiac arrest that was managed with implantation of a leadless pacemaker.

Case report

A 66-year-old man with a past medical history of hyperlipidemia and left bundle branch block (Figure 1) was admitted to the hospital for an esophagectomy for esophageal cancer. The postoperative course was complicated by acute renal failure requiring renal replacement therapy. On postoperative day 5, he suffered an asystolic cardiac arrest requiring cardiopulmonary resuscitation (CPR). The rhythm strip from the arrest revealed sinus tachycardia and left bundle branch block, then a series of PVCs, followed by a 20-second period of ventricular asystole due to complete AV block (Figure 2). The patient regained spontaneous circulation after 2 minutes of CPR. Prior to the cardiac arrest, his labs were notable for a pH of 7.29 and a potassium level of 5.6 mmol/L. The same day he was taken for an exploratory laparotomy for abdominal distention, at which viable small and large bowel were found. The patient remained on mechanical ventilation and was closely monitored in the intensive care unit. A transthoracic echocardiogram performed later demonstrated normal left ventricular chamber size and function with no regional wall motion abnormalities.
Cardiac magnetic resonance imaging showed no evidence of infiltrative, inflammatory, or ischemic heart disease and no delayed myocardial enhancement. During his hospital stay he had infrequent PVCs but no further episodes of asystole, syncope, or dizziness. The PAVB had occurred in the setting of preexisting bundle branch block, suggestive of underlying conduction disease. Due to the unpredictable risk of long-term recurrence, a decision was made to place a single-chamber permanent pacemaker. He had a prior tunneled right internal jugular vein port for chemotherapy and a recent intravenous line-associated left upper extremity deep vein thrombosis extending to the axillary vein, thus limiting options for traditional transvenous pacing. Therefore, a leadless pacemaker (Micra, Medtronic Inc, Minneapolis, MN) was placed in the right ventricle (RV) to treat future PAVB episodes (Figure 3A and B). There were no procedural complications.

Discussion

PAVB is an abrupt, unexpected, repetitive block of atrial impulses as they propagate to the ventricles. It is likely a rare cause of sudden cardiac death. The true incidence of this phenomenon is unknown. It has been observed in association with distal conduction disease at baseline, most commonly right bundle branch block. 8 PAVB has been reported as both bradycardia and tachycardia dependent. 9 It occurs in the setting of a diseased conduction system, in which cells with a less negative resting membrane potential have impaired excitability and postrepolarization refractoriness. 9 The mechanism remains unsettled, with phase 3 block, phase 4 block, and concealed conduction all proposed. 8-10 Management requires permanent pacing due to the unpredictable nature of recurrence of the block and the associated risk of sudden cardiac death. Radiofrequency ablation of the PVC in a patient with PVC-induced first-degree AV block has been previously described.
KEY TEACHING POINTS

Paroxysmal atrioventricular block (PAVB) is an uncommon cause of cardiac arrest.

Permanent pacing is indicated in most cases due to its unpredictable course.

A leadless pacemaker can successfully be used to manage PAVB in case of limited vascular access.

This was not possible in our case due to significant comorbidities, a paucity of PVCs, and the severe nature of the AV block resulting in cardiac arrest that necessitated CPR. A traditional transvenous pacemaker was not an ideal choice in our case due to the presence of a thrombus in the left axillary venous system and a chronic indwelling catheter in the right internal jugular vein. Leadless pacemakers can be implanted using a single femoral site puncture and have emerged as a promising alternative in patients with limited vascular access. 11 They have a significantly reduced risk of complications compared to historical transvenous controls, but currently offer RV-only pacing. 12 Chronic RV pacing is known to increase the risk of atrial fibrillation and cardiomyopathy in patients with intact AV conduction. 13 Dual-chamber leadless pacemakers are under development, but are not currently available. In our patient, apparently infrequent, relatively brief, but potentially catastrophic AV block was followed by normal AV conduction, suggesting that long-term single-chamber antibradycardia support was sufficient to prevent adverse clinical events. During 1-month follow-up the ventricular pacing was less than 1% and there were no symptoms to suggest pacemaker syndrome. To our knowledge, this case is the first to use a leadless pacemaker to manage PAVB.
Female Student Migration: A Brief Opportunity for Freedom from Religio-Philosophical Obedience

Vietnamese Confucian religio-philosophical ideals regulate social order in the family, community, and nation state. As a result, women's duties to their husbands, fathers, ancestors, and Vietnam powerfully permeate all aspects of gendered life. This study of 20 Vietnamese women explored their experiences as international students in Australia. Primary focus was on how their gendered Confucian histories compelled their migratory journeys, influenced changes to their intimate partner experiences while in Australia, and the reimagining of identity, hopes and dreams on looking forward at their future returns to gendered life in Vietnam. The application of Janus Head phenomenology enabled understanding of how the women's temporality became influenced by fascinations of future change, mixed with feelings of uncertainty and limbo that arose when forward-facing hopes were thwarted by their looking-back realities. There was an intense sense of unresolve as time drew closer to the end of their studies, in which the women associated feelings related to returning to Vietnam's strict Confucian-informed gender order as a "living Hell."

Introduction

Confucianism was introduced to Vietnam around two millennia ago and it remains the nation's most practiced religio-philosophy. The Vietnamese version, however, hosts a synchronous blend of folk religions, customs and beliefs derived from Buddhism, Taoism and the origins of Chinese Confucianism itself (Hieu 2015). Confucianism has survived feudal conflict, Vietnam's colonization by the French, warfare, and socio-political events, as well as communism's insistence on Marxist-Leninist atheism (Evans 1985; Phan 2019). Enmeshed religio-philosophical beliefs permeate filial structure, social aesthetics and political life, rituals involving spirit worship and ancestor veneration (Rydstrøm 2017; Van 2019).
Vietnamese Confucianism regulates social order and gender roles, emphasizing conformity to behavioral correctness of women and of men (Nyitray 2004). Underlined by notions of family unity, obligations, hierarchal relations and filial piety (Gao 2003; Ha 2014; Hoang and Yeoh 2012), women's inferiority and men's superiority powerfully permeate all aspects of Vietnamese family, social, and political life. Men, as master and authority, set the rules for their households and social ethics, in business and political governance (Ma and Marquis 2016; Rhee et al. 2012). Vietnamese women are usually portrayed as quiet, devoted, caring and providing endless love to their husbands and children, and politically shy (Capps et al. 2010; Dyson et al. 2013; Hoang and Yeoh 2011; Kim et al. 2013; Lim and Lim 2004). In addition, women hold the responsibility to organise family rituals aimed at pleasing spirits, and for ancestor worship according to their own and partner's patriarchal lineage (Long and Van 2020; Vu and Yamada 2020). This study explored the women's experiences of these gender responsibilities when temporarily migrating with their male partners and children, and separated from extended families and the socio-political pressures upon them. What transpired was the women's expression of freedom from the shackles of their Confucian religio-philosophical ideals that, in Vietnam, had formerly defined and constrained them.

Methods

Twenty Vietnamese women studying in Australia participated in one-on-one, semi-structured, face-to-face research interviews. The duration of each interview was between one and two hours. Interviews were conducted in the Vietnamese language, audio-recorded, transcribed and translated to English. Research ethics approval was granted by the Flinders University Social and Behavioural Research Ethics Committee (Project Number 7511), which is a properly constituted research ethics committee in accordance with Australia's National Health and Medical Research Council.
Standard research ethics conventions were applied, including informed consent, voluntariness, confidentiality, safe storage of data, de-identification in transcriptions, and the use of pseudonyms in reporting. At the time of interviewing, all the women who participated in interviews were postgraduate university students in Australia on Student Visa subclass 500. The conditions of this visa allow students to bring their immediate family members with them, either a spouse or de-facto partner and/or children under the age of 18 years (Department of Home Affairs 2019). In accordance with participant criteria, all the women had been studying in Australia for at least 6 months. Each of the women was accompanied by their partner and children; however, three women were separated from their partners at the time of interviewing. Two women had returned their children to Vietnam, to be cared for by extended family, while they stayed on in Australia with their partners to complete their studies. Adopting an emancipatory research ethic allowed the participants to have some control over the direction of interview discussion, within the broader research focus (Rose and Glass 2008; Strier 2007). Accordingly, many participants changed discussion focus to what they felt was important about the context or phenomena being researched, namely their international student, parenting and intimate partner relationship experiences. As it transpired, the women interviewed constantly re-focused their responses to the performative nature of Vietnamese gendered life, the pervasiveness of Vietnamese Confucianism in filial structure, and compared this with the relative freedoms they experienced in Australia. Interview data were transcribed, de-identified and analyzed. Data analysis involved the initial manual coding of interview transcripts, using an iterative process of data familiarisation, searching for patterns, reviewing, and clustering codes into themes (Braun et al. 2019).
Themes were then subjected to interpretive analysis. This involved the researchers' application of Janus Head phenomenology to generate a sense of the subjective, lived experiences of the other. In providing a brief methodological explanation, interpretive phenomenology is concerned with the meaning of human lived experience; how people perceive their lived experience and understanding of themselves (Smith and Osborn 2007). The Janus Head is a Greek mythological figure with two faces, one looking back at war and one looking forward in the search for peace. In application, Janus Head phenomenology assisted in making sense of subjective meanings arising from the interaction between one's history and their perceived future, in the context of the present human experience (Mathews 2011). Janus Head phenomenology, therefore, allowed for interpretations of the participants' perceptions of living in the present, as influenced by both their pasts and the obscurity of their imagined futures, and of how current experiences may become altered by fascinations in achieving future change and/or losing projected hopes and desires.

Findings

The Janus Head lens offered phenomenological interpretations of temporality associated with participants' past and present lived experiences, and imagined futures. Findings are organized into three subsections to assist in narrating patterns identified from across the data according to these experienced or imagined times.

Women's Looking Back at Gender Inequity

Access to social media and stories from friends offered participants insights into how life could be for themselves and their children, if they were to study abroad. In seeing it, they dreamed about it and wanted it, with many going to great lengths to secure scholarships. Others sold properties, cars and household goods to self-fund their student migratory experiences, and to show sufficient funds to support themselves in satisfaction of student visa financial requirements.
Participants spoke of the many opportunities they preconceived about going abroad; in particular, the chance to escape the multiple burdens that Vietnamese Confucianism imposed upon women and their children. Intense feelings were expressed when the women spoke about Vietnamese Confucian religio-philosophical traditions, culture and societal norms. Women reflected on the division of labour and Vietnam's social gaze over the simplest of tasks, and spoke about how shame and stigma ensured that every facet of gender order was held strictly in place. For example, Quyen said:

Men from there [Vietnam] . . . always think that going to buy groceries is a women's job. Especially my husband, who was quite extreme and patriarchal back in Vietnam. If he went to the market, he would be very embarrassed because people would think that he was "scared of his wife."

The term scared of his wife is an insulting term in Vietnam. Through a Confucian lens, it represents the notion of a weak man who is not performing according to the social order. Such stereotypes risk bringing whole families to shame. As a result, the stealth of Confucianism holds women responsible for their men's social image. Men in Vietnam are discursively constructed as having no role to play in household shopping or other reproductive work. They are not to prepare meals, do housecleaning or tend to children. Participants spoke about how this legitimises men's departure to engage in leisure activities while women engage in arduous chores, as there is nothing in the home for these men to do. Many of the women advised that their male partners typically went to the bar every night or played sports with their male friends until late. As a representative example, Kim presented a snapshot of her intimate life in Vietnam:

Back there, my husband would just come home from work and then go to play sports. I didn't really like it that way. I hardly knew how many meals he would have at home. . . . He worked and played sports, so he had very little time for our family.

Due to multigenerational living, the women advised of the pressure from mothers-in-law for them to keep busy in the house. Coincidentally, men were expected to keep out of the way. The women interviewed shared how the multigenerational grooming of them to abide by the gender order commenced as soon as they married. This is when, according to Vietnamese Confucian ways, ownership of the bride passes from her family to that of the groom. Harsh practices towards brides guarantee that they know their place in the family according to the Confucian gendered social order of things. Tan described her wedding day, which is a typical practice still prevalent in Vietnam:

As for me, after having smiled to nearly 600 guests having meals, I still have my bride's white on and went to the well to wash the mountain of dishes. My face was still full of make-up, my hair was curled with flowers on . . . suffering so much. If I did not wash the dishes, my bride's new reputation would be really bad, as lazy and shameful for my husband's family.

In becoming a daughter-in-law, participants yearned not to be ridiculed. Due to social pressures to conform, many perceived that it was easier in life to be praised as a hard-working daughter-in-law than to endure the shame, blame and guilt associated with refusals to slave. Some of the participants spoke about their male partners' attempts to help with the children, to cook or wash dishes after meals. Ha's mother-in-law told her off for not doing her own women's duties when her partner tried to help her. Whenever Chi had a spare moment, her mother-in-law would say, "How come no one is getting him a cup of tea?" Examples like these highlighted the ways in which mothers-in-law controlled the distribution of every household task among the women, while gatekeeping to ensure their sons and other family men did none.
Ha's and Chi's partners, rather, went out with friends every night as there was nothing for them to do in the home. The men's socialising was not questioned or spoken about when living in Vietnam. The women who engaged in productive work were able to afford the employment of domestic helpers. Women negotiated and organised these workers to assist with the care of their children and households. They added that paid domestic help was more so for the convenience of the male partners, so as not to interfere with their work, recreational activities and other absences from the home. For example, Thoa said:

My daughter mostly slept with my mother or one of our domestic workers. Another worker would just do the housework. So, my husband could pay full attention to [himself].

Thoa was influenced to relinquish her husband from family life, which enabled him to socially do whatever he chose. Women like Thoa were not necessarily relieved of reproductive burdens, as they remained responsible for finding, hiring and supervising domestic helpers, and accountable when these arrangements went wrong. Finally, marriage automatically made the women responsible for the labour associated with Vietnamese Confucian religio-philosophical practices and traditions. This was in addition to the productive and reproductive labour already endured. As a form of indentured servitude to the men's families, they organized, performed, and managed these often-torturous burdens that were held strong by their mothers-in-law, aunts, and the social gaze of others. The prospect of international study excited participants as it provided hope to break free from the intergenerational cycle of gender inequity that is characteristic of Vietnam's discursive religio-philosophical and socio-political ideals.

Women's Australian Experiences

Women identified a range of underlying feelings in which they hoped for, and fought for, change upon coming to Australia.
They wanted intimate relationship transformation, resourceful family dynamics, and freedoms from the burdens of their Vietnamese lives. They wanted their partners to see the pressures of their intense study loads and women's difficulties in "doing it all," and to feel compelled to help. However, some of the men simply refused, as Kim said:

Back in Vietnam, his mum took care of our children when they were younger. So, my husband didn't even know what feeding or taking care of his kids was like. Now, there's no family here to help us. I also have studying to do and the children to take care of.

Likewise, Oanh expressed the struggles of having multiple burdens as a Vietnamese woman engaging in traditional filial practices, as a wife and mother, when also a student in Australia:

I really wanted to study more. But at night, I put my son in bed at 9.00 p.m., he lay there and didn't sleep until 10:30. I started studying at 11.00. When I had assignments, I didn't go to bed until 3.00 am. Taking care of my son and studying . . . No, I can't just do everything badly. I didn't have a lot of time. I was so tired and exhausted.

With no perceived alleviation from their burdens, some of the women sent their partners back to Vietnam or divorced them, seeing this as an opportunity to achieve relative freedoms or to search for an Australian lover to further de-shackle their burdens. Quynh dreamed of breaking the intergenerational gender order, expressed as hopes for her daughter:

I would like her marrying a Westerner. That's what I hope for . . . a culture of sharing and helping each other in the family. The husband and wife are happily living and enjoying the family life together, and with their kids too.

Most of the women, however, experienced changes in themselves and their partners. For example, Ha gained the courage and confidence to speak out and demand support from her partner and son:

I would tell my husband to, together with our son, clean the house and take care of the garden. I believed that telling them to share housework with me would be better, would be the first step, rather than not having that determination in my thinking at all.

As well, many participants made connections between sharing reproductive work and strengthening their intimate partner relationships. Kim explained:

To be honest, I do want a family to be able to share the tasks and have a balance. It's a way of bonding also, not just the work. If we share tasks, then we understand each other better.

Some of the women observed the influence of social image over their partners, including the importance of avoiding shame. While once these men would not help with grocery shopping or in the home due to stereotypes of being "weak men," in Australia these partners feared being labelled as disrespectful to the women. Kim continued:

The society here [Australia] respects women more than in our Vietnamese society. Their thinking is different. The men respect the women more and Vietnamese men here look around them. Like my husband, he observes how the other couples take care of each other and that has an influence.

Ha expressed this change in Vietnamese men's behaviour, and the relative joys she experienced in terms of family life, in motherhood and in her intimate partner relationship. Consistent with other women, she was completely aware that her partner's change in behaviours was due to his environment:

They can see that any men here [Australia] would do the housework, it's their responsibility and duty. My husband too. He saw it and started getting involved in doing chores, cleaning and looking after our son.

As interviews progressed, most women bared truths about their experiences in Australia. Many of the men were performing chores as directed, not necessarily sharing the burdens.
While the women were usually pleased with these small changes, due to not being under the Confucian gaze of mothers-in-law, they also believed that sustained change on returning to Vietnam would be unlikely. When interviewing the women, there were many hesitancies and silences. We suspect that these silences may have represented much more than was said, a cautiousness in the protection of feelings and dignity due to the four virtues expected of Vietnamese women being so discursively strong. As well, there was monotonous reiteration by the women on the joy they experienced when their partners helped them. We suspect that this was safer for the women than speaking about the Vietnamese Confucian religio-philosophy and the freedoms that student migration may have provided from political obedience.

Women's Imagining Their Futures

In her statement, Quyen represented nearly all the women's voices:

I really want to maintain our lifestyle here, but I'm also worried that everything will just go back to the old ways. I can only hope, but I can't be sure.

Finishing their studies marked an end to the women's transitory migration experiences in Australia, and potentially an end to their temporary alleviation of gender burden. Worry and uncertainty for their futures heightened as their international student experiences drew closer to an end. During interviews, the Janus Head action of looking back at life in Vietnam and looking forward did not appear to bring hope among the women. In fact, this action multiplied and intensified women's fears about returning to their Vietnamese Confucian-informed family lives. Kim explained:

I don't have much hope. Here, my husband can understand me more. But, going back to Vietnam will make it difficult because it also depends on the family, friends and networks around him. I really don't have hope. Well, not much.

Others feared for their daughters' wellbeing in Vietnam, after having experienced relative freedoms in Australia, as in one of the women's words (Duong):

Our concerns are always there, since we are worried about our daughter being back after 2 years. What would happen?

"What would happen?" This very question was repeated in almost every interview. Participants looked back at their lives and expressed their state of uncertainty in going forward, for themselves, their children, and their intimate partner relationships. Using Dante's (in Alighieri 1954) language of "limbo" to describe their uncertainty, Duong and others communicated that returning to Vietnam would be a "living Hell." When the Janus Head actions of looking forward in the search for their own peace failed, participants dreamt up new identities for their daughters. Quynh said:

I would like her marrying a Westerner. That's what I hope for. I think that he will have a culture of sharing and helping each other in the family. The husband and wife are happily living and enjoying the family life together, and with their kids too.

While the incidence and prevalence of gender inequity and gender-based violence in Australia is high, across every race and creed (McLaren and Goodwin-Smith 2016; Zannettino and McLaren 2014), participants' hopes and dreams were founded upon their experiences and perceptions of family life in Australia. When the women could not find peace on looking forward at life in Vietnam, they then invoked hopes for the next generation. Facing forward to the future, as a return to Confucian religio-philosophy and its hold over gender order, manifested as an emotional inner war that seemed endless for these women.
Discussion

Drawing on the work of Moser (2012), it was the triple burden of productive, reproductive and regulated community/religio-philosophical labour that the women in this study sought to escape. The intergenerational pressures on them to conform to the Vietnamese ways had pressured and tormented them. Becoming international students offered the women perceptions of a temporary escape. In tasting a different life as international students in Australia, however, they feared, even more, their returns to socio-political expectations of the domestic servitude of women in conjunction with allegiance to Vietnamese Confucian protocols. Existing literature is limited in making these associations between women's experiences in collective traditions, such as Vietnam's Confucian religio-philosophy, and their migratory intentions as a perceived escape. The Janus Head actions of looking back and looking forward, however, exposed the interplay between the women's want for a different life in becoming international students in Australia, as well as the prevailing sadness associated with their looking-back and forward-looking realities. When the lives they sought were not tangible, temporal existences compelled the women to reimagine their lives and identities; both their own and those of their children. Erichsen (2009) and Tran and Gomes (2017) wrote about how international students reinvent their identities in an attempt to resolve uncertainty through transforming one's self into something else. While this can be imagined, it does not mean that transforming one's self will necessarily be achieved or sustained. Accordingly, the women in our study engaged in a process of becoming lost, in liminality, and by redefinition; then, a set of better-fitting contexts emerged in which to discover their new selves, losing old and hoped-for selves, and dreaming new selves.
This experience, however, gave rise to the agony associated with limbo when identity reinvention was impeded by the imperatives of returning to their old lives. When tormented by the limbo, dreaming up new selves, and new identities, became an unresolved constant. The findings of the current study presented experiences of a sample of Vietnamese women who were determined to achieve greater gender balance in life, while simultaneously navigating and shrugging off the Confucian-imposed patriarchy that they lived and brought with them to Australia. These women used spoken words to describe feelings of success, associated with their perceived shifts in gender balance in parenting, their intimate partner relationships, and the family. Temporary living in Australia, as international students, enabled the enjoyment of being supported by their partners at levels they had not experienced before. Studying in Australia created opportunities for family, parenting, and relationship changes. All participants expressed positive feelings for the changes they had experienced, so far. Many of them expressed how these changes were only possible by moving to Australia. The women fought hard to engage their husbands' support in reproductive labour. Some of the men resisted the reproductive labour, but eventually conceded when shamed by the Australian environment and embarrassed for not helping. For the women whose partners refused to change, temporary separation or divorce allowed them to experience relief from their burdens when in Australia as international students. Under Confucian religio-philosophy, mothers are expected to ensure their children are unconditionally obedient, are not allowed to confront parents, and must demonstrate respect to parents and elders. Some of the women were uncertain about their own and their children's ability to return safely to Vietnam's Confucian religio-philosophical order after the liberties of life experienced in Australia.
This was especially so of daughters, whose conduct in accordance with the four traditional virtues expected of women (Ngo et al. 2008) would be demanded by elders, socio-cultural and political systems, and ancestors (Bertram 2004; Laporte and Guttman 2007; Lim and Lim 2004; Su and Hynie 2011). As a result, our study touched on the worries that women had for their children's futures. As part of their unresolved limbo, the women invented images for their children's futures that included dreams of marrying an Australian man and other wishes for their next generation.

Conclusions

The sample of women provided insights into their lived experiences of collectivist multigenerational living and the multiple burdens endured, as a reason for seeking student migration as a temporary escape. They enjoyed changes to their relationships in Australia, including their partners' help in reproductive labour when not under the gaze and control of Confucian order, mothers-in-law, and the gender expectations in Vietnamese society generally. Contrasting experiences between the women's pasts and their Australian experiences, including imagined hopes and realities of their futures, drew on the Janus Head concept to expose the inner turmoil experienced when unable to imagine better gender equity in their futures. The mix of uncertainties in going back to Vietnam, following their studies, remained heavily fuelled with hope, doubt and uncertainty, and was experienced as limbo, always. The findings of this study cannot be generalised to other populations of women in Vietnam or other Confucian societies, nor does the study attempt to prove a hypothesis or answer any particular question. Rather, this study contributes to theorising and dialogue aimed at challenging gender roles and partakes in the ongoing exposure of gender inequity.
Specific to women seeking freedom from Vietnam's Confucian religio-philosophical order, the narrated findings highlight the emotional, inner turmoil that may be experienced by international students when fleeing something in their lives and fearing going back. Student wellbeing and educational achievements may be at stake, which calls for educators, counsellors and university personnel to recognise Vietnamese women's suffering and to find new ways of supporting and intervening.
Virome Profiling of an Eastern Roe Deer Reveals Spillover of Viruses from Domestic Animals to Wildlife

Eastern roe deer (Capreolus pygargus) is a small ruminant and is widespread across China. This creature plays an important role in our ecological system. Although a few studies have been conducted to investigate pathogens harbored by this species, our knowledge of its virus diversity is still very sparse. In this study, we conducted whole virome profiling of a rescue-failed roe deer, which revealed a kobuvirus (KoV), a bocaparvovirus (BoV), and multiple circular single-stranded viruses. These viruses were mainly recovered from the rectum, but PCR detection showed systemic infection of the KoV. Particularly, the KoV and BoV exhibited close genetic relationships with bovine and canine viruses, respectively, strongly suggesting the spillover of viruses from domestic animals to wildlife. Although these viruses were unlikely to have been responsible for the death of the animal, they provide additional data to understand the virus spectrum harbored by roe deer. The transmission of viruses between domestic animals and wildlife highlights the need for extensive investigation of wildlife viruses.

Introduction

China is one of the countries with the most diverse and abundant wildlife resources, home to over 7300 vertebrate species, comprising ~11% of the world's total wildlife species [1]. Owing to a series of national regulations and laws to ban illegal hunting and overexploitation, and the commitments of the Chinese government and scientific community to biodiversity conservation, the living status of many wild animals has been greatly improved [1].
However, there are still many other indigenous species at risk of extinction; for example, the most recent International Union for Conservation of Nature and Natural Resources (IUCN) report formally announced the extinction of the paddlefish (Psephurus gladius) of the Yangtze river [2]. Therefore, wildlife conservation still has a long way to go in China. Among the challenges faced by wildlife conservation, pathogens constitute an overlooked but substantial portion. Indeed, they pose a great risk to wildlife, for example, influenza viruses in birds [3] and African swine fever viruses in wild boars [4]. Wildlife-borne pathogens also severely threaten global public health. Particularly, spillovers of some deadly wildlife-borne viruses, such as the Nipah virus, Marburg virus, and Rabies virus, have caused several outbreaks of emerging/re-emerging infectious diseases (EIDs) in humans [5]. Thus, investigation of wildlife viruses is not only an important measure in wildlife conservation, but also helps to control and prevent potential outbreaks of wildlife-originated zoonoses [6]. Belonging to the genus Capreolus within the family Cervidae, Eastern roe deer (Capreolus pygargus) are one of the most widespread and abundant free-living ungulates in China, and their population status serves as a biological indicator of environmental health [7]. They inhabit different types of deciduous and mixed forests and forest-steppes, and frequently come into contact with domestic animals, providing ample opportunities for pathogens to transmit between roe deer and other animals. Recent investigation has shown that infectious diseases are an important factor causing the death of roe deer, including parasitic infections and bacterial diseases [8].
In addition, a few serological and molecular investigations have revealed that the creatures are also infected by the tick-borne encephalitis virus, hepatitis E virus, Schmallenberg virus, and so forth, indicating that roe deer are also involved in the circulation of these zoonotic viruses [9]. However, all those investigations used pathogen-specific approaches and are fragmentary, and apparently, the complete spectrum of pathogens harbored by this species remains largely undetermined. In this study, we examined the complete virome of a dead roe deer using a DNA-specific multiple displacement amplification (MDA) and an RNA-specific meta-transcriptomic (MTT) method. The results provide additional knowledge of the genetic diversity of roe deer viruses.

Sample Collection

In February 2021, an injured female adult roe deer was found in the field in Xunke county, Heilongjiang province, and was transported to the Provincial Wildlife Disease Monitoring Station of Shuanghe for rescue. The animal was very thin and suffering severe injuries to its hip, and died very soon after its arrival. A necropsy was immediately performed, which revealed no abnormal lesions in its internal organs, but a lack of food in its gastroenteric tract was noted. Its rectum, cervical lymph nodes, kidney, lung, brain, and liver were sampled and cryo-transported to the laboratory for viromic analysis. The species was morphologically identified by the staff at the station and further confirmed by sequencing the mitochondrial cytochrome c oxidase subunit I gene (COI) [10].

Sample Pretreatment and High-Throughput Sequencing

A small piece (~0.2 g) of each tissue was cut and homogenized with sterile PBS. After centrifugation, supernatants were passed through 0.45-µm-pore-size membranes (Millipore, Boston, MA, USA), and digested with nuclease to eliminate the contamination of foreign nucleic acids.
For MTT sequencing, RNA was extracted using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) and subjected to rRNA depletion using a Ribo-Zero Magnetic Gold Kit (Epicentre Biotechnologies, Madison, WI, USA), followed by RNA sequencing on an Illumina NovaSeq sequencer with libraries prepared using an NEBNext Ultra Directional RNA Library Prep Kit (NEB, Ipswich, MA, USA). For MDA processing, DNA was extracted using a DNeasy Blood and Tissue Kit (Qiagen, Hilden, Germany) and amplified using an illustra GenomiPhi V2 DNA Amplification Kit (GE, Fairfield, CT, USA) following the manufacturer's manual. The products were purified using a QIAquick PCR Purification Kit (Qiagen, Hilden, Germany), and 1 µg was used for Illumina paired-end (150 bp) sequencing on an Illumina NovaSeq 6000 sequencer. To check for cross-contamination during sample pretreatment and sequencing, samples of an Amur leopard cat were processed in parallel [11]. Virome Annotation The raw data were processed using fastp version 0.19.7 and subjected to host sequence removal by mapping against the whole-genome assembly of Capreolus pygargus (accession number: GCA_012922965.1) using Bowtie2 version 2.4.1, followed by rapid metagenomic classification against bacterial, archaeal, and fungal genomes using Kraken2 version 2.0.9. The remaining reads of the RNA and DNA viromes were respectively pooled and de novo assembled using metaSPAdes version 3.14.9. The resulting contigs were annotated by blastn and DIAMOND blastx searches (e-value ≤ 1 × 10⁻¹⁰) against our refined eukaryotic viral reference database (EVRD)-nt/aa version 1.0 [12]. To examine the authenticity of the virus-like contigs (VLCs), reads were mapped back to the VLCs, and the vertical and horizontal coverages were determined using SAMtools version 1.10.
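The coverage check at the end of the annotation step can be sketched in a few lines. The function below is an illustrative stand-in (the actual pipeline used SAMtools): it computes horizontal coverage (breadth, fraction of contig positions covered) and vertical coverage (mean read depth) from hypothetical read alignment intervals.

```python
def coverage_stats(contig_length, read_alignments):
    """Compute horizontal (breadth) and vertical (mean depth) coverage
    for one virus-like contig, given a list of (start, end) read
    alignments (0-based, end-exclusive). Illustrative stand-in for
    samtools-derived coverage, not the paper's actual code."""
    depth = [0] * contig_length
    for start, end in read_alignments:
        for pos in range(max(start, 0), min(end, contig_length)):
            depth[pos] += 1
    covered = sum(1 for d in depth if d > 0)
    horizontal = covered / contig_length   # fraction of bases covered
    vertical = sum(depth) / contig_length  # mean read depth
    return horizontal, vertical

# Example: a 100 nt contig with two 60 nt reads overlapping in the middle
h, v = coverage_stats(100, [(0, 60), (40, 100)])
print(h, v)  # 1.0 breadth, 1.2 mean depth
```

Contigs with low breadth but locally high depth are a common sign of spurious assemblies, which is why both measures are checked.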
PCR Validation and Gap-Filling To validate the viromic results and fill genome gaps, primers (Supplementary Table) targeting the bocaparvovirus and kobuvirus contigs were designed using Primer Premier 5. Viral RNA was extracted with an RNeasy Mini Kit (Qiagen, Hilden, Germany), and reverse transcription was performed with a first-strand cDNA synthesis kit (TaKaRa, Dalian, China) according to the manufacturer's protocol. DNA extraction is described above. Double-distilled water was used as a negative control. PCR amplification was conducted with 2× Rapid Taq Master Mix (Vazyme, Nanjing, China) using the following program: 95 °C for 5 min; 40 cycles of denaturation at 95 °C for 15 s, annealing at 59 °C for 15 s (or adjusted for different primer pairs), and extension at 72 °C for 25 s; and a final extension at 72 °C for 5 min. The expected products were directly sequenced on an ABI 3730 Sanger sequencer (Comatebio, Changchun, China). Genomic Characterization and Phylogenetic Analyses Open reading frames (ORFs) of viral genomes were predicted using Geneious version 4.8.3. Genomic structures were illustrated using SeqBuilder version 7.1.0. The stem-loop of each genome was predicted with mFold. Alignments of nucleotide (nt) and amino acid (aa) sequences with representatives of known viruses were generated using MAFFT version 7.471. trimAl version 1.2 was used to clip ambiguous alignment regions, and ModelFinder was used to select the best-fit substitution model. Phylogenies were inferred with IQ-TREE version 1.6.8 with 1000 bootstrap replicates. Overview of the Virome The dead roe deer provided a rare opportunity to investigate the virus diversity harbored by this species. A combination of MDA and MTT technologies was therefore employed to profile the whole eukaryotic virome, which generated 37.8 (DNA) and 28.8 (RNA) gigabases (Gb) of reads, with 6.3 ± 2.7 and 4.8 ± 1.0 Gb per DNA and RNA library, respectively.
The viromic annotation revealed 81 potentially eukaryotic VLCs of 306-6272 nt in length, covering the Parvoviridae, Circoviridae, Smacoviridae, Genomoviridae, and Picornaviridae (Figure 1), among which 16 were complete genomes corresponding to 14 circular single-stranded DNA (cssDNA) viruses, one bocaparvovirus, and one kobuvirus. Comparison with the virome of an Amur leopard cat did not find any contigs related to the viruses identified from that animal [11], indicating neither cross-contamination nor reagent contamination during sample processing. Mapping reads back to these contigs showed that the rectum sample had the most abundant viruses, with ~10^6 reads related to the genus Gemykibivirus within the family Genomoviridae. In contrast, the solid organs were very low in both richness and abundance of virus species (Figure 1). Because bocaparvovirus (BoV) and kobuvirus (KoV) have been associated with diseases in various animal species [13,14], we validated their presence in these samples by PCR/RT-PCR. The detection results were largely consistent with the viromic analysis; that is, samples with reads for a virus were also positive in the corresponding PCR/RT-PCR assay. However, PCR/RT-PCR revealed more positives; in particular, BoV was detected in almost all sampled tissues except the liver. This discrepancy should be ascribed to the lower sensitivity of viromic analysis at this sequencing depth compared with conventional PCR, which could be improved by ultra-deep sequencing [15].
Genomic and Phylogenetic Characterization of Kobuvirus Kobuvirus is a small, spherical, non-enveloped picornavirus with a single-stranded, positive-sense RNA genome. KoV infection is common among humans, rodents, pigs, carnivores, and ruminants. Among the six species (Aichivirus A-F) of the genus Kobuvirus, members of the species Aichivirus A can cause acute gastroenteritis in humans, and the bovine kobuvirus of the species Aichivirus B might lead to diarrhea in cattle [16]. The MTT data generated two contigs, which were joined by gap-filling RT-PCR, eventually resulting in an 8299 nt-long sequence that covers the entire ORF. An online blastn search of the sequence against GenBank showed that it was 93.3% nt identical to a bovine KoV identified from a diarrheal calf in Hebei province [17]. Phylogenetic analysis based on the entire ORF nt sequences revealed that the evolution of KoVs is strongly associated with their hosts: although detected in different countries and even continents, bovine, caprine, and porcine KoVs fell into three independent clades, while CpKoV/XK/CHN/2021 identified in this study clustered closely with the bovine KoVs within the species Aichivirus B (Figure 2a).
Genomic and Phylogenetic Characterization of Bocaparvovirus BoVs, belonging to the family Parvoviridae, are a group of single-stranded DNA viruses. They have a wide range of hosts, including humans, cats, dogs, pigs, sheep, and cattle, and can cause respiratory and gastrointestinal tract diseases in juvenile animals and humans [18]. By de novo assembly and gap-filling PCR, we obtained a nearly full-length (4923 nt) CpBoV/XK/CHN/2021 genome. Sequence comparison and phylogenetic analysis of the NS1 gene both showed that it clustered tightly with a group of canine BoVs, with nt similarities as high as 97.9% to strains 14Q209 and 17CC0312 within the species Carnivore bocaparvovirus 2; the former was detected in a dead Korean dog with an unknown cause of death in 2014 [19], while the latter is a canine BoV recovered from a cat in Northeast China in 2017 [20] (Figure 2b).
Genomic and Phylogenetic Characterization of CRESS DNA Viruses The cssDNA viruses revealed here are a group of circular Rep-encoding single-stranded (CRESS) DNA viruses, which are ubiquitously distributed across a variety of sources, such as plants, bird feces, and animal tissues [21]. We obtained 14 complete CRESS DNA virus genomes based on their overlapping ends, with lengths of 1512-2707 nt. Almost all of them encoded two ORFs, corresponding to replication (Rep) and capsid (Cap) proteins, with the majority arranged bidirectionally; one genome, CRESS/CpXKC1, had the two genes arranged unidirectionally (Figure 3a). Of note, the genome of GmV/CpXKC1 has four ORFs, three of which encode replication-associated proteins; in particular, the canonical Rep is split into two small portions. This unusual genomic structure was confirmed by PCR validation. We also found stem-loop structures between the Rep and Cap ORFs of the CRESS DNA viruses, which initiate the rolling-circle replication of these genomes (Figure 3a). The genomes are highly divergent from each other, with 64-99% aa identities in Rep. The online blastn searches revealed various genetic distances to known reference sequences: five genomes, GmV/CpXKC1-5, showed very close relationships (95-99% nt) with some genomoviruses, whereas the remaining genomes were only distantly related to unclassified CRESS DNA viruses, with 29.0-88.8% aa identities.
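The pairwise amino-acid identities reported for Rep can be computed from an alignment column by column. A minimal sketch follows, assuming pre-aligned sequences and a simple gap convention; the function is illustrative only (the paper's figures came from blast searches):

```python
def percent_identity(seq_a, seq_b):
    """Percent amino-acid identity between two pre-aligned sequences
    (same length, '-' for gaps). Gap-gap columns are ignored; a gap
    against a residue counts as a mismatch. Illustrative convention:
    identity definitions vary between tools."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    compared = matches = 0
    for a, b in zip(seq_a, seq_b):
        if a == '-' and b == '-':
            continue  # skip columns that are gaps in both sequences
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared

print(percent_identity("MKV-LRE", "MKVALKE"))
```

Because tools differ in how they treat gaps and alignment length, reported identity percentages are only comparable when the same convention is used throughout.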
To infer their phylogeny, a total of 143 Rep protein sequences were aligned with the counterparts of the 14 genomes, followed by maximum likelihood phylogenetic analysis. Sequences from the families Geminiviridae, Genomoviridae, Smacoviridae, and Circoviridae segregated well from each other into independent phyloclades, indicating a robust phylogenetic result (Figure 3b). The 14 sequences were dispersed into different clades, with the five above-mentioned genomes clustering closely with genomoviruses of plant, bird-feces, and silkworm origin (Figure 3b), whereas the rest could not be assigned to any approved families but were genetically related to unclassified CRESS DNA viruses from fish, giant panda feces, and spiders (Figure 3b). Discussion Here we report the viromic profiling of a rescue-failed roe deer, followed by PCR/RT-PCR validation. To obtain the complete spectrum of viruses harbored by this animal, we employed a combination of MDA and MTT methods, which has been proven to be robust for capturing the whole virome [15]. Although we generated more than 60 Gb of data for the animal, very few known eukaryotic viruses were uncovered, which should be attributed to the small sample size and the inability of the viromic annotation method used here to identify highly divergent viruses. Considering the important role of roe deer in the ecosystem [7] and the paucity of our understanding of their virus diversity, viromic investigation of this species with more individuals over a broader area should be intensified in the future. Although the virome and PCR/RT-PCR detection revealed KoV and BoV in the rectum sample, both of which are potential causative agents of gastroenteritis or diarrhea, the internal organs looked normal, and there was no sign of diarrhea at sampling apart from the injuries to the hip.
Therefore, we cannot arbitrarily ascribe the death of the roe deer to any pathogen infection; rather, it is reasonable to conclude that the injury and malnutrition led to its death. Most of these viruses were largely limited to the intestine and the lymph nodes, and we noted the wide distribution of gemykibivirus, with especially high loads in the intestine and lymph nodes (Figure 1). However, some tissues, such as the brain, are normally recognized as virus-free in a healthy animal. We set up a control to inspect possible contamination events during sample pretreatment for metagenomic sequencing and found no cross-contamination, but we cannot rule out contamination between organs by blood during necropsy. Interestingly, the CRESS DNA viruses revealed here were genetically very diverse and showed up to 99% nt identities to genetic neighbors of plant, animal-feces, and silkworm origin. Recently, the diversity of CRESS DNA viruses has been greatly expanded, partially owing to the wide application of high-throughput sequencing-based virome studies [22]. They are intricate because of their ubiquitous distribution and broad association with environmental samples, plants, and animal feces [23]. Thus, the diversity of CRESS DNA viruses observed here is probably largely explained by food and environmental sources rather than genuine infection. Of particular note, we revealed signs of cross-transmission of KoV and BoV between domestic animals and roe deer. The phylogeny of KoVs is partially shaped by their hosts (Figure 2a), yet the roe deer KoV fell into the clade of bovine KoVs with up to 93.3% nt identity to a virus from a diarrheal calf (Figure 2a), suggesting a possible spillover of bovine KoV to roe deer. This speculation was further supported by the BoV, which was phylogenetically surrounded by canine viruses and showed ≥93% nt identities to two such viruses (Figure 2b).
In particular, one of its genetic neighbors, 17CC0312, was also detected in Northeast China [20]. Accordingly, a spillover event of canine BoV to roe deer is strongly suggested. A similar phenomenon was observed in our previous viromic examination of an Amur leopard cat, in which the anelloviruses and bocaparvovirus were possibly transmitted from other feline species, including domestic cats [11]. Taken together, these findings indicate frequent cross-transmission of viruses from domestic animals to wildlife, which poses a substantial obstacle to wildlife conservation as well as to disease control and prevention in domestic animals. On the one hand, some viruses are lethal to wildlife and can cause catastrophic losses; for example, the spillover of African swine fever viruses from domestic pigs to wild boars has killed a great number of wild boars [4]. On the other hand, cross-transmission can result in the circulation of some viruses in wildlife, carrying the potential risk of reintroduction of these viruses into domestic animals and making the eradication of related diseases very difficult [24]. To tackle these issues, it is critical to block the transmission routes of viruses between domestic animals and wildlife. For this purpose, it is feasible to set up wildlife reserves and to popularize the housed feeding of domestic animals. In addition, active vaccination of wildlife can be considered in some cases; for example, vaccination of carnivores against rabies virus has been proven to be an effective way to eliminate
Risk Evaluation Study of Urban Rail Transit Network Based on Entropy-TOPSIS-Coupling Coordination Model Introduction Urban rail transit is a form of public transportation that moves large passenger flows in dedicated vehicles over a dedicated track structure, and it occupies a very important position in the urban system. A reasonable and safe urban rail transit network can greatly improve the efficiency and economic level of urban development, and its construction cannot be achieved without an accurate evaluation of the risk of the existing network. Therefore, in order to establish a reasonable risk evaluation system for urban rail transit networks, this paper proposes a risk evaluation method based on the entropy-TOPSIS-coupling coordination model, aiming to accurately evaluate the risk of urban rail transit networks and thereby lay a solid theoretical foundation for optimizing and perfecting them. Relevant scholars have conducted a series of studies on the evaluation of urban rail transit planning and have achieved certain results. The research mainly focuses on the exploration, improvement, and optimization of evaluation models; for example, Yixiu Song [1] used the Analytic Hierarchy Process (AHP) to evaluate rail transit planning and compared the evaluation indexes of transportation planning to obtain their relative importance. Jing Li [2] established a comprehensive evaluation model of the traffic network by combining topology and entropy theory to address the shortcomings of the existing urban rail transit network evaluation index system. Chaoxia Su [3] proposed a comprehensive evaluation method for urban traffic network planning and constructed a comprehensive evaluation model of the urban rail transit network with the fuzzy evaluation method.
Yong Jiang [4] believed that the matter-element analysis model can provide an operable method for VFM evaluation of urban rail transit PPP projects, reduce subjectivity in the evaluation process, and make the evaluation results more scientific and reasonable. Xin Liu [5] proposed an evaluation model based on extension cloud theory tailored to the characteristics of urban rail transit safety evaluation, taking advantage of the uncertain reasoning of the cloud model and the quantitative analysis of extension theory. Xu XD [6] evaluated the potential benefits and limitations of deploying eco-driving strategies across different transit services, service areas, fleet compositions, and road terrains. Bin S [7] proposed a quantitative method to evaluate the performance of an urban subway network under different damage scenarios. Hy et al. [8] proposed a vague fuzzy matter-element model for the risk assessment of urban rail transit projects by combining vague set and matter-element theory. Hu et al. [9] provided an improved DS/AHP method for the evaluation of hazard sources in urban rail transit risk evaluation with incomplete information. Aydin [10] proposed a fuzzy-based multidimensional and multiperiod service quality evaluation outline for rail transit systems. Wang et al. [11] used the grey incidence method for evaluating the hazards of urban rail transit dynamic operating systems and conducted quantitative analysis of risks in the operation process. At the same time, scholars increasingly combine different evaluation models to make the evaluation more efficient and reasonable. For example, Ying Wang et al. [12] used the AHP and entropy methods to determine index weights, applied the TOPSIS method to determine the ideal network scheme, and selected the optimal scheme by comparing the gap between each scheme and the ideal one. Bingyi Qian et al.
[13] combined the entropy method and the expert method to determine the evaluation indexes, used the TOPSIS method to rank the schemes and identify the optimal one, and verified the approach with a practical case. Zhifeng Zhou [14] proposed using the correlation function as a qualitative screening index in order to evaluate the service level of transfer stations more reasonably. Ruisong Zhao [15] analyzed the basic and suitable forms of rail network layout, improved the existing method for calculating rail network scale as well as the Analytic Hierarchy Process (AHP)-based network layout method, established an evaluation index system for network layout, and constructed the corresponding evaluation model. Xie [16] constructed an ISM-ANP-Fuzzy evaluation model to evaluate interface risk in urban rail transit PPP projects. Huang et al. [17] proposed an entropy-based technique for order preference by similarity to ideal solution (TOPSIS) to evaluate the operational performance of urban rail transit systems from the perspectives of operators, passengers, and government. Wu et al. [18] evaluated urban rail transit operation safety based on an improved CRITIC method and a cloud model. Bouraima MB [19] used a combined SWOT matrix and Analytic Hierarchy Process (AHP) to evaluate priority factors and employed them in developing strategies for the railway transportation system. Most of the above literature, however, only analyzes single index factors quantitatively; the interactions and coupling conditions among index factors are not calculated or analyzed. Therefore, building on the above literature, this paper proposes an entropy-TOPSIS-coupling coordination evaluation method for urban rail transit that combines the entropy weight method, the TOPSIS method, and the coupling coordination model.
Combined with the relevant traffic system data of Shanghai from 2000 to 2016, the system risk and coordination degree are quantitatively and comprehensively analyzed, and the development of Shanghai's rail transit system over this period is obtained. Identification of Evaluation Indexes The evaluation index system of the urban rail transit planning network is the key to measuring the results of a traffic system scheme, as well as an important precondition for selecting the most reasonable one. This index system needs to comprehensively reflect the traffic system's economic efficiency, social development, network structure, operation effect, and other rail transit characteristics. Based on an analysis of the index systems in the related literature, combined with the characteristics of the city and the needs of social development, and following the principles of strong purpose, hierarchy, science, rationality, ease of operation, and the combination of quantitative and qualitative measures, the evaluation index system of the urban rail transit network is constructed in this paper from the three aspects of regional economy, social resources, and urban rail transit, as shown in Table 1. Evaluation Modeling Step 1. Panel data of the three potential variables in Shanghai from 2000 to 2016 are defined as x_{i,k,j}, where i denotes the three subsystems of regional economy (E), social resources (S), and urban rail transit (T); k indexes the k-th secondary index under a subsystem, k = 1, 2, 3, ..., l, l ∈ N+; and j indexes the j-th year to be evaluated, j = 1, 2, 3, ..., n, n ∈ N+. The initial matrix of subsystem i is X_i = [x_{i,k,j}]_{l×n}. Step 2. For each subsystem separately, the entropy weight method is used to calculate the weight of the k-th secondary index over the j time periods. The smaller the entropy value, the larger the entropy weight, indicating that the more informative an indicator is, the larger its weight will be.
Firstly, the initial matrix is normalized to eliminate dimensional differences, forming the normalized matrix Y_i = [y_{i,k,j}]_{l×n} of subsystem i: y_{i,k,j} = (x_{i,k,j} − min_j x_{i,k,j}) / (max_j x_{i,k,j} − min_j x_{i,k,j}), where y_{i,k,j} is the value after normalization and max_j x_{i,k,j} and min_j x_{i,k,j} are the maximum and minimum values of the k-th index, respectively. Calculate the entropy value e_{i,k} of the k-th secondary index under subsystem i: e_{i,k} = −(1/ln n) Σ_j p_{i,k,j} ln p_{i,k,j}, with p_{i,k,j} = y_{i,k,j} / Σ_j y_{i,k,j}. Calculate the entropy weight ϖ_{i,k} from the entropy value of the k-th secondary index under subsystem i: ϖ_{i,k} = (1 − e_{i,k}) / Σ_k (1 − e_{i,k}). Step 3. Using the normalized matrix Y_i = [y_{i,k,j}]_{l×n} of subsystem i and the entropy weights ϖ_{i,k}, form the weighted subsystem matrix U = [u_{i,j}]_{m×n} with u_{i,j} = Σ_k ϖ_{i,k} y_{i,k,j}; then u_{i,j} is the probability of subsystem i in the j-th year. Step 4. Repeat Step 2 at the subsystem level to calculate the weight of subsystem i over the j time periods, obtaining the entropy weight ϖ_i from the entropy value of subsystem i. (Table 1 excerpt: Regional economy — Total GDP (E1), the annual gross regional product; Investment in urban infrastructure (E2), the annual city investment in the engineering and social infrastructure necessary for survival and development.) Step 5. Use the TOPSIS method to solve the comprehensive evaluation index C_j of the urban rail transit system in the j-th year, first calculating the weighting matrix O = [o_{i,j}]_{m×n} with o_{i,j} = ϖ_i u_{i,j}. Determine the optimal solution S_i^+ and the inferior solution S_i^− for the weighted values of subsystem i, and calculate the Euclidean distances sep_j^+ and sep_j^− between the weighted values of the j-th year and the optimal and inferior solutions. The overall evaluation index of the urban rail transit system for the j-th year is C_j = sep_j^− / (sep_j^+ + sep_j^−), ∀j, C_j ∈ [0, 1]. Step 6. Calculate the coupling degree B_j of the multiple factors in the j-th year; the smaller the deviation between the single factors, the greater the coupling among them.
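Steps 2-5 can be condensed into a short numeric sketch. Since the paper's typeset equations did not survive extraction, the code assumes the textbook forms of the entropy-weight method and TOPSIS (min-max normalization, Shannon entropy with a 1/ln n factor, and closeness C_j = sep_j^− / (sep_j^+ + sep_j^−)); all variable names are illustrative.

```python
import math

def entropy_weights(X):
    """X[k][j]: value of indicator k in year j. Returns per-indicator
    weights via the standard entropy-weight method (assumed form)."""
    n = len(X[0])
    # Step 2a: min-max normalization per indicator
    Y = []
    for row in X:
        lo, hi = min(row), max(row)
        Y.append([(v - lo) / (hi - lo) if hi > lo else 1.0 for v in row])
    # Step 2b: Shannon entropy of each indicator, scaled by 1/ln n
    e = []
    for row in Y:
        s = sum(row) or 1.0
        p = [v / s for v in row]
        e.append(-sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n))
    # Step 2c: entropy weight = (1 - e_k) / sum_k (1 - e_k)
    d = [1 - ek for ek in e]
    return [dk / sum(d) for dk in d]

def topsis_closeness(U, w):
    """U[i][j]: subsystem scores; w[i]: subsystem weights.
    Returns C_j = sep-_j / (sep+_j + sep-_j) for each year j."""
    m, n = len(U), len(U[0])
    O = [[w[i] * U[i][j] for j in range(n)] for i in range(m)]
    best = [max(row) for row in O]    # optimal solution S+
    worst = [min(row) for row in O]   # inferior solution S-
    C = []
    for j in range(n):
        sp = math.sqrt(sum((O[i][j] - best[i]) ** 2 for i in range(m)))
        sm = math.sqrt(sum((O[i][j] - worst[i]) ** 2 for i in range(m)))
        C.append(sm / (sp + sm) if sp + sm else 1.0)
    return C
```

On monotonically improving data the closeness index rises from 0 toward 1, matching the paper's reading that larger C_j means a safer system.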
Calculate the deviation by selecting the formula corresponding to the number of subsystems to be coupled, where S_i is the standard deviation of accidents caused by a single factor and M is the number of coupled subsystems to be calculated; the larger B′, the smaller the deviation. This index is mainly used to evaluate the annual risk coupling strength between the subsystems of the urban rail transit coupling model, and decoupling measures are proposed based on the evaluation results. The coupling degree of multifactor risk coupling is then calculated from these deviations. Step 7. Calculate the comprehensive coordination index V_j in the j-th year; this index is mainly used to evaluate the orderly versus disorderly development of the urban rail transit system each year. The more orderly the system develops, the more likely it is to lead to safety accidents. Step 8. Calculate the coupling coordination degree K_j of the urban rail transit system in the j-th year. This indicator comprehensively considers the characteristics of the coupling degree and the coordination degree and is mainly used to evaluate the annual coupling strength and the orderly versus disorderly development between subsystems of the urban rail transit coupling model. At the same time, the decoupling method is proposed based on the evaluation results. The flow chart of the model is shown in Figure 1. Case Study Using the year as the statistical time period, the panel data of the three variables of regional economy (E), social resources (S), and urban rail transit (T) in Shanghai from 2000 to 2016 are collected; the results are shown in Table 2 (see Appendix).
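Steps 6-8 likewise admit a compact sketch. Because the original formulas for B_j, V_j, and K_j are not recoverable from the extracted text, the code below assumes the widely used coupling-coordination formulation (coupling B = m·(∏u)^{1/m}/Σu, comprehensive coordination index V = Σα_i u_i, and K = √(B·V)), so it should be read as an illustration of the idea rather than the paper's exact model.

```python
import math

def coupling_degree(u):
    """Coupling degree of m subsystem scores u (each in [0, 1]).
    Uses the common textbook form B = m * (prod(u))**(1/m) / sum(u),
    which equals 1 when all scores are equal and 0 when any is zero.
    This form is an assumption, not the paper's own equation."""
    m = len(u)
    s = sum(u)
    if s == 0:
        return 0.0
    return m * math.prod(u) ** (1 / m) / s

def coupling_coordination(u, alpha=None):
    """Coupling coordination degree K = sqrt(B * V), where
    V = sum(alpha_i * u_i) is the comprehensive coordination index
    (equal subsystem weights alpha assumed by default)."""
    alpha = alpha or [1 / len(u)] * len(u)
    V = sum(a * ui for a, ui in zip(alpha, u))
    B = coupling_degree(u)
    return math.sqrt(B * V)
```

With this formulation, balanced subsystem scores maximize the coupling degree, and K jointly rewards balance (B) and overall level (V), which is why a falling K is read as the subsystems evolving less synchronously.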
The statistical results in Table 2 show that the regional economy developed rapidly; investment in infrastructure and transport facilities increased, with the growth accelerating over time; population density rose year by year; usage of social resources grew rapidly; and characteristic values such as rail transit passenger flow and sharing rate also increased every year. Table 3 shows the two-factor and three-factor risk coupling degrees; the results show that the coupling degrees of E-S and E-T, that is, regional economy with social resources and regional economy with rail transit, are larger among the two-factor combinations, while the coupling degree of S-T, that is, social resources and rail transit, is smaller, indicating that the regional economy and social resources are more likely to cause accidents. Using (15) to calculate the comprehensive coordination of the Shanghai urban rail transit coupled system from 2000 to 2016, as shown in Table 4, the system coordination maintains a slowly decreasing trend, indicating that the Shanghai urban rail transit system gradually developed from a disorderly state to an orderly state and that the probability of accidents gradually decreased. This is because the rapid development of Shanghai created favorable conditions for improving the safety of its rail transit system.
The coordination degrees of the two-factor and three-factor risk couplings were measured separately using the evaluation model proposed in Part 2, and the degree of coordination of their evolution toward the common system goal (risk) was further analyzed and evaluated objectively, as shown in Table 5. Table 5 shows that the two-factor and three-factor risk coupling coordination degrees maintain a steadily decreasing trend, which represents a low degree of interaction and synergistic evolution among the factors and a low possibility of risk occurrence in the system. Meanwhile, the coupling coordination degrees of the two-factor combinations E-S and E-T, that is, regional economy with social resources and regional economy with rail transit, are both larger, while that of S-T, social resources and rail transit, is smaller, indicating that the former combinations are more prone to accidents. The E-S-T three-factor coupling coordination, involving regional economy, social resources, and rail transit, is the smallest, indicating that this scenario is the least likely to lead to accidents. Table 6 shows the annual comprehensive risk index of the urban rail transit coupling system based on TOPSIS; the higher the value, the safer the system. The results show that the Shanghai urban rail transit system was least safe in 2000, and the safety factor of the coupled system has increased year by year since then; that is, the system has gradually become safer. Table 3: Results of two-factor and three-factor risk coupling calculations. In the end, the range of the evaluation values obtained by the entropy-TOPSIS-coupling coordination model is 1 and the coefficient of variation is 0.718.
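The range and coefficient of variation used to judge the discrimination of the evaluation values are straightforward to compute; a minimal sketch follows, assuming the population standard deviation (the paper does not state which convention it used) and illustrative input values.

```python
import statistics

def dispersion_stats(values):
    """Range (max - min) and coefficient of variation
    (population standard deviation / mean) of a list of
    comprehensive evaluation values C_j. The population-std
    convention is an assumption."""
    rng = max(values) - min(values)
    cv = statistics.pstdev(values) / statistics.mean(values)
    return rng, cv

# Illustrative evaluation values spanning the full [0, 1] scale
rng, cv = dispersion_stats([0.0, 0.25, 0.5, 0.75, 1.0])
print(rng, cv)
```

A larger range and coefficient of variation indicate that the evaluation values are more spread out, making it easier to discriminate between safe and unsafe years.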
The larger the range and the coefficient of variation, the greater the dispersion and the higher the discrimination of the evaluation values; here the range reaches its maximum. Therefore, the comprehensive evaluation value obtained by the entropy-TOPSIS-coupling coordination model is better suited to visually assessing the level of urban rail transit risk.

Conclusion

(1) This paper proposes an entropy-TOPSIS-coupling coordination model for urban rail transit by combining the entropy weight method, the TOPSIS method, and the coupling coordination model. The simulation results show that the risk of an urban rail transit system can be reasonably evaluated with this model, and the conclusions obtained are more in line with the actual situation. (2) From the Shanghai data for 2000 to 2016, the following results can be obtained: compared with other factors, regional economy and social resources are more likely to cause accidents, so managers need to focus on controlling these two factors. (3) From Table 5, it can be seen that the E-T two-factor coupling coordination degree is the highest among the factor couplings, so managers should try to avoid the coupled condition of regional economic indicators and rail transit indicators. (4) From Table 6, it can be seen that the safety factor of the urban rail transit coupling system is increasing year by year, gradually developing from a disorderly state to an orderly state, with the risk gradually decreasing. Therefore, relevant government departments should continue to pay attention to urban rail transit and strengthen its safety construction. Subsequent improvements to the system can be pursued through decoupling methods; theoretical approaches and practical implementation schemes for decoupling urban rail transit systems should be studied in depth in the future.
Data Availability

Previously reported data are used to support this study and are cited at the relevant places within the text as references.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
2021-12-30T16:21:49.808Z
2021-12-28T00:00:00.000
{ "year": 2021, "sha1": "e26d97907c9abcd1ce685b571beec11a886d36f0", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ddns/2021/5124951.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "97d10ffac887ad1ec3a9617ba7c8fe3bab4b9517", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
34770625
pes2o/s2orc
v3-fos-license
Altered profile of circulating microparticles in rheumatoid arthritis patients

Microparticles (MPs) could be considered biomarkers of cell damage and activation as well as novel signalling structures. Since rheumatoid arthritis (RA) is characterized by immune and endothelial activation, the main aim of the present study was to analyse MP counts in RA patients. Citrated-blood samples were obtained from 114 RA patients, 33 healthy controls (HC) and 72 individuals with marked cardiovascular (CV) risk without autoimmune manifestations (CVR). MPs were analysed in platelet-poor plasma (PPP) and different subsets were identified by their surface markers: platelet- (CD41+), endothelial- (CD146+), granulocyte- (CD66+), monocyte- (CD14+) and Tang-derived (CD3+CD31+). Disease activity (DAS28), clinical and immunological parameters as well as traditional CV risk factors (diabetes, hypertension, dyslipidemia and obesity) were registered from clinical records and all data were integrated using Principal Component Analysis (PCA). Absolute MP number was increased in RA patients compared with HC and positively correlated with traditional CV risk factors, similar to that of CVR subjects. In addition, the frequency of the different MP subsets was different in RA patients and significantly associated with disease features. Moreover, in vitro assays revealed that MPs isolated from RA patients were able to promote endothelial activation and exhibited detrimental effects on HMEC-I endothelial cell functionality. Circulating MPs from RA patients displayed quantitative and qualitative alterations that are the result of both disease-specific and traditional CV risk factors. Accordingly, this MP pool exhibited in vitro detrimental effects on endothelial cells, thus supporting their role as biomarkers of vascular damage.
INTRODUCTION

Microparticles (MPs) are small membrane vesicles (0.1-1.0 µm) constitutively released by many cell types under physiological conditions, but their release is enhanced in many pathological situations, mainly those associated with cell damage. Long considered inert cell debris, recent studies have demonstrated that they could have a role in intercellular communication [1]. They have been shown to harbour nucleic acids, signalling molecules, cytokines and even organelles, thereby supporting their active role in cell biology [2,3]. These facts have led to more attention being focused on MPs, since they could exert different effects depending on the conditions under which they originated as well as on the cell type from which they were released. Accordingly, MPs exhibit an array of surface markers derived from their parental cell that can be used to assess their origin [4].

Abbreviations: BMI, body mass index; CV, cardiovascular; CVR, cardiovascular risk; EMP, endothelial-derived MP; ESR, erythrocyte sedimentation rate; GMP, granulocyte-derived MP; HC, healthy controls; MoMP, monocyte-derived MP; MP, microparticle; PCA, Principal Component Analysis; PMP, platelet-derived MP; PPP, platelet-poor plasma; PRP, platelet-rich plasma; RA, rheumatoid arthritis; Tang, angiogenic T-cell; Tang-MP, Tang-derived MP; TNFα, tumour necrosis factor α; VPD, Violet Proliferation Dye 450.
Increased levels of MPs have been reported in patients with malignancies, infections, systemic inflammation, autoimmune diseases and vascular diseases, among other pathological states [5]. Thus, circulating MPs have commonly been considered biomarkers of injury, since they originate after cell activation and apoptosis, or are actively released upon signalling through specific activating receptors [2,6]. This is especially relevant in the context of cardiovascular (CV) disease, since MPs derived from different cell types implicated in the etiopathology of the disease (endothelial cells, lymphocytes, monocytes, smooth muscle cells and platelets) have been found to be increased in patients [7,8]. Actually, MPs from platelets and endothelial cells are proposed to play a role in thrombogenesis as well as in endothelial activation [9,10]. Moreover, platelet-derived MPs are known to be able to activate neutrophils [11,12], thereby promoting an innate immune activation that leads to neutrophil extracellular trap (NET) formation [13]. These mechanisms are also involved in vascular damage, thus supporting the link between MPs, inflammation and CV disease. In fact, the chronic inflammation associated with many autoimmune disorders could underlie the increased prevalence of CV events reported in these patients [14,15]. However, the role played by MP subsets in these situations remains unknown. Therefore, knowledge of these events could help to identify patients at risk and improve specific therapies. Additionally, MPs represent accessible and valuable biomarkers of different tissues (especially the vasculature) that are difficult to reach and study.
Taking these considerations into account, rheumatoid arthritis (RA), an autoimmune condition in which both immune and endothelial activation can be found, provides an interesting scenario in which to analyse MP subsets. Previous evidence is limited and results concerning disease associations and MP subsets are heterogeneous (reviewed in [1]). A plausible explanation for these discrepancies, also found in other pathologies, is the lack of standardized protocols to analyse MPs. Because of their small size and the large heterogeneity of plasma MPs, most studies are performed by flow cytometry, although other methods have also been used. Traditionally, MPs have been identified by annexin V binding on their surface [16], but recent evidence has called this methodology into question, since annexin V-negative MPs express specific surface markers [17,18] and have been reported to have clinical relevance [19]. Consequently, many authors have chosen different approaches that avoid annexin staining, such as MP total labelling [20] or no labelling [4].

With the aim of estimating the contribution of MPs to RA pathogenesis, this work simultaneously analysed total and platelet-, endothelial-, granulocyte- and monocyte-derived MPs in relation to disease-specific parameters as well as traditional CV risk factors. In addition, since we have recently proposed a role for angiogenic T-cells (Tang) in RA [21], we aimed to evaluate whether Tang-derived MPs can be found and whether associations with clinical parameters could provide new insights into this T-cell subset. Finally, in vitro studies were performed to estimate the potential deleterious effect of RA-MPs on the vascular endothelium.
Patients

We conducted a case-control study involving 114 RA patients fulfilling the 2010 American College of Rheumatology RA criteria, consecutively recruited from the Department of Rheumatology (Hospital Universitario Central de Asturias, Oviedo). Routine clinical examination, including DAS28 calculation, was performed at the time of sampling. Medical records were reviewed in order to register clinical and immunological parameters, medications, traditional CV risk factors and previous CV events. Definition and classification of CV events and traditional risk factors (hypertension, diabetes, dyslipidemia, obesity and smoking) were performed as previously established [22,23]. Simultaneously, 33 healthy volunteers of similar age range and gender distribution to the patients were recruited from the same population, and a group of 72 individuals with different traditional CV risk factors were recruited from their primary care referral centre (Table 1). Automated complete blood counts and serum lipid analyses were carried out for all participants. Approval for the study was obtained from the Regional Ethics Committee for Clinical Investigation, in compliance with the Declaration of Helsinki. All participants gave written informed consent prior to study inclusion.

Blood sampling and isolation of platelet-poor plasma

A fasting blood sample was obtained by venipuncture in 4.5 ml citrate-containing tubes (BD Vacutainer), which were transferred to the laboratory and centrifuged at 3000 g for 15 min at room temperature to obtain platelet-poor plasma (PPP) within a maximum of 2 h after blood collection. The resulting plasma was divided into three aliquots and stored at −80 °C until analysis.
Analysis of MPs by flow cytometry

PPP aliquots were thawed at room temperature, and 200 µl were transferred into new tubes and centrifuged at 13000 rpm for 30 min at 15 °C. Then, the upper 180 µl were carefully discarded and the initial volume was restored with 0.22 µm double-filtered PBS. To identify cell-derived MPs, Violet Proliferation Dye 450 (VPD, BD Biosciences) staining was performed, thus avoiding the limitations of annexin V [24]. Hence, 150 µl were transferred into a new tube, brought to a final volume of 1 ml with double-filtered PBS, and 1 µl of 1 mM VPD was added. After incubation at 37 °C for 15 min, staining was stopped by placing the samples immediately on ice for 20 min. Finally, VPD-stained MP suspensions were divided into different tubes and pairs of antibodies were added to identify specific MP subsets: anti-CD41-FITC (Immunostep, Spain) and anti-CD146-APC (Miltenyi Biotech, Germany); anti-CD14-PE (Miltenyi) and anti-CD66b-APC (BD); anti-CD3-APC (Immunostep) and anti-CD31-PE (Immunostep). Antibodies were previously centrifuged (13000 rpm, 10 min, 4 °C) in order to avoid aggregates [25]. Incubation was performed at room temperature for 15 min and then MP suspensions were transferred into Stepcount tubes (Immunostep), which allow absolute quantification. Tubes were immediately processed by flow cytometry.
Samples were analysed in a FACS Canto II flow cytometer. Forward scatter (FSC) and side scatter (SSC) were adjusted to logarithmic gain. The MP gate was designed according to latex microbeads (Sigma-Aldrich) and confirmed with platelet-rich plasma (PRP) [26], setting 1.1 and 0.3 µm as the upper and lower detection limits. Below 0.3 µm, only debris seemed to be detected after analysing double-filtered PBS. Unstained MPs were used to set the threshold for the VPD-positive signal, and the unstained negative control of VPD-stained MPs was used to establish specific antibody fluorescence. No spillover between VPD and the fluorochromes assayed was registered. Acquisition was performed until 10000 microbeads from the Stepcount tubes had been acquired (<4 min/tube) at medium rate. All samples were processed and analysed batchwise to minimize technical variations. Total and subset-specific cell-derived MPs (absolute number/ml plasma) were calculated according to the MP counts acquired, the total number of microbeads in the Stepcount tubes and the dilution performed during sample preparation.

For angiogenesis assays, 96-well plates were coated with 50 µl of Matrigel (BD) and left for 30 min at 37 °C for polymerization. Then, 50000 HMEC-I cells resuspended in 100 µl of complete medium were added to each well, followed by the addition of MP suspensions (50 µl) at different concentrations in duplicate. After 16 h of culture, both tube formation and branching points were quantified on one focal plane in three non-overlapping fields per well [27] at 40× magnification, using a Motic AE2000 (Motic) inverted microscope equipped with a compatible digital camera (Moticam 2000, Motic).
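The bead-based absolute-count arithmetic described above can be sketched as follows. The bead content per Stepcount tube and the overall plasma dilution factor are hypothetical placeholders, since the lot-specific values are not given in the text.

```python
def mp_per_ml_plasma(mp_events, bead_events, beads_per_tube,
                     suspension_volume_ml, dilution_factor):
    """Absolute MP concentration (MPs per ml of plasma) from bead-based
    flow-cytometry counting.

    mp_events            : VPD-positive events inside the MP gate
    bead_events          : counting beads acquired (acquisition stopped at 10000)
    beads_per_tube       : total beads per counting tube (lot-specific; the
                           value used below is hypothetical)
    suspension_volume_ml : volume of stained MP suspension in the tube
    dilution_factor      : overall plasma dilution during sample preparation
    """
    # Fraction of the tube actually acquired, inferred from the beads seen.
    acquired_fraction = bead_events / beads_per_tube
    # Volume of original plasma represented by the acquired events.
    plasma_analysed_ml = suspension_volume_ml * acquired_fraction / dilution_factor
    return mp_events / plasma_analysed_ml

# Hypothetical numbers: 5000 gated MP events alongside 10000 of 50000 beads,
# in 1 ml of suspension at a 10-fold overall plasma dilution.
conc = mp_per_ml_plasma(5000, 10000, 50000, 1.0, 10.0)
print(f"{conc:.2e} MPs/ml plasma")   # 2.50e+05
```

Because the calculation normalises by the fraction of beads acquired, stopping acquisition at a fixed bead count (as in the protocol above) gives comparable precision across tubes regardless of how long each one runs.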
To assay endothelial activation, HMEC-I cells were cultured in 24-well plates with MP suspensions at different concentrations for 16 h. Then, cells were washed and stained with Fixable Viability Dye e450 (eBioscience) for 30 min in the dark. Next, cells were washed and stained with different antibodies: anti-VEGFR2-PE (R&D), anti-CD144-APC (Miltenyi) and anti-CD62E-APC (Immunostep) for 30 min at 4 °C. Finally, cells were washed again to eliminate unbound antibodies and immediately analysed by flow cytometry.

TNFα quantification

Serum aliquots were stored at −80 °C until cytokine measurement. Tumour necrosis factor α (TNFα) serum levels were quantified using a BD OptEIA kit (BD) following the manufacturer's instructions. The detection limit was 1.95 pg/ml.

Statistical analysis

Data are expressed as median (interquartile range) unless otherwise stated. Differences between MP concentrations were assessed using the Kruskal-Wallis test with Dunn-Bonferroni correction for multiple comparisons, whereas correlations were studied using the Spearman rank test. The TNFα effect on MP counts was studied by multivariate linear regression analysis adjusted for traditional CV risk factors. MP counts were log-transformed for normalization prior to regression analyses. Because of the high number of parameters studied, a Principal Component Analysis (PCA) was performed, including traditional CV risk factors and demographic, clinical and inflammatory parameters. The number of components retained was based on eigenvalues (>1), and loadings >0.5 were used to identify the variables comprising a component. Principal component scores were calculated for each patient and used for multivariate regression analysis. Results from in vitro assays were analysed by one-way ANOVA with Dunnett's post-hoc test. SPSS 19.0, R 3.0.3 and GraphPad Prism 5.0 for Windows were used.
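The component-retention and labelling rules just described (Kaiser criterion of eigenvalues > 1, loadings > 0.5 to name a component) can be sketched as follows. The eigenvalues, variable names and loading values below are illustrative, not the study's results.

```python
def retain_components(eigenvalues, threshold=1.0):
    """Kaiser criterion: keep the components whose eigenvalue exceeds 1."""
    return [i for i, ev in enumerate(eigenvalues) if ev > threshold]

def label_components(loadings, cutoff=0.5):
    """Map each component to the variables loading above the cutoff.
    `loadings[var][k]` is the loading of variable `var` on component k."""
    n_comp = len(next(iter(loadings.values())))
    return {k: [v for v, ls in loadings.items() if abs(ls[k]) > cutoff]
            for k in range(n_comp)}

# Hypothetical eigenvalues (six extracted, four retained) and loadings.
eigvals = [3.2, 2.1, 1.4, 1.1, 0.7, 0.4]
loadings = {
    "DAS28":            [0.81, 0.10, 0.05, 0.12],
    "hypertension":     [0.08, 0.74, 0.11, 0.02],
    "disease_duration": [0.14, 0.09, 0.88, 0.05],
    "ESR":              [0.21, 0.13, 0.07, 0.79],
}
print(retain_components(eigvals))    # -> [0, 1, 2, 3]
print(label_components(loadings))
```

With this toy input the four retained components are each defined by a single dominant variable, mirroring how the study labelled its 'rheumatic-related', 'traditional CV-related', 'duration-related' and 'inflammation-related' components.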
MP counts were increased in RA patients

Circulating MPs were quantified by flow cytometry in plasma samples from 114 RA patients (Table 1), 33 HC and 72 individuals with different traditional CV risk factors (Supplementary Table S1). The strategy used to identify MPs is presented in Figure 1. The MP gate was set between 0.3 and 1.1 µm using latex microbeads and PRP, with most of the detected MPs being smaller than 0.6 µm (Figure 1A). Total cell-derived MPs were defined as those events within this gate that were positive for VPD staining (Figure 1B). Specific MP subsets were identified by their surface markers: CD41+ (platelet-derived MPs, PMP), CD146+ (endothelial-derived MPs, EMP), CD14+ (monocyte-derived MPs, MoMP), CD66b+ (granulocyte-derived MPs, GMP) and CD3+CD31+ (Tang-derived MPs, Tang-MP) (Figure 1C). Gates were adjusted by the signal provided by the negative control of VPD-stained MPs.

Absolute counts of total MPs in patients and controls and those of the different subsets are summarized in Figure 2. The total number of MPs was significantly increased in CVR individuals compared with HC [3.14 (2.6) × 10^6 compared with 2.10 (1.23) × 10^6 MPs/ml] and further increased in RA patients [4.21 (3.02) × 10^6 MPs/ml]. Although platelets were the main source of MPs among the analysed subsets, PMPs did not show a significant increase in any group. However, the absolute numbers of MPs derived from endothelial cells, granulocytes and Tang lymphocytes were significantly increased, as was the number of total MPs, in RA patients. Accordingly, the frequency of these MP subsets out of the total MPs was also increased in RA patients compared with HC [EMP: 0.05 (0.10) compared with 0.02 (0.067)%, P = 0.029; GMP: 0.02 (0.03) compared with 0.01 (0.00)%, P = 0.001; Tang-MP: 8.16 (11.70) compared with 1.20 (6.76)%, P < 0.0001]. Conversely, no differences were registered in the different MP subsets between the CVR group and HC. Therefore, RA patients exhibit not only a quantitative increase but also an altered MP profile.
MP profile was associated with traditional CV risk factors and disease-specific parameters

Next, we wondered whether traditional CV risk factors and/or disease-specific parameters could account for the MP alterations detected in RA patients. Notably, the total MP number was positively associated with some traditional CV risk factors in both RA and CVR patients. Specifically, similar significant correlations were detected with triglycerides (RA: r = 0.390, P < 0.0002; CVR: r = 0.358, P = 0.012), total/high-density lipoprotein (HDL)-cholesterol ratio (RA: r = 0.319, P = 0.004; CVR: r = 0.298, P = 0.040) and body mass index (BMI) (RA: r = 0.232, P = 0.021; CVR: r = 0.304, P = 0.022). Also, the number of traditional CV risk factors correlated with total MP counts (r = 0.221, P = 0.030). However, none of these associations was observed with any specific MP subset.

Nevertheless, the striking increase in the absolute number and the differences in MP composition detected in RA patients could not be explained by the presence of traditional CV risk factors. Therefore, cellular damage and/or activation related to disease-specific parameters may play a role. In this sense, interesting associations were found: EMP counts correlated positively with disease duration (r = 0.285, P = 0.005); GMP with DAS28 (r = 0.271, P = 0.032), erythrocyte sedimentation rate (ESR) (r = 0.233, P = 0.022) and age at diagnosis (r = 0.233, P = 0.021); Tang-MP with DAS28 (r = 0.275, P = 0.007), tender (r = 0.229, P = 0.026) and swollen joint counts (r = 0.306, P = 0.003); and MoMP with RF titre (r = 0.240, P = 0.041). Interestingly, patients on tocilizumab treatment exhibited lower Tang-MP (P = 0.050) and GMP counts (P = 0.011), whereas methotrexate usage was related to decreased Tang-MP counts (P = 0.033), probably associated with the lower DAS28 (P < 0.001 and P = 0.008, respectively) found in these patients. Therapies used in the CVR group did not result in different MP counts in any of the subsets analysed. All these
results indicated that a large number of parameters, including traditional CV risk and disease-specific factors, accounted for the total and specific MP subsets in RA patients. Therefore, we employed a PCA to reduce this number to a small set of components that could explain most of the variance in the MP counts. This method also provides an integrative approach to the different factors included, avoiding potential collinearity bias and multiple-testing concerns. The PCA was conducted with the parameters summarized in Table 2, all of which exhibited communalities higher than 0.5. The Kaiser-Meyer-Olkin test indicated good adequacy of the data (0.687), as did the Bartlett test of sphericity (P = 10^-49). The PCA yielded four components with eigenvalues >1, which were interpreted based on the loadings relating each variable to the component. Thus, loadings higher than 0.5 were used to identify the variables that define each component. As seen in Table 2, disease-specific parameters loaded on the first component ('rheumatic-related'), traditional CV risk factors loaded on the second component ('traditional CV-related'), only disease duration loaded on the third component ('duration-related') and ESR loaded on the fourth ('inflammation-related'). This model explained 69.0% of the total variance.

Finally, we analysed whether the principal components were associated with the different MP subsets by multiple regression analysis (Table 3), with each MP subset adjusted for the four components. Interestingly, we found that traditional CV risk factors (component 2) can predict total MP numbers, whereas the counts of specific MP subsets are only explained by disease-specific parameters (components 1 and 3). These observations support our previous findings.
TNFα levels correlated with Tang-MPs unless traditional CV risk factors were present

Elevated production of TNFα, a cytokine involved in RA pathogenesis, has been related to cell activation, apoptosis and endothelial damage. Therefore, to evaluate whether it could play a role in MP release, serum levels of this cytokine were quantified in RA patients and HC. In spite of the increased levels of TNFα present in RA patients [8.42 (9.12) compared with 5.35 (4.25) pg/ml, P = 0.001], they were unrelated to the total MP number (r = 0.037, P = 0.730). Further analysis of MP subsets showed that Tang-MP counts were slightly associated with TNFα (r = 0.171, P = 0.097), but this correlation became relevant in RA patients without any traditional CV risk factor (n = 24, r = 0.669, P < 0.0001). Moreover, this association was also apparent, although at a lower level, in patients with fewer than two traditional CV risk factors (n = 51, r = 0.459, P = 0.001) and in those with fewer than three factors (n = 73, r = 0.244, P = 0.038), indicating that the higher the number of traditional CV risk factors, the lower the TNFα contribution to MP release.

MPs from RA patients promoted endothelial disturbance in vitro

Finally, since MPs have been linked to CV risk and endothelial activation, we performed in vitro experiments to evaluate whether circulating MPs isolated from HC, RA patients or individuals with traditional CV risk factors could affect angiogenic tube formation and endothelial activation in HMEC-I cells. Angiogenesis assays on Matrigel were conducted after adding HC-, CVR- or RA-MP pools at different concentrations (0.5-8 × 10^6 MP/ml), selected according to the range of total MP counts in controls (Figure 3A). The results showed that the numbers of both branching points and tubes were dose-dependently inhibited by RA-MPs, whereas no effect was seen with HC- or CVR-MPs (Figure 3B). Interestingly, MPs from RA patients exhibited an anti-angiogenic effect at 1 × 10^6 MP/ml, a lower concentration than that at which they are usually found
in plasma.

Additionally, MP-mediated activation of endothelial cells was estimated by flow cytometric analysis of different endothelial-specific markers, as well as cell viability, in HMEC-I cells cultured in the presence of the different MP pools (Figure 3C). No cytotoxic effect was seen under any of the conditions tested, thus excluding the possibility that viability could affect endothelial functionality. However, we observed that RA-MPs, but not HC- or CVR-MPs, increased the expression of CD62E, CD144 and VEGFR2 (all P < 0.050), thus suggesting the promotion of an activated endothelial status.

Finally, to assess whether these findings could be attributed to a specific cell-derived MP subset, the amount of each MP subset present in the cultures was compared with the effects found in the angiogenic assays. Figure 4 shows that the detrimental effect observed with total RA MPs was also detected when the different subsets were analysed, but it seems to differ depending on their cellular origin. Analysing the effects at physiological levels (median value in HC), striking differences between RA and HC were observed with MoMPs and PMPs but not with GMPs and Tang-MPs, suggesting that the deleterious effect of Tang- and granulocyte-derived RA MPs could be due to their increased proportion within the total MP count, whereas MoMPs and PMPs from RA patients could have a detrimental effect by themselves. Therefore, MPs from RA patients are able to disturb in vitro endothelial functionality dose-dependently, perhaps through endothelial activation, whereas HC- and CVR-MPs did not promote this effect even at greater concentrations. These results are a proof of concept that supports functional qualitative and quantitative effects associated with a skewed composition of RA MPs.
DISCUSSION

The study reported here shows relevant differences in the number and composition of circulating MPs in RA patients compared with HC and individuals with traditional CV risk factors. Additionally, the main finding is that this altered MP pool is the result of disease features as well as traditional CV risk factors, as supported by a PCA approach. This distinct profile could underlie the detrimental effects exhibited by RA-MPs in endothelial cell assays, presumably by promoting an endothelial activation status. The results presented here support the use of MPs as biomarkers of endothelial damage in RA patients, with potential use for clinicians in decision making and CV risk stratification.

In line with our results, other studies performed with RA patients showed increased MP counts, total or from specific cell subsets, associated with some clinical features [27-29]. However, evidence is limited and results are heterogeneous and even contradictory, probably because of the different methodologies used. We developed an MP total labelling strategy so as to (i) identify virtually all MPs and not only those derived from apoptosis and (ii) avoid the technical drawbacks associated with annexin V staining. Recent evidence suggests the relevance of annexin V-negative MPs in many conditions. Actually, Nielsen et al.
reported that in Systemic Lupus Erythematosus patients only annexin V-negative MPs were increased and associated with clinical parameters [19], whereas other authors have reported that most MPs do not express annexin V [17,18,30], so limiting the study to this subset could bias the conclusions. On the other hand, freezing steps and several other factors are thought to affect the annexin V-binding fraction [26,31-33], thus making comparisons between different studies difficult and emphasizing the need for alternative protocols. In this sense, different reagents have been published [18,34-37] to overcome the disadvantages of annexin V. Because of their role in inflammation, angiogenesis and vascular reactivity, MPs have been extensively studied in RA and other rheumatic diseases [1]. However, this is the first study in which a PCA was used as an integrative tool to analyse MP counts, thus avoiding multiple-testing concerns and supporting previous results. This work clearly indicates that total MP counts can be explained by traditional CV risk factors in individuals at risk (RA and CVR subjects), but RA patients exhibited a profile of increased MP subsets that is only explained by disease-specific factors. Results from CVR individuals as a positive CV-risk control allow us to confirm that MP disturbances in RA are specific to the disease itself and independent of comorbidities. Additionally, although PCA component scores are independent in the whole RA group, the rheumatic- and traditional CV-related components were positively correlated in patients with a history of CV events (r = 0.602, P = 0.008; n = 18) but not in the CV-free group (r = 0.032, P = 0.759), revealing the relationship between these two features. These results are in line with recent evidence on the interplay of traditional CV risk factors and disease parameters in RA [38,39]. Moreover, the disease-specific factors associated with MP subsets in the present study (disease duration, DAS28 and age at diagnosis) have been associated with CV disease in RA [38,40,41], thereby supporting the relevance of MPs as biomarkers of endothelial activation and damage in RA patients.

Figure 4. In vitro effects analysed according to different MP subsets. The in vitro effect of MPs (measured as the number of branching points) was analysed separately according to the different subsets studied. RA MPs (triangles, grey full line) exhibited a greater detrimental effect on branching point numbers than those from HC (circles, black line) and CVR (squares, broken grey line) even at the same concentration, although differences between the groups varied when PMPs and MoMPs were compared with GMPs and Tang-MPs. The vertical dotted line represents the median value found in HC.

Another interesting finding of this work is the role played by MPs derived from Tang cells, a recently discovered T-cell subset that enhances endothelial repair by cooperating with endothelial progenitor cells [42]. Although no differences in the frequency of MPs derived from T-lymphocytes were detected, both the frequency and the absolute number of Tang-MPs were increased in RA patients. Interestingly, this increased Tang-MP formation could account for the decreased Tang cell counts previously reported in RA [21]. Moreover, our research revealed a DAS28-dependent Tang-MP shedding, thus linking DAS28 with impaired endothelial repair. Actually, the disease parameters positively associated with Tang-MPs in this work were the same as those found to be negatively associated with Tang frequency [21]. Consequently, Tang-MPs could be considered a surrogate biomarker of endothelial damage and vascular repair failure. Furthermore, the association between TNFα, a proapoptotic cytokine increased in RA, and Tang-MPs reinforces this hypothesis and the link between disease-specific parameters and MP release. Again, the finding that the presence of traditional CV risk factors disturbs this association confirms the interplay between traditional CV risk factors and disease features.
In spite of the striking increase in total MPs in patients, there were no differences compared with controls in the PMP counts, suggesting that the RA-specific MP profile may not simply be due to a general MP increase; rather, specific mechanisms targeting different cell populations may be implicated. Accordingly, no associations were detected between PMPs and the PCA components, whereas EMPs, GMPs and Tang-MPs correlated positively with disease-related parameters. Another plausible explanation is that therapies could interfere with platelet function; however, no effect of the concomitant medications was observed, not even when untreated patients (allowed to use NSAIDs) were analysed, thus excluding a confounding effect of drug usage on platelet activation. Hence, we could attribute these results to the analysis performed in our study. In fact, increased PMP counts in RA were observed when annexin V binding was used [28,29]; however, when alternative procedures were performed, opposite results were obtained [19,29]. This led us to hypothesize that the total labelling protocol performed in this work could mask the differences in annexin V-positive PMPs due to an elevated number of negative events. Therefore, although possible platelet activation during PPP isolation cannot be ruled out, our functional assays indicated that the in vitro detrimental effects of MPs from RA patients depend on qualitative alterations in PMPs rather than differences in absolute counts, contrary to what was observed with Tang- or GMPs (Figure 4).
The fact that the RA MP profile can only be explained by disease-specific features leads us to think that these MPs may have a role in RA pathogenesis. Recent studies analysing the MP proteome support this idea [43]. Accordingly, a differential 'MP signature' was found in synovial fluid from RA patients compared with other arthritides [44]. Thus, in this pathological situation, MPs could be acting in a vicious circle: disease-related cell injury could generate an RA-specific MP pool, which in turn might worsen specific clinical features, for instance by damaging the vascular endothelium, with a subsequent increase in CV risk. Accordingly, MPs from RA patients have been reported to modulate chemokines and cytokines from synoviocytes [45], thus probably amplifying inflammatory responses. Finally, the existence of an RA-specific MP profile was supported by our in vitro assays, since they revealed that the effects on endothelial cells depend on the MP pool rather than on the concentration (even within physiological concentrations). Specifically, RA-MPs were able to inhibit HMEC-I Matrigel tube formation in a dose-dependent manner, whereas MPs derived from HC and CVR individuals failed to exhibit similar effects. This detrimental effect may be due to the promotion of endothelial activation, as indicated by flow cytometry analysis of endothelial markers. In fact, endothelial activation has been associated with impaired endothelial function in a variety of conditions [46], including RA [47]. A role for MPs in CV disease and endothelial function has been previously reported [48,49], but this is the first study in which MPs isolated from RA patients have been assayed. Despite providing limited evidence, these results could support the role of MPs as active players in RA pathogenesis, proving worthy of further research.
However, it should be noted that not all MPs are proatherogenic. Actually, some groups have revealed anticoagulant and protective effects of some MP subsets [50][51][52], in contrast with the procoagulant and deleterious effects reported by others [10,27,45,48,53]. Interestingly, these diverse effects could be attributed to the differential exposure of mediators such as activated protein C, tissue factor or von Willebrand factor among MP subsets. Although such effects cannot be excluded with the actual data, our results from in vitro assays point to a pathogenic role of MPs in RA patients. Some remarks about the current study should be made. First, despite covering the same age range, HC were younger than RA and CVR patients. However, no associations between age and MP counts were detected in any group. Additionally, age was included in the PCA in order to correct for potential differences. On the other hand, the lack of a standardized protocol to determine cell-derived MPs is the main limitation in the field of MPs. The fact that we have developed a new protocol enabling the determination of total MPs could make the comparison of our results with other studies difficult, since usually only apoptosis-derived MPs were analysed. However, this is a common problem in the field, and a balance between innovative methods and potential results should be considered. Nevertheless, our findings are relatively similar to others obtained by different methods. Finally, although our data did not allow direct determination of the detrimental effects of each specific MP subset, the in vitro results suggest differences between them. Further studies are needed to confirm this hypothesis; however, MP separation procedures from plasma have not been implemented yet. Furthermore, circulating MPs are present in vivo as a (heterogeneous) group, so 'individual' in vitro effects of a single population need to be considered with caution. In conclusion, the findings of the present study reveal that RA
patients exhibited not only increased MP counts but also a qualitatively altered MP profile that is associated with disease-specific and CV risk factors. Moreover, this MP profile could disturb the vascular endothelium. In addition, increased Tang-MPs, probably associated with the DAS28-dependent Tang decrease, could have a role in endothelial repair failure in these patients, thereby supporting the use of both Tang cells and Tang-MPs as biomarkers of endothelial repair failure.

CLINICAL PERSPECTIVES

- It is known that the number of cell-derived microparticles (MPs) is associated with endothelial dysfunction and impaired vascular repair.
- RA patients exhibit a specific MP profile that is associated with both disease-specific features and traditional CVR factors. This specific profile could underlie the detrimental effects on endothelial cells in vitro, presumably by promoting endothelial activation.
- MPs could be considered as biomarkers of endothelial damage in RA patients, with potential use for clinicians in decision making and CV risk stratification.

Figure 1. Gating strategy for MP analysis. (A) Latex beads were used to calibrate FSC and SSC logarithmic gain and to design the MP gate. Analysis of the MP suspension revealed that most MPs had this size. A PRP sample was prepared to confirm the MP gate. (B) Total MPs were defined as those events VPD+. The threshold was adjusted with an unstained MP suspension. (C) Gating strategy for PMPs (CD41+ MPs), EMPs (CD146+ MPs), GMPs (CD66b+ MPs), MoMPs (CD14+ MPs) and Tang-MPs (CD3+ MPs were first gated and evaluated by their CD31 expression; those CD3+CD31+ double-positive events were defined as Tang-MPs).
Figure 2. Total and specific MP subsets in patients and controls. Absolute numbers of total MPs and of PMPs, EMPs, MoMPs, GMPs and Tang-MPs were analysed in 114 RA patients, 72 individuals with traditional CVR factors and 33 HC. Horizontal lines represent median and interquartile range. Differences were assessed by Kruskal-Wallis with Dunn-Bonferroni multiple comparison tests. The Kruskal-Wallis P value for each subset is indicated at the bottom. Only significant P values from multiple comparison tests are indicated.

Figure 3. In vitro effects of MPs isolated from patients and controls. HMEC-I cells were cultured alone (negative control, NC) or in the presence of MP pools at different concentrations isolated from RA patients, individuals with traditional CVR factors or HC. (A) Representative microphotographs (×40) of HMEC-I cells cultured on Matrigel-coated plates to perform angiogenic assays. (B) Branching point and tube numbers identified in the different cultures (n = 8). (C) Flow cytometry analysis of CD62E, CD144 and VEGFR2 expression on HMEC-I cells cultured in the presence of the different MP pools (n = 4). Bars represent means ± S.D., and differences between each treatment and the negative control were assessed with one-way ANOVA and Dunnett post-hoc test.

Figure 4. In vitro effects analysed according to different MP subsets. The in vitro effect of MPs (measured as number of branching points) was analysed separately according to the different subsets studied. RA MPs (triangles, grey full line) exhibited a greater detrimental effect on branching point numbers than those from HC (circles, black line) and CVR (squares, broken grey line) even at the same concentration, although differences were observed when PMPs and MoMPs were compared with GMPs and Tang-MPs. The vertical dotted line represents the median value found in HC.
Table 1. Demographic and clinical parameters of RA patients. Categorical variables are summarized as numbers (percentage), and continuous variables as medians (interquartile range) unless otherwise stated *[median (range)]. a P value <0.01 (HC compared with RA: P = 0.696). b P value <0.01 (CVR compared with RA: P = 0.460). DAS28, disease activity score (28 joints); HAQ, Health Assessment Questionnaire; RF, rheumatoid factor; αCCP, anti-cyclic citrullinated peptide antibody; ANA, antinuclear antibody. Differences were assessed by Kruskal-Wallis and Dunn-Bonferroni multiple comparison tests and chi-square test, as appropriate.

Table 2. Component loadings from PCA. Variables included in the analysis and their corresponding loading on each component are shown. Variables were assigned to each component based on loadings >0.5. Loadings in bold indicate the component on which a variable loaded the highest.

Table 3. Multivariate regression analyses of MP subsets in RA patients. Multivariate regression analyses of MP subsets (as dependent variables) and the four PCA components (as predictors) associated with specific disease variables: rheumatic-related (Rhe-rel), traditional CV-related (tCV-rel), disease duration-related (Dur-rel) and inflammation-related (Infl-rel). Results are expressed as β coefficient and (P value) for each PCA component. Significant coefficients are highlighted in bold.
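A minimal sketch of the PCA grouping behind Tables 2 and 3, assuming standardized clinical variables and the loading >0.5 assignment rule described above. The data and variable names here are synthetic placeholders, not the study dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic placeholder data: 114 patients x 6 clinical variables
names = ["DAS28", "disease_duration", "age", "TNFa", "CRP", "BMI"]
X = StandardScaler().fit_transform(rng.normal(size=(114, len(names))))

pca = PCA(n_components=4)
scores = pca.fit_transform(X)  # component scores, usable as regression predictors

# Loadings: correlations between the original (standardized) variables
# and the extracted components
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

# Assign each variable to the component on which it loads highest,
# keeping only assignments with |loading| > 0.5 (the rule from Table 2)
assignment = {name: int(np.argmax(np.abs(loadings[i])))
              for i, name in enumerate(names)
              if np.max(np.abs(loadings[i])) > 0.5}
```

The component scores would then serve as the predictors in the multivariate regressions of Table 3, with each MP subset count as the dependent variable.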
Orbital Abscess Developed Apart From Paranasal Sinusitis and Dacryocystitis in Fibrous Dysplasia A 48-year-old man visited the emergency department of our hospital with swelling of the left upper and lower eyelids that had begun the day before. On the first examination, he had severe swelling of the left upper and lower eyelids, proptosis, and chemosis. The left intraocular pressure was 33 mmHg. Computed tomographic images showed an orbital abscess in the anterosuperolateral orbital space, maxillary and ethmoidal sinusitis, and dacryocystitis. The orbital abscess was not contiguous with the maxillary and ethmoidal sinusitis or the dacryocystitis. A ground-glass appearance was seen in the frontal, maxillary, and ethmoid bones, and most of the space of the frontal sinus was obliterated due to the expansion of the frontal bone. Emergent drainage of the orbital abscess, dacryocystorhinostomy, and endoscopic sinus surgery were performed under general anesthesia. Intravenous tazobactam/piperacillin was administered. A culture of the sinus pus and orbital abscess showed growth of Streptococcus intermedius (2+). At one month postoperatively, there was no recurrence of the orbital abscess, paranasal sinusitis, or dacryocystitis. Introduction Fibrous dysplasia is a rare skeletal disorder characterized by fibrous replacement of the bone marrow [1]. This entity is not associated with ethnic or sex-related differences [2]. The most common site of involvement in the paranasal sinus skeleton is the sphenoid bone, followed by the ethmoid and maxillary bones [1]. Fibrous dysplasia can obstruct the paranasal sinus ostium, causing acute paranasal sinusitis [3,4]. This infection rarely spreads directly into the orbit, resulting in orbital cellulitis and orbital abscess [3][4][5][6]. There have been only two reported cases of fibrous dysplasia with an orbital abscess extending directly from paranasal sinusitis [3,4].
Here, we report a case of fibrous dysplasia with an orbital abscess that developed apart from paranasal sinusitis and dacryocystitis. Case Presentation This study was conducted in accordance with the tenets of the Declaration of Helsinki and its later amendments. Written informed consent for publication of an identifiable face photo was obtained from the patient. A 48-year-old man presented to the emergency department of our hospital on a weekend night with swelling of the left upper and lower eyelids for one day. One month before his referral to us, he had received antibiotics for paranasal sinusitis and orbital cellulitis for two weeks at another hospital. He had previously been clinically diagnosed with fibrous dysplasia. He had no history of any immunocompromising disease or facial trauma. On the first examination in the emergency room, there was difficulty in opening the left eye due to severe swelling of the left upper and lower eyelids and proptosis (Figure 1A). The light reflex was prompt in both eyes. The left eye was positioned in inferior gaze and could not move in the superior direction. The left intraocular pressure measured using iCARE® (Vantaa, Finland) was 33 mmHg. Slit-lamp examination revealed severe chemosis in the left eye. Computed tomography (CT) images showed an orbital abscess in the anterosuperolateral orbital space, maxillary and ethmoidal sinusitis, and dacryocystitis (Figures 1B, 1C). The orbital abscess had developed apart from the maxillary and ethmoidal sinusitis and the dacryocystitis. A ground-glass appearance was seen in the frontal, maxillary, ethmoid, and zygomatic bones, consistent with the previous diagnosis of fibrous dysplasia. Most of the space of the frontal sinus was obliterated due to the expansion of the frontal bone. Small cystic changes were demonstrated in the frontal and maxillary bones, and one cyst opened toward the superior orbit. The lacrimal sac fossa had a partial defect.
A blood test revealed a high white blood cell count (10,600/μL) and elevated C-reactive protein (14.50 mg/dL). After hospital admission, emergent drainage of the orbital abscess, dacryocystorhinostomy, and endoscopic sinus surgery were performed under general anesthesia. Polyps in the middle nasal meatus were removed, and the pus was drained. The thickened uncinate process and ethmoid sinus septa were removed using a drill with a diamond burr to open the posterior ethmoid sinus and superior nasal meatus. The lacrimal sac was opened, and a lacrimal tube was inserted. A sub-brow incision and lateral canthotomy with cantholysis were performed to drain the orbital abscess. We confirmed that there was no connection between the superolateral orbital space and the paranasal sinuses. A drain was placed in the superolateral orbital space. The lateral canthus was left unsutured to keep the intraocular pressure reduced. Intravenous tazobactam/piperacillin was administered, and the orbital space was irrigated through the drain. The culture of the pus, obtained five days postoperatively, showed growth of Streptococcus intermedius (2+). As this microorganism showed high drug sensitivity to tazobactam/piperacillin, we continued the antibiotic, as well as the irrigation, until the ninth postoperative day. As S. intermedius has been isolated from patients with periodontitis, the patient was referred to a dentist; however, the relationship between the intraoral condition and the paranasal/orbital infection remained unclear. Nine days after surgery, the lateral canthus was sutured and re-fixed, and 13 days after surgery the patient was discharged from the hospital. At one month postoperatively, there was no recurrence of the orbital abscess, paranasal sinusitis, or dacryocystitis. The intraocular pressure had decreased to 19 mmHg. Vision was normal, and extraocular muscle motility had improved.
Discussion We report a patient with fibrous dysplasia who showed an orbital abscess, maxillary and ethmoidal sinusitis, and dacryocystitis. There was no direct connection between the orbital abscess and the paranasal sinusitis. Although dacryocystitis can cause an orbital abscess [7], the orbital abscess was far away from the dacryocystitis in this case. There have been only two reported cases of fibrous dysplasia with an orbital abscess caused by the direct spread of paranasal sinusitis [3,5]. A possible etiology in this case was indirect hematogenous spread of ethmoidal sinusitis into the anterosuperolateral orbital space [4]. Another possible etiology was secondary transformation of aneurysmal bone cysts in the frontal bone, shown as cystic lesions on CT [3], although a biopsy of the frontal bone was not performed. One cyst opened toward the superior orbit and could have allowed an orbital hematoma to accumulate [3], which might have been the source of infection in this case. Urgent drainage of an orbital abscess is required to prevent the development of serious complications, including visual loss and other lethal conditions such as cavernous sinus thrombosis, meningitis, and cerebral abscess [8]. Also, broad-spectrum intravenous antibiotics should be given until the results of culture tests are obtained [8]. Our treatment plan followed this standard treatment regimen. Furthermore, the lateral canthal ligament was left disinserted to reduce both the intraocular and retrobulbar pressures in this case [9]. Conclusions In conclusion, we report a rare case of fibrous dysplasia and orbital abscess without contiguous spread of paranasal sinusitis. Indirect hematogenous spread of ethmoidal sinusitis and secondary transformation of aneurysmal bone cysts in the frontal bone may be possible etiologies of the orbital abscess. Urgent surgical and medical treatment is necessary to prevent the development of serious complications.
Early clinical effects of the Dynesys system plus transfacet decompression through the Wiltse approach for the treatment of lumbar degenerative diseases Background This study investigated the early clinical effects of the Dynesys system plus transfacet decompression through the Wiltse approach in treating lumbar degenerative diseases. Material/Methods 37 patients with lumbar degenerative disease were treated with the Dynesys system plus transfacet decompression through the Wiltse approach. Results Results showed that all patients healed from surgery without severe complications. The average follow-up time was 20 months (9-36 months). Visual Analogue Scale and Oswestry Disability Index scores decreased significantly after surgery and at the final follow-up. There was a significant difference in the height of the intervertebral space and the intervertebral range of motion (ROM) at the stabilized segment, but no significant changes were seen at the adjacent segments. X-ray scans showed no instability, internal fixation loosening, breakage, or distortion during follow-up. Conclusions The Dynesys system plus transfacet decompression through the Wiltse approach is a therapeutic option for mild lumbar degenerative disease. This method can retain the structure of the lumbar posterior complex and the motion of the fixed segment, reduce the incidence of low back pain, and decompress the nerve root. Background Spinal fusion has been used to treat lumbar degenerative diseases for many years, but as follow-up times have lengthened, complications such as back muscle atrophy resulting from extensive dissection, loss of normal spinal function, and degeneration of adjacent segments have been reported frequently [1]. In recent years, the Dynesys system has come into clinical use.
With a design targeted to achieve instant stability while retaining motion of the fixed segments, the Dynesys system plus transfacet decompression through the Wiltse approach can retain the posterior ligamentous complex and reduce the damage to bony structures to the largest extent [2]. From June 2009 to June 2012, we treated 37 patients with lumbar degenerative disease using the Dynesys system plus transfacet decompression through the Wiltse approach and obtained satisfactory effects at the initial stage. General data From June 2009 to June 2012, 37 patients with lumbar degenerative disease were enrolled in this study. There were 21 males and 16 females with an average age of 40.5 years (age range: 27-52 years). The preoperative diagnoses included lumbar spinal stenosis (6 cases) with obvious backache and intermittent claudication accompanied by unilateral or bilateral leg pain, and lumbar intervertebral disc herniation (31 cases) with backache and lateral leg pain. The disease course was 20.40±12.36 months (range: 8-36 months). All patients had undergone conservative treatment for 4-8 weeks without any effect and had never had lumbar surgery. Thirty cases involved only 1 segment, while 7 cases involved 2 segments. There were 18 cases at L4/5, 12 cases at L5/S1, and 7 cases at both L4/5 and L5/S1. Patients with spondylolisthesis of degree >II°, scoliosis of >10°, severe osteoporosis, severe obesity, or BMI >35 kg/m² were excluded. All patients underwent routine preoperative examinations, including X-ray, CT, and MRI, and postoperative X-ray rechecking of the lumbar vertebrae. Visual Analogue Scale (VAS) and Oswestry Disability Index (ODI) evaluating standards were applied to evaluate the therapeutic effect before and after surgery. Position and anesthesia All patients were placed in the prone position under general anesthesia, and pillows were put under the shoulders and bilateral thoracic and abdominal walls.
The Wiltse approach Based on the preoperative MRI images, either two vertical incisions were made 1-2 cm lateral to the spinous processes, or a single incision was made over the spinous process and the lumbodorsal fascia was opened bilaterally to reach the space between the longissimus and multifidus. The position of the longissimus-multifidus interval was identified through the skin incisions. Blunt finger dissection was performed between the longissimus and multifidus, proceeding from superficial to deep toward the facet joints. Some soft tissues over the upper and lower facet joints, transverse processes, and vertebral plates were dissected so as to clearly expose these structures without strongly pulling the paravertebral muscle groups or other soft tissues. Dynesys fixation A pedicle screw was inserted at the point where the midline of the transverse process meets the lateral margin of the superior articular process of the facet joint, or slightly lateral to it, under the guidance of a C-arm X-ray machine. The distance between the upper and lower pedicle screws was then measured while keeping the lumbar spine in lordosis with mild distraction. A tube-like spacer was selected based on the measured length. Lastly, the polyester cord was fitted through the polyester tube and between the upper and lower pedicle screws, then tightened and locked with small set screws. Decompression Through the Wiltse approach, after the segments were targeted and confirmed intraoperatively with a C-arm X-ray machine, soft tissues were removed from the surface of the upper and lower facet joints, transverse processes, and vertebral plates, exposing these structures without strongly pulling the paravertebral muscle groups or other soft tissues.
The ligamentum flavum was exposed at the intervertebral soft-tissue space between the upper and lower articular processes, dissected with a small spatula along the edges of the superior and inferior vertebral plates as well as the medial edges of the upper and lower articular processes, and excised to enter the spinal canal while sparing the posterior bony structures between the upper and lower vertebral plates. Then a sharp osteotome of less than 1 cm was used to remove some bone along the medial edge of the inferior articular process. Thereby, the hyperplastic and cohesive articular surface of the superior articular process of the inferior vertebra, rotated towards the coronal plane, was exposed. Beneath it were the narrowed lateral recess and the mouth of the nerve root canal, which were decompressed. The removal range was determined based on the following standard: the exiting and traversing spinal nerve roots in this region could be effectively exposed, the compressed nerve roots could be released and decompressed, and the intervertebral disc space could be clearly exposed. In most cases, only the parts that had rotated from the sagittal plane towards the coronal plane, showing hyperplasia and hypertrophy and expanding towards the midline during the degenerative process, were excised precisely, not the entire facet joints. A drainage tube was placed, and the incisions, especially in the bilateral lumbodorsal fascia, were sutured layer by layer. Postoperative management The drainage tube was kept in place for 24-48 h; antibiotics were administered for 1 day; and the stitches were removed 12-14 days after the operation. To reduce nerve root adhesion, the patients were guided to perform straight leg-raising exercises after the drainage tube was removed. Two weeks later, patients whose wounds had healed properly could perform five-point support lumbar extension exercises to promote the recovery of lumbar muscle strength.
The time to get out of bed was determined based on the bony damage during the operation and the quality of the internal fixation; in this study it averaged 3.5 days (range: 3-7 days) after the operation. After getting out of bed, the patients performed low-intensity activities on the ground under the protection of a gait belt, with more activities gradually added. Three to 4 weeks after the operation, patients could move freely on the ground under the protection of a gait belt and underwent radiography and follow-up. Three months after the operation, radiographs were repeated for review; with good results, patients could return to normal activities without the protection of a gait belt. Lateral lumbar X-rays were obtained after the operation and during follow-up. Evaluation method Imaging evaluation: patients received routine lumbar X-rays (anteroposterior and lateral views and flexion-extension views), CT, and MRI of the lumbar spine before the operation. After the operation (after the drainage tube was removed) and during follow-up, they received lumbar X-rays (anteroposterior and lateral views and flexion-extension views); CT and MRI were also performed if necessary. Height of intervertebral space: a lateral X-ray of the lumbar spine was taken, and the height of the intervertebral space at the operated segments and the adjacent segments (the upper and lower segments) was measured (in patients with L5/S1 as the operated segment, only the upper adjacent segment was measured). The average of the anterior, central, and posterior heights was taken as the height of the intervertebral space of that segment. Intervertebral range of motion (ROM): the ROM of the operated segment and the adjacent segments was measured on X-ray films (flexion-extension views). VAS and ODI evaluating standards were applied to evaluate the therapeutic effect.
Preoperative, postoperative, and final follow-up clinical signs, symptoms, and sphincter function were evaluated. Statistical analysis Statistical analysis was conducted with SPSS 19.0; comparisons of VAS and ODI scores before and after the operation and at the final follow-up, as well as of the height of the intervertebral space and the intervertebral ROM of the operated and adjacent segments, were performed by paired t-test. A difference with P<0.05 was considered significant. Results No intraoperative complications occurred in any patient. The operative time was 130±28 min, and the intraoperative blood loss was 275±45 ml. The drainage tube was removed 48 h after the operation, with a postoperative drainage volume of 151±55 ml. The average follow-up time was 20 months (9-36 months). Compared with preoperative values, the VAS and ODI scores decreased significantly after surgery and at the final follow-up (P<0.05), while the difference between the postoperative and final follow-up scores was not statistically significant (P>0.05) (Table 1). Compared with preoperative values, the height of the intervertebral space at the operated segments (L4/L5 and L5/S1) increased significantly after the operation (P<0.05), and the intervertebral ROM at the operated segment was obviously reduced after surgery (P<0.05) (Table 2). However, no significant changes were seen in the height of the intervertebral space or the intervertebral ROM at the upper and lower adjacent segments (P>0.05) (Table 3). Postoperative X-rays showed no signs of lumbar instability, pedicle screw loosening, breakage, or distortion in any patient (Figure 1). Advantages of the Wiltse approach Surgical interventions have been found to restore function, decrease pain, and enhance quality of life in properly selected patients with lumbar degenerative diseases [3].
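The paired pre/post comparisons described in the statistical analysis above can be sketched as follows; the VAS scores here are hypothetical illustration values, not the study data.

```python
from scipy import stats

# Hypothetical pre- and post-operative VAS scores for the same patients
vas_pre  = [7.5, 8.0, 6.5, 7.0, 8.5, 7.8, 6.9, 7.2]
vas_post = [2.5, 3.0, 2.0, 2.8, 3.5, 2.9, 2.2, 2.6]

# Paired t-test, as used in the study for VAS/ODI, disc height and ROM
t_stat, p_value = stats.ttest_rel(vas_pre, vas_post)
significant = bool(p_value < 0.05)   # significance threshold used in the study
```

The same paired comparison would be repeated for ODI, intervertebral-space height, and segmental ROM at each time point.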
Posterior lumbar interbody fixation and fusion after decompression is the standard method used to treat lumbar degenerative diseases. However, traditional surgery uses a posterior midline approach that results in muscle injury and loss of innervation due to dissection and traction involving a wide range of soft tissues. Worse, this process lasts longer and causes backache and muscle atrophy owing to the specific features of the blood supply, metabolism, and innervation of the paravertebral muscles [4]; postoperative lumbar disability and instability may also appear because the healed scar cannot effectively withstand spinal pressure [5]. To reduce the harm to the paravertebral muscles, Wiltse proposed the intermuscular approach to the lumbar spine in 1968 [6]. In this approach, surgeons can reach the transverse processes, articular facets, and other structures through the space between the multifidus and longissimus without stripping the muscle insertions. This approach also has little impact on the blood supply and innervation of the paravertebral muscles. In summary, its advantages include reduced operative bleeding, muscle injury, operation-related avascular necrosis, release of postoperative inflammatory factors, and incidence of postoperative backache [7,8]. Moreover, this approach leads directly to the articular surface and transverse processes, and can better meet the requirements of the Dynesys system for screw placement in the articular process and the lateral joint without strong traction on the paravertebral muscles, compared with a median incision involving muscular traction. Meanwhile, with this approach, the double incision can be changed into a single incision beside the spinous process.
Based on the obesity of the patients and the distance between the muscular-space inlet and the midline measured by T2-weighted MRI before the operation, two incisions are generally made in relatively obese patients with L5/S1 involvement, while a single incision is usually made for L4/5 [9]. Above all, the advantages of the Wiltse approach include less bleeding, less damage to the back muscles, and better exposure of the articular processes for placing Dynesys screws. Features of transfacet decompression Decompression with fixation and fusion is the "golden rule" of spinal surgery. Laminectomy has long been a standard approach to treat lumbar degenerative disease, but it can damage the posterior structures of the lumbar spine, affect spinal stability, cause backache due to postoperative scar adhesions, and lead to failed back surgery syndrome. According to the clinical follow-up by Kawaguchi et al. [10], 10% of patients had spondylolysis. The study of Sen et al. [11] suggested that the incidence of failed back surgery syndrome caused by epidural scar adhesion was 8-24%. Based on a further understanding of lumbar structure and lumbar degenerative disease, the vertebral plate exerts no pressure on the nerve root in the spinal canal, but a degenerative articular process with hyperplasia, looseness, and cohesion will compress the nerve root passing through the nerve root canal. In view of this pathological change, moving from simple extensive decompression to limited precise decompression, vertebral plate incision is unnecessary in most cases, and it is better to remove part of or the entire facet joint for a limited but effective decompression [12].

Figure 1. A 35-year-old male with low back and right lower extremity pain for 12 months, aggravated for one month. (A-C) Preoperative X-rays showed lumbar degeneration, L4/5 instability, L5/S1 disc space narrowing, and limited lumbar flexion and hyperextension. (D) MRI showed an L4/5 disc bulge and L5/S1 disc herniation. (E) MRI showed the L4/5 disc bulging to the left posterior. (F) MRI showed the L5/S1 disc herniating to the right posterior with right nerve root compression. (G, H) Postoperative X-rays showed L5/S1 decompression and Dynesys fixation. As there were no clinical symptoms at L4/L5 and only mild L4/L5 instability, dynamic fixation was applied. (I, J) X-rays 9 months after surgery showed that the L4/5 and L5/S1 segments retained some motion, with limited lumbar flexion and hyperextension. (K) Single incision (for underweight patients and L4/5). (L) Two incisions (for obese patients and L5/S1).

The advantages of transfacet decompression are that the decompression extent and range are determined based on a comprehensive analysis of the symptoms and signs of the patient and the relevant imaging materials. It achieves full decompression of the disc-ligamentum flavum space, the lateral recess, and the mouth of the intervertebral foramen through which the nerve roots pass, and can also release the nerve root completely by excising the zygapophyseal joint, exposing the intervertebral foramen, and removing intraspinal compression of the nerve roots (osteophytes, thickened and calcified ligamentum flavum, and protruding intervertebral disc). In terms of structural protection, it retains the spinous process, interspinous ligaments, muscle attachments, and lateral joint capsule, appropriately maintains the midline structure, enhances spinal stability, relieves postoperative backache, and improves recovery after transfacet decompression. Combined with the Wiltse approach, it can also effectively protect these structures and reduce the incidence of postoperative backache by exposing the articular process directly without excessive stripping and traction of the paravertebral muscles. A randomized controlled trial with 5-year follow-up by Hallett et al.
[13] suggested that transfacet decompression and fusion combined with posterior internal fixation gave better results in backache scores, the SF-36 scale, and Roland-Morris Disability Questionnaire scores. In our opinion, this approach not only leads directly to the zygapophyseal joint but also allows its removal to be completed after the Dynesys screws have been placed lateral to the articular process. Additionally, it retains the bone at the lateral border and the ventral joint capsule, thereby preserving the midline structures and joint-capsule stability without exposing the superior nerve root in the operative field. The superior and medial parts of the superior articular process were excised without removing the entire process, and the region and extent of excision can be adjusted according to the preoperative symptoms, the imaging findings, and the compression observed during the operation.

Dynesys non-fusion fixation

With the development of internal fixation and fusion technology, the spinal fusion rate now exceeds 95%. However, improved fusion has not always been accompanied by better clinical outcomes. Limited lumbar motion, altered biomechanics after fusion, lumbar instability, and pseudarthrosis formation can accelerate degeneration of the adjacent segments [14]. According to Mulholland [15], it is preferable to limit segmental motion within a certain range so as to maintain an approximately normal loading capacity. Dynamic fixation was therefore proposed to stabilize the spine, improve loading capacity, retain part of the motion of the fixed segments, and prevent instability and degeneration of adjacent segments. As a typical embodiment of this concept, the Dynesys system uses titanium alloy pedicle screws connected by a transparent polyurethane spacer tube and a polyester cord. It retains partial motion of the fixed segments and unloads the zygapophyseal joints and intervertebral disc [2].
First, regarding biomechanics, Schulte et al. [16] showed that decompression supplemented with the Dynesys system could effectively limit flexion, extension, and lateral bending of the fixed segment. Gedet et al. [17] reported that the Dynesys system reduced the range of motion (ROM) of the fixed segment in extension, lateral bending, and rotation to 26%, 33%, and 76% of normal, respectively. Second, regarding adjacent-segment degeneration, the cadaver study of Schilling et al. [18] indicated that dynamic fixation markedly reduced the intervertebral disc pressure of the fixed segments without affecting the adjacent segments. Likewise, Cabello et al. [19] performed a six-cadaver study and reported that fixation at L5/S1 reduced the intervertebral disc pressure there by 65% while the pressure at L4/5 increased by 20%; inserting the Dynesys system at L4/5 instead reduced the pressure to 50%, and the pressure at L3/4 increased by only 10%. The Dynesys system therefore unloads the adjacent segments better than rigid fixation. In a randomized controlled trial with 3-year follow-up, Yu et al. [20] compared the Dynesys system with the PLIF approach in terms of clinical outcomes and imaging findings. They found that the Dynesys system better retained vertebral motion, affected the adjacent segments less, and had a lower incidence of degeneration (1/27 vs. 6/26). According to recent follow-up results, the postoperative VAS and ODI scores both declined compared with the preoperative values, and no aggravation of lumbar degeneration was observed on imaging. The fixed segments retained limited motion, but the long-term effect needs further observation. With efficacy equivalent to traditional fixation and fusion, the Dynesys system can also unload the fixed and adjacent segments.
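The ROM figures quoted from Gedet et al. translate into absolute terms with a one-line calculation. The sketch below uses the 26%, 33%, and 76% fractions from the text; the baseline (normal) ROM values in degrees are hypothetical placeholders, chosen only to illustrate the arithmetic.

```python
# Residual range of motion (ROM) of a Dynesys-fixed segment, using the
# fractions of normal ROM quoted from Gedet et al. in the text.
# The baseline (normal) ROM values in degrees are hypothetical examples.
rom_fraction = {"extension": 0.26, "lateral_bending": 0.33, "rotation": 0.76}
normal_rom_deg = {"extension": 10.0, "lateral_bending": 9.0, "rotation": 4.0}  # assumed

# Residual motion in degrees = normal ROM times the retained fraction.
fixed_rom_deg = {m: normal_rom_deg[m] * f for m, f in rom_fraction.items()}
for motion, rom in fixed_rom_deg.items():
    print(f"{motion}: {rom:.2f} deg of {normal_rom_deg[motion]:.1f} deg")
```

Rotation is the least constrained motion (76% retained), consistent with the system's design goal of stabilizing the segment while preserving some mobility.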
The Dynesys system combined with transfacet decompression through the Wiltse approach can effectively protect the posterior structures, reduce operative injury while achieving full decompression, and allow patients to mobilize sooner. It also decreases the incidence of low back pain and has satisfactory clinical effects. It is thus a therapeutic option for lumbar degenerative diseases that integrates the advantages of different techniques.

Conclusions

The Dynesys system plus transfacet decompression through the Wiltse approach is a therapeutic option for mild lumbar degenerative disease. This method retains the structure of the posterior lumbar complex and the motion of the fixed segment, reduces the incidence of low back pain, and decompresses the nerve root. The early clinical effects are satisfactory, but the long-term effect needs further observation.
Analytical Dissection of an Automotive Li-Ion Pouch Cell

Information derived from microscopic images of Li-ion cells forms the basis for research on the function, the safety, and the degradation of Li-ion batteries. This research was carried out to acquire the information required to understand the mechanical properties of Li-ion cells. Parameters such as layer thicknesses, material compositions, and surface properties play important roles in the analysis and the further development of Li-ion batteries. In this work, relevant parameters were derived using microscopic imaging and analysis techniques. The quality and the usability of the measured data, however, are tightly connected to the sample generation, the preparation methods used, and the measurement device selected. Differences in specimen post-processing methods and measurement setups contribute to variability in the measured results. In this paper, the complete sample preparation procedure and analytical methodology are described, variations in the measured dataset are highlighted, and the study findings are discussed in detail. The presented results were obtained from an analysis conducted on a state-of-the-art Li-ion pouch cell used in an electric vehicle that is currently commercially available.

Introduction

Microscopy is a key analytical method that is applied in most research fields to understand various effects that influence the properties or the behavior of components or materials. With reference to automotive batteries, two main fields of application can be distinguished. The first field involves the visualization of chemical processes that occur within the battery during operation, which supports the identification and the understanding of the mechanisms involved. The information gathered in this field enables researchers to find solutions to problems and improve these batteries. It is particularly useful during the early development of batteries.
In this field of application, very small samples are typically analyzed in great detail to obtain the best possible quality of data. For such an analysis, special half cells [1] are also built that give researchers optical access to very small-scale effects. Special sample generation methods are applied that allow chemical reactions to occur; this enables, for instance, an in situ analysis of the formation of new layers within the battery during operation [2][3][4]. The second field involves the collection of microscopic images of the surfaces of single components or their cross-sections, which provide much information essential to the derivation of the chemical or the mechanical properties of the cells. The latter is necessary to determine the loading thresholds of batteries and thus integrate them safely into an electric vehicle. In the following text, some examples of such data are described briefly. Numerical finite element models of Li-ion batteries have recently been developed with different levels of detail to, e.g., estimate the mechanical load that can be applied to a cell and its associated risk of failure during a crash. In such models, the thickness of the single battery layers is crucial information that is typically derived from microscopic images [5,6]. In addition, microscopic images support the understanding of the mechanical properties of the layers in that they provide important information about their microstructure (e.g., the anisotropy of the separator [6]). This microstructural information can be used as input to define mechanical testing scenarios and derive the mechanical properties of battery components so that they can be translated into a working simulation model. Some battery properties that mainly influence electrical performance can also be visualized and understood using microscopic imaging; examples include the porosity of the separator [7,8] and the grain size of the active material (AM) [9].
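For the mechanical models mentioned above, layer thicknesses enter directly into homogenized stiffness estimates. A minimal sketch (not the models of refs. [5,6]) is the series (Reuss) rule of mixtures for the through-thickness modulus of a repeating anode/separator/cathode unit; all thickness and modulus values below are hypothetical placeholders.

```python
# Through-thickness effective modulus of a layered stack via the series
# (Reuss) rule of mixtures: 1/E_eff = sum_i (t_i / t_total) / E_i.
# Layer thicknesses (um) and moduli (MPa) are illustrative assumptions.
layers = [
    ("anode",     120.0, 500.0),
    ("separator",  20.0,  50.0),
    ("cathode",   100.0, 800.0),
]
t_total = sum(t for _, t, _ in layers)
# Each layer contributes its thickness fraction divided by its modulus.
compliance = sum((t / t_total) / E for _, t, E in layers)
E_eff = 1.0 / compliance
print(f"effective through-thickness modulus: {E_eff:.1f} MPa")
```

The series bound shows why the thin, compliant separator can dominate the through-thickness response of the whole stack, which is one reason the thickness measurements discussed in this work matter for crash simulation.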
In the literature, the post-mortem analysis of Li-ion batteries by generating microscopic images is a frequently used investigation method [10][11][12]; however, the steps necessary to produce "good" imaging results have not been described in detail. This study was carried out to establish a "best-practice" approach for the post-mortem analysis of samples generated from Li-ion pouch cells. An overview is provided of a method to safely dismount and disassemble battery modules from a battery pack without inducing damage to the battery pouch. A central aspect of this work is a comparison between two different analytical techniques used to measure various battery component parameters, such as layer thickness and particle size. A conclusion was drawn regarding the accuracy of the measured results from two different sample generation methods and their usability in different research fields. This study was carried out on a state-of-the-art Li-ion pouch cell disassembled from a fresh uncycled battery pack, which is applied in an electric vehicle that is currently commercially available. Method In this section, the extraction procedure of cells from the battery modules, the sample generation procedure from individual cells, the sample pre-treatment for microscopic analysis, and the microscopic methodology used are presented and discussed in more detail. From Battery Pack to Cell The battery pack disassembled in this work contained 24 identical battery modules, each of which consisted of eight 41 Ah Li-ion pouch cells with cut-off voltage limits of 2.5 V and 4.2 V. Twelve of the modules were positioned in the front of the battery pack and arranged horizontally with their connection tabs oriented towards the middle of the pack. The other twelve modules were arranged vertically in the back of the battery pack and oriented so that their connectors also faced towards the internal pack structure (Figure 1a). 
Due to the shipping safety regulations [13], the battery pack had a pre-set state of charge (SOC) value of about 30%; therefore, extra precautions had to be taken during the pack disassembly. After disconnecting and dismounting the bus bars, the battery modules were taken out ( Figure 1b) and then disassembled. A general problem typically faced when disassembling modules that contain Li-ion pouch cells is that most of the batteries are glued to each other. An inaccurate disassembly procedure could result in deformation of the pouch surface or even damage to the internal battery structure. This might not only affect the electrical or the mechanical performance of the battery but may also decrease the safety of the battery and result in instant cell failure. To overcome this problem, a safe and clean method to separate all cells was adopted as part of this study. In a first step of the module disassembly process, the caps of the bolts (Figure 2a), which hold the entire structure together, were removed. The stacked cell structure was then divided into two substructures comprising four cells (Figure 2c) by using glue removal solvent and a nylon rope with a diameter of 0.7 mm. The nylon rope was mounted on one of the connecting bolts ( Figure 2b) and slid entirely between the two cell stacks. After the structure was sprayed with the solvent, three minutes were allowed to let the glue dissolve, and the rope was then used to "cut" through the adhesive by manually applying a constant force at its free end. This procedure ensures a clean disassembly process and does not cause any damage to the pouch of the batteries. In a next step, after the four bolts seen in Figure 2c were removed, the battery housing was also dismounted by making use of the same procedure described above. An intermediate step of the module disassembly process can be seen in Figure 2d. 
Two module sub-structures, each containing four batteries, are visible in this figure, together with the compression pads that were integrated in the module, the battery housing, and all connecting bolts. The cells were then separated from each other by first cutting through the battery tabs with a sharp blade (Figure 2e) and then removing the glue between them using the same nylon-rope technique as described above (Figure 2f). It is important to keep the cell tabs as large as possible in order to allow a sufficient contact area for battery electrical characterization and further cycling investigations. The final step in the module disassembly process was the removal of the plastic frame from around each battery. This frame was connected to the pouch at four points, one located at each battery corner (Figure 2g). Careful drilling through these points allowed the frame to be removed by simply pulling it off to the side (Figure 2h).

Figure 2. Battery module disassembly method.
(a) Removal of the caps and bolts that hold the module together; (b) separation of the two four-cell stacks with a 0.7 mm nylon rope; (c) module separated into two parts; (d) module disassembled into its parts (two four-cell stacks, module housing, compression pad, and connection bolts); (e) cutting the tab connection; (f) separation of the cells from each other using a nylon rope; (g) location of the connection points between the plastic frame and the pouch; and (h) removal of the plastic frame.

Generation of Samples for Cross-Sectional Analysis

The purpose of an extensive cross-sectional analysis of the battery is two-fold. On the one hand, the results of this analysis allow the determination of the battery layer thicknesses. On the other hand, information can be gathered about manufacturing details and the layer arrangement of the cells. Two different approaches for sample generation were adopted in this study to evaluate the thicknesses of the battery components and identify some cell characteristics that were otherwise not visible. For the first approach, small samples with a size of 5 mm × 5 mm were cut out from each individual battery layer after the battery disassembly. In order to minimize the influence of the mechanical cutting process, the edges of the samples were post-processed using a broad-ion-beam cutting technique (see Section 2.2.2), resulting in a defect-free cross-sectional sample surface (i.e., lacking residuals or cracks induced by a mechanical grinding process). This enables the precise measurement of the size of the active materials (AM) within the battery layers and the corresponding current collectors (CC) as well as the thickness of the battery separator. During the second approach, the stacked layer structure was investigated.
The results of this investigation not only allow the visualization of the layer arrangement inside the battery but also provide better insight into critical battery areas (e.g., close to the battery tabs or at the cell edges). For this purpose, one cell was deep-discharged to 0 V, and stacked component samples (50 mm × 50 mm) were generated from three different locations in the battery (Figure 3a) by cutting through each layer individually with a ceramic knife. The use of ceramic tools is a safety measure that is usually taken to prevent any internal short circuits during cell opening [14,15]. Representative samples cut out from positions 1 and 2 can be seen in Figure 3b,c, respectively. Stacked layer sample generation was not conducted in a controlled environment. This led to the evaporation of the electrolyte; however, this had no influence on the imaging results, since a focus was placed on identifying interesting manufacturing details rather than determining the precise size of the layers. The specimen post-preparation process for microscope imaging is described in Sections 2.2.1 and 2.2.2.

Post-Preparation of the Stacked Component Sample for Microscopic Investigations

As already mentioned, large areas (i.e., 50 mm × 50 mm) were cut out of the pouch cell to prepare the cross-sections of the complete stack of electrodes (Figure 3). To prevent the stack of electrodes from shifting during the mechanical preparation process, the complete structure was fixed with a specially designed clamp.
The sample was then embedded into a two-component, cold-mounting epoxy resin, which is characterized by its relatively long curing time but excellent adhesion to most materials. The vacuum impregnation of the sample was carried out in a vacuum chamber of a Struers CITOVAC system at low pressure to minimize the formation of preparation artefacts during the mechanical polishing of the cross-sectional region. The embedded structure was polished in a dry state with silicon carbide grinding papers and subsequently coated with a 10 nm-thin, high-purity carbon film for subsequent scanning electron microscopy (SEM) investigations. The sample is shown in Figure 4.

Figure 4. Dry-polished layer stack structure used for identification of cell-specific parameters and layer arrangements. The presented sample was generated from the side of the battery that was opposite to the battery tabs.

Post-Preparation of Single Layers via Broad Ion Beam (BIB) Cutting

For the preparation of the cross-sections of the individual layers, all generated samples from the anode, the cathode, and the separator were pre-treated and glued to a tungsten blade, allowing for a small amount of overhang. This blade was then transferred into a Gatan Ilion Broad Ion Beam Milling system (BIB or slope cutter). The Gatan Ilion ion polisher is used to prepare high-quality planar cross-sections from samples that cannot be polished to the desired quality with classic mechanical methods (for example, porous multilayer samples with a high hardness difference between the layers, such as is found in electrodes from Li-ion batteries or fuel cells) [16,17]. The overhang of the sample was milled with low-energy argon ions that originated from two ion guns positioned at different angles with respect to the sample. This technique was used to create a 1 mm-broad, artefact- and damage-free surface for subsequent investigations [18]. To prevent the heat-induced damage that can be caused by ion bombardment, the specimen temperature was reduced to ensure that the milling area remained close to the ambient temperature during the milling process.

Method for Surface Analysis

For the surface analysis, small samples with a size of 5 mm × 5 mm were cut out from the battery anode, the cathode, and the separator, respectively.
Separator samples were placed immediately into a propylene carbonate (PC) replacement electrolyte solution after extraction from the battery to prevent them from drying out and shrinking. All samples were mounted on SEM sample stubs and coated with a 10 nm-thin, high-purity carbon film using a LEICA EM ACE200 coater [19]. All scanning electron microscopy investigations were performed using a ZEISS Ultra 55 Field Emission Scanning Electron Microscope (FESEM). High-resolution surface characterization of the morphology was conducted with secondary electrons (SE) in the high-vacuum mode at a 5 keV excitation energy, whereas material characterization was done with backscattered electrons (BSE) and energy-dispersive X-ray spectroscopy (EDXS) at an excitation energy of 15 keV. Secondary electrons are inelastically scattered primary electrons with an energy <50 eV. They are emitted from the immediate surface area of the incident primary electron beam, offering the best lateral resolution, in the range of several nanometres [20]. SE images were acquired with an SE2 detector (Everhart-Thornley Detector, ETD) and a Secondary Electron Inlens Detector (SEI), both of which provide topographic contrast. Backscattered electrons are elastically scattered in the field of the atomic nucleus; their energy ranges from >50 eV up to the excitation energy. The higher the atomic number (atomic weight) of a phase or region is, the more electrons are backscattered from this specimen area (material contrast). BSE images acquired with the HDAsB detector (High Definition Angle Selective Backscatter Electron Detector) provide material (W-contrast) and additional orientation contrast (especially in the images of the broad ion beam cuts).

Method for Chemical Analysis

In order to gain information about the chemical composition of the surface, high-energy electrons were directed toward the samples, causing inner-shell electrons to become ionized, thus leaving a vacancy in the inner shell. When this vacancy is subsequently filled by electrons from higher shells, the energy difference can be observed as an X-ray quantum or an Auger electron. These energies are specific for each element and are called characteristic X-ray radiation. The continuous X-ray background is generated by electrons that are decelerated in the Coulomb field of the atomic nucleus, thereby continuously losing their kinetic energy in the form of Bremsstrahlung [19,21].
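The element-specific line energies described above are what EDXS uses to identify elements. As a rough illustration (not the calibration of the EDAX system used in this work), Moseley's law approximates the Kα energy as E ≈ 10.2 eV · (Z − 1)²:

```python
# Approximate K-alpha characteristic X-ray energies via Moseley's law:
# E ~ 10.2 eV * (Z - 1)^2.  A rough approximation for illustration only.
def kalpha_kev(z: int) -> float:
    """Estimated K-alpha energy in keV for atomic number z."""
    return 10.2 * (z - 1) ** 2 / 1000.0

# Elements relevant to the cell components (carbon coating, Al/Cu collectors):
for name, z in [("C", 6), ("Al", 13), ("Mn", 25), ("Cu", 29)]:
    print(f"{name} (Z={z}): ~{kalpha_kev(z):.2f} keV")
```

For Mn this gives about 5.9 keV, which is why detector energy resolution is conventionally quoted at the Mn Kα line.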
EDXS spectra were acquired with an EDAX Super Octane silicon drift detector system (energy resolution of about 123 eV at Mn Kα) equipped with a silicon nitride window for the highest sensitivity in the low-energy region.

Light Microscopy Investigation Approach

Light microscopy imaging was conducted on the stacked component sample (Section 2.2.1), on a cross-section of an unprocessed anode, and on the surfaces of the individual layers. The system used was a Keyence VHX-6000 digital microscope equipped with a VH-Z500T high-resolution zoom lens. The optical imaging results were compared to SEM images, and the advantages and disadvantages of both methods were highlighted for the specific use case.

Results

In this section, not only the SEM imaging results of all cross-section investigations are presented but also the methods used to derive the surface properties and the chemical compositions of all battery components. The interesting manufacturing details of the cell under investigation are identified, and the thickness measurements of all battery layers are described in detail. Optical microscopy imaging results are shown at the end of this section and compared to those obtained from the SEM analysis.

Layer Properties in Cross-Section and Identification of Cell-Specific Details

By analyzing all stacked samples (see Section 2.2) in cross-section, it was possible to identify some interesting manufacturing details of the investigated pouch cell. An examination of Sample #3 (Figure 4), which was generated from the middle of the cell side opposite the battery tabs, reveals that all separator membranes were welded to the pouch at the sealing point near the edge (Figure 5a).
In terms of the mechanical behavior of the battery and possible battery failure, this feature indicates that the separator could experience high tensile strain if a mechanical force were to act on the battery close to its edge, and mechanical rupture of the separator membrane could occur. Such a rupture would expose both battery electrodes to direct contact, which is a prerequisite for a premature electrical failure and thermal runaway. If the sample is examined from its edge towards its middle, it can be seen that the separator membrane first comes into contact with the anode and only after about two mm with the cathode (Figure 5b). The reason for this is in part the battery geometry in this area: close to the battery edge, the distances between the layers are smaller due to the hot-welded area of the pouch. The absence of cathode layers in this battery section ensures that no short circuit will occur, even if some of the battery layers come too close to each other. Another purpose of the longer anode layer is to increase the stiffness of the battery near the edge, which smooths the load distribution in the case of edge loading. In Figure 6b, the layer arrangement in the pouch cell can be seen. The investigated battery contained 22 anode layers, 21 cathode layers, and 44 separator foils, whereby the two outermost layers on both sides of the battery were anodes. This suggests that the active material deposited on the outer side of the last copper foil remained electrochemically inactive and did not participate in the charge transfer and the energy storage process. This can be seen in Figure 6c, where the pouch and the neighboring anode-separator-cathode stack are visible. In this study, the pouch was identified as a four-layer, metallic-polymer compound consisting of one Al layer (3 in Figure 6d) and three polymer layers (1, 2, and 4 in Figure 6d). Another specific characteristic of the battery under investigation is visible in Figure 6a: a thin, 3 µm layer can be seen between each cathode and separator. With EDXS analysis, this was identified as an aluminium oxide (Al2O3, also called alumina) layer. Such a layer is typically deposited on top of the separator membrane in lithium-ion batteries to enhance its thermal properties, enabling it to preserve its mechanical integrity at temperatures up to 200 °C [22,23]. At room temperature, the performance of lithium-ion batteries with alumina-coated separator foils is the same as that of batteries with a plain polymer separator; the former, however, are characterized by safer operation and increased cyclability under extreme temperature conditions in the range from −30 °C to +60 °C [22,23]. To test the hypothesis that an alumina (Al2O3) layer was deposited directly on top of the separator surface on the cathode side, individual layers were analyzed in cross-section, the results of which are presented later in this section. A confirmation is also provided in Section 3.2, where the surfaces on both sides of the polymer membrane as well as those of the anode and the cathode were analyzed in more detail. Samples #1 and #2, which were generated at the anode and the cathode tabs of the investigated cell, respectively, showed comparable results to each other. For this reason, only the representative results for Sample #1 are presented. The first characteristic location was the contact position of all copper current collectors and the anode tab. Here, the contact was achieved by point-welding all layers together, as shown in Figure 7a.
Another cell-specific detail can be seen in Figure 7d: close to each welding point there is an area where an additional separator layer was added to the battery stack. This layer had a length of ca. 2 mm, measured from the beginning of the cathode layer (Figure 7b) towards the middle of the cell (Figure 7d). The purpose of integrating an additional membrane in the tab area was to increase battery safety and prevent separator damage during charging/discharging due to the increased heat generation in the tab area [24]. The thicknesses of the individual components were also measured from the stacked-layer sample. These results are shown in Figure 7c,e for the anode and the cathode, respectively. The overall thickness of the anode layer was about 140 µm, comprising a 20 µm CC and 60 µm of AM on both sides. The cathode was 20 µm thicker than the anode due to its thicker AM (about 65 µm) and CC (25 µm). The separator thickness in the stacked sample was evaluated as approximately 20 µm. As mentioned in Section 2.2, single-layer samples were also investigated in cross-section. After broad-ion-beam cutting, all specimens were left with a clean surface, which made a precise determination of the thicknesses of the copper and the aluminium current collectors and of their corresponding active materials possible. In this way, more accurate results could be obtained than those determined from the stacked-layer sample (Figure 7c,e). Figure 8a shows the battery anode in cross-section with a total thickness of about 140 µm, which could be divided into a 10 µm copper current collector and an active material layer on each side of it, each with a thickness of 65 µm. The anode active material grains and the binder, which holds the entire AM structure together, are also visible in this figure. In Figure 8b, the separator can be seen; it consisted of one 17 µm polypropylene layer and the 3 µm deposited alumina protective layer. In the cathode (Figure 8c), the aluminium current collector thickness was determined to be around 20 µm, and the corresponding active material was about 75 µm thick. Particles of different sizes were visible, the sizes of which were determined during the sample surface analysis (Section 3.2.1). All measured layer thicknesses are summarized in Table 1, and the reasons for the differences in the layer sizes are discussed in detail in the Discussion section.
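The per-layer totals from the BIB cross-sections can be cross-checked with simple arithmetic. The sketch below (variable names are ours; the values are the cross-section measurements quoted in the text) reproduces the anode and separator totals:

```python
# Layer thicknesses (µm) from the BIB-prepared cross-sections reported above.
anode_cc = 10        # copper current collector (Figure 8a)
anode_am = 65        # active material, coated on each side of the CC
separator_pp = 17    # polypropylene membrane (Figure 8b)
separator_coat = 3   # deposited alumina protective layer

# Total anode thickness: current collector plus active material on both sides.
anode_total = anode_cc + 2 * anode_am
# Total separator thickness: membrane plus alumina coating.
separator_total = separator_pp + separator_coat

print(anode_total, separator_total)  # 140 20
```

The same bookkeeping applied to the cathode (a 20 µm CC plus roughly 75 µm of AM per side) gives the cathode entry summarized in Table 1.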
Grain Size of Active Materials and Chemical Composition

Taking a look at the anode surface, visible in Figure 9, a granular structure can be clearly identified. The diameter of the biggest particles was found to be approximately 25 µm. The results of the chemical analysis revealed a high carbon (C) content, allowing the classification of the anode active material as graphite. The traces of phosphorus (P) were identified as remainders of the electrolyte (assumed to be LiPF6 dissolved in a carbonate-mixture solvent). A high fluorine (F) peak was also visible in the EDXS spectrum. While a high fluorine content is usually ascribed to a fluorinated binder, this was very likely not the case here. Based on additional experiments, it was possible to determine that the anode active layers could be easily removed with water, which indicates that a water-based slurry was used in the anode manufacturing process. Under these conditions, it can be concluded that the fluorine signal can be assigned to the electrolyte decomposition products that are part of the SEI found on the surfaces of the graphite particles. The small copper (Cu) peaks that appear in the spectrum were due to the sample generation process and can be neglected. The cathode also showed a granular surface structure (Figure 10a), whereby the largest particles were about 15 µm. The chemical elements identified by EDXS were manganese (Mn), cobalt (Co), nickel (Ni), oxygen (O), and carbon (C). This led to the conclusion of a LiNiMnCoO2 (NMC) cathode chemistry. Two different types of structures were visible on the cathode surface, which are marked with red circles in Figure 10b. A detailed analysis of the formed compounds shows that the compound marked by "1" had the aforementioned NMC chemistry, whereas the second structure was determined to be a manganese oxide (LMO) compound. Taking this into account, the chemistry of the cathode layer could be established as a blend of NMC and LMO.

Separator Surface Structure and Pore Size

The separator surface structure was investigated to determine the separator fiber thickness and the pore size, as well as to determine whether an alumina layer was deposited on the separator membrane, as already stated above. Figure 11 shows the SEM image results. On the left side, the Al2O3 layer is clearly visible.
The right side depicts the fiber structure of the polypropylene membrane. The separator fibers were aligned in a direction perpendicular to the battery tabs (the machine direction, shown as the u-direction in Figure 3a).

Figure 11. SEM images of the separator surface structure on both sides. On the left (a), the Al2O3 (alumina) layer deposited on the membrane surface is visible. On the right (b), the fiber structure of the microporous membrane can be seen. Here, "u" and "v" denote the machine and the transversal directions of the battery and its layers.

The total separator fiber thickness was around 3 µm, as indicated in Figure 12a. The underlying pore structure revealed pores with different diameters, the smallest of which were approximately 50 nm (Figure 12b). A general requirement for the pore size in lithium-ion batteries is that it must be in the sub-micrometre range to prevent dendritic lithium penetration during the battery lifetime [25].

Measurement Results Light Microscopy

Images of the stacked component structure and cross-sectional images of the anode were also generated using the test setup described in Section 2.5. In Figure 13a, the pouch is visualised as a four-layered structure with an overall thickness of 188 µm. It was also possible to measure the thicknesses of all other battery components in the stacked structure (Figure 13b), although the additional alumina layer on top of the separator was not visible. The total anode thickness was determined to be 175 µm and consisted of a 25 µm-thick copper CC and 75 µm-thick AM layers. The total cathode thickness was approximately 160 µm, including active material layers of 65 µm and an aluminium current collector of 30 µm. A single, unprocessed anode layer in cross-section can be seen in Figure 13c, where a deviation in the thickness measurement can be observed as compared to Figure 8a. This effect can be explained by the sample generation procedure: as a result of the mechanical cutting-out process, the edge of the sample was damaged, resulting in a large variation in sample thickness and inaccurate results.

Light microscopy was also used for the identification of the particle sizes in the anode and the cathode surface materials. As an example, the anode surface results are shown. The imaged surface of the graphite layer is visible in the top right corner of Figure 14. The generated image contains data along the U-, V-, and W-axes, which could be used to conduct volume, distance, and profile measurements on the graphite particles or on the layer surface. Such a profile measurement through the particle of interest allowed the determination of its size (Figure 14). The largest carbon grains measured had diameters of approximately 25 µm, which is consistent with the data presented in Section 3.2.1. An inspection of the separator surface structure revealed only the membrane fibers; no information could be gathered regarding the pore size or the fiber thickness (Figure 15). Nevertheless, the machine direction (u-direction in Figure 3a) of the separator could be identified by examining the fiber propagation direction.
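A profile measurement of this kind amounts to reading a height profile across a particle and converting its lateral extent from pixels to micrometres via the microscope calibration. A minimal sketch, with an illustrative profile and a hypothetical scale factor (neither taken from the actual VHX-6000 data):

```python
# Hypothetical height profile (µm) sampled along a line across one graphite particle.
profile = [0.0, 0.1, 0.3, 4.0, 7.5, 8.2, 8.0, 6.9, 4.8, 1.9, 0.2, 0.0]
um_per_px = 2.5   # illustrative lateral calibration (µm per pixel), not the instrument value
threshold = 1.0   # heights above this are counted as "inside" the particle

# Indices of profile samples that lie on the particle.
inside = [i for i, h in enumerate(profile) if h > threshold]
# Particle diameter along the profile line, converted from pixels to µm.
diameter_um = (inside[-1] - inside[0] + 1) * um_per_px
print(diameter_um)  # 17.5
```

The actual instrument software performs the same conversion internally; the point here is only that the reported grain diameters follow directly from the calibrated profile data.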
Discussion

The correct determination of the layer thicknesses depends primarily and substantially on the sample generation and preparation methods used. In this paper, two different investigation approaches for battery component thickness determination are presented. In the first approach, the sizes of single layers in cross-section, prepared by broad-ion-beam (BIB) cutting, were measured. In the second approach, the battery layer thicknesses were measured directly from the stacked component structure, as described in Section 2.2.1. A comparison of all the obtained values is shown in Table 1. A large difference can be seen in the values determined for both current collectors: the thickness of the copper measured from the stacked sample was double that measured from the BIB-prepared anode sample. The measurement results for the active material thicknesses also varied between the samples, depending on how they were prepared. There are several reasons for the observed variations in thickness:

• The variation in the widths of the current collectors is ascribed to the effects of sample polishing. Due to their different hardness, copper and aluminium tend to bend to the sides during the polishing process, leading to different final results;
• The differences in the observed active material thicknesses arise from differences in the pressure acting on the binder due to the constraint of the stacked structure between two plates;
• The difference in pouch thickness can also be accounted for by differences in the applied external clamp pressure.

The approach that gave the best result in the battery layer thickness investigation was the one conducted on single-layer components. It was necessary to use the BIB-cutting method in this case to produce an artefact-free cross-section and eliminate the effects of the sample generation procedure.
Summing the thicknesses of all battery layers, each multiplied by its count, resulted in a whole-cell thickness of about 8 mm, which corresponds to the thickness measurements made before the cell disassembly and to the manufacturer's data. In comparison, the error obtained when determining the layer sizes from the stacked structure was about 5%. For detailed battery layer models used, for example, in mechanical simulations, this level of accuracy is insufficient. Carrying out an investigation of the layer stack, however, had other advantages and enabled us to identify cell characteristics and visualize the layer arrangement inside the battery. In this study, we used two different imaging techniques, each of which has its own advantages and disadvantages. Depending on the type of information relevant for the specific use case, both techniques proved useful and yielded good results. Light microscopy can be used to make layer-size measurements rapidly if the sample quality is good, or to determine the particle sizes of the anode and cathode active materials. A major drawback of optical microscope systems, however, is their resolution: optical systems are limited to feature sizes of several micrometres, which is the reason why the separator coating was not visible in the light microscopy images. For the investigation of the smaller structures that form on the battery electrodes, or for the determination of the pore size of the separator, SEM is the more suitable technique. One of the strengths of SEM is that objects with sub-micrometre sizes can be visualized. Another advantage of electron microscope systems is that they can be used for chemical analysis. During their lifetime, batteries experience changes to their internal layers due to parasitic side reactions with the electrolyte, leading to the formation of decomposition products. Valuable chemical information about such degradation products can be obtained with the use of EDXS.
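The whole-cell estimate can be reproduced from the layer counts in Figure 6b and the cross-section thicknesses reported earlier (the cathode total here is assembled from its reported CC and per-side AM values, and the pouch thickness is the light-microscopy value; both are per-side quantities):

```python
# Layer counts (Figure 6b) and thicknesses (µm) from the cross-section results.
n_anode, t_anode = 22, 140             # 10 µm Cu CC + 2 × 65 µm AM
n_cathode, t_cathode = 21, 20 + 2*75   # 20 µm Al CC + 2 × ~75 µm AM = 170 µm
n_separator, t_separator = 44, 20      # 17 µm PP + 3 µm alumina coating
t_pouch = 188                          # per side (Figure 13a); two pouch walls per cell

total_mm = (n_anode * t_anode + n_cathode * t_cathode
            + n_separator * t_separator + 2 * t_pouch) / 1000
print(round(total_mm, 1))  # 7.9 — i.e., about 8 mm, matching the pre-disassembly measurement
```

This back-of-the-envelope sum is the consistency check referred to in the text; small deviations from exactly 8 mm are expected given the stated measurement uncertainties.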
Comparable studies [26,27] have reported values for the thicknesses of individual layers that are consistent with the ones obtained in this study. No information, however, has previously been provided on the types of samples used, their generation procedures, or the measurement setups. The testing methodology proposed in this paper provides a rapid and convenient way to obtain highly accurate results by eliminating factors that can have a huge impact on the measurement itself (e.g., the sample cutting-out process).
Conclusions
In this paper, we described the development of a "best practice" methodology for sample generation and preparation as well as an investigation of the battery layer structure and chemical composition. A unique battery pack and battery module disassembly procedure was presented, which provided insight on how to acquire damage-free pouch cells for a subsequent post-mortem analysis. A new sample generation and preparation method for the visualization of the battery layer arrangement and the analysis of battery manufacturing details was described. The analysis of the stacked layer samples revealed interesting artefacts inside the battery under investigation, examples of which included an additional separator layer in the battery tab area, and showed that the separator and the pouch were welded along the battery edge. The methodology proved useful for the investigation of pouch cells, especially because it yields highly accurate results and can be easily adapted for investigations on different pouch cell types. The best results for the thicknesses of all battery layers were obtained through investigations conducted on single components, the cross-sections of which were prepared by broad-ion-beam cutting. In this case, a high-resolution optical imaging system could be used, because all battery layers were thicker than 10 µm.
It was also possible to rapidly determine the size of the active material particles of both electrodes using light microscopy. Our findings indicate, however, that different investigation techniques (SEM, EDXS) should be used to conduct a more detailed structural analysis of the anode (e.g., visualization of coatings or additional layers, determination of the pore size of the separator, or visualization of the grain structure) and to determine the material composition of the individual layers. The results of this work highlight the need for precise sample generation and preparation methods in the post-mortem analysis of batteries and for careful selection of a suitable sample investigation technique based on the specific use case. In the field of battery safety research, high-quality results can only be obtained if the battery layer sizes and the material composition can be precisely determined. The findings of this research will help other researchers construct better mechanical and multi-physical models, which can be used to accurately predict safety hazards associated with lithium-ion batteries.
Acknowledgments: The authors thank the consortium members of the SafeBattery project for their valuable input to this work.
Conflicts of Interest: The authors declare no conflict of interest.
A case of symptomatic reflex epilepsy precipitated by bathing
Introduction
By definition, reflex seizures are epileptic events triggered by specific motor, sensory, or cognitive stimulation. Reflex epilepsy is a term reserved for scenarios in which seizures are exclusively triggered by a specific stimulus, such as reading or eating [1]. Focal seizures precipitated by somatosensory stimuli often originate in the parietal lobe (somatosensory area II) and present with sensory manifestations, usually an ill-localized vague sensation or pain, which may proceed to clonic motor movements or other manifestations depending upon the propagation of the ictal activity [1,2]. Similarly, myoclonic or focal motor/sensory seizures can be triggered by somatomotor stimuli, such as sudden unexpected movement, in a related form of reflex epilepsy [1].
Hot water epilepsy, prevalent in southern India, can present with generalized or focal seizures in response to a hot water bath [2]. The following report is the narrative of a young female with an uncommon form of reflex seizure precipitated by bathing, which was demonstrated to have an excellent correlation electrographically and on neuroimaging.
Case report
A 25-year-old female presented with paroxysmal events precipitated by bathing. A few seconds to a minute after pouring water on her head, she would develop intense itching and unpleasant sensations in the scalp for several minutes. During these episodes, at times, she would have impaired consciousness and dysphasia lasting up to 2 min, as reported by her husband. This continued for five years without any response to various forms of therapy. During video telemetry, we were able to replicate the events by pouring water at room temperature as well as ice-cold water on her head [Video 1]. Ictal rhythmic delta activity was seen over the right hemisphere, predominantly in the temporal region, preceding the onset and evolving persistently on the right side [Fig. 1]. Warm water did not precipitate the event. 1.5-Tesla magnetic resonance imaging revealed a small subcentimeter T2-hypointense and T1-isointense round lesion in the opercular cortex of the left parietal region, suggestive of an inflammatory granuloma [Fig. 2]. Advising the patient to have warm water baths was not feasible, as she was unable to follow the instruction regularly. Total seizure freedom was achieved with 600 mg daily of oxcarbazepine.
Discussion
Water acting as a trigger for reflex epilepsy has been virtually unheard of except for the well-known entity of hot water epilepsy (HWE). Hot water epilepsy is an idiopathic sensory reflex epilepsy usually associated with a pleasurable feeling, which leads to self-induction of seizures [2].
Paradoxically, the present patient exhibited neither of these but instead a very unpleasant sensation in response to water at room temperature or cold water. The possible mechanism in her case is that water at cold temperature stimulates somatosensory area II, located in the posterior parietal operculum, where the ictal activity is generated, leading to abnormal sensations [3]. Later, the temporal structures are recruited, resulting in the semiology of a focal seizure with impaired consciousness. According to Rui-Sala Padró et al., in their series of six patients with reflex epilepsies provoked by cutaneous stimuli, washing the mouth with cold water was identified in one of them as the consistent trigger [4]. The electroradiological dissociation could be explained on the basis of rich connections producing false lateralization of the ictal activity to the opposite hemisphere. It is also interesting to note that the anatomical correlation in lesional reflex epilepsy is more robust than in idiopathic/nonstructural epilepsies, as exemplified in the present case, where a small lesion was so precisely located in this reflexogenic cortex. In HWE, the temporal and parietal cortices have recently been implicated after ictal SPECT data showed hypermetabolism in the said areas [5]. Focal lesions such as cortical dysplasia of the parietal lobe can present with HWE. Another well-established phenomenon in this form of reflex epilepsy is the presence of a trigger zone, the stimulation of which by the specific stimulus results in a seizure [1,2]. This aspect is also supported by the history, as the episodes occurred exclusively in response to a head bath and not a body bath.
Conclusion
To the best of our knowledge, this is the first ever report of a symptomatic reflex epilepsy of focal semiology precipitated by the singular stimulus of a head bath. The role of the parietal lobe in this pattern of reflex seizures is emphasized through this illustrative case with supporting discussion.
Source of support
None.
Acknowledgments
We acknowledge the director of St. Stephen's Hospital and the hospital management for allowing us to publish this work.
Fig. 1. 32-channel ictal recording (common referential average/bipolar and Cz reference montages) showing diffuse rhythmic delta better expressed on the right hemisphere.
Permissions
Permission was obtained from the director and hospital administration of St. Stephen's Hospital, New Delhi-54. Informed consent was obtained from the patient and family.
Conflicting interests
We have no conflicting interests with any persons or organizations.
A Novel Joint TDOA/FDOA Passive Localization Scheme Using Interval Intersection Algorithm
Due to the large measurement errors in practical non-cooperative scenes, passive localization algorithms based on traditional numerical calculation using time difference of arrival (TDOA) and frequency difference of arrival (FDOA) often have no solution, i.e., the estimated result cannot meet the localization background knowledge. In this context, this paper introduces interval analysis theory into a joint FDOA/TDOA-based localization algorithm. The proposed algorithm uses a dichotomy algorithm to fuse the interval measurements of TDOA and FDOA for estimating the velocity and position of a moving target. The estimation results are given in the form of an interval. The estimated interval must contain the true values of the position and velocity of the radiating target, and the size of the interval reflects the confidence of the estimation. The point estimate of the position and velocity of the target is given by the midpoint of the estimation interval. Simulation analysis shows the efficacy of the algorithm.
Introduction
Passive localization has been increasingly used in radar, sensor networks, wireless communication, and other fields in recent years, especially in military applications [1][2][3][4], due to its high concealment and other advantages. To the best of our knowledge, most passive location algorithms are based on the methods of angle of arrival (AOA), time difference of arrival (TDOA), or the combination of AOA and TDOA, which offer high location accuracy and passivity. Frequency difference of arrival (FDOA) measurements can be added for a moving target, which improves the target location accuracy; FDOA measurements are also used to calculate the velocity of the target [3]. The advantages of passive location have drawn significant attention from many scholars worldwide.
Information 2021, 12, 371
Given that joint location via TDOA/FDOA has problems such as non-linearity and susceptibility to noise, a variety of related algorithms have been proposed to improve the accuracy of passive location [1][2][3][4][5]. Reference [1] addresses the problem of noise influence by introducing a semidefinite relaxation technique on the basis of TDOA/FDOA joint positioning, and the target parameters are optimized by combining the idea of random robust least squares. The algorithm has strong anti-noise ability, but it is complex, and the calculation process is time consuming. Reference [3] is based on nonlinear weighted least squares (WLS). Firstly, a nonlinear WLS problem is obtained from the TDOA measurements, and its bias error is derived to get the unbiased WLS solution, which is taken to the second step, where a new nonlinear WLS problem is obtained from the FDOA measurements. This method can effectively avoid the danger of local convergence and provide a reliable, globally optimal solution. In reference [4], based on the classical two-step weighted least squares method, the relationship between the extra variables and the target parameters is established, and the final solution is obtained by the least squares method. Although this algorithm has low complexity, its anti-noise ability is poor. In reference [5], a joint location algorithm based on TDOA/FDOA measurements is proposed to locate the target directly, which combines the radial distance equation from the target to the reference station, and the target position expression is used to obtain the exact position of the target. Ho et al. proposed the two-stage weighted least squares (TSWLS) algorithm [6]. In the first step, additional variables are introduced, pseudo-linear equations are established, and the weighted least squares (WLS) solution is obtained.
In the second step, a new equation is established by using the relationship between the additional variables and the target location to improve positioning accuracy. Although the TSWLS algorithm has high real-time performance, its positioning accuracy needs to be further improved. The semidefinite relaxation (SDR) algorithm [7,8] first describes the location problem as an optimization problem with quadratic constraints, then transforms it into a semidefinite programming (SDP) problem by using reasonable approximation and appropriate relaxation conditions. Reference [9] first describes the positioning problem as a quadratically constrained quadratic programming (QCQP) problem, and then transforms the quadratic constraints into linear constraints using the WLS solution; that is, the QCQP problem becomes a linearly constrained quadratic programming (LCQP) problem. Finally, the property of the generalized inverse matrix is used to solve the LCQP problem, forming an iterative algorithm. This algorithm has the advantage of a closed-form solution. The above algorithms are all passive location algorithms based on numerical values. Although they achieve accuracy close to the Cramér-Rao lower bound (CRLB), the reliability of the target location results cannot be guaranteed. In this paper, we propose an interval algorithm to track a moving target's location and velocity in a guaranteed manner. Based on the conventional TDOA/FDOA joint location algorithm, the interval analysis algorithm [10,11] is introduced to transform the numerical results of the traditional location algorithm into interval results. In this paper, combining the interval analysis algorithm with the Newton iterative algorithm [9], we first obtain the initial estimation interval of the velocity from the location interval of the radiation source. Then the meridional velocity and zonal velocity of the radiation source are divided, respectively.
After several interval approximations, the velocity estimation interval is continuously reduced by the iterative method, and finally, a smaller interval containing the true velocity of the radiation source is obtained. The advantage of this improvement is that the location and velocity intervals obtained are certain to contain the real values of the radiation source, which significantly improves the credibility of the location and velocity results.
Background and Methods
The proposed algorithm uses the TDOA and FDOA measurements generated between three satellites to locate the position and measure the velocity of the radiation source. The schematic model is shown in Figure 1. The position and velocity of the three satellites are, respectively, represented as $s_i$ and $\dot{s}_i$ ($i = 1, 2, 3$), and the position and velocity of the radiation source are expressed as $u$ and $\dot{u}$. Then the distance between the radiation source and satellite $i$ is $r_i = \|u - s_i\|$. If $s_1$ is used as the reference station, the TDOA and FDOA of the ground truth obtained from the three satellites are, respectively, presented as [7]:
$$\tau_{i1}^{\circ} = \frac{1}{c}\left(\|u - s_i\| - \|u - s_1\|\right), \qquad f_{i1}^{\circ} = \frac{f_0}{c}\left(\frac{(\dot{u} - \dot{s}_i)^{T}(u - s_i)}{\|u - s_i\|} - \frac{(\dot{u} - \dot{s}_1)^{T}(u - s_1)}{\|u - s_1\|}\right), \quad i = 2, 3,$$
where $c$ is the velocity of light, $f_0$ is the central frequency of the carrier, and $\|\cdot\|$ is the Euclidean distance.
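The standard TDOA/FDOA measurement model described above can be sketched numerically as follows. This is a generic illustration, not the paper's implementation: the speed of light is fixed, while the carrier frequency `F0` and the satellite/emitter states in the usage example are arbitrary assumed values:

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F0 = 1.5e9          # ASSUMED carrier center frequency f_0, Hz (illustrative)

def range_and_rate(u, u_dot, s, s_dot):
    """Range r = ||u - s|| and range rate (radial velocity) for one satellite."""
    d = u - s
    r = np.linalg.norm(d)
    r_dot = (u_dot - s_dot) @ d / r
    return r, r_dot

def tdoa_fdoa(u, u_dot, sats, sat_vels):
    """TDOA tau_i1 and FDOA f_i1 of satellites i = 2..N relative to
    reference satellite 1, following the standard measurement model."""
    r1, rd1 = range_and_rate(u, u_dot, sats[0], sat_vels[0])
    tdoa, fdoa = [], []
    for s, sd in zip(sats[1:], sat_vels[1:]):
        ri, rdi = range_and_rate(u, u_dot, s, sd)
        tdoa.append((ri - r1) / C)            # tau_i1 = (r_i - r_1) / c
        fdoa.append(F0 / C * (rdi - rd1))     # f_i1 = f_0/c * (rdot_i - rdot_1)
    return np.array(tdoa), np.array(fdoa)
```

Given satellite ephemeris states and a candidate emitter state, this forward model is what the interval algorithm repeatedly evaluates when testing consistency with the bounded measurements.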
Assuming that in the WGS-84 geodetic coordinate system the elevation of the target emitter is 0 and does not rise (i.e., the elevation is kept unchanged), the velocity in the direction perpendicular to the earth's surface is 0, and the motion of the radiation source can then be expressed through the longitude and latitude velocities $\dot{\varphi}^{\circ}$ and $\dot{\phi}^{\circ}$, where $\varphi^{\circ}$ and $\phi^{\circ}$ represent longitude and latitude, respectively. Because more variables exist in the geodetic coordinate system, it is easy to encounter nonlinear problems. The algorithm in this paper is therefore calculated in the Earth-Centered Earth-Fixed (ECEF) coordinate system in order to reduce the number of variables; the formula for transforming the velocity of the radiation source from the geodetic coordinate system to the ECEF coordinate system is given in [12]. In the actual location scene, it is inevitable to produce TDOA and FDOA measurement errors. We cannot obtain the accurate measurement error distribution, but it is relatively easy to obtain the error range. Therefore, in this paper, the measurement error is assumed to be a bounded error interval, with the measurement error bounds of TDOA and FDOA denoted $[\Delta\tau]$ and $[\Delta f]$, respectively, so that the measurement intervals can be defined as
$$[\tau_{i1}] = \tau_{i1}^{\circ} + [\Delta\tau], \qquad [f_{i1}] = f_{i1}^{\circ} + [\Delta f],$$
where $[\tau_{i1}]$ and $[f_{i1}]$ denote the measurement intervals of TDOA and FDOA with bounded errors, respectively.
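The bounded-error interval representation can be sketched with a minimal interval type; the `Interval` class below is an assumed helper for illustration (not a specific interval-arithmetic library), and the numeric bounds are the example values used later in the simulation section:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed interval [lo, hi] -- minimal sketch of interval arithmetic."""
    lo: float
    hi: float

    def __add__(self, other):
        # Interval addition: [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def contains(self, x):
        return self.lo <= x <= self.hi

    def width(self):
        return self.hi - self.lo

def measurement_interval(true_value, error_bound):
    """[m] = m_true + [-e, +e]: a bounded-error measurement interval."""
    return Interval(true_value, true_value) + Interval(-error_bound, error_bound)

# Example bounds from the simulation section: +/-500 m (TDOA expressed as a
# range difference) and +/-1 Hz (FDOA); the nominal values are hypothetical.
rd_int = measurement_interval(1234.5, 500.0)   # range-difference interval, m
fd_int = measurement_interval(-3.2, 1.0)       # FDOA interval, Hz
```

By construction, each such interval is guaranteed to contain the true measurement, which is what underwrites the guaranteed-containment property of the final position and velocity boxes.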
In the TDOA/FDOA joint interval locating and velocity measurement algorithm, the interval calculation is carried out first, and then the velocity interval is shrunk by continuous iteration. To this end, we need an initial velocity interval to prepare for the subsequent iterative operation. According to the prior information, the position interval is obtained first, and the initial interval of the velocity estimate is obtained using the relationship between position and velocity in Equations (3) and (4). An expression for the velocity can be obtained from Equation (4); substituting it into Equation (3) yields the initial estimation interval of the velocity, where FDOA is the frequency difference measurement with bounded error. Assuming that the zonal velocity is known, the formula for solving the meridional velocity can be obtained by substituting Equation (7) into the corresponding constraint, giving the updated result. By the same token, assuming that the meridional velocity is known, the zonal velocity formula follows. Afterwards, the exact velocity interval within the initial velocity interval of the radiation source is solved by the proposed dichotomy-based algorithm in the following steps:
Step One: Divide the zonal velocity interval, splitting the zonal velocity of the initial velocity interval evenly into ten parts.
Step Two: The zonal velocity line is calculated by Equation (10) to produce the corresponding meridional velocity, and the discrete solution set of FDOA can be obtained by solving Equation (2) several times. As shown in Figure 2, the box composed of the discrete point set is the interval approximation result of the frequency difference line, as indicated by the small rectangles in the figure, and the interval of the frequency difference line completely covers the FDOA.
According to the TDOA/FDOA location principle, if we make the two frequency difference lines intersect, the intersection area is bound to contain the actual velocity. If the outer rectangle is generated from the intersection area, it is the result of the first velocity interval approximation, as shown by the rectangle in the center of the figure. The second step is repeated using the circumscribed rectangle as a new initial velocity interval, and the velocity interval is continuously reduced through the iterative operation until the width of the circumscribed rectangle is constant; the resulting interval is the velocity estimation interval.
Step Three: Similarly, the meridional velocity is divided into intervals and Equation (11) is used to obtain a new velocity estimation interval by iterating to shrink the velocity interval.
Step Four: According to the principle of the proposed algorithm, the intersection of the estimated velocity intervals obtained in Step Two and Step Three is sure to include the ground-truth velocity of the radiation source. The final velocity estimation interval is obtained by intersecting the two calculated velocity intervals, and this interval contains the ground-truth velocity.
Performance Evaluation
This section evaluates the proposed interval-calculation-based location and velocity estimation algorithm to verify its feasibility and analyzes its performance by numerical simulation. The simulation work was conducted using MATLAB programming on a personal computer with a 3.4 GHz processor.
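The four-step interval-contraction procedure described above can be sketched as follows. This is a simplified stand-in, not the paper's implementation: feasibility is tested only at grid points (a rigorous version would bound whole cells), and the feasibility predicate in the usage example is a placeholder for "consistent with the bounded TDOA/FDOA measurements":

```python
import numpy as np

def contract_box(box, feasible, n=10, max_iter=50, tol=1e-6):
    """Iteratively shrink a 2-D velocity box ((vx_lo, vx_hi), (vy_lo, vy_hi)):
    sample an n x n grid over the box, keep grid points that are feasible
    (i.e., consistent with the bounded measurements), and replace the box by
    the bounding box of the kept points. Iterate until the widths stop
    shrinking -- a sampled simplification of the dichotomy procedure."""
    (xlo, xhi), (ylo, yhi) = box
    for _ in range(max_iter):
        xs = np.linspace(xlo, xhi, n)
        ys = np.linspace(ylo, yhi, n)
        kept = [(x, y) for x in xs for y in ys if feasible(x, y)]
        if not kept:
            break  # cannot shrink further without losing the enclosure
        nxlo = min(p[0] for p in kept); nxhi = max(p[0] for p in kept)
        nylo = min(p[1] for p in kept); nyhi = max(p[1] for p in kept)
        stalled = ((xhi - xlo) - (nxhi - nxlo) < tol and
                   (yhi - ylo) - (nyhi - nylo) < tol)
        xlo, xhi, ylo, yhi = nxlo, nxhi, nylo, nyhi
        if stalled:
            break
    return (xlo, xhi), (ylo, yhi)

# Placeholder feasibility test: velocities within 25 m/s of an assumed true
# velocity (70, 90) m/s count as consistent with the measurements.
def in_disk(vx, vy):
    return (vx - 70.0)**2 + (vy - 90.0)**2 <= 25.0**2

box = contract_box(((0.0, 200.0), (0.0, 200.0)), in_disk)
```

Because the new box is always the bounding box of points inside the old box, the widths shrink monotonically, mirroring the "iterate until the width of the circumscribed rectangle is constant" stopping rule.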
The Illustration of Interval Computation for TDOA/FDOA-Based Localization
In the simulation, real satellite ephemeris data were used, and the true position and velocity of the radiation source were set to {20°N, 120°E} and {70 m/s, 90 m/s}, where the zonal velocity was 70 m/s, the meridional velocity was 90 m/s, and the velocity perpendicular to the earth's surface was 0 m/s. The measurement errors of TDOA and FDOA were assumed to be bounded errors following a uniform distribution. For the computational cost, we performed a typical experiment as follows: the TDOA measurement error was set to [−500, 500] m, and the FDOA measurement error was set to [−1, 1] Hz. The estimated results of the velocity estimation interval were obtained through the simulation. After 100 ensemble runs, the average computational time was around several seconds, which is longer than that of the other numerical-computation-based methods (sub-second), but is still acceptable for some specific high-computation scenarios.
The simulation results obtained in the steps of the algorithm are illustrated below. Figure 3 shows the result of the first iterative operation of the interval algorithm, in which the red region is the initial velocity interval, the blue regions are the interval approximation results of the two FDOA lines, and the yellow region is the outer rectangle of the region where the two FDOA lines intersect. The iterative results of Step Two and Step Three of the algorithm are given next. In Figure 4, the pink region is the velocity estimation interval obtained after many iterations, and the white spot in the pink area is the actual velocity of the radiation source. The final velocity estimation result is obtained by intersecting the two velocity interval estimation results of the above figure, as shown in Figure 5. As shown in Figures 4 and 5, the velocity estimation interval obtained by the simulation includes the actual velocity of the radiation source, and the area of the velocity estimation interval can be reduced by taking the intersection of the velocity intervals obtained in the two steps of the algorithm. As a result, the credibility of the algorithm is improved.
In order to reflect the credibility and accuracy of the algorithm, we checked whether the ground truth was included in the estimated interval (inclusion rate) and used the RMSE between the middle point of the estimated interval and the ground truth as the accuracy measure. After 1000 Monte Carlo runs, the following performance measures were obtained (Table 1). It can be seen that under the above simulation conditions, the algorithm achieves a 100% inclusion rate, and the RMSE value is relatively small, indicating that the algorithm has good credibility and stable performance under certain errors.

The Performance Evaluation under the Measurement Error

We also plot the variation of the velocity estimation performance under different TDOA and FDOA measurement errors. As shown in Figure 6, the area of the velocity estimation interval of the radiation source increases as the measurement errors of TDOA and FDOA increase, both vertically and horizontally. This is because, according to the TDOA/FDOA joint locating principle, the TDOA measurement error affects the position estimation interval and therefore also the velocity estimation interval, while Equation (9) shows that FDOA directly affects the velocity estimation interval. When we take the CRLB as the benchmark for comparison, we use the 3-sigma principle as a bridge between the Gaussian distribution of the measurements and a uniformly distributed interval [11]. It can be seen from Figure 7 that our proposed algorithm can reach the CRLB in most cases. The TDOA measurement error has little effect on the RMSE of the velocity estimation result. Although the RMSE of the velocity estimation interval increases noticeably with the FDOA measurement error, the overall change of the RMSE is small, and the performance remains stable under these errors.
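The two performance measures above (inclusion rate and midpoint RMSE) are straightforward to compute over Monte Carlo runs. A minimal Python sketch with synthetic data follows; the data-generating choices here are illustrative assumptions, not the paper's simulation setup:

```python
import math
import random

def evaluate(truths, intervals):
    """Inclusion rate and midpoint RMSE for interval estimates.

    truths    : list of true scalar velocities
    intervals : list of (lo, hi) estimated intervals, same length
    """
    included = sum(lo <= v <= hi for v, (lo, hi) in zip(truths, intervals))
    mids = [(lo + hi) / 2 for lo, hi in intervals]
    rmse = math.sqrt(sum((m - v) ** 2 for m, v in zip(mids, truths))
                     / len(truths))
    return included / len(truths), rmse

# Synthetic Monte Carlo check: intervals built to contain the truth,
# so the inclusion rate is 1.0 by construction.
random.seed(0)
truths = [random.uniform(-100, 100) for _ in range(1000)]
intervals = [(v - random.uniform(1, 5), v + random.uniform(1, 5))
             for v in truths]
rate, rmse = evaluate(truths, intervals)
```

A guaranteed interval method reports 100% inclusion whenever its error bounds hold; the midpoint RMSE then quantifies how useful the interval is as a point estimate.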
The Performance Evaluation under the Ephemeris Error

The effect of the ephemeris error on the velocity estimation interval is reflected in Figures 8 and 9. It can be seen that, for the same ephemeris position error, the velocity estimation interval area increases with the ephemeris velocity error; when the ephemeris velocity error is the same, the interval area increases with the ephemeris position error. Although the RMSE of the velocity estimation interval increases with the ephemeris error, the deterioration is small, which indicates the robustness of the proposed algorithm against the ephemeris position error.

Figure 9. The influence of ephemeris error on interval RMSE of velocity estimation.

In fact, when the ephemeris velocity error is beyond 2 m/s, the estimated velocity interval box area of the target can be as high as ten thousand (m/s)². Such an error level is certainly not acceptable in practical applications. However, the RMSE of the middle point is quite low, under 10 m/s in all cases. By contrast, the effect of the ephemeris position error on the interval area is less obvious, while it evidently influences the estimation accuracy, as shown in Figure 9.

Discussion

It is worthwhile to point out that the proposed interval-calculation-based algorithm may prove to be a revolutionary methodology for the positioning and tracking research field. Our proposed method is free of measurement distribution assumptions, whereas most research works assume that the TDOA and FDOA measurements follow a Gaussian distribution, which is not necessarily true. On the other hand, the estimated result is in a guaranteed interval form, which must include the ground truth.
Additionally, interval results make cooperation among multiple systems more convenient through simple intersection calculations.

Conclusions

This paper introduces an interval analysis algorithm based on TDOA/FDOA joint location for passive location research. Variant formulas for calculating the meridional and zonal velocities are obtained from the frequency difference formula. When dividing the zonal and meridional velocity lines, discrete point sets of the meridional and zonal velocities are obtained using the corresponding variant formulas. Combined with the dichotomy (bisection) algorithm, the velocity estimation interval is continuously reduced until it no longer shrinks, yielding a result interval that includes the actual velocity of the radiation source. Several simulation experiments show that the inclusion rate can reach 100% within a certain error range, and the velocity estimation interval area is small and close to the real value of the radiation source, giving high accuracy and credibility. Taking the midpoint of the estimated interval as a point estimate, the calculated RMSE is relatively small, and the performance of the algorithm is rather good. Under large errors, the RMSE does not fluctuate wildly, and the performance of the algorithm remains stable.
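The dichotomy-based contraction described in the conclusions can be sketched generically: an interval is repeatedly bisected, sub-intervals that cannot contain a solution are discarded, and the hull of what remains is the guaranteed result interval. The following 1-D Python sketch uses a toy constraint (a root of x² − 2); it is a generic SIVIA-style illustration, not the paper's implementation:

```python
def contract(may_contain, lo, hi, eps=1e-6):
    """Shrink [lo, hi] to the hull of sub-intervals that may contain
    a solution, by repeated bisection (1-D SIVIA-style contraction).

    may_contain(a, b) -> True if [a, b] could contain a solution.
    """
    stack, kept = [(lo, hi)], []
    while stack:
        a, b = stack.pop()
        if not may_contain(a, b):
            continue                      # discard: no solution here
        if b - a <= eps:
            kept.append((a, b))           # small enough: keep
        else:
            m = (a + b) / 2
            stack += [(a, m), (m, b)]     # bisect and recurse
    return min(a for a, _ in kept), max(b for _, b in kept)

# Toy constraint: f(x) = x**2 - 2 is increasing on [0, 2], so an
# interval may contain the root exactly when f straddles zero there.
lo, hi = contract(lambda a, b: a * a - 2 <= 0 <= b * b - 2, 0.0, 2.0)
# [lo, hi] is a tight guaranteed enclosure of sqrt(2)
```

The contraction stops once sub-intervals reach the tolerance `eps`, mirroring the paper's stopping rule of iterating until the velocity interval no longer shrinks.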
2021-10-15T13:11:05.410Z
2021-09-13T00:00:00.000
{ "year": 2021, "sha1": "92985a280b8de61733fa28b8e43ffec982fda7b1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2078-2489/12/9/371/pdf?version=1631527617", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "77ddd01cce26db6bc199d793126ed953c7da70ab", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
232368111
pes2o/s2orc
v3-fos-license
Development of a nomogram for screening the risk of left ventricular hypertrophy in Chinese hypertensive patients

Abstract Left ventricular hypertrophy (LVH) is an important risk factor for cardiovascular morbidity and mortality in hypertensives. Therefore, early identification of at-risk patients is necessary. The objective of this study was to estimate the risk of LVH among Chinese hypertensives by designing a nomogram. 832 hypertensives were divided into two groups based on the presence of LVH. The least absolute shrinkage and selection operator (LASSO) regression and multivariable logistic regression were successively applied for optimal variable selection and nomogram construction. Discrimination power, calibration, and clinical usefulness were evaluated using the receiver operating characteristic (ROC) curve, calibration curve, and decision curve analysis. Internal validation was performed using the bootstrap method. The nomogram included five predictors, namely gender, duration of hypertension, age, body mass index (BMI), and systolic blood pressure. The area under the ROC curve (AUC) was 0.724 (95% CI: 0.687-0.761), indicating moderate discrimination. The calibration curve showed an excellent agreement between the predicted LVH and the actual LVH probability. The risk threshold was between 5% and 72% according to the decision curve analysis, indicating that the nomogram is clinically beneficial. Internal validation by bootstrapping with 1000 samples showed a good C-index of 0.715, which suggested that the predictive abilities for the training set and testing set were consistent. Our study proposed a nomogram that can be utilized to rapidly assess the LVH risk for Chinese hypertensives. This tool could be useful in identifying patients at high risk for LVH. Further studies are required to ascertain the stability and applicability of this nomogram. The development of LVH involves a complicated interaction of mechanisms underlying hypertension.
An accurate prediction model identifying patients at high risk of LVH in the early stages might be useful in the prevention of cardiovascular events. Until now, only a small number of prediction models for left ventricular mass have been reported. [6][7][8][9] However, most of these models were designed using Caucasian populations. Previous studies have proposed that ethnic disparities in the prevalence of LVH exist. 10,11 In addition, there might also be ethnic differences regarding the relationship between LVH and poor cardiovascular outcomes. 12,13 Existing evidence suggests that the relationship between LVH and poor cardiovascular outcomes is strongest among Chinese and Hispanics compared with non-Hispanic Whites. 13 This could imply that Chinese and Hispanics might benefit more from a risk prediction model for LVH. Nevertheless, to the best of our knowledge, there are no LVH risk prediction models aimed at Chinese hypertensives. The aim of this study was to use routine clinical measures to develop a predictive model for LVH in Chinese hypertensives and to convert the complex predictive formula into an intuitive nomogram, which can be utilized to rapidly assess the LVH risk in the clinical setting.

| Ethics statement
This study adhered to the guidelines outlined in the Helsinki Declaration, and the Ethics Board of the First Affiliated Hospital of Fujian Medical University provided the ethical approval. All study subjects completed an informed consent.

| Study population
This survey was conducted as a single-center, cross-sectional study. The study protocol was developed prior to clinical data acquisition. Patients were excluded if they met any of the following criteria: (a) incomplete medical records; (b) secondary hypertension and serious cardiovascular diseases; (c) acute myocardial infarction and cerebrovascular accident within the past three months; (d) serious liver or kidney diseases; (e) autoimmune diseases and malignancy; (f) active inflammatory or infectious diseases; and (g) pregnancy.
According to the exclusion criteria described above, 832 patients were finally enrolled.

| Survey and measurements
All participants completed structured questionnaires in the consultation room at their first visit, which contained detailed demographic information regarding gender, age, smoking habits, and previous medical history. Height and body mass were measured with patients in lightweight clothing and without shoes. Body mass index (BMI) was calculated by dividing body mass (in kilograms) by squared height (in meters). Blood pressure was measured using an automatic blood pressure monitor (Omron, Kyoto, Japan) after patients rested for 5 minutes in a seated position.

| Definitions
Hypertension was defined as blood pressure > 140/90 mmHg, a known diagnosis of hypertension, or self-reported antihypertensive therapy. 22,23 Diabetes was defined as a new diagnosis of diabetes, a self-reported history of diabetes, or self-reported use of hypoglycemic agents, according to the guideline provided by the American Diabetes Association in 2014. 24 Current smokers were defined as individuals who had smoked at least 100 cigarettes during their lifetimes and currently smoke cigarettes every day. 25 According to the American Society of Echocardiography guidelines, LVH was defined as LVMI > 115 g/m² and > 95 g/m² for males and females, respectively. 21

| Statistical Analysis
All statistical analyses were conducted using Statistical Product and Service Solutions (version 20.0) and R software (version 4.0.2; https://www.R-project.org). The measurement data were tested for normal distribution (Kolmogorov-Smirnov test) and homogeneity of variance (Levene's test). Continuous variables were described as means ± standard deviations or medians and interquartile ranges. Categorical variables were expressed as percentages (%) and absolute numbers (n). Variables between groups were assessed by the chi-square test and Student's t test as appropriate.
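The BMI formula and the gender-specific LVMI cutoffs above translate directly into a simple screening rule. A minimal Python sketch, for illustration only:

```python
def bmi(weight_kg, height_m):
    """Body mass index: body mass (kg) divided by squared height (m)."""
    return weight_kg / height_m ** 2

def has_lvh(lvmi, sex):
    """LVH per the ASE cutoffs used in this study:
    LVMI > 115 g/m^2 for males, > 95 g/m^2 for females."""
    cutoff = 115.0 if sex == "male" else 95.0
    return lvmi > cutoff

bmi_example = bmi(70, 1.75)          # about 22.86 kg/m^2
male_lvh = has_lvh(120, "male")      # True: above the 115 cutoff
female_lvh = has_lvh(100, "female")  # True: above the 95 cutoff
male_no_lvh = has_lvh(100, "male")   # False: below the 115 cutoff
```

Note that the same LVMI of 100 g/m² classifies a female but not a male as having LVH, which is the gender-specific behavior discussed later in the paper.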
To select the best potential predictive variables, the least absolute shrinkage and selection operator (LASSO) regression was performed using the "glmnet" package of R software. 26 To obtain the best subset of predictors, LASSO regression minimizes the prediction error for a response variable by placing a penalty constraint on the model that forces the regression coefficients of some variables toward zero. Variables with nonzero coefficients remain in the final model. Based on the −2 log-likelihood, 10-fold cross-validation was carried out within the LASSO regression, with centralization and standardization of the included variables, to obtain the best-fit lambda value. The lambda at 1 standard error provides the simplest model with good performance and was used for variable selection. These variables were then entered into a multivariate logistic regression model. After multivariate analysis, the independent predictors were chosen to develop a nomogram. The "rms" package of R software was utilized to create the nomogram. The discrimination efficiency of the prediction model was measured using receiver operating characteristic (ROC) analysis with the "ROCR" package of R software. Calibration curves were plotted using the R "rms" package to evaluate the calibration of the nomogram. To further estimate the clinical usefulness of the nomogram, decision curve analysis was conducted using the "rmda" package of R software. Net benefit is a weighted measure between true positives and false positives depending on the threshold probability and is a crucial component of decision curve analysis. In the decision curve analysis, the maximum net benefit is acquired by identifying all patients with echocardiographic LVH; this net benefit is therefore equivalent to the overall prevalence of LVH in the training set.
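LASSO's ability to force some coefficients exactly to zero comes from the soft-thresholding operator applied during coordinate-wise updates. A minimal Python illustration of that operator (not the glmnet implementation; the coefficient values are made up):

```python
def soft_threshold(z, lam):
    """Soft-thresholding: the closed-form solution of the 1-D lasso
    problem min_b 0.5*(b - z)**2 + lam*abs(b). Coefficients whose
    unpenalized value |z| falls below lam are set exactly to zero,
    which is how LASSO performs variable selection."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Weak coefficients are zeroed out; strong ones are shrunk toward zero.
raw = [2.5, -0.3, 0.8, -1.7, 0.1]
selected = [soft_threshold(z, lam=1.0) for z in raw]
# approximately [1.5, 0.0, 0.0, -0.7, 0.0]: three variables dropped
```

This is the mechanism behind the paper's reduction from 20 candidate variables to 7 with nonzero coefficients; the larger the lambda, the more coefficients are driven to zero.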
The line corresponding to the extreme assumption that all patients have LVH can be drawn (all patients classified with LVH, traditionally called "treat all"). Likewise, the minimum net benefit is obtained by assuming that no patient has LVH, and is zero (no patient classified with LVH, traditionally called "treat none"). To be clinically useful, a nomogram should have a higher net benefit than these two extreme cases. 27 Internal validation was performed by subjecting the nomogram to bootstrapping with 1000 resamples to obtain a relatively corrected C-index. 28 A two-sided P < .05 was considered significant.

| Clinical characteristics
Overall, 832 hypertensive patients were enrolled in this study, 550 males (66.11%) and 282 females (33.89%), with an average age of 61.44 ± 12.00 years (range 19-89 years). The clinical characteristics of all patients are summarized in Table 1. All patients were categorized into the non-LVH group and the LVH group based on the echocardiographic results. The overall prevalence of LVH was 31.97%. Compared with the non-LVH group, patients in the LVH group had higher average age, systolic blood pressure, and glycosylated hemoglobin, and lower eGFR and uric acid levels (P < .05). In addition, higher proportions of females, diabetes, long duration of hypertension history (≥10 years), and antihypertensive treatment were observed in the LVH group relative to the non-LVH group (P < .05). There were no significant differences in smoking, family history of hypertension, antihyperglycemic treatment, lipid-lowering therapy, BMI, diastolic blood pressure, total cholesterol, triglyceride, HDL cholesterol, LDL cholesterol, or UACR between the two groups.
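The net-benefit calculation underlying the decision curve analysis (true positives minus false positives weighted by the odds of the threshold probability, per Vickers-style decision curves) can be sketched as follows; the patient data here are synthetic and illustrative:

```python
def net_benefit(y_true, y_pred, pt):
    """Net benefit at threshold probability pt:
    TP/n - FP/n * pt/(1 - pt), where a patient is 'classified with
    LVH' when the predicted probability exceeds pt."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_pred) if p > pt and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_pred) if p > pt and y == 0)
    return tp / n - fp / n * pt / (1 - pt)

def treat_all(y_true, pt):
    """'Treat all' reference line: every patient classified with LVH,
    so net benefit depends only on prevalence and the threshold."""
    prevalence = sum(y_true) / len(y_true)
    return prevalence - (1 - prevalence) * pt / (1 - pt)

# Synthetic example: 1 = LVH, predicted probabilities from some model.
y = [1, 1, 0, 0, 1, 0, 0, 0]
p = [0.8, 0.6, 0.3, 0.7, 0.9, 0.2, 0.1, 0.4]
nb = net_benefit(y, p, pt=0.5)   # model's net benefit at pt = 0.5
ta = treat_all(y, pt=0.5)        # "treat all" reference
# "treat none" is zero by definition; a useful model sits above both
```

Sweeping `pt` over a grid and plotting `nb` against the two reference lines reproduces the shape of a decision curve; the model is clinically useful over the range of thresholds where its curve lies above both references.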
| Variable selection
The demographic and clinical variables, namely gender, smoking habit, family history of hypertension, duration of hypertension, diabetes, antihypertensive treatment, antihyperglycemic treatment, lipid-lowering therapy, age, BMI, systolic blood pressure, diastolic blood pressure, uric acid, total cholesterol, triglyceride, HDL cholesterol, LDL cholesterol, eGFR, glycosylated hemoglobin, and UACR, were included in the LASSO regression. After LASSO regression selection, the 20 variables were reduced to 7 variables with nonzero coefficients (Figure 1). These variables included gender, age, BMI, duration of hypertension, systolic blood pressure, eGFR, and glycosylated hemoglobin.

| Development of an individualized prediction model
These selected variables were then subjected together to multivariate logistic regression analysis. Among them, 5 variables that were independent risk factors for LVH were used to construct the nomogram; as an example, the patient illustrated in Figure 2B has an estimated probability of LVH of 42.75%.

| Performance assessment of the nomogram
The performance of the nomogram was evaluated using the ROC curve and the calibration curve. This nomogram achieved moderate discrimination.

| Clinical application of the nomogram
The potential clinical utility was evaluated using decision curve analysis. Between the threshold probabilities of 5% and 72%, using the nomogram to predict the probability of LVH added more benefit than the "treat none" or "treat all" strategies (Figure 3C).

| Model validation
Internal validation was performed using the bootstrap method with 1000 resamples. The C-index, bias-corrected Somers' D xy rank correlation (D xy), and R-squared index (R²) in the testing set were 0.715, 0.430, and 0.172; in the training set, they were 0.724, 0.449, and 0.188 (Table 3). These data suggested that the predictive abilities for the training set and testing set were highly consistent.
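The bootstrap internal validation reported above can be sketched generically: resample the data with replacement, recompute the concordance index (C-index) on each resample, and average. An illustrative pure-Python version (not the "rms" package's `validate()` implementation; the data are synthetic):

```python
import random

def c_index(y, scores):
    """Concordance: fraction of (case, control) pairs in which the
    case receives the higher predicted score; ties count one half."""
    pairs = conc = 0
    for i, (yi, si) in enumerate(zip(y, scores)):
        for yj, sj in zip(y[i + 1:], scores[i + 1:]):
            if yi == yj:
                continue
            pairs += 1
            case = si if yi == 1 else sj     # score of the case
            ctrl = sj if yi == 1 else si     # score of the control
            conc += 1.0 if case > ctrl else (0.5 if case == ctrl else 0.0)
    return conc / pairs

def bootstrap_c_index(y, scores, n_boot=1000, seed=0):
    """Average C-index over bootstrap resamples of the data."""
    rng = random.Random(seed)
    n, vals = len(y), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        yb = [y[i] for i in idx]
        sb = [scores[i] for i in idx]
        if len(set(yb)) < 2:                 # need both classes
            continue
        vals.append(c_index(yb, sb))
    return sum(vals) / len(vals)

# Synthetic data where every case outranks every control:
y = [0, 1, 0, 1, 0, 1]
s = [0.2, 0.8, 0.3, 0.7, 0.4, 0.9]
cb = bootstrap_c_index(y, s, n_boot=200)
# cb == 1.0 here: cases always outrank controls in every resample
```

In practice the resampled C-index is compared with the apparent C-index to estimate optimism; close agreement, as in the paper's 0.715 versus 0.724, indicates little overfitting.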
| DISCUSSION
In this study, a nomogram was constructed to assess the risk of LVH among Chinese hypertensives; this tool could be easily used in routine clinical practice. It is known that there are ethnic differences in the prevalence of LVH. 10,11 A previous study reported a significantly higher prevalence of LVH among blacks compared with whites. 10 It has also been documented that Caribbean Hispanics have a higher prevalence of LVH and left ventricular remodeling compared with non-Hispanic whites. 11 Moreover, the association between LVH and cardiovascular events may also differ by ethnic background, as reported by Havranek et al. 12,13 To the best of our knowledge, this is the first study to develop a prediction tool for Chinese hypertensives to evaluate the probability of LVH. In the present study, variables selected by LASSO regression were together subjected to multivariate logistic regression analysis, and those that were statistically significant were used to build the nomogram. Female gender was identified as an independent risk factor for LVH in this study. To date, there have been controversies regarding the association between gender and LVH. Recently, a study reported that female patients with hypertension are more vulnerable to developing LVH compared with males, 32 and that combined LVH in hypertension offsets the gender-specific benefits in cardiovascular risk. 33 Previously, our research team had reported that gender may have an impact on LVH. 34 Data from another study conducted in a European country, however, suggested that male gender is an independent risk factor for LVH. 35 These inconsistencies could be attributed to differences in study populations and methodologies (eg, measurement methods, diagnostic criteria). In the present study, most of the females were postmenopausal and therefore estrogen-deficient, lacking its protection of the cardiovascular system.
Furthermore, the use of gender-specific cutoffs was also likely to overestimate the prevalence of LVH in females. Either of these situations explains why female hypertensives are at a higher risk of LVH. Similar to previous studies, the duration of hypertension was shown to be an independent risk factor for LVH in our study. As reported by Nardi et al, 7 the duration of hypertension can independently predict the presence of LVH. Furthermore, increasing age was closely related to the increase in LVMI, consistent with some previous studies. 6,35,36 It has been reported that LV mass increases gradually with aging in patients both with and without hypertension. 37 Our study suggested that obesity is also an independent risk factor for LVH, congruent with existing evidence. 6,8,36 Multiple mechanisms, including insulin resistance, 38 cardiac ectopic fat deposition, 39 and obesity-related increased blood volume, 40 potentially account for this association. A positive association between systolic blood pressure and the risk of LVH was observed in this study. It is generally accepted that systolic blood pressure is the most important driver of ventricular hypertrophy and concentric cardiac remodeling. 41 In order to evaluate the clinical utility of this nomogram, we performed decision curve analysis. This novel approach provides insights into the clinical outcomes based on the threshold probability, from which the net benefit can be drawn. The net benefit was calculated as the proportion of true positives less the proportion of false positives, weighing the relative hazards of forgoing treatment against the negative consequences of an unnecessary treatment. 27 The decision curve analysis demonstrated that the use of this nomogram to predict the probability of LVH added more benefit than the "treat none" or "treat all" strategies in the range of thresholds from 5% to 72%.
A well-performing LVH risk prediction tool could be beneficial for clinical decision making, enabling interventions in high-risk cohorts that target the modifiable factors, such as BMI and systolic blood pressure, to reduce the risk of LVH. Recently, the Systolic Blood Pressure Intervention Trial (SPRINT) demonstrated that intensive antihypertensive therapy is able to prevent the development of LVH. 42 Furthermore, it has been reported that combining weight loss with antihypertensive therapy could contribute to the reduction of left ventricular mass. 40 Regarding nonmodifiable factors, such as gender, age, and duration of hypertension, it is also helpful for the general public to be fully aware of the LVH risk. The nomogram developed in this study is able to identify high-risk patients using only five routine clinical parameters, which could be an attractive option for the general population, primary health clinics, or people with low socioeconomic status. In the actual clinical setting, the electrocardiogram (ECG) is a common screening tool for LVH due to its low cost and wide availability, and it is recommended as a routine examination for hypertensive patients in the Chinese hypertension guidelines. 23 Secondly, the number of cases in our study was relatively small; therefore, some true associations may have been neglected due to a lack of statistical power. Finally, it was a single-center study, and the patients were mostly selected from Fujian province, which may limit generalizability. Although the bootstrap technique was applied for internal validation, external validation in a prospective multicenter study should also be conducted in the future. In summary, our results highlight some key risk factors for LVH, including gender, duration of hypertension, age, BMI, and systolic blood pressure, among Chinese hypertensives. A user-friendly nomogram was developed based on the risk factors identified in the present study.
We believe that the utilization of this nomogram will help to identify individuals at high risk of LVH, help clinical decision making, and thus prevent adverse cardiovascular events. Nevertheless, further external validation data are required to ascertain the stability and applicability of this nomogram.

ACKNOWLEDGEMENTS
The authors would like to thank the editor and reviewers for their valuable comments.

CONFLICT OF INTEREST
The authors declare that there is no conflict of interest.

DATA AVAILABILITY STATEMENT
The data of this study are available from the corresponding author on reasonable request.
2021-03-27T06:16:37.561Z
2021-03-26T00:00:00.000
{ "year": 2021, "sha1": "2a7682884768e135c56e5679ce9e115350d3817e", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jch.14240", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "468ff006b7887dc0022e9f88cef9ba0a970498cf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219977597
pes2o/s2orc
v3-fos-license
Aquatic habitats of the malaria vector Anopheles funestus in rural south-eastern Tanzania Background In rural south-eastern Tanzania, Anopheles funestus is a major malaria vector, and has been implicated in nearly 90% of all infective bites. Unfortunately, little is known about the natural ecological requirements and survival strategies of this mosquito species. Methods Potential mosquito aquatic habitats were systematically searched along 1000 m transects from the centres of six villages in south-eastern Tanzania. All water bodies were geo-referenced, characterized and examined for the presence of Anopheles larvae using standard 350 mL dippers or 10 L buckets. Larvae were collected for rearing, and the emergent adults identified to confirm habitats containing An. funestus. Results One hundred and eleven habitats were identified and assessed from the first five villages (all < 300 m altitude). Of these, 36 (32.4%) had An. funestus co-occurring with other mosquito species. Another 47 (42.3%) had other Anopheles species and/or culicines, but not An. funestus, and 28 (25.2%) had no mosquitoes. There were three main habitat types occupied by An. funestus, namely: (a) small spring-fed pools with well-defined perimeters (36.1%), (b) medium-sized natural ponds retaining water most of the year (16.7%), and (c) slow-moving waters along river tributaries (47.2%). The habitats generally had clear waters with emergent surface vegetation, depths > 0.5 m and distances < 100 m from human dwellings. They were permanent or semi-permanent, retaining water most of the year. Water temperatures ranged from 25.2 to 28.8 °C, pH from 6.5 to 6.7, turbidity from 26.6 to 54.8 NTU and total dissolved solids from 60.5 to 80.3 mg/L. In the sixth village (altitude > 400 m), very high densities of An. funestus were found along rivers with slow-moving clear waters and emergent vegetation. Conclusion This study has documented the diversity and key characteristics of aquatic habitats of An.
funestus across villages in south-eastern Tanzania, and will form an important basis for further studies to improve malaria control. The observations suggest that An. funestus habitats in the area can indeed be described as fixed, few and findable based on their unique characteristics. Future studies should investigate the potential of targeting these habitats with larviciding or larval source management to complement malaria control efforts in areas dominated by this vector species. Background Anopheles funestus has been a major malaria vector in many east and southern African countries for several years [1][2][3][4]. In south-eastern Tanzania, they have been implicated in more than 85% of malaria transmission events across several villages [4][5][6]. Its dominance in pathogen transmission [4,7] is attributable to factors such as: (a) being predominantly anthropophilic (i.e. strong preference for blood from humans over other vertebrates) and endophilic (i.e. strong preference for biting and resting indoors than outdoors) [8,9], (b) their resistance to some of the commonly-used pyrethroid insecticides in locations such as south-eastern Tanzania [10][11][12][13][14], and (c) their superior daily survival probabilities as reflected in the higher parity rates compared to other Anopheles species [4,5,7]. The supremacy of An. funestus in malaria transmission has been observed even in areas where they occur at far lower densities compared to other malaria vectors, such as Anopheles arabiensis [4,5,15]. In such settings, the infrequent occurrence partly explains why their behaviours are relatively understudied in the field. More generally, An. funestus is also far easier to find as adults than as larvae. As a result, this species rarely features in larval surveys of Anopheles species. 
Researchers, therefore, sometimes rely on adult collections rather than larval collections to obtain enough samples for insecticide resistance testing [4], which according to the World Health Organization (WHO) protocols requires F1 offspring with synchronized age groups [16]. It has previously been suggested that an in-depth ecological understanding, followed by improved targeting of An. funestus, could potentially improve their control and significantly reduce malaria transmission in areas where the vector dominates [4]. Given the strong resistance of some An. funestus populations to insecticides commonly applied on insecticide-treated nets (ITNs) and/or indoor residual spraying (IRS) [10][11][12][13][14], supplementary measures targeting the aquatic stages of the mosquitoes are critical for more effective control of An. funestus. This requires rigorous surveys to identify and characterize preferred larval habitats for An. funestus [17]. Strategies such as targeted larviciding, a component of larval source management, could indeed significantly improve control efforts and accelerate progress towards malaria elimination, especially in communities where the aquatic habitats are fixed, few and findable [18,19]. A previous study in western Kenya reported that An. funestus prefers to oviposit in large semi-permanent water bodies containing aquatic vegetation and algae [20]. A separate study in coastal Kenya observed these species breeding in vegetated aquatic habitats that were stable and permanent, and located along river streams [21]. In Cameroon, it was demonstrated that An. funestus habitats were often found in open savannas instead of deep or degraded forests [22,23]. These habitats had greater exposure to sunlight and high temperatures, and remained productive for longer, often with peaks after the start of the dry season.
Unfortunately, in south-eastern Tanzania, where the species now dominates transmission, there have not been detailed studies of its natural aquatic habitats and responses to interventions. This situation is complicated by difficulties in colonizing the species inside laboratories, which would enable such studies. This baseline study was, therefore, aimed at identifying and characterizing the main larval habitats of An. funestus to advance knowledge of its aquatic ecology. The findings were expected to provide a basis for further investigations into improved control strategies targeting the species, and also to inform ongoing efforts for rearing this species under laboratory conditions.

Study areas

This study was conducted in six villages of Kilombero and Ulanga districts in south-eastern Tanzania (Fig. 1). Five of these villages were located at altitudes less than 300 m above sea level, while the sixth was at an altitude greater than 400 m. In Kilombero district, the study villages were Ikwambi (− 7.97927° S, 36 The study villages were selected based on the high abundance of adult An. funestus mosquitoes, as indicated by previous surveillance work done by Ifakara Health Institute (unpublished data). The annual rainfall and temperature ranges in these villages were 1200-1800 mm and 20-32.6 °C respectively. The main economic activities are crop farming (mostly rice and maize) and livestock keeping.

Larvae collection and rearing

This study was done between January and September 2018, and repeated between October and December 2019. The study villages were surveyed for the presence of aquatic habitats along transects of 1000 m, each radiating from an approximated village centroid. All identified water bodies were marked, geo-referenced, physically characterized and examined for the presence of Anopheles larvae. Standard 350 mL dippers or 10 L plastic buckets were used to sample water from the pools (Fig. 2).
When the water bodies consisted of rivers and streams, larval sampling was done along the river length over distances not exceeding 1000 m, so as to match the 1000 m transects in the main survey. Parts of the rivers with or without Anopheles larvae were similarly characterized and geo-referenced. The buckets were used in sites where it was impractical to use the dippers (e.g. habitats with depths greater than 50 cm), and also to collect the larvae for further rearing and identification. The larvae collected from different aquatic sites were transported to the insectary at Ifakara Health Institute for rearing to adults. Once in the insectary, the larvae were kept in rearing pans (32 cm diameter and 5 L holding capacity) labelled with information on the dates and places of larvae collection. The temperature in the insectary was kept at 26 °C ± 2 °C and relative humidity at 82% ± 10%. The larvae were fed with Tetramin® fish food until they developed into pupae and emerged as adult mosquitoes. Emerging adult mosquitoes were collected using a mouth aspirator, killed by freezing, and all Anopheles were identified using morphology-based identification keys developed by Gillies and Coetzee [9,24]. All identified An. funestus mosquitoes were then packed individually in 1.5 mL Eppendorf tubes with silica gel and submitted to the molecular laboratory for sibling species identification by polymerase chain reaction (PCR) assays as described by Koekemoer et al. [25]. Habitats positive for An. funestus were then identified among all the surveyed habitats.

Characteristics of aquatic habitats

Characteristics of all the aquatic habitats as well as the surrounding environments were recorded.
For the habitats, information collected included water movement (stagnant or slow), water colour (clear, coloured, or polluted), tree canopy (shade) over the habitat (none, partial, heavy), habitat size in circumference (less than 10 m, between 10 and 100 m, more than 100 m), vegetation type (none, submerged, floating, emergent), vegetation quantity (none, scarce, moderate, abundant), algae quantity (none, scarce, moderate), water depth (less than 10 cm, between 10 and 50 cm, more than 50 cm), distance from the nearest homes (less than 100 m, between 100 and 500 m, more than 500 m) and water type (semi-permanent, permanent). The habitats were considered temporary, semi-permanent or permanent if they retained water for less than 3 months, 3-9 months or throughout the year, respectively. Additionally, the physicochemical characteristics of water in the larval habitats were assessed in four of the six villages, namely Tulizamoyo, Ikwambi, Kisawasawa and Kilisa. Parameters assessed included: water temperature (°C), pH (scale of 0-14), conductivity (Siemens/m), total dissolved solids (mg/L) and turbidity (nephelometric turbidity units, measured using a 2100Q portable turbidity meter). Assessments of these parameters were conducted in the field sites immediately after the collection of larvae from the habitats. Lastly, nitrate levels (milligrams per litre) were also analysed by a spectrophotometric method. To do this, one-litre water samples from each habitat in the study sites were collected, stored in a cooler box and sent to the laboratory at Ifakara Health Institute for analysis within 24 h post collection.

Data analysis

Analysis was done using the open-source R programming language [26]. A total of 16 environmental variables were used to identify the main predictors of the presence of An. funestus larvae in the study villages. First, each predictor was assessed individually using univariate logistic regression to evaluate its effect on the presence of An.
funestus larvae. Second, all the variables were included in the final model to assess their effects on the presence of An. funestus larvae. Odds ratios and their 95% confidence intervals are reported, and differences were considered statistically significant at P-values < 0.05.

Results

Adult mosquitoes that emerged from the different sampled habitats consisted of: An. funestus sensu lato (s.l.) (64%; n = 696), Culex spp. (24.5%; n = 267), Anopheles coustani (6.2%; n = 67), Anopheles gambiae s.l. (4.3%; n = 47) and other species (1%; n = 11). PCR identification of the 501 An. funestus group specimens revealed that 53.3% (n = 267) were An. funestus sensu stricto (s.s.), 28.7% (n = 144) were Anopheles rivulorum, 11.8% (n = 59) were Anopheles leesoni and 6.2% (n = 31) were unidentified due to non-amplification in the PCR assays. An. funestus s.s. commonly shared habitats with the other sibling species, including An. leesoni and An. rivulorum. Table 1 summarizes the different environmental variables in aquatic habitats associated with the presence of An. funestus and other mosquito species. These variables were assessed individually and later combined in the final model to see how they influence the presence of An. funestus larvae. Results from univariate logistic regression showed that permanent habitats with emergent vegetation were strongly associated with the presence of An. funestus larvae (P < 0.01). In the final, multivariate model, stagnant or slow-moving water did not significantly affect the presence of An. funestus larvae in the observed aquatic habitats (Table 2). However, heavily shaded aquatic habitats (with dense tree canopy), especially along the rivers, were more likely to harbour An. funestus larvae than others (Table 2). Furthermore, aquatic habitats with a depth greater than 50 cm and vegetation were significantly associated with the presence of An. funestus larvae (Table 2).
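The odds ratios and 95% confidence intervals reported here come from logistic regression; for a single binary predictor they can be reproduced directly from a 2x2 contingency table. A minimal sketch of the Wald-interval calculation, using hypothetical counts rather than the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = habitats with the characteristic, positive for larvae
    b = habitats with the characteristic, negative
    c = habitats without the characteristic, positive
    d = habitats without the characteristic, negative
    """
    or_ = (a * d) / (b * c)
    # standard error of log(OR)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts for illustration only (not from the study's tables):
or_, (lo, hi) = odds_ratio_ci(30, 10, 20, 40)  # OR = 6.0
```

An association would be reported as significant at P < 0.05 when the 95% interval excludes 1.0.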
Habitat characteristics

At higher altitudes, such as in Ruaha village, which was higher than 400 m above sea level, all the An. funestus larvae collected were from the rivers. The river sections acting as breeding sites for An. funestus had slow-moving and clear waters near their banks. They were characterized by abundant emergent vegetation and water depths of greater than 50 cm (Fig. 3), and were within 100 m of human dwellings. The physical characteristics at these altitudes were the same as in the other habitats of An. funestus found below 300 m altitude, i.e. the natural perennial ponds, or small spring-fed water pools with well-defined areas (Figs. 4 and 5). Table 3 shows the median values of physicochemical parameters in larval habitats of different mosquito species. The pH in all An. funestus larval habitats was weakly acidic, ranging from 6.5 to 6.7. The concentration of total dissolved solids (TDS) was highest in the water from An. funestus habitats compared to the others. The association between these physicochemical characteristics and the occurrence of An. funestus was, however, not statistically significant at P < 0.05 (Table 4).

Discussion

Although An. funestus is among the most important vectors of malaria in Africa, little is known regarding its larval ecology and development. This crucial gap needs an urgent solution, but is perpetuated by the inability of most mosquito biologists to create laboratory colonies of this vector species. Understanding the basic environmental parameters that influence mosquito breeding and oviposition can improve the planning, development and deployment of new interventions to control malaria transmission [27]. This study identified and characterized larval habitats of An. funestus in south-eastern Tanzanian villages of Ulanga and Kilombero districts, where this mosquito species has been implicated in most malaria-infective bites [4,6]. The study examined more than 100 potential habitats across six villages and identified three main habitat types.
First were small water wells with well-defined edges that were spring-fed, some of which were also used by locals as domestic water sources (Fig. 4b). These habitats were often occupied by multiple species of the An. funestus group, and in some cases they were shaded by large trees. The second type of habitat was medium-sized ponds, for which the central part retained water for all or most of the year. These habitats often had surface vegetation (Fig. 4a) and were occupied by multiple other Anopheles species, such as An. arabiensis. Third were the riverside habitats, consisting of slow-moving waters on the rivers or river tributaries, also with vegetation (Fig. 4c). These habitats were mostly found at altitudes above 400 m above sea level, unlike the other two habitat types, which were more common at lower altitudes below 300 m (Fig. 5).

Fig. 3: Riverside aquatic habitat for Anopheles funestus mosquitoes, as identified in the study areas in rural south-eastern Tanzania. At altitudes above 400 m, these were the only An. funestus habitats identified.

Fig. 4: Typical larval habitats of Anopheles funestus mosquitoes in lower-altitude areas (a: medium-sized ponds that retain water at the centre most of the year and have emergent surface vegetation; b: small spring-fed wells with well-defined perimeters) and at higher altitudes (c: slow-moving waters at the riverside with emergent vegetation).

In summary, An. funestus in this area appears to prefer permanent and semi-permanent aquatic habitats with stagnant or slow-moving waters, emergent vegetation (e.g. algae on swamp surfaces), clear waters at depths exceeding 50 cm and proximity to human dwellings. This study provides a basis for designing future surveys and control operations targeting malaria, especially in places such as south-eastern Tanzania where An. funestus and An. arabiensis play a major role in malaria transmission [5,6,28-30].
This study has suggested that permanent or semi-permanent habitats characterized by emergent vegetation play a major role in the ecology of An. funestus. The findings are consistent with past evidence from earlier investigations in Kenya [20,21,31]. Although this study did not assess the seasonality of An. funestus larval densities in the different habitats, the observed preference for permanent and semi-permanent water bodies explains the known seasonality of its adult densities in the same study villages as observed in recent entomological surveys [4,30]. The adult densities of An. funestus tend to peak after the rains, just before the dry seasons begin, and are sustained by the large permanent water bodies [20]. Although no detailed studies have been done in this area targeting An. funestus aquatic habitats, early accounts by Gillies and DeMeillon [9], as well as limited surveys done nearly 50 years ago in the Ifakara area (which neighbours the current study site), already suggested an association between the late peaks in An. funestus densities and the large perennial habitats [32]. Although there was no clear statistical association, the An. funestus habitats had depths greater than 50 cm and were located within 100 m of human dwellings. This is likely due to the anthropophagic nature of these mosquitoes [33], and further explains the importance of this species in malaria transmission in these areas. Other Anopheles species, such as An. gambiae, which breed in open sunlit stagnant water pools [9,20,34], are also highly anthropophagic and generally occur near human habitations [35]. The ability of An. funestus to breed in river waters is not unique to Tanzania, but has also been demonstrated in other places such as coastal Kenya [9,21], and may be due to the higher levels of aeration and dissolved oxygen in such waters. Additional investigations are therefore required to further examine these details.
A similar ecological niche has been described for Anopheles pseudopunctipennis in South America, which was successfully controlled by clearing the river waters of algal blooms [36]. While it is unclear whether clearing the identified habitats of emergent vegetation would be suitable for control of An. funestus in Tanzania, it will be important to investigate it as a potential environmentally-friendly approach in which community members could be engaged to achieve effective disease prevention. Besides, it will be important to ascertain the importance of these habitat types across multiple sites and settings. For instance, in one area in the north of Tanzania, Dida et al. [37] found no mosquito larvae near the main rivers, suggesting the dominant malaria vectors may be breeding elsewhere in such settings. Understanding the physicochemical characteristics of mosquito larval habitats is also important for understanding their overall ecological needs and for assessing options for manipulation. It is probable that the physicochemical parameter levels observed in this study were influenced by agricultural practices and pesticide use, which is widespread in the valley [38]. Adult mosquitoes emerging from these habitats might become more resistant to insecticides with the same chemical formulations as those used in mosquito vector control [35,39,40]. Habitats most productive of An. funestus were those in higher-altitude villages, which were probably less affected by agricultural insecticidal deposits [41,42] than habitats at the floor of the valley. Nonetheless, the mosquito species from the same study villages are known to be already strongly resistant to insecticides used for public health, including pyrethroids and carbamates [43], a situation potentially related to the widespread use of pesticides in both agriculture and public health. Similar to other studies on Anopheles mosquitoes, the An. funestus habitats in this study area had weakly acidic pH [40,44].
The main habitats had pH ranging from 6.5 to 6.7, turbidity from 26.6 to 54.8 NTU and total dissolved solids from 60.5 to 80.3 mg/L, all of which are similar to most observations of habitats of Anopheles mosquitoes in previous studies [45,46]. Anopheles funestus mosquitoes were collected from habitats with different concentrations of nitrate, but it remains unclear whether this might influence larval development as earlier described [40]. One limitation of this study was that some characteristics, such as water temperature, though included in this analysis, are subject to change during the day. Future studies should consider laboratory investigations and also the use of field data collected multiple times a day to determine the suitable temperature ranges and other physicochemical characteristics for optimal survival of this mosquito species.

Conclusion

Overall, this study has provided a basic description of An. funestus habitats in the rural south-eastern Tanzanian districts of Ulanga and Kilombero. There were three main habitat types occupied by An. funestus, namely: (a) small spring-fed pools with well-defined perimeters, (b) medium-sized natural ponds retaining water most of the year, and (c) slow-moving waters along river tributaries, particularly important at higher altitudes at the edge of the valley. The habitats generally had clear waters with emergent surface vegetation, depths greater than 0.5 m and distances less than 100 m from human dwellings. They were permanent or semi-permanent, retaining water most of the year. Effective control measures for this species should be informed by an understanding of its behaviour and ecology, including the characteristics of its aquatic habitats, so that it can be targeted during its immature stages. Given the rarity of the An. funestus habitats and the observed characteristics, these habitats fit the description of being fixed, few and findable.
Future studies should, therefore, investigate the potential of using larviciding or larval source management to improve malaria control in settings where An. funestus dominates.
Classes 1 and 2 integrons in faecal Escherichia coli strains isolated from mother-child pairs in Nigeria

Background

Antimicrobial resistance among enteric bacteria in Africa is increasingly mediated by integrons on horizontally acquired genetic elements. There have been recent reports of such elements in invasive pathogens across Africa, but very little is known about the faecal reservoir of integron-borne genes.

Methods and findings

We screened 1098 faecal Escherichia coli isolates from 134 mother-child pairs for integron cassettes, by PCR using primers that anneal to the 5' and 3' conserved ends of the cassette regions, and for plasmid replicons. Genetic relatedness of isolates was determined by flagellin and multi-locus sequence typing. Integron cassettes were amplified in 410 (37.5%) isolates and were significantly associated with resistance to trimethoprim and multiple resistance. Ten cassette combinations were found in class 1 and two in class 2 integrons. The most common class 1 cassette configurations were single aadA1 (23.4%), dfrA7 (18.3%) and dfrA5 (14.4%). Class 2 cassette configurations were all either dfrA1-satI-aadA1 (n = 31, 7.6%) or dfrA1-satI (n = 13, 3.2%). A dfr cassette was detected in 294 (31.1%) of trimethoprim-resistant strains and an aadA cassette in 242 (23%) of streptomycin-resistant strains. Strains bearing integrons carried a wide range of plasmid replicons, of which FIB/Y (n = 169; 41.2%) was the most frequently detected. Nine isolates from five different individuals carried the dfrA17-aadA5-bearing ST69 clonal group A (CGA). The same integron cassette combination was identified from multiple distinct isolates within the same host and between four mother-child pairs.

Conclusions

Integrons are important determinants of resistance in faecal E. coli. Plasmids in integron-containing strains may contribute to dispersing resistance genes.
There is a need for improved surveillance for resistance and its mechanisms of dissemination, and for better understanding of the persistence and mobility of resistance genes in community and clinical settings.

Introduction

The level of antibiotic resistance among pathogenic and commensal bacteria has steadily increased and has become a global health concern [1]. Much of the problem has been attributed to mobile genetic elements such as plasmids and transposons [2].
Integrons are genetic elements that also contribute to the prevalence and horizontal transmission of antibiotic resistance [3,4,5]. They possess a site-specific recombination system for capturing, expressing and exchanging gene cassettes [6]. They are characterized by conserved features, namely an intI gene encoding an integrase, a recombination site (attI), a promoter (P) and the ability to integrate gene cassettes comprising a single open reading frame (orf) and a specific recombination site, attC [4,7]. Intracellularly, gene cassettes exist either in a linear form inserted into an integron or as a free circular cassette that is not dependent on an integron [7,8]. At least nine classes of integrons have been described, and class 1 and 2 integrons are those most commonly associated with antibiotic resistance in clinical isolates [9]. The capture of resistance genes is especially important when these integrons are disseminated by broad-host-range conjugative plasmids or transposons. So far, more than 8000 gene cassette arrays have been identified in class 1 integrons (http://integrall.bio.ua.pt/). Yet fewer than 10 array compositions are commonly reported [9,10,11]. These commonly reported arrays are found in a wide variety of hosts and environments, highlighting a high level of horizontal dissemination of these elements among bacterial populations and species. Clonal expansion also contributes to the current interregional spread of integron-carrying bacterial species. However, the extent to which each process contributes is unknown [12]. In Nigeria, there have been reports of integrons in strains isolated from clinical and environmental settings [13,14,15,16]. However, very little is known about the faecal reservoir of integron-borne genes.
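The cassette mechanics described above (IntI-mediated integration at attI, and excision as free circular cassettes) can be illustrated with a toy model; the class and cassette names here are purely illustrative, not part of the study:

```python
# Toy model of an integron's cassette array. IntI-mediated site-specific
# recombination inserts incoming cassettes at attI, so the most recently
# acquired cassette sits nearest the cassette promoter (position 0) and
# older cassettes are pushed distally.

class Integron:
    def __init__(self):
        self.cassettes = []  # ordered from attI (promoter-proximal) outward

    def integrate(self, cassette):
        # attI x attC recombination: insert at the promoter-proximal end
        self.cassettes.insert(0, cassette)

    def excise(self, cassette):
        # IntI can also excise a cassette as a free circular intermediate,
        # which may re-integrate here or in another integron
        self.cassettes.remove(cassette)
        return cassette

integron = Integron()
for c in ["aadA1", "dfrA7"]:  # dfrA7 acquired after aadA1
    integron.integrate(c)
# array is now dfrA7-aadA1, with dfrA7 closest to the promoter
```

The model captures why recently acquired (and recently selected) cassettes tend to be the most strongly expressed, since expression from the common promoter decreases with distance.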
In this study, we investigated the prevalence of classes 1 and 2 integrons, their association with antimicrobial resistance and how resistance cassettes are disseminated in faecal Escherichia coli strains isolated from children with diarrhoea and their apparently healthy mothers.

Isolation and identification of Escherichia coli strains

All faecal samples were inoculated onto Eosin Methylene Blue (EMB) agar plates (Oxoid Ltd., Basingstoke, Hampshire, England) and incubated for 24 hours aerobically at 37 °C. From each sample, up to five morphologically distinct colonies typical of E. coli were selected and identified by standard biochemical testing [17].

DNA extraction

All isolates were grown overnight in 5 ml of peptone broth (Oxoid, England). A 1 ml aliquot of the culture was centrifuged at 10,000 rpm for two minutes in a microcentrifuge (BioRad, USA). DNA was extracted from each isolate using the Promega Wizard genomic extraction kit (Promega Corporation, Madison, USA) according to the manufacturer's instructions.

Detection of class 1 and 2 integrons

All the isolates were screened for class 1 and 2 integrons by polymerase chain reaction (PCR). Class 1 integrons were detected and amplified with the Lévesque 5CS and 3CS primers, which bind the 5' and 3' conserved ends, respectively [19]. Class 2 integrons were detected and amplified with the White hep74 and hep51 primers, which hybridize to attI2 and orfX, respectively [20] (S1 Table). The following PCR programme, adapted from that proposed by Lévesque et al. (1995) with modifications, was used to amplify the variable regions of both class 1 and class 2 integrons: following a 2-minute hot start at 94 °C, 40 cycles of denaturation at 94 °C for 30 s, annealing at 57 °C for 30 s and extension at 72 °C for 1 min per kb were performed. E. coli strains 042 (carrying a class 1 integron) and 17-2 (carrying a class 2 integron) were used as positive controls in the integron PCRs, and E. coli K-12 strain DH5α, lacking integrons, was used as a negative control.
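As a rough aid, the cycling programme described above can be encoded as a simple schedule to estimate total run time for a given amplicon size. This is a hypothetical helper that ignores ramp times and the final hold, not part of the study's protocol:

```python
def pcr_runtime_minutes(amplicon_kb, cycles=40):
    """Estimate run time of the integron PCR programme (minutes)."""
    hot_start = 120                # 2 min hot start at 94 C
    denature, anneal = 30, 30      # 30 s at 94 C, 30 s at 57 C
    extend = 60 * amplicon_kb      # extension at 72 C, 1 min per kb
    total_s = hot_start + cycles * (denature + anneal + extend)
    return total_s / 60

# e.g. a 1 kb cassette region: 2 + 40 * 2 = 82 minutes
runtime = pcr_runtime_minutes(1.0)
```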
Five microlitres of each PCR reaction were electrophoresed on a 1% agarose gel in 1X Tris Acetate EDTA buffer. Gels were stained with ethidium bromide for 15 minutes, destained in distilled water for 30 minutes, and visualized under ultraviolet (UV) light.

Identification of resistance cassettes and cassette combinations in integrons

Unique resistance cassettes and cassette combinations within class 1 and 2 integrons were delineated by performing restriction fragment length polymorphism (RFLP) analysis with MboI (Biolab, England) and AluI (Biolab, England) as previously described [21]. Up to three amplicons from each unique profile were ligated into a pGEM-T Easy vector (Promega, USA) and sequenced. The identity of contained cassettes was determined using BLAST [22].

Flagellin typing

The entire coding sequence of the variable chromosomal gene fliC was amplified by PCR using the primers F-FLIC (5'-ATG GCA CAA GTA ATT AAT AAC CAA C-3') and R-FLIC (5'-CTA ACC CTG CAG CAG AGA CA-3') [28]. The cycling conditions were 30 s at 95 °C, 1 min at 60 °C, and 2 min at 72 °C for 35 cycles. The PCR product was digested with RsaI (Promega, USA) as described by Fields et al. (1997) [29]. The restriction profiles were compared after electrophoresis on a 1.5% agarose gel. The H-type of flagellin RFLPs of interest was determined by sequencing and BLAST analysis.

Multi-locus sequence typing (MLST)

Greater resolution of the similarity between chromosomes of selected isolates was provided by multi-locus sequence typing. This was done by amplifying seven housekeeping genes, namely adenylate kinase (adk), fumarate hydratase (fumC), gyrase B (gyrB), isocitrate dehydrogenase (icd), malate dehydrogenase (mdh), adenylosuccinate synthetase (purA) and recombinase A (recA), with primers described by Wirth et al. [30] (S2 Table). Cycling conditions were as follows: 95 °C for 2 minutes, followed by 35 cycles of 94 °C for 1 minute, 56 °C for 1 minute and 72 °C for 1 minute. A final 3-minute elongation step was performed at 72 °C.
Amplified DNA products of the housekeeping genes were sequenced and alleles were assigned using the online E. coli MLST database at http://www.mlst.net.

Statistical analysis

The Chi-square (χ²) and Fisher's exact tests (two-tailed) of the R statistical software package (version 3.3.0) were used to determine the statistical significance of the data. The Pearson product-moment correlation coefficient, r, calculated using SPSS (SPSS, Inc., Chicago, Illinois), was used to identify significant correlations. All reported p-values were two-sided and a p-value of less than or equal to 0.05 was considered statistically significant.

Subjects and faecal Escherichia coli isolates

A total of 1098 Escherichia coli strains were isolated from the stool samples of 134 mother and child pairs recruited into the study. These comprised 542 isolates from children aged up to 60 months (mean 13.36 ± 2.12 months) and 556 isolates from their mothers, aged 15-46 years (mean 25.88 ± 7.07 years).

Antimicrobial resistance rates in the faecal Escherichia coli isolates

As shown in Table 1, the majority of the E. coli isolates from both mothers and their children were resistant to streptomycin (n = 1050, 95.8%), sulphonamide (n = 1038, 94.2%), ampicillin (n = 1014, 92.5%), tetracycline (n = 1025, 93.3%) and trimethoprim (n = 947, 85.9%), but not to ciprofloxacin (n = 94, 8.4%). Isolates that were resistant to ciprofloxacin (χ² = 6.498; p = 0.009) were more frequently isolated from healthy mothers than from their children. The differences in resistance to other antimicrobials were not significant between isolates obtained from children and those from mothers.

Plasmid replicon types and integrons

Plasmid replicons belonging to all incompatibility groups sought except A, C, K, B and W were identified among integron-containing strains. (The protocol we used does not delineate IncY from the more common IncFIB replicon.)
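The two-tailed Fisher's exact test named in the statistical analysis can be sketched in pure Python for a 2x2 table, by summing hypergeometric probabilities of all tables (with the same margins) at most as likely as the observed one. The counts below are hypothetical, not taken from the study's tables:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact P-value for the 2x2 table [[a, b], [c, d]]."""
    r1, r2 = a + b, c + d          # row totals
    c1, n = a + c, a + b + c + d   # first column total, grand total

    def p(x):
        # hypergeometric probability of a table with top-left cell x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = p(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # sum over all attainable tables no more probable than the observed one
    # (small tolerance guards against floating-point ties)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts for illustration only:
pval = fisher_exact_two_sided(8, 2, 1, 5)
```

This matches the usual definition implemented by standard statistics packages for small tables, where the chi-square approximation is unreliable.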
As shown in supplementary Table 4, the majority of the integron-containing isolates (n = 330, 80.5%) carried one or more of ten different plasmid replicon types, and at least 133 carried more than one type. IncFIB/Y (n = 169; 41.2%) was the most frequently detected plasmid replicon in the isolates (S4 Table). Pearson correlations did not reveal any association between integron cassette combinations and specific plasmid replicons, pointing to a role for plasmids other than those sought in this study and/or transposons in disseminating integrons (Fig 1). In 34 instances, the same integron cassette array was detected in at least one E. coli isolate from a child as well as an isolate from that child's mother (Table 4), leading us to hypothesize transmission of integron-bearing strains and/or mobile elements within families.

Diversity among strains bearing the dfrA5 cassette

We have previously identified a widely disseminated transposon as being responsible for disseminating integron-borne dfrA7 cassettes among commensal E. coli isolates in Nigeria and elsewhere in Africa [21]. dfrA7 was highly prevalent in the current study and associated with a wide variety of plasmid replicons (S4 Table). A single dfrA5 cassette was the second most common single dfr cassette in this study and has been associated with successful invasive non-typhoidal Salmonella lineages across Africa [31]. As the context for this cassette among commensals remains unknown, we used a simple PCR marker to define one chromosomal feature and one plasmid feature in each of the dfrA5-bearing isolates obtained in this study, to determine whether clonal expansion and/or successful plasmids showed associations with the presence of dfrA5. As shown in Figs 2 and 3, fourteen flagellin types were identified and arbitrarily labelled A-N. Type C (23%) was the most frequently identified, followed by type E (16%) and type B (11%).
Multiple distinct isolates with the dfrA5 cassette were identified in the same host in six instances. In three of those instances (pairs 246, 250 and 289), the same fliC allele was not identified in the mother and the infant, whereas in the other three (pairs 130, 174 and 195), identical fliC alleles were found, although no single allele type was seen in more than one pair. Two of the types seen in pairs were the most common overall (C and E). Of interest, for four of the six pairs with dfrA5-positive isolates, isolates from both the mother and the child carried FIB/Y replicons; this replicon was found in 42.4% of dfrA5 isolates overall (S5 Table).

Resistant pandemic lineage in the faecal reservoir of mothers and infants

Fourteen of the 1098 E. coli strains isolated from mother and child pairs carried the dfrA17-aadA5 cassette array that is normally found in clonal group A (CGA), a multi-resistant E. coli lineage belonging to multilocus sequence type (MLST) 69 [32]. These comprised 12 strains with only the dfrA17-aadA5 cassette array and two strains with both dfrA17-aadA5 and dfrA1-sat1, a class 2 integron cassette array (Table 3). As CGA possesses a chromosomally located dfrA17-aadA5-bearing integron, we sought to determine whether any of the 14 dfrA17-aadA5-positive strains identified in this study belong to this clone. CGA strains carry an H18 flagellin (pattern A in our study) and belong to MLST type 69, whose allele profile includes fumC69. We identified the fliC, fumC and recA allele types for the dfrA17-aadA5-positive strains. As shown in Table 5, nine isolates from five different individuals carried the CGA-associated H18 allele, and typing under the MLST scheme [26] confirmed that all nine strains belong to ST69. In one instance, independent dfrA17-aadA5-positive isolates were obtained from a mother and her infant.

Discussion

This study examined integron cassette content in faecal E. coli from children with diarrhoea and their apparently healthy mothers.
Integrons are gene exchange systems that are known to play a significant role in the acquisition and dissemination of antimicrobial resistance genes and to be selected by antimicrobial pressure [6]. While our study had the limitation of only detecting cassette regions under 4 Kb in size with intact 5' and 3' ends, we still amplified integron cassettes from as many as 37.3% of isolates. Our detection of 340 (31%) class 1 integron-containing strains and 44 (4%) class 2 integron-containing strains is comparable to data from studies using a similar methodology, which reported integron prevalences above 30% [21,33,34,35,36]. Integrons are clearly widespread in faecal E. coli in this environment and may be responsible for the high rates of resistance observed in this study. This is true for trimethoprim, for which a significant association between resistance and amplification of integron cassettes was seen and for which eight different cassette arrays including a resistance-conferring dfrA gene were detected. The preponderance of cassette arrays carrying a dfrA gene may be associated with the recent and current intensive use of trimethoprim for many common infections in the study environment, as well as to prevent opportunistic infections in HIV-positive patients [21,37,38,39]. Resistance to chloramphenicol and nalidixic acid was also associated with the presence of integrons, even though cassettes conferring resistance to these agents were not recovered. This points to integrons as markers of multiresistant strains and to the possibility that integrons are physically linked to other resistance genes. In contrast to class 2 integrons, the diversity of class 1 integron cassettes/cassette combinations was low compared with studies performed elsewhere [40,41]. The low diversity of class 1 cassettes led us to hypothesize that a few strains or mobile elements may be disseminating class 1 integron-mediated resistance.
Integrons are not themselves mobile but can be rapidly disseminated when contained within mobile elements such as plasmids. We detected known plasmid replicons in a large proportion (81.7%) of the integron-containing isolates examined. In particular, IncF plasmids (predominantly IncFIB/Y replicons, but also IncFIA and IncFIC) were commonly found in association with the integron-borne cassettes identified. IncF plasmids represent one of the most prevalent incompatibility types and have been identified worldwide in Enterobacteriaceae of different origins and sources [42,43]. These plasmids appear to contribute significantly to the dissemination of antibiotic resistance in Enterobacteriaceae, and some have been associated with specific genes conferring resistance to β-lactams, quinolones, and aminoglycosides [42,44,45,46]. Therefore, the preponderance of IncFIB in integron-containing isolates may indicate its contribution to the dissemination of integron-borne cassettes in this environment. The study also points to a significant prevalence of IncO plasmids, which have not previously been described from Nigeria. Of all the cassettes detected in integrons, aadA1 (n = 96, 23.4%), which encodes resistance to spectinomycin/streptomycin, was the commonest. This is in line with various reports of a high prevalence of aadA1 among integron-positive isolates in the literature [20,47]. It was followed by dfrA7 (n = 75, 18.3%), a cassette that has also been reported worldwide as a commonly identified cassette in integrons [48,49,50,51]. In our previous research, focused on different populations and strains, we demonstrated that a single dfrA7 cassette is the predominant amplifiable cassette combination from fecal E. coli in Nigeria and elsewhere in Africa [21]. This was also the case in this study. Previously, we found that a widely disseminated transposon accounted for the high frequency of this particular allele [21].
While there were seven other dfr-containing configurations, the second most common, a single dfrA5 cassette, was also greatly over-represented in the study. Kingsley et al. [51] and Okoro et al. [31] have reported this cassette in Salmonella enterica Typhimurium in the context of a transposon similar to the one we found associated with dfrA7. S. Typhimurium strains carrying this transposon represent a successful lineage that has expanded at multiple sites on the African continent. We wanted to know whether the over-representation of dfrA5 in this study was due to clonal expansion of one or a few E. coli lineages, or to mobility of the cassette through the commensal flora by virtue of a transposon or other mobile element. Flagellin typing of the 59 isolates that carried a single dfrA5 cassette revealed that while some flagellin types were very common, none represented more than 23% of the set, and there were 14 flagellin types overall. As each flagellin type is seen in multiple E. coli lineages, it is unlikely that a significant expansion of any dfrA5-bearing clone has occurred. Dissemination of dfrA5 across the commensal flora could be due to a successful plasmid or to a more modular element, such as a transposon, that can associate with different plasmids or the chromosome [52]. In the former case, a single plasmid marker would be over-represented in the set. However, PCR screening for eleven plasmid replicons detected replicons in 49 of the dfrA5 strains, and different replicons were associated with different flagellin types. Altogether, the data point to diverse strain backgrounds and possible over-representation of FIB/Y plasmids in integron-bearing strains, but little indication of clonal expansion of a distinct dfrA5-bearing lineage. The dissemination of dfrA5 could also be due to a more modular mechanism, potentially similar to what we previously observed for dfrA7. Venturini et al.
[52] recently reported a role for IS26 elements in disseminating dfrA5, which could very well be at play in our own study environment. In addition to pointing to flexible contexts for the more common integron arrays, this study also provided some insight into the circulation of a successful pandemic lineage, the dfrA17-aadA5-bearing ST69 CGA, in the faecal microflora. Multiresistant E. coli belonging to multilocus sequence type (MLST) 69 have been termed CGA and implicated in invasive infections in different parts of the world, including Nigeria [32]. The epidemiology of CGA in Nigeria is understudied, since pandemic lineages are rarely sought here. Because CGA possesses a chromosomally located dfrA17-aadA5-bearing integron, we examined whether any of the 14 dfrA17-aadA5-positive strains identified in this study belonged to this clone. Likely members of this lineage represented a small minority of the integron-bearing strains detected in this study. Verified ST69 strains were detected in 5 (1.9%) of 264 individuals, including both members of one mother-infant pair. Our data suggest that CGA may be shared among individuals in the same household and that its presence may facilitate the dissemination of the dfrA17-aadA5 cassette in this environment. Conclusions In conclusion, this study reveals the presence of integrons in fecal E. coli isolated from apparently healthy mothers and their sick children in Nigeria. The identified integrons contain antibiotic resistance gene cassettes, and while the variety of cassette combinations is very limited, the backgrounds of the strains carrying these elements and the repertoire of plasmids associated with them are reasonably diverse. The detection of different replicons in association with different flagellin types among dfrA5 strains points to dissemination by a successful plasmid or by a more modular element that can be borne on plasmids or located on the chromosome.
In view of the findings of this study, there is a need for improved surveillance, which can provide information on the persistence and mobility of resistance genes between community and clinical settings. Supporting information S1
2018-04-03T02:26:41.745Z
2017-08-22T00:00:00.000
{ "year": 2017, "sha1": "59ad524ff4cc6dbf469617c4d182751a78b39da1", "oa_license": "CC0", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0183383&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "59ad524ff4cc6dbf469617c4d182751a78b39da1", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119253780
pes2o/s2orc
v3-fos-license
AC Magnetic Field Sensing Using Continuous-Wave Optically Detected Magnetic Resonance of Nitrogen Vacancy Centers in Diamond Nitrogen-vacancy (NV) centers in diamond are considered as sensors for detecting magnetic fields. Pulsed optically detected magnetic resonance (ODMR) is typically used to detect AC magnetic fields; however, this technique can only be implemented after careful calibration that involves aligning an external static magnetic field, measuring continuous-wave (CW) ODMR, determining the Rabi frequency, and setting the microwave phase. In contrast, CW-ODMR can be implemented simply by continuous application of a green CW laser and a microwave field. In this letter, we report a method that uses NV centers and CW-ODMR to detect AC magnetic fields. Unlike conventional methods that use NV centers to detect AC magnetic fields, the proposed method requires neither a pulse sequence nor an externally applied DC magnetic field; this greatly simplifies the procedure and apparatus needed to implement it. The method provides a sensitivity of 2.5 {\mu}T/Hz$^{1/2}$ at room temperature. Thus, this simple alternative to existing AC magnetic field sensors paves the way for a practical and feasible quantum sensor. To demonstrate a magnetic field sensor, either a pulsed ODMR technique or a CW-ODMR technique may be used. The pulsed ODMR technique has been applied to detect both AC and DC magnetic fields 3,16,17 . Although the pulsed ODMR technique provides better sensitivity than the CW-ODMR technique, it requires careful calibration before it can detect a magnetic field. This calibration typically involves aligning an external static magnetic field, measuring CW-ODMR, observing Rabi oscillations (to determine the Rabi frequency), controlling the microwave phase, and constructing a pulse sequence.
Pulsed ODMR can detect high-frequency magnetic fields by narrowing the pulse interval; although this is technically possible, it is not easy in view of the coherence time of the NV center and the cost of high-speed control devices. In contrast, the CW-ODMR technique can be used to detect DC magnetic fields or low-frequency (e.g., kHz) AC magnetic fields, and it is a more convenient technique because it only requires the continuous application of microwaves and an optical laser. Although the sensitivity of CW-ODMR is currently lower than that of pulsed ODMR, its simple experimental requirements have led many researchers to use it for practical and feasible magnetic field measurements. To extend the applications of the CW-ODMR method, we developed a method to measure AC magnetic fields up to MHz frequencies using CW-ODMR with NV centers in diamond. The CW-ODMR method is already being used to measure AC magnetic fields in the kHz frequency range; in that technique, magnetic fields are applied to exploit the two-level nature of NV centers [18][19][20] . In contrast, in the present work, we use the spin-1 properties of NV centers to measure AC magnetic fields with MHz frequencies. Three energy eigenstates exist in the ground state manifold of NV centers, all of which are used for magnetic field sensing. The lowest energy eigenstate |0〉 lies about 2.87 GHz below the two higher energy eigenstates, which themselves have an energy difference of the order of MHz. The idea behind the proposed method is to use this MHz transition frequency to detect AC magnetic fields while the lowest energy eigenstate is continuously excited by the continuous microwave radiation. Note that the CW-ODMR technique is compatible with CCD-based techniques that have slow camera frame rates. Since CCD cameras detect a wide field, the magnetic field information in diamond may be collected over a wide area in a single measurement.
This allows the magnetic field distribution to be rapidly acquired because, unlike other techniques, the magnetic field in diamond does not need to be measured point by point. However, a potential problem of the CCD-based scheme is the slow camera operation time (from 100 Hz to 1 kHz). Since the pulse repetition rate exceeds a few MHz for typical AC magnetic field sensing, sophisticated techniques such as the use of optical shutters are required 6,9 . Conversely, because the CW-ODMR technique does not involve such fast operations, our AC magnetic field sensor provides a way to adopt the CCD-based technique with a much simpler experimental setup. Since the CCD-based setup can increase the measurement volume of NV centers, the signal from the NV centers is enhanced and highly sensitive sensing becomes possible. Furthermore, since no external static magnetic field is used, the measurement volume can be increased without concern for the uniformity of a static magnetic field. We start by explaining the theory behind both the conventional methods and the proposed method. The Hamiltonian of an NV center with no external magnetic field is given as H_NV = D Ŝ_z² + E_x(Ŝ_x² − Ŝ_y²) + E_y(Ŝ_x Ŝ_y + Ŝ_y Ŝ_x), where Ŝ is a spin-1 operator for the electron spin, D is the zero-field splitting, and E_x (E_y) is the strain in the x (y) direction. Without loss of generality, we set E_y = 0 by defining the x axis to pass through the NV center in the direction of the strain. Throughout this letter, we set ħ = 1. The ground state is |0〉, and we define the two higher energy eigenstates as |D〉 = (|1〉 − |−1〉)/√2 and |B〉 = (|1〉 + |−1〉)/√2, with eigenenergies D − E_x and D + E_x, respectively. With zero external magnetic field, two dips appear around 2.87 GHz in CW-ODMR, which is indicative of externally driven transitions from the ground state |0〉 to the higher energy eigenstates |B〉 and |D〉 [21][22][23] .
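The level structure just described can be cross-checked numerically. The following sketch (NumPy; the values of D and E_x are illustrative, in MHz, and are not fitted to the sample) diagonalizes the spin-1 Hamiltonian with E_y = 0 and recovers the eigenenergies 0 and D ± E_x:

```python
import numpy as np

# Spin-1 operators in the |+1>, |0>, |-1> basis
s = 1.0 / np.sqrt(2.0)
Sx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

D, Ex = 2870.0, 2.0  # illustrative values in MHz: zero-field splitting, strain
H = D * Sz @ Sz + Ex * (Sx @ Sx - Sy @ Sy)

evals = np.sort(np.linalg.eigvalsh(H))
print(evals)  # three eigenenergies: 0 and D -/+ Ex
```

The two nonzero eigenvalues correspond to the dark and bright states |D⟩ and |B⟩, split by 2E_x, in agreement with the text.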
We now consider the dynamics of NV centers when both the microwave field and the target AC magnetic field are present. With these external fields, the Hamiltonian of the NV center takes the form H = H_NV + H_ex, where H_ex = γ_e B_mw cos(ω_mw t) Ŝ_x + γ_e B_AC cos(ω_AC t) Ŝ_z, with γ_e the gyromagnetic ratio of the electron spin, B_mw (B_AC) the amplitude of the microwave field (target AC magnetic field), and ω_mw (ω_AC) the frequency of the driving microwave field (target AC magnetic field). We assume that ω_mw is of the order of GHz, whereas ω_AC is of the order of MHz. In a rotating frame defined by U, the effective Hamiltonian becomes H̃ = U H U† − i U (d/dt) U†. By considering U = e^{i ω_mw t Ŝ_z²} and using the rotating wave approximation, we obtain a rotating-frame Hamiltonian. In a different rotating frame defined by U = e^{i (ω_AC/2)(Ŝ_x² − Ŝ_y²) t}, we obtain the effective Hamiltonian (5), where we have again used the rotating wave approximation. Importantly, an AC magnetic field in the z direction [the fifth term in the Hamiltonian (5)] induces a transition between |B〉 and |D〉 when the frequency of the field is resonant with the energy difference between |B〉 and |D〉. Without the AC magnetic field, we can only induce transitions between the ground state |0〉 and the bright (dark) state |B〉 (|D〉) via microwave radiation with a frequency of ω_mw ≈ D + E_x (ω_mw ≈ D − E_x) in the conventional CW-ODMR setup. With the AC-field-induced coupling between |B〉 and |D〉, the applied AC magnetic field and the microwave field together can drive transitions from the ground state to both the bright and dark states. Thus, the results of CW-ODMR with an applied AC magnetic field should differ from those of CW-ODMR without any AC magnetic field. We now quantify the change in the CW-ODMR signal that occurs because of the AC magnetic field, focusing first on the transition induced between |B〉 and |D〉 by an AC magnetic field with frequency ω_AC = 2E_x. We assume a weak amplitude for the AC magnetic field so that we can use time-dependent perturbation theory.
Working in the interaction picture and using Fermi's golden rule, we can show that transitions from the ground state to the higher energy eigenstates occur at ω_mw ≈ D ± γ_e B^(z)_AC/2 ± E_x (Fig. 1). Thus, applying an external AC magnetic field changes the dip structure in CW-ODMR. We now describe the details of the diamond sample used in our experiment. We used an ensemble of NV centers in a diamond film on a (001) electronic grade substrate. The isotopically purified ¹²C diamond film (¹²C = 99.999 %) was grown using nitrogen-doped microwave plasma assisted chemical vapor deposition. To both increase the NV center density and improve the coherence time 24 , the sample was irradiated with a dose of 10¹² cm⁻² of 15 keV He⁺ ions and was annealed for 24 h in vacuum at 800 °C. The NV density was estimated to be of the order of 10¹⁵ cm⁻³. Now, we explain the experiment of sensing an AC magnetic field using CW-ODMR. For these experiments, we used a homebuilt system for confocal laser scanning microscopy with a spatial resolution of 400 nm. The diamond sample was positioned above the antenna 25 used to emit the microwave radiation. A 30 µm diameter copper wire was placed in contact with the sample surface to apply the target AC magnetic field, which is detected by measuring the difference in the CW-ODMR spectrum. Figure 2 shows the signal from the conventional CW-ODMR technique (with no external AC magnetic field); the resonance frequency is split by about 4 MHz because of a local magnetic field and strain from impurities in diamond 22,23 . This splitting gives the energy difference between |B〉 and |D〉. ODMR is then performed while applying an AC magnetic field with frequency f_AC = ω_AC/2π = 4 MHz and amplitude B_AC = 7.7 µT to induce transitions between |B〉 and |D〉. The result in Fig. 2 shows the difference in the spectrum due to the external AC magnetic field, and it demonstrates the detection of external AC magnetic fields by CW-ODMR.
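For the demonstration parameters above (f_AC = 4 MHz = 2E_x, B_AC = 7.7 µT), the four expected dip positions ω_mw/2π ≈ D ± γ_e B_AC/2 ± E_x can be tabulated with a short sketch; the values of D, E_x, and γ_e below are nominal, not fitted to the data:

```python
# Nominal values (MHz unless noted); illustrative, not fitted to the data.
D = 2870.0          # zero-field splitting (MHz)
Ex = 2.0            # strain splitting (MHz); f_AC = 2*Ex = 4 MHz
gamma_e = 28000.0   # electron gyromagnetic ratio (MHz/T)
B_ac = 7.7e-6       # AC field amplitude (T)

shift = gamma_e * B_ac / 2   # ~0.108 MHz displacement of each dip
dips = sorted(D + s_e * Ex + s_b * shift
              for s_e in (-1, 1) for s_b in (-1, 1))
print([f"{f:.3f} MHz" for f in dips])  # two pairs of dips around D -/+ Ex
```

The ~0.1 MHz shift is small compared with the 4 MHz strain splitting, so the spectrum shows two closely spaced pairs of dips, consistent with the four-line structure described for Fig. 3.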
Since we use the resonance between |B〉 and |D〉, the frequency band in which AC magnetic fields may be detected is determined by the width of the splitting in the CW-ODMR spectrum. As noted in Eq. (5), this splitting may be increased by applying an electric field 26 , which provides a way to tune which AC magnetic field frequency is detected. Straightforward calculations show that the detectable frequencies range from hundreds of kHz to hundreds of MHz. The lower limit is determined by the resonance linewidth of the ODMR spectrum, with 200 kHz currently being the minimum linewidth 24 . The upper limit is determined by the breakdown field of diamond. The splitting width due to the Stark effect is given as 2RE, so the maximum splitting width is 340 MHz, because the Stark shift constant is R = 17 Hz cm/V 27 and the breakdown electric field of diamond is E = 10 MV/cm 28 . Next, we measure the ODMR dependence on the amplitude of the AC magnetic field. The ODMR spectra for various AC magnetic field amplitudes are shown in Fig. 3, where we set f_AC = 4 MHz. While two resonances are observed without the applied AC magnetic field, the resonance splits into four lines, and the splitting becomes larger as we increase the amplitude of the AC magnetic field. (Fig. 3 caption: ODMR spectra with applied AC magnetic fields; the x and y axes denote the microwave frequency and the amplitude of the AC magnetic fields, respectively; four resonances are observed with the applied AC magnetic fields.) This is consistent with our derived formula for the resonances, ω_mw ≈ D ± γ_e B^(z)_AC/2 ± E_x. In Fig. 4(a), we plot the ODMR signal against the amplitude of the AC magnetic field, with the microwave frequency f_mw = ω_mw/2π = 2.86887 GHz, which is one of the two resonance frequencies obtained with no external AC magnetic field. The signal depends quadratically on the amplitude, which can be understood quantitatively as follows.
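The quadratic dependence can also be verified numerically with a symmetric two-Lorentzian picture of the split resonance: evaluating the summed dips at the unshifted resonance position, the two dips sit at detunings ±γ_e B/2, and doubling B roughly quadruples the signal change. In the sketch below, the linewidth γ and the use of unit dip amplitude are illustrative assumptions:

```python
gamma = 1.0e6     # resonance linewidth (Hz), illustrative
gamma_e = 28.0e9  # electron gyromagnetic ratio (Hz/T)

def signal_change(B):
    """Change of the summed two-Lorentzian dip, evaluated at the unshifted
    resonance, with the two dips at detunings +/- gamma_e*B/2 (unit amplitude)."""
    x = gamma_e * B / 2
    return 2.0 - 2.0 * gamma**2 / (x**2 + gamma**2)

# Doubling B should roughly quadruple the change (quadratic dependence):
ratio = signal_change(2.0e-7) / signal_change(1.0e-7)
print(round(ratio, 3))  # close to 4 for small B
```

For detunings small compared with γ, the change scales as (γ_e B/2)²/γ², i.e., quadratically in B, matching the behavior seen in Fig. 4(a).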
We represent the two resonances around D − E_x as the sum of two Lorentzian functions, F(B) = Aγ²/[(ω_mw − D + E_x − γ_e B/2)² + γ²] + Aγ²/[(ω_mw − D + E_x + γ_e B/2)² + γ²], where γ is the linewidth and A is the dip amplitude. In this case, we obtain dF(B)/dB ∝ B for small B, which shows the quadratic dependence. To estimate the magnetic field sensitivity δB_AC from the experimental results, we must determine the signal fluctuation δS, where S corresponds to the photoluminescence in ODMR 3 . As shown in Fig. 4(b), the fluctuation decreases as 1/√T, where T is the measurement time. From these experimental observations, we estimate the sensitivity of the method for detecting AC magnetic fields to be 2.5 µT/√Hz. Note that the sensitivity could be improved by using NV centers in diamond with a narrow linewidth 24 , with an almost perfect preferential orientation of the NV axis 29-31 , and with a high density of NV centers 24,31,32 . In fact, by using the parameters reported in Ref. 31, we estimate that the proposed method would have a sensitivity of 50 nT/√Hz. In conclusion, we report a method to detect MHz-frequency AC magnetic fields that uses CW-ODMR, exploiting NV centers in diamond. By simply applying a continuous microwave field and optical laser irradiation, the method provides a sensitivity of 2.5 µT/√Hz at room temperature. The experimental setup is very simple because it requires neither an external DC magnetic field nor a pulse sequence. These results pave the way to realizing a practical and feasible AC magnetic field sensor. We thank H. Toida and K. Kakuyanagi for helpful discussions. This work was supported by JSPS KAKENHI Grant No. 15K17732. This work was also supported by MEXT KAKENHI Grants No. 15H05868, No. 15H05870, No. 15H03996, No. 26220602, and No. 26249108. This work was also supported by the Advanced Photon Science Alliance (APSA), JSPS Core-to-Core Program FY2013 Projects No.2, and Spin-NRJ.
2018-01-17T21:00:24.000Z
2018-01-17T00:00:00.000
{ "year": 2018, "sha1": "ff8197b4c8bc68fde24dd75eb99b73a30cf234ad", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1801.05865", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ff8197b4c8bc68fde24dd75eb99b73a30cf234ad", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics", "Materials Science" ] }
235027237
pes2o/s2orc
v3-fos-license
DEVELOPMENT OF DETERGENT RECIPE WITH IMPROVED ENVIRONMENTAL CHARACTERISTICS The research on the development of an innovative formula of a synthetic detergent with improved environmental properties, which meets the environmental standard SOU 08.002.12.065:2016 "Detergents and cleaning products. Environmental criteria for life cycle assessment", is presented. The accumulated theoretical and practical experience is generalized, and a general scheme for designing and developing new goods, taking into account the features of detergents with improved ecological characteristics, is created. Introduction Household chemicals are among the most promising non-food products and show a clear development trend owing to new raw materials and modern production technologies. The Ukrainian market offers a wide range of such products, but the chemical composition of this type of product is quite similar. Detergents are usually based on surfactants and auxiliary components containing carcinogens, toxins, allergens, etc. At the same time, the range of detergents on a natural basis is limited. Detergents can pollute indoor air, irritating the mucous membranes of the eyes, nose and throat, and can cause headache and dizziness. Most detergents contain hazardous ingredients that remain on the fabric even after rinsing. In addition, detergents can cause significant damage to the environment by disrupting aquatic ecosystems. When they enter a reservoir, intensive reproduction of algae occurs, especially blue-green algae, which in the course of their biological development reduce the oxygen content of the water, form toxic substances and cause mass death of hydrofauna. Phosphates, which enter biological sewage treatment plants in high concentrations with sewage, completely suppress the biological functions of activated sludge microorganisms.
Many chemicals that form part of such products pass easily through water treatment systems and, after entering open water bodies, return to the municipal water supply. Taking all these consequences into account, the number of conscious consumers is growing every day, and the safety indicators of detergents have become a determining criterion when choosing a product. Therefore, the authors decided to develop an innovative formula of a synthetic detergent with improved environmental properties that meets the environmental standard. The analysis and use of standard methods showed that, for comparative studies of this group of products, reference samples and the standards DSTU and TU U 24.5-36385435-001:2011 can also be used, as they allow the basic properties of synthetic detergents to be assessed. Theoretical part The theoretical and methodological basis of the study are scientific works in the field of technological processes of detergent production, by scientists K. Individual components of detergents are also of scientific interest to domestic chemists. Scientists are studying the properties of compounds and developing alternative methods for their synthesis, while drawing a parallel with the feasibility of using them for the production of detergents in the future. Thus, M. Platonov notes that derivatives of sulfonic acids attract the attention of scientists because they are widely used in the production of antimicrobial and antifungal agents and detergents. V. Donchak, having analyzed a number of topical scientific publications, noted that "gemini" surfactants (surface-active oligomers) are increasingly used as effective detergents. O. Dzevochko worked on a diffusion-controlled process for the pressurized oxidation of low-concentration SO2 to obtain a sulfating agent for surfactant production in a reactor. About 80 % of such surfactants are used in synthetic detergents.
Economists, as a rule, study the market of household chemicals and develop theoretical provisions, methodological approaches and practical recommendations for forming development strategies for chemical industry enterprises (N. O. Gritsyuk, P. G. Pererva, V. V. Oleshko, M. Yu. Barna). A separate area of research concerns the non-domestic use of detergents (in technology and agriculture) and their impact on the objects of those industries. Another area covers marketing and legal research, which examines problems related to expanding the product range, improving its composition, and so on. However, most research has focused on the environmental aspect, as required by the security vector chosen by the state in the "Ukraine 2020" Sustainable Development Strategy, which calls for special attention to a safe environment and access to quality drinking water, safe food and industrial goods. The new ecological consciousness of citizens places corresponding demands not only on household products but also on all national producers who wish to compete in the foreign market. Properties of a detergent with improved environmental performance The development of a detergent with improved environmental performance can involve almost all stages of design and formation of product quality. Companies with experience in creating new products modify the product in their own laboratories, while new enterprises rely on independent organizations and suppliers of raw materials. Suppliers, as a rule, are interested in the constant modification of products for the sake of competitiveness and compliance with constantly changing consumer requirements. The need for modification and the optimal degree of novelty of a product can be reasonably determined only through expert research. We conducted such research during the design and development of detergents with improved environmental performance.
For this purpose, a number of properties of a detergent with improved environmental characteristics were identified, which determine its main environmental, hygienic and functional values. Development of a basic recipe Most of the consumer properties that determine the basic values of detergents are formed during the development of the recipe. The development of a detergent formulation with improved environmental characteristics began with the formation of the basic composition. Based on an analysis of literature data and raw material offerings, and taking into account modern environmental standards for the production of synthetic detergents and studies of the properties of the components, a basic formulation of the washing powder "Universal" was proposed (Table 1). In order to improve consumer properties, the composition was optimized. For this purpose, the influence of the main and a number of auxiliary components on the washing ability was studied, as this is a mandatory indicator that depends on a correctly selected system of components in the detergent. An increase in functional properties can be achieved by introducing components such as complexing agents. In the course of the research, it became clear that a properly selected system of complexing agents can significantly increase the washing ability. Simultaneously with strengthening the washing effect, the complexing agents prevent the deposition of minerals on the heating elements of the washing machine, as well as on the laundry, which would otherwise give it a grey tint. In order to produce a washing powder with the working name "Universal", taking into account the above recipe and all technological, environmental and resource-saving requirements, the following sequence of actions and course of the production process are proposed (Table 2).
Table 2 lists the components and the course of the technological process for manufacturing the washing powder "Universal". Powder with improved environmental performance Sodium tripolyphosphate is widely used in the production of powdered synthetic detergents. However, its low solubility and poor environmental profile in water make it difficult to use in detergents with improved environmental performance. Therefore, instead of phosphorus-containing complexing agents, combinations of sodium gluconate, polycarboxylate and ethylenediaminetetraacetic acid derivatives were used. Based on the above data, the following recipe No. 2 of the powder with improved environmental performance is proposed (Table 3). The course of the technological process for the production of this experimental brand of powder is presented in Table 4. During the process, it was found that the ethylenediaminetetraacetic acid derivatives and sodium gluconate bind the hardness-forming salts into soluble complexes, while the polycarboxylates prevent the redeposition of suspended dirt particles on the cleaned surface. The use of these components separately in the formulation of liquid detergents increases the washing ability only to 30-42 %. The combined action of both components allows the washing ability to be increased to 72 %. Therefore, adding the complexing agents and polycarboxylate to the system at the same time significantly increased the washing ability (Figs. 1-2). Determination of proteolytic activity showed that soap additionally acts as a stabilizer of enzymes, thereby reducing their proteolytic decomposition. The introduction of baking soda prevents "caking" of the powder, and enzymes are an important adjunct to most synthetic detergents. When developing a product containing enzymes, special attention should be paid to the stabilization of these additives during long-term storage.
The stability of the enzymes is affected by the acidity of the medium, the water content and the composition of the surfactants used. It is known that anionic surfactants reduce the stability of enzymes, nonionic surfactants have a certain stabilizing effect, and cationic surfactants are not compatible with enzymes. Recipe No. 3, with alkylbenzenes and alkylphenols replaced by an environmentally friendly component According to the requirements of SOU 065, environmentally friendly household chemicals should not contain surfactants based on alkylbenzenes and alkylphenols. Therefore, within the framework of the above recipe, the ABS and the antiresorbent were replaced with more effective components (Table 5). On the basis of this component modification, recipe No. 3 was developed. The course of the technological process for manufacturing detergents according to recipe No. 3 is presented in Table 6. By modifying the technological process, the recipes and the raw materials, using energy-saving technologies, and focusing on environmental protection and compliance with sanitary and hygienic standards, we have proposed a production technology for the "Royal Powder" washing powders. The technology and recipe were tested at DeLaMark LLC, which served as the experimental base for this study and fully complies with the "green office" concept as well as technical, environmental and sanitary standards. Conclusions In order to increase functional efficiency and develop synthetic detergents with improved environmental performance, several recipes were developed by optimizing the composition. It has been proven that an appropriate change of complexing agents reduces the harmfulness of the product and increases its detergency. The result was the development of three experimental recipes.
The developed formulations of experimental samples of powder and liquid detergents meet the requirements of SOU OEM 08.002.12.065:2016 "Detergents and cleaning agents. Environmental Criteria for Life Cycle Assessment" and the national DSTU standards developed in accordance with it. In terms of functional properties, the products developed and analyzed for compliance with the standards outperform typical detergents with lower environmental performance.
2020-12-10T09:03:57.483Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "e4f857ceb5c8d8d36ef2863d684122c053c9d3e4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.23939/ep2020.04.223", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1d2fcb86d4f1dfc0f768618a8b0384a3ff124aa4", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Engineering" ] }